2306.08581
$Q\bar Qqqq$ Quark System, Compact Pentaquark, and Gauge/String Duality (Part II)
This is the second of two companion papers in which we continue to develop the construction of the doubly heavy pentaquark systems using the gauge/string duality. In this paper, we propose a stringy description of the $Q\bar Qqqq$ system in the case of two light flavors. Our goal is to explore the lower-lying Born-Oppenheimer potentials as a function of the separation distance between the heavy quark-antiquark pair. The analysis shows that the ground state Born-Oppenheimer potential is described in terms of both hadro-quarkonia and hadronic molecules. Meanwhile, a standard pentaquark configuration, which describes a genuine five-quark interaction, makes the dominant contribution to a higher-lying potential. This configuration has an antiquark-diquark-diquark structure $\bar Q[qq][Qq]$ for separations larger than $0.1\,\text{fm}$. The latter enables us to establish a relation among the masses of hadrons in the heavy quark limit. To describe the structure of the potentials more clearly, we define several critical separations related to the processes of string reconnection, breaking, and junction annihilation. Additionally, we consider generalized baryon vertices, where more than three strings can meet, and explore their implications for the pentaquark systems.
Oleg Andreev
2023-06-14T15:34:16Z
http://arxiv.org/abs/2306.08581v4
# \(Q\bar{Q}qqq\) Quark System, Compact Pentaquark, and Gauge/String Duality (Part II)

###### Abstract

This is the second of two companion papers in which we continue to develop the construction of the doubly heavy pentaquark systems using the gauge/string duality. In this paper, we propose a stringy description of the \(Q\bar{Q}qqq\) system in the case of two light flavors. Our goal is to explore the lower-lying Born-Oppenheimer potentials as a function of the separation distance between the heavy quark-antiquark pair. The analysis shows that the ground state Born-Oppenheimer potential is described in terms of both hadro-quarkonia and hadronic molecules. Meanwhile, a standard pentaquark configuration, which describes a genuine five-quark interaction, makes the dominant contribution to a higher-lying potential. This configuration has an antiquark-diquark-diquark structure \(\bar{Q}[qq][Qq]\) for separations larger than \(0.1\,\mathrm{fm}\). The latter enables us to establish a relation among the masses of hadrons in the heavy quark limit. To describe the structure of the potentials more clearly, we define several critical separations related to the processes of string reconnection, breaking, and junction annihilation. Additionally, we consider generalized baryon vertices, where more than three strings can meet, and explore their implications for the pentaquark systems.

+ Footnote †: preprint: LMU-ASC 17/23

## I Introduction

Since the proposal of the quark model in the 1960s by Gell-Mann [1] and Zweig [2], the existence of exotic hadrons has remained a challenge for the physics of strong interactions. The recent observations of hidden-charm pentaquark \(P_{c}\) states by the LHCb Collaboration [3] have not only revived but also reinforced the longstanding interest in understanding the nature of pentaquarks [4]. These \(P_{c}\) states are examples of doubly heavy pentaquarks, specifically of type \(Q\bar{Q}qqq\). In general, there may exist other pentaquark states of this type, including those with the bottom quark.

One way to handle doubly heavy quark systems is as follows. Due to the significant difference in quark masses, it appears reasonable to employ the Born-Oppenheimer (B-O) approximation, which was originally developed for use in atomic and molecular physics [5].1 In this framework, the corresponding B-O potentials are defined as the energies of stationary configurations of the gluon and light quark fields in the presence of the static heavy quark sources. The hadron spectrum is then determined by solving the Schrödinger equation using these potentials. Footnote 1: For further elaboration on these ideas in the context of QCD, see [6].

Lattice gauge theory is a well-established tool for studying non-perturbative QCD. Nevertheless, it still remains to be seen what it can and cannot do with regard to the doubly heavy pentaquark systems. In the meantime, the gauge/string duality offers a powerful way to gain valuable insights into this problem.2 However, the existing literature notably lacks discussion of the nature of doubly heavy pentaquarks within this framework. Bridging this gap is one of the main objectives of this paper. Footnote 2: A comprehensive review of the gauge/string duality in relation to QCD can be found in the book [7].

This is the second of two companion papers in which we continue to develop the construction of the doubly heavy pentaquark systems using the gauge/string duality [8]. The paper is organized as follows.
In Sec.II, we briefly recall some preliminary results and set the framework for the convenience of the reader. Then in Sec.III, we construct and analyze a set of string configurations in five dimensions that provide a dual description of the low-lying B-O potentials in the heavy quark limit. In the process, we introduce several length scales that characterize transitions between different configurations. These length scales are in fact related to different types of string interactions, including string reconnection, breaking, and junction (baryon vertex) annihilation. In Sec.IV, we consider some aspects of gluonic excitations, with a special focus on generalized baryon vertices and their implications for the pentaquark systems. Moving on to Sec.V, we discuss a way to make the effective string model more realistic and suggest a relation among hadron masses. We conclude in Sec.VI by making a few comments on the consequences of our findings and discussing directions for future work. Appendix A contains notation and definitions. Additionally, to ensure the paper is self-contained, we include the necessary results and technical details in Appendices B and C. Preliminaries ### General procedure In the presence of light quarks, the B-O potentials can be determined along the lines of lattice QCD. To do this, a mixing analysis based on a correlation matrix is necessary, as explained in [9] in the case of string breaking. The diagonal elements of this matrix correspond to the energies of stationary string configurations, while the off-diagonal elements describe transitions between these configurations. The potentials can then be determined by calculating the eigenvalues of the matrix. Now consider the \(Q\bar{Q}qqq\) quark system and examine the corresponding string configurations within the four-dimensional string models [10]. In our discussion, we'll assume \(N_{f}=2\), which means there are two dynamical flavors with equal mass (\(u\) and \(d\) quarks).3 First, let's examine string configurations with only the valence quarks. These are the basic configurations shown in Figure 1. Footnote 3: Extending the analysis to \(N_{f}=2+1\), by including the \(s\) quark, is straightforward. Each configuration consists of the valence quarks and antiquark connected by the strings and looks like a pair of non-interacting hadrons. Clearly, this is true only if the hadrons are well-separated. If they are not, configuration (a) describes a hadro-quarkonium state, a \(Q\bar{Q}\) pair in a nucleon cloud, and configuration (b) describes a hadron molecule. To get further, we assume that other (excited) configurations can be constructed by adding extra string junctions and virtual quark-antiquark pairs to the basic configurations. This also results in an increased number of strings and, therefore, intuitively indicates that these configurations possess higher energies. So to some extent, the junctions and \(q\bar{q}\) pairs can be thought of as kinds of elementary excitations. For our purposes, relatively simple configurations suffice. In particular, adding a pair of junctions to the basic configurations results in the pentaquark configuration illustrated in Figure 2. Since it describes the genuine five-body interaction of quarks, we call it the pentaquark configuration.4 Footnote 4: As we will see in Sec.III, such a configuration makes the dominant contribution to one of the B-O low-lying potential at small heavy quark separations. Because of this, we will add the word ”compact” as a prefix. 
Similarly, adding one \(q\bar{q}\) pair results in the configurations shown in Figure 3. The configurations (d) and (e) are simple modifications of the configurations (a) and (b), respectively. The configuration (f) is obtained from those by quark exchange. One can interpret configuration (d) as a hadro-quarkonium state, namely a \(Q\bar{Q}\) pair surrounded by a pion-nucleon cloud, while the other configurations can be interpreted as hadron molecules within pion and nucleon clouds. It is noteworthy that other elementary excitations may be involved. We return to this issue in Sec.IV.

Figure 1: Basic string configurations. Three strings may join at a point known as the string junction [11]. Here and later, non-excited strings are denoted by straight lines.

Figure 2: A pentaquark configuration.

Figure 4 sketches several types of string interactions which will be discussed in the following sections. This is part of the big picture of QCD strings. Later on, we will introduce the notion of a critical separation between the heavy quarks, which characterizes each interaction. This is helpful for gaining a deeper understanding of the physics of QCD strings and the structure of B-O potentials.

### A short account of the five-dimensional string model

In our study of the \(Q\bar{Q}qqq\) system, we will use the formalism recently developed in [12]. This formalism is general and can be adapted to any model of AdS/QCD, although we illustrate it by performing calculations in one of the simplest models. For the purposes of this paper, we consider a five-dimensional Euclidean space with a metric

\[ds^{2}=\mathrm{e}^{\mathsf{s}r^{2}}\frac{R^{2}}{r^{2}}\Big(dt^{2}+(dx^{i})^{2}+dr^{2}\Big)\,, \tag{2.1}\]

where \(r\) is the fifth dimension of the space. Such a space represents a deformation of the Euclidean AdS\({}_{5}\) space of radius \(R\), with a deformation parameter \(\mathsf{s}\). The boundary is at \(r=0\), and the so-called soft wall at \(r=1/\sqrt{\mathsf{s}}\). This model is particularly appealing due to its relative computational simplicity and its potential for phenomenological applications. Here let us just mention that the model of [13] provides a good fit to the lattice data obtained for the heavy quark potential [14].5 Footnote 5: See also [15] for another good example.

To construct the string configurations of Figures 1-3 in five dimensions, we need certain building blocks. The first is a Nambu-Goto string governed by the action

\[S_{\text{NG}}=\frac{1}{2\pi\alpha^{\prime}}\int d^{2}\xi\,\sqrt{\gamma^{(2)}}\,. \tag{2.2}\]

Figure 4: Some string interactions: (a) reconnection, (b) breaking, (c) junction annihilation, (d) junction fusion.

Figure 3: String configurations with one virtual quark pair.

Here \(\gamma^{(2)}\) is the induced metric, \(\alpha^{\prime}\) is a string parameter, and \(\xi^{i}\) are world-sheet coordinates. The second is a higher-dimensional counterpart of the string junction, known as the baryon vertex. In the AdS/CFT correspondence, this vertex is supposed to be a dynamical object, a five-brane wrapped on an internal space \(\mathbf{X}\) [16], and correspondingly the antibaryon vertex is an antibrane. Both objects look point-like in five dimensions. In [15] it was observed that the action for the baryon vertex, written in the static gauge,

\[S_{\rm vert}=\tau_{v}\int dt\,\frac{\mathrm{e}^{-2\mathsf{s}r^{2}}}{r} \tag{2.3}\]

yields very satisfactory results when compared to the lattice calculations of the three-quark potential.
Note that \(S_{\rm vert}\) represents the worldvolume of the brane if \(\tau_{v}=\mathcal{T}_{5}R\,{\rm vol}(\mathbf{X})\), with \(\mathcal{T}_{5}\) the brane tension. Unlike AdS/CFT, we treat \(\tau_{v}\) as a free parameter to account for \(\alpha^{\prime}\)-corrections as well as the possible impact of other background fields.6 In the case of zero baryon chemical potential, it is natural to suggest the same action for the antibaryon vertex, so that \(S_{\overline{\rm vert}}=S_{\rm vert}\). Footnote 6: Similar to AdS/CFT, there is an expectation of the presence of an analogue of the Ramond-Ramond fields on \(\mathbf{X}\).

To model the two light quarks of equal mass, we introduce a background scalar field \(\mathrm{T}(r)\), as proposed in [17]. This scalar field couples to the worldsheet boundary as an open string tachyon \(S_{\rm q}=\int d\tau\,e\,\mathrm{T}\), where \(\tau\) is a coordinate on the boundary and \(e\) is a boundary metric (an einbein field). Thus, the light quarks are at string endpoints in the interior of the five-dimensional space. For our purposes, we only consider a constant field \(\mathrm{T}_{0}\) and worldsheets with straight-line boundaries in the \(t\)-direction. In this case, the action written in the static gauge can be expressed as

\[S_{\rm q}=\mathrm{T}_{0}R\int dt\,\frac{\mathrm{e}^{\frac{1}{2}\mathsf{s}r^{2}}}{r} \tag{2.4}\]

and recognized as the action of a point particle of mass \(\mathrm{T}_{0}\) at rest.7 Clearly, at zero baryon chemical potential the same action also describes the light antiquarks, and thus \(S_{\rm\bar{q}}=S_{\rm q}\). Footnote 7: The masses of the light quarks can be determined by fitting the string breaking distance for the \(Q\bar{Q}\) system to the lattice data of [18], which yields \(m_{u/d}=46.6\,{\rm MeV}\) [19] for the parameter values used in this paper.

It is worth noting the visual analogy between tree-level Feynman diagrams and static string configurations. In the language of Feynman diagrams, the building blocks mentioned above respectively play the roles of propagators, vertices, and tadpoles.

## III The string theory analysis in five dimensions

Now we will describe the \(Q\bar{Q}qqq\) system in five dimensions. Our basic approach is as follows: following the hadro-quarkonium picture [20], we consider the light quarks as clouds, and therefore it only makes sense to speak about their average positions or, equivalently, the centers of the clouds. The heavy quarks are point-like objects inside the clouds. Our goal is to determine the low-lying B-O potentials as a function of the distance between the heavy quark and antiquark. We begin our discussion with the basic string configurations and then move on to the remaining ones before ending with the potentials.

To get an intuitive idea of what a configuration looks like in five dimensions, one can place it on the boundary of five-dimensional space. A gravitational force pulls the light quarks and strings into the interior, while the heavy (static) quarks remain at rest. This mostly helps, but there are some exceptions. We will see shortly that the shape of several configurations changes with the separation between the heavy quarks, making the problem more complicated.

### The disconnected configurations (a) and (b)

Consider configuration (a), which can be interpreted as a \(Q\bar{Q}\) pair in a nucleon cloud. In the following discussion, we will average over all possible nucleon positions.
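Before evaluating the individual configurations, it is convenient to collect the point-like building blocks of Sec. II in code form. The following minimal sketch is ours, not part of the paper; it only fixes conventions, writing the warp factor and the vertex and light-quark rest-energy terms with the dimensionless parameters \(\mathsf{g}\), \(\mathsf{k}\), \(\mathsf{n}\) introduced below (the numerical values are those adopted later in this section).

```python
# A minimal numerical sketch (ours, not from the paper) of the static, point-like
# building blocks of Sec. II, written with the dimensionless parameters
# g = R^2/(2 pi alpha'), k = tau_v/(3g), n = T_0 R / g introduced below.
import numpy as np

s = 0.45                   # deformation parameter, GeV^2 (fixed later in this section)
g = 0.176                  # effective string coupling (fixed later in this section)
k = -np.exp(0.25) / 4.0    # baryon-vertex parameter, the choice discussed later
n = 3.057                  # light-quark parameter (fixed later in this section)

def warp(r):
    """Warp factor e^{s r^2}/r^2 of the metric (2.1); it sets the local string tension."""
    return np.exp(s * r**2) / r**2

def E_vertex(r):
    """Energy contribution of a baryon (or antibaryon) vertex at radial position r, Eq. (2.3),
    with tau_v = 3 g k."""
    return 3.0 * g * k * np.exp(-2.0 * s * r**2) / r

def E_quark(r):
    """Energy contribution of a light quark (or antiquark) at radial position r, Eq. (2.4),
    with T_0 R = g n."""
    return g * n * np.exp(0.5 * s * r**2) / r
```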
It was observed in [21] that in the case of a pion cloud, the total energy is almost equal to the sum of the rest energies of two hadrons. We will assume that such a factorization also holds in the present case.8 In five dimensions, the configuration looks like the one shown in Figure 5(a), consisting of two parts: the lower corresponds to the \(Q\bar{Q}\) system, and the upper to the nucleon. Footnote 8: In general, the factorization takes place if the hadrons are far apart, but for shorter distances they do interact with each other. The partial answer to this will become clear from the construction of the B-O potentials via a matrix Hamiltonian, where hadronic interactions are encoded in the off-diagonal terms of the Hamiltonian.

The total energy is the sum of two terms

\[E^{(\text{a})}=E_{Q\bar{Q}}+E_{3q}\,. \tag{3.1}\]

In the static limit, \(E_{Q\bar{Q}}\) was computed in [13].9 Meanwhile, \(E_{3q}\) was computed in [8] and is given by Footnote 9: For convenience, we give a brief summary of the results in Appendix B.

\[E_{3q}=3\mathsf{g}\sqrt{\frac{\mathsf{s}}{\mathrm{q}_{3}}}\Big(\mathsf{k}\,\mathrm{e}^{-2\mathrm{q}_{3}}+\mathsf{n}\,\mathrm{e}^{\frac{1}{2}\mathrm{q}_{3}}\Big)\,. \tag{3.2}\]

Here \(\mathsf{g}=\frac{R^{2}}{2\pi\alpha^{\prime}}\), \(\mathsf{k}=\frac{\tau_{v}}{3\mathsf{g}}\), \(\mathsf{n}=\frac{\mathrm{T}_{0}R}{\mathsf{g}}\), and \(\mathrm{q}_{3}\) is a solution to the equation

\[\mathsf{k}(1+4\mathrm{q}_{3})+\mathsf{n}(1-\mathrm{q}_{3})\,\mathrm{e}^{\frac{5}{2}\mathrm{q}_{3}}=0\,. \tag{3.3}\]

This equation is the force balance equation in the \(r\)-direction. It is derived by varying the action \(S=S_{\rm vert}+3S_{\rm q}\) with respect to \(r_{3q}\). Note that \(\mathrm{q}_{3}=\mathsf{s}r_{3q}^{2}\).

Now let's consider configuration (b). Again, the total energy is just the sum of the rest energies

\[E^{(\text{b})}=E_{Q\bar{q}}+E_{Qqq}\,. \tag{3.4}\]

The first term is the rest energy of a heavy-light meson, which equals \(E_{\bar{Q}q}\) at zero baryon chemical potential. The latter, computed in [12], is

\[E_{Q\bar{q}}=\mathsf{g}\sqrt{\mathsf{s}}\Big(\mathcal{Q}(\mathrm{q})+\mathsf{n}\,\frac{\mathrm{e}^{\frac{1}{2}\mathrm{q}}}{\sqrt{\mathrm{q}}}\Big)+c\,, \tag{3.5}\]

where the function \(\mathcal{Q}\) is defined in Appendix A, \(c\) is a normalization constant, and \(\mathrm{q}\) is a solution to the equation

\[\mathsf{n}(\mathrm{q}-1)+\mathrm{e}^{\frac{\mathrm{q}}{2}}=0 \tag{3.6}\]

in the interval \([0,1]\). This equation is nothing else but the force balance equation in the \(r\)-direction and is derived by varying the action \(S=S_{\text{NG}}+S_{\rm q}\) with respect to \(r_{q}\). Note that \(\mathrm{q}=\mathsf{s}r_{q}^{2}\).

Figure 5: The basic configurations in five dimensions. The heavy quark and antiquark are placed on the boundary at \(r=0\) and are separated by a distance of \(\ell\). The light quarks, baryon vertices and nucleon are in the interior at \(r=r_{q}\), \(r=r_{v}\), and \(r=r_{3q}\), respectively.

The second term represents the rest energy of a heavy-light baryon. It was also computed in [12], with the result

\[E_{Qqq}=\mathsf{g}\sqrt{\mathsf{s}}\Big(2\mathcal{Q}(\mathrm{q})-\mathcal{Q}(\mathrm{v})+2\mathsf{n}\frac{\mathrm{e}^{\frac{1}{2}\mathrm{q}}}{\sqrt{\mathrm{q}}}+3\mathsf{k}\frac{\mathrm{e}^{-2\mathrm{v}}}{\sqrt{\mathrm{v}}}\Big)+c\,.
\tag{11}\] Here v a solution to the equation \[1+3\mathrm{k}(1+4v)\mathrm{e}^{-3v}=0 \tag{12}\] and \(v=\mathsf{sr}_{v}^{2}\). The above equation is the force balance equation in the \(r\)-direction at \(r=r_{v}\). It is derived by varying the action \(S=3S_{\textsc{sc}}+2S_{\mathrm{q}}+S_{\mathrm{vert}}\) with respect to \(r_{v}\). We conclude our discussion of the basic configurations with some remarks. Firstly, it was shown in [22] that in the interval \([0,1]\), Eq.(12) has solutions if and only if \(\mathsf{k}\) is restricted to the range \(-\frac{\mathrm{e}^{3}}{15}<\mathsf{k}\leq-\frac{1}{4}\mathrm{e}^{\frac{1}{4}}\). In particular, there exists a single solution \(\mathrm{v}=\frac{1}{12}\) at \(\mathsf{k}=-\frac{1}{4}\mathrm{e}^{\frac{1}{4}}\). Secondly, the analysis of configuration (b) assumes that \(\mathrm{v}\leq\mathrm{q}\). Although this is not true for all possible parameter values, it definitely is for those we use to make predictions. Finally, the solutions q and v are associated with the light quarks and baryon vertices, and as such, they are independent of the separation of the heavy quarks. ### The connected configuration (c) Having understood the basic string configurations, we can now discuss the pentaquark configuration (c). In doing so, it is natural to suggest that if a configuration contributes to the ground state, or at least to one of the low excited states, its shape is dictated by symmetry. For the configuration at hand, the most symmetric case involves placing all the light quarks in the middle between the heavy quark sources. This is a good starting point for small separations. At larger separations, the pentaquark configuration does change shape, as we will see shortly. #### iii.2.1 Small \(\ell\) In this case the corresponding string configuration is depicted in Figure 6. From a four-dimensional perspective Figure 6: The pentaquark configuration for small \(\ell\). The light quarks and baryon vertices are on the \(r\)-axis at \(r=r_{q}\), \(r=r_{v}\), and \(r=r_{v}\). Here and later, \(\alpha\) represents the tangent angle at the endpoint of the first string. the light quarks are located in the middle between the heavy ones. It is assumed that \(r_{q}\), \(r_{v}\), and \(r_{\bar{v}}\) satisfy the condition \(r_{q}>r_{v}>r_{\bar{v}}\), which is indeed true for the parameter values we are using. The total action is the sum of the Nambu-Goto actions plus the actions for the vertices and light quarks \[S=\sum_{i=1}^{6}S_{\text{\tiny{BG}}}^{(i)}+3S_{\text{\tiny{vert}}}+3S_{\text{ \tiny{q}}}\,. \tag{10}\] If one chooses the static gauge \(\xi^{1}=t\) and \(\xi^{2}=r\) for the Nambu-Goto actions and considers the \(x\)'s as a function of \(r\), then the boundary conditions for them are \[x^{(1;2)}(0)=\mp\frac{1}{2}\ell\,,\qquad x^{(1,2,3,4)}(r_{\bar{v}})=x^{(4,5,6) }(r_{v})=x^{(3,5,6)}(r_{q})=0\,. \tag{11}\] Now the action takes the form10 Footnote 10: We drop the subscript \((i)\) when it does not cause confusion. 
\[S=\mathsf{g}T\bigg{(}2\int_{0}^{r_{\bar{v}}}\frac{dr}{r^{2}}\,\mathsf{e}^{ \mathsf{s}r^{2}}\sqrt{1+(\partial_{r}x)^{2}}\;+\int_{r_{\bar{v}}}^{r_{q}}\frac {dr}{r^{2}}\,\mathsf{e}^{\mathsf{s}r^{2}}+2\int_{r_{\bar{v}}}^{r_{q}}\frac{dr} {r^{2}}\,\mathsf{e}^{\mathsf{s}r^{2}}+6\mathsf{k}\,\frac{\mathrm{e}^{-2 \mathsf{s}r_{\bar{v}}^{2}}}{r_{\bar{v}}}+3\mathsf{k}\,\frac{\mathrm{e}^{-2 \mathsf{s}r_{v}^{2}}}{r_{v}}+3\mathsf{n}\frac{\mathrm{e}^{\frac{1}{2}\mathsf{ s}r_{q}^{2}}}{r_{q}}\bigg{)}\,, \tag{12}\] where \(T=\int dt\) and \(\partial_{r}x=\frac{\partial x}{\partial r}\). We set \(x=const\) for all the strings stretched along the \(r\)-axis. The integrals represent the contributions of the strings, while the remaining terms represent the contributions of the vertices and light quarks. To find a stable configuration, we extremize the action with respect to \(x\), which describes the profiles of strings (1) and (2), and with respect to \(r_{\bar{v}}\), \(r_{v}\), and \(r_{q}\), which describe the locations of the vertices and light quarks. As explained in Appendix B of [19], varying with respect to \(x\) gives the expressions for the separation distance and the energy of the strings \[\ell=\frac{2}{\sqrt{\mathsf{s}}}\mathcal{L}^{+}(\alpha,\bar{v})\,,\qquad E^{( 1,2)}=\mathsf{g}\sqrt{\mathsf{s}}\,\mathcal{E}^{+}(\alpha,\bar{v})+c\,. \tag{13}\] Here \(c\) is the normalization constant as before. It is easy to see that varying the action with respect to \(r_{q}\) and \(r_{v}\) leads to Eqs.(9) and (10). Putting all together, we find \[E^{(\text{c})}=\frac{S}{T}=E_{\text{\tiny{GBqvert}}}=\mathsf{g}\sqrt{\mathsf{ s}}\bigg{(}2\mathcal{E}^{+}(\alpha,\bar{v})+3\mathcal{Q}(\text{q})-2\mathcal{Q}( \bar{v})-\mathcal{Q}(\text{v})+6\mathsf{k}\frac{\mathrm{e}^{-2\bar{v}}}{ \sqrt{\bar{v}}}+3\mathsf{k}\frac{\mathrm{e}^{-2\text{v}}}{\sqrt{\bar{v}}}+3 \mathsf{n}\frac{\mathrm{e}^{\frac{1}{2}\text{q}}}{\sqrt{\bar{\text{q}}}} \bigg{)}+2c\,. \tag{14}\] We have used the fact that \(\int_{\text{q}}^{b}\frac{dx}{x^{2}}\mathrm{e}^{cx^{2}}=\sqrt{c}\big{(}\mathcal{ Q}(cb^{2})-\mathcal{Q}(ca^{2})\big{)}\). Here, \(\bar{v}=\mathsf{s}r_{\bar{v}}^{2}\), and the functions \(\mathcal{L}^{+}\) and \(\mathcal{E}^{+}\) are defined in Appendix A. Finally, varying with respect to \(r_{\bar{v}}\) leads to the equation \[\sin\alpha=1+3\mathsf{k}(1+4\bar{v})\mathrm{e}^{-3\bar{v}}\,, \tag{15}\] which is nothing else but the force balance equations in the \(r\)-direction at \(r=r_{\bar{v}}\). Thus, the energy of the pentaquark configuration is given parametrically by \(E_{\text{\tiny{GBqvert}}}=E_{\text{\tiny{GBqvert}}}(\bar{v})\) and \(\ell=\ell(\bar{v})\), where the parameter \(\bar{v}\) varies from \(0\) to v. The lower limit is determined by \(\ell(0)=0\), and the upper limit by \(\bar{v}=\text{v}\), which corresponds to the situation where string (4) shrinks into a point. #### iii.1.2 Slightly larger \(\ell\) A straightforward numerical analysis of (13) shows that \(\ell(\bar{v})\) increases monotonically and remains finite at \(\bar{v}=\text{v}\). This implies that the \(V\bar{V}\) pair gradually moves deeper into the bulk until it reaches the baryon vertex \(V\), whose position is independent of the separation between the heavy quarks. As a result, the configuration becomes that of Figure 7(c'), where string (4) has collapsed to a point. It turns out that proceeding further with such a configuration is impossible. As explained in Appendix C, it only exists for separations slightly exceeding \(\ell\)(v). 
A possible way out is to consider another configuration in which the vertices are spatially separated as depicted in Figure 7(c). It can be obtained from configuration (c') by splitting the baryon vertices and stretching a string between them. Formally, this configuration is also governed by the action (11), but with the boundary conditions replaced by \[x^{(1;2)}(0)=\mp\frac{1}{2}\ell\,,\qquad x^{(1,3,4)}(r_{v})=x^{(3)}(r_{q})=-x_{ v}\,,\qquad x^{(2,4,5,6)}(r_{\bar{v}})=x^{(5,6)}(r_{q})=0\,. \tag{18}\] So it now reads \[\begin{split} S&=\mathsf{g}T\bigg{(}\int_{0}^{r_{v }}\frac{dr}{r^{2}}\,\mathrm{e}^{\mathsf{s}r^{2}}\sqrt{1+(\partial_{r}x)^{2}}+ \int_{0}^{r_{e}}\frac{dr}{r^{2}}\,\mathrm{e}^{\mathsf{s}r^{2}}\sqrt{1+( \partial_{r}x)^{2}}+\int_{r_{v}}^{r_{q}}\frac{dr}{r^{2}}\,\mathrm{e}^{\mathsf{ s}r^{2}}+\int_{r_{v}}^{r_{e}}\frac{dr}{r^{2}}\,\mathrm{e}^{\mathsf{s}r^{2}} \sqrt{1+(\partial_{r}x)^{2}}\\ &\qquad+2\int_{r_{e}}^{r_{q}}\frac{dr}{r^{2}}\,\mathrm{e}^{ \mathsf{s}r^{2}}+3\mathsf{k}\,\frac{\mathrm{e}^{-2\mathsf{s}r_{v}^{2}}}{r_{v}} +6\mathsf{k}\,\frac{\mathrm{e}^{-2\mathsf{s}r_{v}^{2}}}{r_{\bar{v}}}+3 \mathsf{n}\frac{\mathrm{e}^{\frac{1}{2}\mathsf{s}r_{q}^{2}}}{r_{q}}\,\bigg{)} \,.\end{split} \tag{19}\] Here we set \(x^{(3,5,6)}=const\). The integrals correspond to the contributions of strings (1)-(6), respectively. Given the action, it is straightforward to extremize it with respect to \(x_{v}\) and \(r_{v}\), which describe the location of the single baryon vertex. The result can be conveniently expressed in a vector form as follows \[\mathbf{e}_{1}+\mathbf{e}_{3}+\mathbf{e}_{4}+\mathbf{f}_{v}=0\,, \tag{20}\] where \(\mathbf{e}_{1}=\mathsf{g}w(r_{v})(-\cos\alpha,-\sin\alpha)\), \(\mathbf{e}_{3}=\mathsf{g}w(r_{v})(0,1)\), \(\mathbf{e}_{4}=\mathsf{g}w(r_{v})(\cos\alpha_{4},\sin\alpha_{4})\), and \(\mathbf{f}_{v}=(0,-3\mathsf{g}\mathsf{k}\,\partial_{r_{v}}\frac{\mathrm{e}^ {-2\mathsf{v}r_{v}^{2}}}{r_{v}})\), with \(w(r)=\mathrm{e}^{\mathsf{s}r^{2}}/r^{2}\) and \(\alpha_{i}\leq\frac{\pi}{2}\). This is the force balance equation at the vertex position, as shown in Figure 7(c). Its \(x\)-component reduces to \[\cos\alpha-\cos\alpha_{4}=0\,. \tag{21}\] Since the equation has a straightforward solution \(\alpha_{4}=\alpha\), it implies that strings (1) and (4) are smoothly joined together to form a single string, which we refer to as string (1). The vertex, therefore, does not affect the string.11 If Figure 7: Left: The configuration of Figure 6 at \(\bar{v}=\mathrm{v}\). Right: The pentaquark configuration for \(\ell\) ranging from \(\ell\)(v) to \(\ell\)(q). so, then the \(r\)-component becomes equivalent to Eq.(10) whose solution is given by \({\rm v}\). As a result, the action takes the form \[S={\sf g}T\bigg{(}2\int_{0}^{r_{\rm e}}\frac{dr}{r^{2}}\,{\rm e}^{{\sf s}r^{2}} \sqrt{1+(\partial_{r}x)^{2}}+\int_{r_{\rm v}}^{r_{\rm q}}\frac{dr}{r^{2}}\,{\rm e }^{{\sf s}r^{2}}+2\int_{r_{\rm e}}^{r_{\rm q}}\frac{dr}{r^{2}}\,{\rm e}^{{\sf s} r^{2}}+3{\rm k}\,\frac{{\rm e}^{-2{\sf s}r_{\rm v}^{2}}}{r_{\rm v}}+6{\rm k}\, \frac{{\rm e}^{-2{\sf s}r_{\rm v}^{2}}}{r_{\bar{v}}}+3{\rm n}\frac{{\rm e}^{ \frac{1}{2}{\sf s}r_{\rm q}^{2}}}{r_{q}}\,\bigg{)}\,. \tag{29}\] Here the first integral corresponds to the contributions of string (1)-(2), and \(r_{\rm v}=\sqrt{{\rm v}/{\sf s}}\). Note that varying the action with respect to \(r_{q}\) and \(r_{\bar{v}}\) results respectively in Eqs.(9) and (20). 
By essentially the same arguments that we gave for the expression (21), the energy of this configuration can be written as \[E_{\rm QQqvq}=2{\sf g}\sqrt{{\sf s}}\bigg{(}{\cal E}^{+}(\alpha,\bar{v})+{ \cal Q}({\rm q})-{\cal Q}(\bar{v})+3{\rm k}\frac{{\rm e}^{-2\bar{v}}}{\sqrt{ \bar{v}}}+{\sf n}\frac{{\rm e}^{\frac{1}{2}{\rm q}}}{\sqrt{\bar{q}}}\bigg{)}+E _{0}+2c\,, \tag{30}\] where \(E_{0}={\sf g}\sqrt{{\sf s}}\big{(}{\cal Q}({\rm q})-{\cal Q}({\rm v})+3{\rm k }\frac{{\rm e}^{-2\nu}}{\sqrt{\bar{v}}}+{\sf n}\frac{{\rm e}^{\frac{1}{2}{ \rm q}}}{\sqrt{\bar{q}}}\big{)}\). The parameter \(\bar{v}\) takes values in the interval \([{\rm v},{\rm q}]\). At this point, two remarks are in order. Firstly, as seen from Figure 7(c), the spatial positions of the light quarks along the \(x\)-axis suggest an antiquark-diquark-diquark \(\bar{Q}[qq][Qq]\) structure.12 Such a structure was assumed in [23] and was found to be phenomenologically useful. Secondly, it was demonstrated in [24] that the connected tetraquark configuration for the \(\bar{Q}Qqq\) system has an antiquark-antiquark-diquark \(\bar{Q}\bar{Q}[qq]\) structure. Since the diquark \([Qq]\) is color-antitriplet, it is reasonable to assume that there exists a relation between the energies of the pentaquark and tetraquark configurations. A closer inspection shows that this is indeed the case. The first term in (30) is equal to the energy of the tetraquark configuration [24], and thus the energies are just shifted by a constant equal to the second term. Explicitly, Footnote 12: In fact, the separation between the \(Q\) and \(q\) (attached to string (3)) quarks decreases as the heavy quark separation increases. \[E_{\rm QQqq}(\ell)=E_{\rm QQ\bar{q}\bar{q}}(\ell)+E_{0}\qquad{\rm for}\quad \ell\geq\ell({\rm v})\,. \tag{31}\] We have used the fact that \(E_{\rm\bar{Q}\bar{Q}qq}=E_{\rm QQ\bar{q}\bar{q}}\) at zero baryon chemical potential. #### iii.2.3 Intermediate and large \(\ell\) Numerical analysis shows that \(\ell(v)\) is finite at \(\bar{v}={\rm q}\), where the vertices reach the light quarks. So, to get further, we must consider the configuration shown in Figure 8 on the left. One can think of that as the strings (5) and (6) collapsing to a point. Note that the single baryon vertex remains at \(r_{v}=r_{\rm v}\). In this case, the boundary conditions (20) and action (29) become Figure 8: The pentaquark configuration for intermediate (left) and large (right) heavy quark separations. The horizontal line represents the soft wall at \(r=1/\sqrt{{\sf s}}\). \[x^{(1;2)}(0)=\mp\frac{1}{2}\ell\,,\qquad x^{(3)}(r_{\rm v})=x^{(3)}(r_{q})=-x_{v} \,.\qquad x^{(1,2)}(r_{\bar{v}})=0 \tag{3.22}\] and \[S=\mathsf{g}T\bigg{(}2\int_{0}^{r_{v}}\frac{dr}{r^{2}}\,{\rm e}^{\mathsf{s}r^{ 2}}\sqrt{1+(\partial_{r}x)^{2}}+\int_{r_{\rm v}}^{r_{q}}\frac{dr}{r^{2}}\,{\rm e }^{\mathsf{s}r^{2}}+3\,{\rm k}\,\frac{{\rm e}^{-2\mathsf{s}r_{v}^{2}}}{r_{\rm v }}+\frac{2}{r_{\bar{v}}}\Big{(}3{\rm k}\,{\rm e}^{-2\mathsf{s}r_{v}^{2}}+{\rm n }{\rm e}^{\frac{1}{2}\mathsf{s}r_{q}^{2}}\Big{)}+{\sf n}\frac{{\rm e}^{\frac{1 }{2}\mathsf{s}r_{q}^{2}}}{r_{q}}\,\bigg{)}\,. \tag{3.23}\] Varying the action with respect to \(r_{q}\) leads to Eqs.(3.6), as before. However, varying the action with respect to \(r_{\bar{v}}\) leads to \[\sin\alpha=3{\sf k}(1+4\bar{v}){\rm e}^{-3\bar{v}}+{\sf n}(1-\bar{v}){\rm e}^ {-\frac{1}{2}\bar{v}}\,. \tag{3.24}\] Since the tangent angle \(\alpha\) is non-negative, the formula (3.12) for the separation distance still holds. 
On the other hand, the formula (3.20) for the energy of the configuration is replaced by \[E_{\rm Q\bar{Q}qq\rm q}=2\mathsf{g}\sqrt{\bar{s}}\bigg{(}{\cal E}^{+}(\alpha, \bar{v})+\frac{3{\rm k}{\rm e}^{-2\bar{v}}+{\sf n}{\rm e}^{\frac{1}{2}\bar{v} }}{\sqrt{\bar{v}}}\bigg{)}+E_{0}+2c\,. \tag{3.25}\] For the parameter values we are using, \(\alpha\) is a decreasing function of \(\bar{v}\). It reaches zero at \(\bar{v}=\bar{\rm v}_{0}\), which is a solution to the equation \[3{\sf k}(1+4\bar{v})+{\sf n}(1-\bar{v}){\rm e}^{\frac{5}{2}\bar{v}}=0\,. \tag{3.26}\] This solution defines the upper limit for \(\bar{v}\). Therefore, the energy of the configuration is given in parametric form by \(E_{\rm Q\bar{Q}qq\rm q}=E_{\rm Q\bar{Q}qq\rm q}(\bar{v})\) and \(\ell=\ell(\bar{v})\), with the parameter varying from \({\sf q}\) to \(\bar{\rm v}_{0}\). This is not the whole story, however, as \(\ell\) remains finite at \(\bar{v}=\bar{\rm v}_{0}\). So, we come to the question of what to do about it. The answer is that if \(\alpha\) changes sign from positive to negative, \(\ell\) continues to increase. In this case, the configuration profile becomes convex near \(x=0\), as shown in Figure 8 on the right. The strings continue to descend deeper in the bulk until they finally reach the soft wall. As a result, the separation between the heavy quark sources becomes infinite. The expressions for the separation distance and energy can be obtained by simply replacing \({\cal L}^{+}\) and \({\cal E}^{+}\) with \({\cal L}^{-}\) and \({\cal E}^{-}\), as explained in Appendix B of [19]. So, we have \[\ell=\frac{2}{\sqrt{\bar{s}}}{\cal L}^{-}(\lambda,\bar{v}) \tag{3.27}\] and \[E_{\rm Q\bar{Q}qq\rm q}=2\mathsf{g}\sqrt{\bar{s}}\bigg{(}{\cal E}^{-}(\lambda,\bar{v})+\frac{3{\rm k}{\rm e}^{-2\bar{v}}+{\sf n}{\rm e}^{\frac{1}{2}\bar{v} }}{\sqrt{\bar{v}}}\bigg{)}+E_{0}+2c\,. \tag{3.28}\] The functions \({\cal L}^{-}\) and \({\cal E}^{-}\) are given in Appendix A. The dimensionless parameter \(\lambda\) is defined by \(\lambda=\mathsf{s}r_{0}^{2}\), where \(r_{0}=\max r(x)\) (see Figure 8). Using (3.24), \(\lambda\) can be conveniently expressed in terms of \(\bar{v}\) as [19] \[\lambda(\bar{v})=-{\rm ProductLog}\bigg{[}-\bar{v}{\rm e}^{-\bar{v}}\bigg{(}1- \Big{(}3{\sf k}(1+4\bar{v}){\rm e}^{-3\bar{v}}+{\sf n}(1-\bar{v}){\rm e}^{- \frac{1}{2}\bar{v}}\Big{)}^{2}\bigg{)}^{-\frac{1}{2}}\bigg{]}\,. \tag{3.29}\] Here ProductLog\((z)\) denotes the principal solution for \(w\) in \(z=w{\rm e}^{w}\)[25]. The parameter \(\bar{v}\) varies from \(\bar{\rm v}_{0}\) to \(\bar{\rm v}_{1}\), which is found by solving the equation \(\lambda=1\), or equivalently the equation \[\sqrt{1-\bar{v}^{2}{\rm e}^{2(1-\bar{v})}}+3{\sf k}(1+4\bar{v}){\rm e}^{-3\bar {v}}+{\sf n}(1-\bar{v}){\rm e}^{-\frac{1}{2}\bar{v}}=0\,. \tag{3.30}\] This is because \({\cal L}^{-}\) becomes infinite at \(\lambda=1\) (see Appendix A). To summarize, \(E_{\rm Q\bar{Q}qq\rm q}\) is a piecewise function of \(\ell\), and the shape of the configuration (c) depends on the separation distance between the heavy quark sources. Furthermore, for \(\ell>\ell({\rm v})\) the model provides an explicit realization of the antiquark-diquark-diquark scheme of the pentaquark, as proposed in [23]. The limiting cases As preparation for computing critical separations, we need some details on the behavior of \(E_{{\rm Q}\bar{\rm Q}_{\rm q}{\rm q}{\rm q}}\) for both small and large \(\ell\). We begin with the case of small \(\ell\). 
The relevant configuration for such a limit is depicted in Figure 6, because \(\ell\) vanishes at \(\bar{v}=0\). So, taking the limit \(\bar{v}\to 0\) in Eqs.(3.12) and (3.13) with the help of Eqs.(A.2) and (A.6), we find \[\ell=\sqrt{\frac{\bar{v}}{\sf s}}\Big{(}\ell_{0}+\ell_{1}\bar{v} \Big{)}+o\big{(}\bar{v}^{\frac{3}{2}}\big{)}\,,\qquad E_{{\rm Q}{\rm Q}{\rm q} {\rm q}{\rm q}}={\sf g}\sqrt{\frac{\sf s}{\bar{v}}}\Big{(}E_{0}+E_{1}\bar{v} \Big{)}+E_{{\rm Q}{\rm q}{\rm q}}+E_{{\rm Q}{\bar{\rm q}}}+o\big{(}\bar{v}^{ \frac{1}{2}}\big{)}\,. \tag{3.31}\] The expansion coefficients are given by \[\ell_{0}=\frac{1}{2}\tau^{-\frac{1}{2}}B\big{(}\tau^{2};\tfrac{3} {4},\tfrac{1}{2}\big{)}\,,\qquad\ell_{1}=\frac{1}{2}\tau^{-\frac{3}{2}}\Big{(} 3\tau\frac{1+2{\sf k}}{2+3{\sf k}}B\big{(}\tau^{2};\tfrac{3}{4},-\tfrac{1}{2} \big{)}-B\big{(}\tau^{2};\tfrac{5}{4},-\tfrac{1}{2}\big{)}\Big{)}\,, \tag{3.32}\] \[E_{0}=2(1+3{\sf k})+\frac{1}{2}\tau^{\frac{1}{2}}B\big{(}\tau^{2 };-\tfrac{1}{4},\tfrac{1}{2}\big{)}\,,\quad E_{1}=\frac{3}{2}(1+2{\sf k}) \Big{(}-\frac{12{\sf k}}{1+3{\sf k}}+\frac{\tau^{\frac{1}{2}}}{2+3{\sf k}}B \big{(}\tau^{2};\tfrac{3}{4},-\tfrac{1}{2}\big{)}-\frac{\tau^{-\frac{1}{2}}}{ 1+2{\sf k}}B\big{(}\tau^{2};\tfrac{5}{4},-\tfrac{1}{2}\big{)}\Big{)}\,, \tag{3.33}\] where \(\tau=\sqrt{-3{\sf k}(2+3{\sf k})}\,\).13 It is easy to eliminate \(\bar{v}\) from the pair of equations (3.31) to obtain a nonlinear expression for \(E_{{\rm Q}\bar{\rm Q}_{\rm q}{\rm q}{\rm q}{\rm q}}\) Footnote 13: Note that \(\tau\) is real with our choice of \({\sf k}=-\tfrac{1}{4}{\sf e}^{\frac{1}{2}}\) (see the next subsection). \[E_{{\rm Q}\bar{\rm Q}_{\rm q}{\rm q}{\rm q}}=-\frac{\alpha_{{\rm Q}\bar{\rm Q }_{\rm q}{\rm q}{\rm q}{\rm q}}}{\ell}+E_{{\rm Q}{\rm q}{\rm q}}+\mathbf{\sigma}_{ {\rm Q}\bar{\rm Q}_{\rm q}{\rm q}{\rm q}{\rm q}}\,\ell+o(\ell)\,. \tag{3.34}\] Here \[\alpha_{{\rm Q}\bar{\rm Q}_{\rm q}{\rm q}{\rm q}}=-{\sf g}\ell_{0}E_{0}\,, \qquad\mathbf{\sigma}_{{\rm Q}\bar{\rm Q}_{\rm q}{\rm q}{\rm q}}=\frac{1}{\ell_{0} }\Big{(}E_{1}+\frac{\ell_{1}}{\ell_{0}}E_{0}\Big{)}{\sf g}{\sf s}\,. \tag{3.35}\] Interestingly, the leading term in (3.34) is the same as in the small-\(\ell\) expansion of \(E_{{\rm Q}\bar{\rm Q}_{\rm q}{\rm q}}\) which is the energy of the connected tetraquark configuration for the \(Q\bar{Q}q\bar{q}\) system [26]. This has an intuitive explanation: the presence of one additional light quark has no impact on the behavior of the heavy quark sources at extremely small separations. Moving on to the case of large \(\ell\), we can use the relation (3.21) along with the asymptotic expansion of \(E_{{\rm Q}\bar{\rm Q}_{\rm q}{\rm q}}\)[24] to derive the corresponding expression for \(E_{{\rm Q}\bar{\rm Q}_{\rm q}{\rm q}{\rm q}}\). So, we get \[E_{{\rm Q}{\rm Q}{\rm q}{\rm q}{\rm q}}=\sigma\ell-2{\sf g}{\sf s }\sqrt{s}\,I_{{\rm Q}\bar{\rm Q}_{\rm q}{\rm q}{\rm q}}+E_{0}+2c+o(1)\,,\qquad \text{with}\qquad I_{{\rm Q}\bar{\rm Q}_{\rm q}{\rm q}{\rm q}}=\mathcal{I}( \bar{\rm v}_{1})-\frac{{\sf n}{\sf e}^{\frac{1}{2}\bar{\rm v}_{1}}+3{\sf k}{ \rm e}^{-2\bar{\rm v}_{1}}}{\sqrt{\bar{\rm v}_{1}}}\,. \tag{3.36}\] The function \(\mathcal{I}\) is defined in Appendix A. Notably, the constant term in the above expansions differs from each other. On the other hand, the coefficient \(\sigma\) remains the same in all the known cases of connected string configurations (\(Q\bar{Q}\)[13], \(QQQ\)[15], \(QQq\)[22], etc.), as expected for the string tension. 
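The expansion coefficients (3.32)-(3.33) involve a generalized incomplete Beta function. The sketch below is ours, not part of the paper; it assumes that \(B(x;a,b)\) of Appendix A is the standard analytic continuation \(B(x;a,b)=(x^{a}/a)\,{}_{2}F_{1}(a,1-b;a+1;x)\), which remains valid for the negative parameter values appearing here, and it uses the parameter values fixed in the next subsection.

```python
# Sketch (ours): numerical evaluation of the small-l expansion coefficients
# (3.32)-(3.33) and of the coefficients alpha and sigma in (3.35).
# B(x; a, b) is computed via B(x;a,b) = (x^a/a) 2F1(a, 1-b; a+1; x), assumed to be
# the generalized incomplete Beta function of Appendix A.
import numpy as np
from scipy.special import hyp2f1

g, s = 0.176, 0.45            # parameter values fixed in the next subsection (s in GeV^2)
k = -np.exp(0.25) / 4.0

def B(x, a, b):
    return x**a / a * hyp2f1(a, 1.0 - b, a + 1.0, x)

tau = np.sqrt(-3.0 * k * (2.0 + 3.0 * k))
x = tau**2

# Coefficients of Eqs. (3.32)-(3.33); E0, E1 here are the expansion coefficients,
# not the constant E_0 of Eq. (3.20).
l0 = 0.5 * tau**-0.5 * B(x, 0.75, 0.5)
l1 = 0.5 * tau**-1.5 * (3.0 * tau * (1.0 + 2.0 * k) / (2.0 + 3.0 * k) * B(x, 0.75, -0.5)
                        - B(x, 1.25, -0.5))
E0 = 2.0 * (1.0 + 3.0 * k) + 0.5 * tau**0.5 * B(x, -0.25, 0.5)
E1 = 1.5 * (1.0 + 2.0 * k) * (-12.0 * k / (1.0 + 3.0 * k)
                              + tau**0.5 / (2.0 + 3.0 * k) * B(x, 0.75, -0.5)
                              - tau**-0.5 / (1.0 + 2.0 * k) * B(x, 1.25, -0.5))

alpha_QQbarqqq = -g * l0 * E0                       # Eq. (3.35), Coulomb coefficient
sigma_QQbarqqq = (E1 + l1 / l0 * E0) * g * s / l0   # Eq. (3.35), linear coefficient
```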
Another useful expansion for \(E_{Q\bar{Q}qqq}\) is as follows. Instead of directly using the expressions (3.12) and (3.13), we could take the relation (3.21) and formally expand \(E_{QQ\bar{q}\bar{q}}(\ell)\) in powers of \(\ell\). This results in

\[E_{Q\bar{Q}qqq}=-\frac{\alpha_{QQ}}{\ell}+2E_{Qqq}-E_{Q\bar{q}}+\boldsymbol{\sigma}_{QQ}\,\ell+c+o(\ell)\,. \tag{3.37}\]

We have used the small-\(\ell\) expansion of \(E_{QQ\bar{q}\bar{q}}\) [24] together with \(E_{\bar{Q}\bar{Q}qq}=E_{QQ\bar{q}\bar{q}}\). The coefficients \(\alpha_{QQ}\) and \(\boldsymbol{\sigma}_{QQ}\) are defined similarly to \(\alpha_{Q\bar{Q}qqq}\) and \(\boldsymbol{\sigma}_{Q\bar{Q}qqq}\), but with the \(\ell\)'s and \(E\)'s replaced by

\[\boldsymbol{\ell}_{0}=\frac{1}{2}\xi^{-\frac{1}{2}}B\big(\xi^{2};\tfrac{3}{4},\tfrac{1}{2}\big)\,,\qquad\boldsymbol{\ell}_{1}=\frac{1}{2}\xi^{-\frac{3}{2}}\Big(\big(2\xi+\frac{3}{4}\frac{\mathsf{k}-1}{\xi}\big)B\big(\xi^{2};\tfrac{3}{4},-\tfrac{1}{2}\big)-B\big(\xi^{2};\tfrac{5}{4},-\tfrac{1}{2}\big)\Big)\,, \tag{3.38}\]

\[\mathbf{E}_{0}=1+3\mathsf{k}+\frac{1}{2}\xi^{\frac{1}{2}}B\big(\xi^{2};-\tfrac{1}{4},\tfrac{1}{2}\big)\,,\quad\mathbf{E}_{1}=\xi\boldsymbol{\ell}_{1}-1-6\mathsf{k}+\frac{1}{2}B\big(\xi^{2};\tfrac{1}{4},\tfrac{1}{2}\big)\,, \tag{3.39}\]

where \(\xi=\frac{\sqrt{3}}{2}\sqrt{1-2\mathsf{k}-3\mathsf{k}^{2}}\). This approximation has an advantage over the expansion (3.34) near \(\ell=0.2\,\text{fm}\), as we will see shortly.

#### Putting all the pieces together

Now let's discuss the gluing of all the branches of \(E_{Q\bar{Q}qqq}(\ell)\). For this, we need to specify the model parameters. Here, we use one of the two parameter sets suggested in [12], which mainly results from fitting the string model we are considering to lattice QCD data. The value of \(\mathsf{s}\) is fixed from the slope of the Regge trajectory of \(\rho(n)\) mesons in the soft wall model with the geometry (2.1). As a result, we get \(\mathsf{s}=0.45\,\text{GeV}^{2}\) [27]. Then, fitting the value of the string tension \(\sigma\) (see Eq.(B.4)) to its value in [18], we get \(\mathsf{g}=0.176\). The parameter \(\mathsf{n}\) is adjusted to reproduce the lattice result for the string breaking distance in the \(Q\bar{Q}\) system. With \(\ell_{Q\bar{Q}}=1.22\,\text{fm}\) for the \(u\) and \(d\) quarks [18], we get \(\mathsf{n}=3.057\) [12]. In principle, the value of \(\mathsf{k}\) could be adjusted to fit the lattice data for the three-quark potential, as done in [15] for pure \(SU(3)\) gauge theory. But there are no lattice data available for QCD with two light quarks. There are still two special options: \(\mathsf{k}=-0.102\), motivated by phenomenology,14 and \(\mathsf{k}=-0.087\), obtained from the lattice data for pure gauge theory [15]. However, both values are outside the range of allowed values for \(\mathsf{k}\) that follows from the analysis of Eq.(3.8). Therefore, in this situation, it is reasonable to choose \(\mathsf{k}=-\frac{1}{4}\mathrm{e}^{\frac{1}{4}}\), which is the closest to those values.
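With these parameter values fixed, the defining transcendental equations can be solved numerically. The sketch below is ours (not part of the paper); it uses standard root bracketing and should reproduce the values quoted in the next paragraph.

```python
# Sketch (ours, not from the paper): solving the force balance equations with the
# parameter values fixed above. The roots should reproduce the values quoted in
# the text: q ~ 0.566, v = 1/12 and vbar_0 ~ 0.829.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

n = 3.057
k = -np.exp(0.25) / 4.0   # k = -(1/4) e^{1/4}

# Eq.(3.6): n(q - 1) + e^{q/2} = 0, solved on the interval [0, 1]
q = brentq(lambda x: n * (x - 1.0) + np.exp(0.5 * x), 1e-6, 1.0)

# Eq.(3.8): 1 + 3k(1 + 4v) e^{-3v} = 0. For k = -(1/4)e^{1/4} the left-hand side
# has a tangent (double) root, so we locate it as a minimum rather than a sign change.
res = minimize_scalar(lambda x: 1.0 + 3.0 * k * (1.0 + 4.0 * x) * np.exp(-3.0 * x),
                      bounds=(1e-6, 1.0), method="bounded")
v = res.x   # should be 1/12, with res.fun ~ 0

# Eq.(3.26): 3k(1 + 4 vbar) + n(1 - vbar) e^{(5/2) vbar} = 0
vb0 = brentq(lambda x: 3.0 * k * (1.0 + 4.0 * x) + n * (1.0 - x) * np.exp(2.5 * x), 0.5, 0.99)

print(f"q = {q:.3f}, v = {v:.4f}, vbar_0 = {vb0:.3f}")
```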
Footnote 14: Note that \(\mathsf{k}=-0.102\) is a solution to the equation \(\alpha_{QQ}(\mathsf{k})=\frac{1}{2}\alpha_{Q\bar{Q}}\), which follows from the phenomenological rule \(E_{QQ}(\ell)=\frac{1}{2}E_{Q\bar{Q}}(\ell)\) in the limit \(\ell\to 0\).

Having fixed the model parameters, we can immediately perform some simple but important calculations. First, let's check that \(\mathrm{q}>\mathrm{v}\), so that our construction of the string configurations makes sense. From Eqs.(3.6) and (3.8), we find that \(\mathrm{q}=0.566\) and \(\mathrm{v}=\frac{1}{12}\), as desired. In addition, from (3.26) and (3.30), we get \(\bar{\mathrm{v}}_{0}=0.829\) and \(\bar{\mathrm{v}}_{1}=0.930\). Second, given \(\mathrm{v}\), one can immediately estimate the smallest separation between the heavy quarks for the configuration shown in Figure 7(c). This gives \(\ell(\mathrm{v})=0.106\,\text{fm}\). It is quite surprising that the antiquark-diquark-diquark scheme arises already at such small separations.

Plotting \(E_{Q\bar{Q}qqq}\) as a function of \(\ell\) is now a straightforward task. The result is shown in Figure 9 on the left. From this Figure it is seen that all the pieces of the function are smoothly glued together. Additionally, \(E_{Q\bar{Q}qqq}(\ell)\) approximates a linear function for separations greater than \(0.45\,\text{fm}\). For future reference, we note that the function \(E_{Q\bar{Q}qqq}(\ell)\) is better approximated by (3.37) than by (3.34) near \(\ell=0.20\,\text{fm}\). This is illustrated in Figure 9 on the right.

Figure 9: \(E_{Q\bar{Q}qqq}\) vs \(\ell\). In this and subsequent Figures, we set \(c=0.623\,\text{GeV}\). Left: The result of plotting the piecewise function \(E_{Q\bar{Q}qqq}\). The dashed curve corresponds to the configuration shown in Figure 6, while the solid curve corresponds to the remaining configurations, for which the antiquark-diquark-diquark scheme holds. Right: The function near \(\ell=0.20\,\text{fm}\). The dotted and dashed curves correspond to the approximations (3.34) and (3.37), respectively.

### The disconnected configurations (d)-(f)

We begin by considering configuration (d), which is obtained by adding a \(q\bar{q}\) pair (pion) to configuration (a). The pion is placed in the interior at \(r=r_{q\bar{q}}\), resulting in the configuration shown in Figure 10(d). This configuration can be interpreted as a hadro-quarkonium state: a \(Q\bar{Q}\) pair in a pion-nucleon cloud. Although there are no lattice calculations available for this case, we will assume that adding a pion and averaging over its position leads to an energy increase by \(E_{q\bar{q}}\). Thus, the total energy is

\[E^{(\rm d)}=E^{(\rm a)}+E_{q\bar{q}}=E_{Q\bar{Q}}+E_{3q}+E_{q\bar{q}}\,. \tag{3.40}\]

\(E_{q\bar{q}}\) was computed in [26] with the result

\[E_{q\bar{q}}=2\mathsf{n}\sqrt{\mathsf{g}\sigma}\,. \tag{3.41}\]

Here \(\sigma\) is the string tension. Note that \(r_{3q}<r_{q\bar{q}}=1/\sqrt{\mathsf{s}}\).

Similarly, configuration (e) is obtained by adding a \(q\bar{q}\) pair to configuration (b). In five dimensions, the corresponding configuration is shown in Figure 10(e). It can be interpreted as a pair of heavy-light hadrons in a pion cloud.
By the same assumption that we have made previously in our treatment of configuration (d), the energy is given by

\[E^{(\rm e)}=E^{(\rm b)}+E_{q\bar{q}}=E_{Q\bar{q}}+E_{Qqq}+E_{q\bar{q}}\,. \tag{3.42}\]

Finally, for configuration (f) we expect

\[E^{(\rm f)}=2E_{Q\bar{q}}+E_{3q}\,, \tag{3.43}\]

as in [8]. This configuration can be interpreted as a pair of heavy-light mesons surrounded by a nucleon cloud. It is worth noting that it may arise from configuration (a) through string breaking in the \(Q\bar{Q}\) pair (see Figure 21).

Figure 10: Configurations (d), (e), and (f) in five dimensions.

### What we have learned

It is instructive to see how the energies of the configurations mentioned above depend on the separation between the heavy quark-antiquark pair. In Figure 11 we plot those for our parameter values. It is obvious from the plot that the energy of the ground state is determined by the contributions from configurations (a) and (b). Therefore, we have \(V_{0}=\min\{E_{Q\bar{Q}}+E_{3q},\,E_{Qqq}+E_{Q\bar{q}}\}\). The potential interpolates between \(E_{Q\bar{Q}}+E_{3q}\) at small separations and \(E_{Qqq}+E_{Q\bar{q}}\) at larger ones. An important observation is that the transition between these two regimes occurs at a relatively small length scale of about \(0.2\,\mathrm{fm}\). To quantify this observation, we define a critical separation distance by

\[E_{Q\bar{Q}}(l_{Qq})+E_{3q}=E_{Qqq}+E_{Q\bar{q}}\,. \tag{3.44}\]

It is natural to interpret \(l_{Qq}\) as a scale that distinguishes the descriptions in terms of the hadro-quarkonium state and the hadronic molecule. From a string theory perspective, the transition occurs through string reconnection: \(Q\bar{Q}+qqq\to Qqq+q\bar{Q}\), as sketched in Figure 4(a). Due to the small value of \(l_{Qq}\), we can solve the equation approximately by neglecting all but the first three terms in \(E_{Q\bar{Q}}\). With the help of (B.2), the solution can be written as

\[l_{Qq}\approx\frac{1}{2\boldsymbol{\sigma}_{Q\bar{Q}}}\Big(E_{Qqq}+E_{Q\bar{q}}-E_{3q}-2c\Big)+\sqrt{\frac{\alpha_{Q\bar{Q}}}{\boldsymbol{\sigma}_{Q\bar{Q}}}+\frac{1}{4\boldsymbol{\sigma}_{Q\bar{Q}}^{2}}\Big(E_{Qqq}+E_{Q\bar{q}}-E_{3q}-2c\Big)^{2}}\,. \tag{3.45}\]

An important fact is that the critical separation distance is independent of \(c\), as follows from the expressions for \(E_{Qqq}\) and \(E_{Q\bar{q}}\). Let's make a simple estimate of \(l_{Qq}\). For our chosen parameter values, we have

\[l_{Qq}\approx 0.241\,{\rm fm}\,. \tag{3.46}\]

Thus, this simple estimate suggests that \(l_{Qq}\) is indeed of order \(0.2\,{\rm fm}\), as expected.

Before proceeding further, let us mention one point about the plots. As seen from the Figure, for separations greater than about \(0.4\,{\rm fm}\), the difference between the plots for \(E_{Q\bar{Q}}+E_{3q}\) and \(E_{Q\bar{Q}qqq}\) becomes negligible. To see which configuration, (a) or (c), has a higher energy, we can compare their behavior at large \(\ell\). Using (3.36) and (B.4), we get

\[\Delta=E_{Q\bar{Q}qqq}-E_{Q\bar{Q}}-E_{3q}=2\mathsf{g}\sqrt{\mathsf{s}}\big(I_{0}-I_{Q\bar{Q}qqq}\big)-E_{3q}\,. \tag{3.47}\]

Combining this with (3.2) yields \(\Delta\approx 6\,{\rm MeV}\) for our parameter values, indicating that configuration (c) has a higher energy than configuration (a).
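Equation (3.45) is simply the positive root of the quadratic equation \(\boldsymbol{\sigma}_{Q\bar{Q}}\,l^{2}-\Delta E\,l-\alpha_{Q\bar{Q}}=0\) obtained from the Coulomb-plus-linear truncation of \(E_{Q\bar{Q}}\). The small helper below is ours (argument names are hypothetical); the same root formula applies to the analogous critical separation in the next subsection, with the appropriate energy offset.

```python
# Sketch (ours): the critical separation of Eq. (3.45) as the positive root of
# sigma * l^2 - dE * l - alpha = 0, obtained by keeping only the Coulomb, linear and
# constant terms of E_QQbar. The same helper applies to Eq. (3.49) below.
import math

def critical_separation(alpha, sigma, dE):
    """Positive root l = dE/(2 sigma) + sqrt(alpha/sigma + dE^2/(4 sigma^2)).

    alpha : Coulomb coefficient of the short-distance potential
    sigma : effective string tension (coefficient of the linear term)
    dE    : energy offset, e.g. E_Qqq + E_Qqbar - E_3q - 2c for Eq. (3.45)
    """
    return dE / (2.0 * sigma) + math.sqrt(alpha / sigma + dE**2 / (4.0 * sigma**2))
```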
With this in mind, we can formally define the B-O potential for the first excited state as \(V_{1}=\min\{E_{\rm Q\bar{q}\bar{q}qq},E_{\rm Qqq}+E_{\rm\bar{q}\bar{q}},E_{ \rm Q\bar{q}}+E_{\rm 3_{q}},2E_{\rm\bar{q}\bar{q}}+E_{\rm 3_{q}}\}\). This definition leads to the emergence of three distinct scales that separate different configurations, or in other words different descriptions. The first is a scale which refers to the process of string junction annihilation: \(Q\bar{Q}qq\to Qqq+q\bar{Q}\). In this case, we define a critical separation distance by \[E_{\rm Q\bar{q}qq}(\boldsymbol{\ell}_{\rm Q\bar{q}qq})=E_{\rm Q \bar{q}}+E_{\rm Q\bar{q}}\,. \tag{3.48}\] The scale \(\boldsymbol{\ell}_{\rm Q\bar{q}qq}\), with a value of about \(0.2,{\rm fm}\), distinguishes the descriptions in terms of the compact pentaquark state and hadronic molecule. Using the asymptotic approximation (3.37), we find \[\boldsymbol{\ell}_{\rm Q\bar{q}qq}\approx\frac{1}{2\boldsymbol{ \sigma}_{\rm Q\bar{q}}}\Big{(}2E_{\rm Q\bar{q}}-E_{\rm Q\bar{q}q}-c\Big{)}+ \sqrt{\frac{\alpha_{\rm Q\bar{q}}}{\boldsymbol{\sigma}_{\rm Q\bar{q}}}+\frac{1} {4\boldsymbol{\sigma}_{\rm Q\bar{q}}^{2}}\Big{(}2E_{\rm Q\bar{q}}-E_{\rm Qqq}- c\Big{)}^{2}}\,. \tag{3.49}\] The same argument that we have already given for \(l_{\rm Q_{q}}\) shows that \(\boldsymbol{\ell}_{\rm Q\bar{q}qq}\) is independent of \(c\). Note that the above expression is identical to that obtained in [24] for the \(QQ\bar{q}\bar{q}\) system.15 Because of this, the same estimate gives Figure 11: Various \(E\) vs \(\ell\) plots. Here we assume that \(E_{\rm Q\bar{q}}=E_{\rm q\bar{q}}\). \[\mathbf{\ell}_{\rm qQqqn}\approx 0.184\,{\rm fm}\,. \tag{3.50}\] The second scale is related to the process of string reconnection, which we have just discussed above, but now in the opposite direction: \(Qqq+q\bar{Q}\to Q\bar{Q}+3q\). Therefore the formula (3.45) for the critical separation distance holds true. Finally, the third scale arises from string breaking: \(Q\bar{Q}+3q\to Q\bar{q}+q\bar{Q}+3q\). Here we assume that a nucleon cloud has little impact on this process and, as a consequence, the formula (B.8) for the string breaking distance \(\ell_{\rm q\bar{Q}}\) remains valid. If so, then \(\ell_{\rm q\bar{Q}}=1.22\,{\rm fm}\) (for more on this point, see Appendix B). ## IV Other elementary excitations ### Preview The assumptions made about excited states in Section II are oversimplified for higher-lying B-O potentials. For instance, when constructing configurations for excited states, one must consider excited strings such as the one depicted in Figure 12(a). These strings represent a type of gluonic excitations that has been studied in lattice QCD, but only within the \(Q\bar{Q}\) system [28]. In the context of the current model, one of these excitations (type \(\Sigma_{u}^{-}\)) was modeled in [29], and later, it was considered within the \(Q\bar{Q}q\bar{q}\) system in [26]. Additionally, glueballs must be included as another kind of gluonic excitations [30]. Two of the simplest examples of such color-singlet states are sketched in Figures 12(b) and (b'). The former represents a closed string, while the latter involves a pair of baryon vertices connected by open strings. These gluonic excitations are natural from the perspective of string theory in four dimensions [10]. However, in ten dimensions, there is a novelty related to the description of the baryon vertex as a five-brane [16], which means that we must also consider brane excitations. 
This would give rise to a set of excited vertices that represent a new type of gluonic excitations. The simplest example of such an excitation is illustrated in Figure 12(c), where the excitation is due to an open string with endpoints on the brane. At this point, one might ask what happens when a string attached to the brane breaks due to the production of a \(q\bar{q}\) pair. If so, this results in a simple picture shown in Figure 13(a). If a string (a chromoelectric flux tube) goes from a quark to an antiquark, then the difference between the numbers of in- and out-strings is equal to 3, which is precisely the number of colors. This example provides a natural definition of a baryon vertex \(V^{(1)}\), where four in- and one out-strings meet. It is straightforward to suggest future generalizations and define a vertex \(V^{(\rm N)}\) with \(N+3\) Figure 12: Some types of gluonic excitations. Figure 13: Generalized baryon vertices: \(V^{(1)}\) (left) and \(V^{(\rm N)}\) (right). in-strings and \(N\) out-strings, as sketched in Figure 13(b).16 In this notation, the baryon vertex of Sec.II corresponds to \(V^{(0)}\). Footnote 16: We have learned that Cobi Sonnenschein is also considering such vertices in QCD. ### Implications for pentaquarks Finding evidence for generalized vertices in QCD, particularly for \(V^{(1)}\), would be highly interesting. Perhaps the simplest way to achieve this is to examine hybrid potentials in the \(QQQ\) quark system through lattice simulations. In the present context, \(V^{(1)}\) produces an additional connected pentaquark configuration as that shown in Figure 14(a).17 Footnote 17: A similar configuration also occurs for the \(QQqq\bar{q}\) system. See Figure 14(b). Our aim for this subsection is to construct a five-dimensional counterpart of this configuration. If one places the above configuration on the boundary of five-dimensional space, a gravitational force pulls the light quarks and strings towards the interior. As a result, the configuration takes the form shown in Figure 15, which is supposed to be the configuration of lowest energy due to its high degree of symmetry. The total action governing this configuration is \[S=\sum_{i=1}^{5}S^{(i)}_{\text{\tiny{NG}}}+S^{(1)}_{\text{\tiny{vert}}}+3S_{ \text{q}}\,. \tag{4.1}\] For what follows, we will assume that the action for the vertex \(V^{(1)}\) is also given by the five-brane world volume action (2.3), specifically \(S^{(1)}_{\text{\tiny{vert}}}=S_{\text{\tiny{vert}}}\). From this starting point, the analysis proceeds in an obvious manner. However, a quicker way to proceed is to use the results from Appendix C. Using those, we can get the corresponding formulas by rescaling \(\mathsf{k}\to\frac{1}{3}\mathsf{k}\). So, we have Figure 14: The generalized pentaquark configurations described by \(V^{(1)}\) for the \(Q\bar{Q}qqq\) and \(QQqq\bar{q}\) systems. Figure 15: A possible configuration of Figure 14(a) in five dimensions. \[E^{(1)}_{\rm QQ_{qqqq}}=3{\sf g}\sqrt{3}\bigg{(}\frac{2}{3}{\cal E}^{+}(\alpha,v)+{ \cal Q}({\rm q})-{\cal Q}(v)+{\sf k}\frac{{\rm e}^{-2v}}{\sqrt{v}}+{\sf n}\frac{{ \rm e}^{\frac{1}{2}{\sf q}}}{\sqrt{\rm q}}\bigg{)}+2c\,, \tag{4.2}\] with the separation distance \(\ell\) given by (3.12). The tangent angle \(\alpha\) is expressed in terms of \(v\) as \[\sin\alpha=\frac{3}{2}\Big{(}1+{\sf k}(1+4v){\rm e}^{-3v}\Big{)}\,. \tag{4.3}\] The parameter \(v\) ranges from \(0\) to \({\rm q}\). 
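The statement made in the next paragraph can be checked numerically. The sketch below is ours; it evaluates the right-hand side of Eq. (4.3) for the value of \(\mathsf{k}\) adopted in Sec. III and shows that it never drops below 1, so \(\sin\alpha=1\) can only be reached at \(v=1/12\).

```python
# Sketch (ours): a quick check of Eq. (4.3). For k = -(1/4) e^{1/4} the right-hand
# side is bounded below by 1, with the minimum reached at v = 1/12, so sin(alpha) = 1
# (alpha = pi/2) is the only admissible solution, as discussed below.
import numpy as np

k = -np.exp(0.25) / 4.0
v = np.linspace(1e-4, 1.0, 100001)
rhs = 1.5 * (1.0 + k * (1.0 + 4.0 * v) * np.exp(-3.0 * v))

print(rhs.min(), v[rhs.argmin()])   # ~1.0, reached near v ~ 1/12 = 0.0833...
```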
A simple analysis shows that for \({\sf k}=-\frac{1}{4}{\rm e}^{\frac{1}{2}}\), Eq.(4.3) has only one solution, which is \(v={\rm v}\). In this case \(\alpha=\pi/2\), \(\ell=0\), and \(E^{(1)}_{\rm QQ_{qqqq}}=3.022\,{\rm GeV}\). Thus, the present configuration is subleading to the pentaquark configuration (c) of Sec.III. In fact, it may be spurious as it only exists at zero-separation of the quark-antiquark pair, where the string models are not reliable. We can proceed further in the fashion just described in Sec.III. For \(v\) slightly larger than \({\rm q}\), the configuration transforms into the one shown in the left panel of Figure 16, where strings (3)-(5) collapse to a point. The remaining strings join in the interior that leads to the formation of cusp at \(r=r_{v}\). For this case, the total action reduces to \[S=\sum_{i=1}^{2}S^{(i)}_{\rm NG}+S_{\rm vert}+3S_{\rm q}\,. \tag{4.4}\] We take a shortcut to the desired results instead of directly analyzing (4.4). The expression for \(\ell\) remains unchanged, and is given by (3.12), while the expression for the energy is obtained by setting \({\rm q}=v\) in (4.2). So, we get \[\ell=\frac{2}{\sqrt{\sf s}}{\cal L}^{+}(\alpha,v)\,,\qquad E^{(1)}_{\rm QQ_{qqqq }}=3{\sf g}\sqrt{\sf s}\bigg{(}\frac{2}{3}{\cal E}^{+}(\alpha,v)+\frac{{\rm k} {\rm e}^{-2v}+{\rm n}{\rm e}^{\frac{1}{2}v}}{\sqrt{v}}\bigg{)}+2c\,. \tag{4.5}\] The angle \(\alpha\) is now determined from the equation \[\sin\alpha=\frac{3}{2}\Big{(}{\sf k}(1+4v){\rm e}^{-3v}+{\sf n}(1-v){\rm e}^{ -\frac{1}{2}v}\Big{)}\,, \tag{4.6}\] which is obtained from Eq.(3.24) by rescaling \({\sf k}\to\frac{1}{2}{\sf k}\) and \({\sf n}\to\frac{3}{2}{\sf n}\). An important fact about this equation is that the right-hand side, as a function of \(v\), decreases as \(v\) increases on the interval \([{\rm q},1]\), given the values of \({\sf k}\) and \({\sf n}\) set in Sec. III. It takes the value \(1\) at \(v=\hat{\rm v}\), where \(\hat{\rm v}\) satisfies the equation \[{\sf k}(1+4v){\rm e}^{-3v}+{\sf n}(1-v){\rm e}^{-\frac{1}{2}v}=\frac{2}{3}\,. \tag{4.7}\] Figure 16: The generalized pentaquark configuration for small (left) and large (right) heavy quark separations. The separation distance \(\ell\) becomes zero at \(v=\check{\mathrm{v}}\) because \(\cos\alpha=0\). Additionally, the right-hand side of (4.6) is zero at \(v=\check{\mathrm{v}}_{{}_{0}}\) if \(\check{\mathrm{v}}_{{}_{0}}\) is a solution to the equation \[\mathsf{k}(1+4v)\mathrm{e}^{-3v}+\mathsf{n}(1-v)\mathrm{e}^{-\frac{1}{2}v}=0\,. \tag{4.8}\] At this point, the cusp disappears as \(\cos\alpha=1\). 
If these solutions exist, the energy of the generalized pentaquark configuration can be expressed parametrically as \(\ell=\ell(v)\) and \(E^{(1)}_{\rm Q\bar{Q}qqq}=E^{(1)}_{\rm Q\bar{Q}qqq}(v)\). Numerically, \(\check{\rm v}\approx 0.625\), \(\check{\rm v}_{0}\approx 0.953\) and \(\check{\rm v}_{1}\approx 0.966\), based on the parameter values outlined in Sec. III. Next, in the left panel of Figure 17, we plot \(E_{\rm Q\bar{Q}qqq}\) and \(E^{(1)}_{\rm Q\bar{Q}qqq}\) versus \(\ell\). As seen from this Figure, the standard pentaquark configuration is favorable at small separations. However, for separations greater than approximately \(0.5\,{\rm fm}\) the difference between the plots becomes negligible. To determine which configuration has a lower energy, we examine the large-\(\ell\) behavior of \(E_{\rm Q\bar{Q}qqq}\) and \(E^{(1)}_{\rm Q\bar{Q}qqq}\). Using the formulas (3.36) and (4.14), we find \[\Delta=E^{(1)}_{\rm Q\bar{Q}qqq}-E_{\rm Q\bar{Q}qqq}=2{\sf g}\sqrt{\sf s}\big(I_{\rm Q\bar{Q}qqq}-I^{(1)}_{\rm Q\bar{Q}qqq}\big)-E_{0}\,. \tag{4.15}\] A simple estimate yields \(\Delta\approx-5\,{\rm MeV}\). This implies that the generalized pentaquark configuration is energetically favorable at large separations. Further analysis reveals that the transition between the string configurations occurs at approximately \(0.679\,{\rm fm}\). It can be interpreted as string junction fusion, as sketched in Figure 4(d). It is easy to perform the same analysis for the \(QQqq\bar{q}\) system. In the right panel of Figure 17, we present the plot of \(E_{\rm QQqq\bar{q}}\) and \(E^{(1)}_{\rm QQqq\bar{q}}\). Here \(E_{\rm QQqq\bar{q}}\) represents the energy of the standard pentaquark configuration (see Figure 2 in [8]). The resulting picture exhibits qualitative similarities to what we got for the \(Q\bar{Q}qqq\) system. However, two quantitative differences are noticeable. First, the gap is now about \(-47\,{\rm MeV}\), and thus visible. Second, the transition occurs at a smaller separation, around \(0.278\,{\rm fm}\). To summarize, at large separations the generalized pentaquark configurations described by the vertex \(V^{(1)}\) have lower energy than the standard ones, but at small separations, the standard ones prevail. The transition between these configurations occurs due to the shrinking of strings, which eventually leads to string junction fusion: \(V\bar{V}V\to V^{(1)}\). From the viewpoint of string theory, this is an instance of brane fusion, when several branes coalesce into one (see, e.g., [31]).18 Footnote 18: It is worth noting that junction annihilation can be interpreted as the process of brane annihilation.
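The transition distances quoted above (about \(0.679\,{\rm fm}\) and \(0.278\,{\rm fm}\)) are obtained by locating the point where the two energy curves cross. A minimal, model-agnostic sketch of that step is given below; the input arrays are placeholders to be filled with the parametric curves of this section and of Sec. III / [8].

```python
# Illustrative helper: find the separation at which two sampled energy curves
# cross (sign change of their difference, refined by linear interpolation).
import numpy as np

def crossing(ell, E_standard, E_generalized):
    """All inputs: 1D arrays sampled on the same grid of separations `ell`."""
    d = np.asarray(E_generalized) - np.asarray(E_standard)
    idx = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0]
    if len(idx) == 0:
        return None                      # no transition in the sampled range
    i = idx[0]
    t = d[i] / (d[i] - d[i + 1])         # linear interpolation of the zero
    return ell[i] + t * (ell[i + 1] - ell[i])
```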
### Beyond the Born-Oppenheimer approximation In the real world, charm quarks are heavy, but not excessively so, which necessitates the inclusion of finite-mass corrections. If these corrections are substantial, the B-O approximation becomes invalid, and one has to treat light and heavy quarks on an equal footing. If so, then the \(c\bar{c}qqq\) system can be thought of as the \(3q\) system with an added \(c\bar{c}\) pair, which can be considered an excitation. In the case of \(uud\), the latter scenario presents a longstanding issue concerning the existence of intrinsic charm quarks within the proton [32].19 Footnote 19: This matter remains a subject of debate in the literature. See, for example, [33] and references therein. In our discussion, we will focus solely on one aspect: the effective quarkonium-nucleon interaction. According to a proposal in [34], this interaction is of van der Waals type, associated with multiple gluon exchange. Figure 17: Energies of the pentaquark configurations for the \(Q\bar{Q}qqq\) (left) and \(QQqq\bar{q}\) (right) systems. The corresponding (non-relativistic) potential is expected to be of the Yukawa form \[V_{(\mathrm{Q\bar{Q}})\mathrm{A}}=-\frac{\alpha}{L}\mathrm{e}^{-\mu L}\,. \tag{4.16}\] We are now in a position to propose a string interpretation of this effective potential.20 First of all, it is clear that such a potential is meaningful only if a two-cluster decomposition takes place. In this case, we can define a separation distance \(L\) between a nucleon and a \(Q\bar{Q}\) pair as the distance between a baryon vertex and the center of mass of the pair. For small \(L\), but still sufficiently larger than the separation distance \(\ell\) of the \(Q\bar{Q}\) pair, the corresponding string configuration is presented in Figure 18(a).21 Footnote 20: Since it is also applicable to a \(b\bar{b}\) pair, we use a general notation \(Q\) to denote the \(c\) and \(b\) quarks. The contribution to the potential (binding energy) arises from two strings stretched between the vertex and heavy quarks. Each string gives rise to an attractive Coulomb term.22 This is the desired result for the effective potential at small separation distances. An important point to note in this argument is that the pair is not a color singlet since the string configuration is connected. The question then arises: what happens for larger values of \(L\)? Naively, the string contribution leads to a linear potential, but this is not the full story due to string reconnection. In fact, at a fixed value of \(\ell\), the separation between the two oppositely oriented strings decreases as \(L\) increases. At some large value of \(L\) string reconnection occurs. As a result, the string configuration becomes disconnected, as shown in Figure 18(b). Hence the potential flattens, which is consistent with the expected weakening of the quarkonium-nucleon interaction. Footnote 22: It is a special case of the configuration shown in Figure 14(a). The potential \(V_{(\mathrm{Q\bar{Q}})\mathrm{A}}\) interpolates between the energies of the two string configurations. Formally, it can be defined as the smallest eigenvalue of a model Hamiltonian.
The Hamiltonian is represented by a \(2\times 2\) matrix \[\mathcal{H}(L)=\begin{pmatrix}E^{(1)}_{\rm Q\bar{Q}qqq}(L)&\Theta_{\rm A}\\ \Theta_{\rm A}&E_{\rm Q\bar{Q}}+E^{(1)}_{\rm 3q}\end{pmatrix}\,, \tag{4.17}\] where the diagonal elements correspond to the energies of the configurations, and the off-diagonal element describes the mixing between those. We conclude this discussion with some remarks. (1) Our scenario is based on the generalized baryon vertex \(V^{(1)}\). This assumes that the nucleon is in an excited state, as can be seen from Figure 18(b). (2) For separations of order \(\ell\) or smaller, the dominant string configuration is depicted in Figure 2, which violates cluster decomposition. (3) Since strings are gluonic objects, what we have just discussed pertains to the pure gluonic contributions to the effective potential. (4) The interaction between the hadrons of Figure 18(b) is encoded in \(\Theta_{\rm A}\). (5) We have only provided a brief outline of the string interpretation of the quarkonium-nucleon interaction, leaving the details for future work. Figure 18: Sketched here are string configurations describing the quarkonium-nucleon interaction. ## V Further comments ### An issue with \(E_{\rm q\bar{q}}\) and \(E_{\rm 3q}\) Drawing conclusions from the plots of Figure 11 requires a caveat. As already noted in [8; 26], the rest energies of the pion and nucleon calculated from the expressions (3.41) and (3.2) are \(E_{\rm q\bar{q}}=1.190\,\)GeV and \(E_{\rm 3q}=1.769\,\)GeV. These values differ notably from the values of \(280\,\)MeV and \(1.060\,\)GeV used in the lattice calculations [18].23 The issue is that the effective string model in its current form still does not accurately describe light hadrons because it was originally developed for applications in the heavy quark (static) limit. In the context of string theory on AdS-like geometries, this implies that at least one quark needs to be infinitely massive and positioned on the boundary of five-dimensional space. Footnote 23: Note that in the case of two flavors \(E_{\rm 3q}=1.060\,\)GeV at \(E_{\rm q\bar{q}}=285\,\)MeV [35]. To some extent, this issue can be addressed by thinking of \(E_{\rm q\bar{q}}\) and \(E_{\rm 3q}\) as model parameters [8; 26]. For \(E_{\rm q\bar{q}}=280\,\)MeV and \(E_{\rm 3q}=1.060\,\)GeV, the corresponding \(E\)'s are plotted in Figure 19 on the left. The main conclusions that can be drawn from this result are as follows: Configurations (a) and (b) remain the configurations with the lowest energies. The pentaquark configuration (c) now has a higher energy than configuration (d), leading to an interchange of the corresponding graphs compared to Figure 11. A similar interchange also occurs between the graphs for configurations (e) and (f). It is worth noting that, at almost physical pion mass, an argument based on phenomenological models [36] can be made for the validity of \(E^{\rm(c)}>E^{\rm(d)}\): adding a pion to configuration (a) results in an energy cost of \(145\,\)MeV, whereas adding only one string junction leads to an energy cost of \(165\,\)MeV. For the lowest B-O potential \(V_{0}\), the most visible effect of the change in \(E_{\rm q\bar{q}}\) is that string reconnection occurs at a much larger \(\ell\), about \(0.8\,\)fm. In this case, Eq.(3.44) can be approximately solved by using the large-\(\ell\) expansion for \(E_{\rm Q\bar{Q}}\).
So, with the help of (B.4), we find \[l_{\rm Qq}\approx\frac{1}{\sigma}\Big(E_{\rm Qqq}+E_{\rm q\bar{Q}}-2c-E_{\rm 3q}+2{\sf g}\sqrt{\sf s}\,I_{0}\Big)\,. \tag{5.1}\] In fact, the term \(2c\) cancels out due to the \(c\)-dependent contributions coming from \(E_{\rm Qqq}\) and \(E_{\rm Q\bar{q}}\). A numerical estimate shows that \(l_{\rm Qq}\approx 0.816\,\)fm. The next B-O potential is now defined by \(V_{1}=\min\{E_{\rm Q\bar{Q}}+E_{\rm 3q}+E_{\rm q\bar{q}},\,E_{\rm Qqq}+E_{\rm q\bar{Q}},\,E_{\rm Q\bar{Q}}+E_{\rm 3q},\,E_{\rm Qqq}+E_{\rm q\bar{Q}}+E_{\rm q\bar{q}}\}\). The essential feature of \(V_{1}\) is the emergence of three length scales which separate different configurations. The first scale refers to the process: \(Q\bar{Q}+3q+q\bar{q}\to Qqq+q\bar{Q}\). In fact, it consists of two subprocesses: virtual pair annihilation and string reconnection. Figure 19: Left: The \(E\)'s vs \(\ell\). Right: Sketched here are the three low-lying B-O potentials of the \(Q\bar{Q}qqq\) system. The dashed lines indicate the \(E\)'s. We define a critical separation distance by \[E_{\rm Q\bar{Q}}(l_{\rm Qq}^{-})+E_{\rm 3q}+E_{\rm q\bar{q}}=E_{\rm Qqq}+E_{\rm q\bar{Q}}\,. \tag{5.2}\] In \(l_{\rm Qq}^{-}\), the upper subscript refers to virtual pair annihilation. As seen from the Figure, \(l_{\rm Qq}^{-}\) is of order \(0.6\,\)fm. Within this range of \(\ell\) values, the function \(E_{\rm Q\bar{Q}}(\ell)\) can be approximated by (B.4). If so, then a simple calculation shows that \[l_{\rm Qq}^{-}\approx\frac{1}{\sigma}\Big(E_{\rm Qqq}+E_{\rm q\bar{Q}}-2c-E_{\rm 3q}-E_{\rm q\bar{q}}+2{\sf g}\sqrt{\sf s}\,I_{0}\Big)\,. \tag{5.3}\] It is interesting to estimate the value of \(l_{\rm Qq}^{-}\). For our parameter values, this gives \(l_{\rm Qq}^{-}\approx 0.560\,\)fm. The second scale is related to the process of string reconnection: \(Qqq+q\bar{Q}\to Q\bar{Q}+3q\). This process is the inverse of the process of string reconnection discussed above for \(V_{0}\). Because of this, the formula (5.1) holds true. Finally, the third scale is due to the process: \(Q\bar{Q}+3q\to Qqq+q\bar{Q}+q\bar{q}\). It also consists of two subprocesses: virtual pair creation and string reconnection. In this case, we define a critical distance by \[E_{\rm Q\bar{Q}}(l_{\rm Qq}^{+})+E_{\rm 3q}=E_{\rm Qqq}+E_{\rm q\bar{Q}}+E_{\rm q\bar{q}}\,, \tag{5.4}\] where the upper subscript refers to virtual pair creation. Using the asymptotic formula for \(E_{\rm Q\bar{Q}}\) again, we obtain \[l_{\rm Qq}^{+}\approx\frac{1}{\sigma}\Big(E_{\rm Qqq}+E_{\rm q\bar{Q}}+E_{\rm q\bar{q}}-2c-E_{\rm 3q}+2{\sf g}\sqrt{\sf s}\,I_{0}\Big)\,. \tag{5.5}\] A simple estimate yields \(l_{\rm Qq}^{+}\approx 1.073\,\)fm. As seen from the above Figure, the potential \(V_{2}\) is described in terms of the energies of six different configurations. Therefore, we have \(V_{2}=\min\{E_{\rm Q\bar{Q}qqq},\,E_{\rm Qqq}+E_{\rm q\bar{Q}},\,E_{\rm Q\bar{Q}}+E_{\rm 3q}+E_{\rm q\bar{q}},\,E_{\rm Qqq}+E_{\rm q\bar{Q}}+E_{\rm q\bar{q}},\,E_{\rm Q\bar{Q}}+E_{\rm 3q},\,2E_{\rm Q\bar{q}}+E_{\rm 3q}\}\). There are five emerging scales that we have already analyzed, so we can discuss them briefly. The first transition near \(\ell=0.184\,\)fm is due to string junction annihilation, as discussed in Sec.III. The corresponding critical distance is given by (3.49).
The second scale refers to the process \(Qqq+q\bar{Q}\to Q\bar{Q}+3q+q\bar{q}\) which is the inverse of the process we have discussed in the case of \(V_{1}\). Because of this, the formula (5.3) is valid. The next transition near \(\ell=0.816\,\)fm is due to string reconnection, with the critical separation distance given by the expression (5.1). The fourth scale refers to the process \(Qqq+q\bar{Q}+q\bar{q}\to Q\bar{Q}+3q\). It is the inverse of what we have discussed for \(V_{1}\), so the formula (5.5) is applicable for estimating the critical separation distance. Finally, the fifth scale is set by string breaking in the \(Q\bar{Q}\) system, with the corresponding formula in Appendix B. Two important conclusions that can be drawn from this analysis are: 1) The standard pentaquark configuration provides a dominant contribution to the second excited B-O potential for separations less than \(0.2\,\)fm. In that sense, if pentaquarks exist for \(V_{2}\), they are compact. 2) The generalized pentaquark configuration does not contribute to the three low-lying B-O potentials. It becomes relevant for higher excited potentials. ### More on the potentials By understanding the string configurations, we can gain further insight into the three low-lying B-O potentials. In doing so, we follow the approach of lattice QCD, which is commonly used to investigate string breaking in the \(Q\bar{Q}\) system [9], and consider a model Hamiltonian which for the problem at hand is a \(6\times 6\) matrix. Explicitly, \[\mathcal{H}(\ell)=\begin{pmatrix}E_{{}_{\rm Q\bar{Q}}}(\ell)+E_{{}_{\rm 3q}}&E_{{}_{ \rm Qqq}}+E_{{}_{\rm\bar{q}\bar{i}}}&\\ &E_{{}_{\rm Q\bar{Q}}}(\ell)+E_{{}_{\rm 3q}}+E_{{}_{\rm\bar{q}\bar{i}}}&\Theta_{ij} \\ \Theta_{ij}&E_{{}_{\rm Q\bar{i}}}+E_{{}_{\rm Qqq}}+E_{{}_{\rm\bar{q}\bar{i}}}& \\ &E_{{}_{\rm Q\bar{Q}qq}}(\ell)&2E_{{}_{\rm Q\bar{q}}}+E_{{}_{\rm 3q}}\end{pmatrix}\,, \tag{5.6}\] where the off-diagonal elements describe the strength of mixing between the six distinct states (string configurations). The first three low-lying B-O potentials correspond to the three smallest eigenvalues of the matrix \(\mathcal{H}\). Unlike lattice QCD, where the Hamiltonian can potentially be determined from a correlation matrix, it remains unclear how to calculate the off-diagonal elements within the effective string model. Consequently, it becomes challenging to precisely visualize the form of the potentials. However, we can gain insight from our previous experiences with other quark systems regarding the approximate magnitudes of the \(\Theta\) values near the transition points.24 By doing so, the overall picture becomes more akin to the one sketched in Figure 19 on the right. The compact pentaquark configuration predominantly contributes to the potential \(V_{2}\) for heavy quark separations smaller than \(0.2\,\mathrm{fm}\). Footnote 24: For instance, we can assume these \(\Theta\) values to be approximately \(47\,\mathrm{MeV}\), as in the \(Q\bar{Q}\) system on the lattice [18]. ### A relation among hadron masses Using Eqs.(3.5) and (3.7), we can rewrite the relation (3.21) as follows: \[E_{\mathrm{Q}\bar{\mathrm{Q}}\mathrm{y}\mathrm{q}\mathrm{q}}(\ell)=E_{ \mathrm{Q}\bar{\mathrm{q}}\bar{\mathrm{q}}}(\ell)+E_{\mathrm{Q}qq}-E_{\mathrm{ Q}\bar{\mathrm{q}}}\,. \tag{5.7}\] We assume also that \(E_{\mathrm{Q}\bar{\mathrm{Q}}\mathrm{y}\mathrm{q}}=E_{\mathrm{Q}\bar{ \mathrm{q}}\bar{\mathrm{q}}}\). 
It is tempting to apply this relation in the heavy quark limit, where contributions from the motion of the heavy quarks and spin interactions are negligible, to derive a relation among the masses of doubly-heavy-light and heavy-light hadrons. This gives \[m_{\mathrm{Q}\bar{\mathrm{Q}}\mathrm{y}\mathrm{q}\mathrm{q}}-m_{\mathrm{Q} \bar{\mathrm{q}}\bar{\mathrm{q}}}=m_{\mathrm{Q}qq}-m_{\mathrm{Q}\bar{\mathrm{ q}}}\,. \tag{5.8}\] It is necessary to keep in mind that because \(E_{\mathrm{Q}\bar{\mathrm{Q}}\mathrm{y}\mathrm{q}\mathrm{q}}\) and \(E_{\mathrm{Q}\bar{\mathrm{q}}\bar{\mathrm{q}}}\) provide the dominant contributions to the potentials at small heavy quark separations, the doubly heavy hadrons are compact in sense of heavy quark separation. Moreover, they are assumed to be described by the connected string configurations. Interestingly, a similar relation is known for the \(QQ\bar{q}\bar{q}\) quark system [37], where \[m_{\mathrm{Q}\bar{\mathrm{q}}\bar{\mathrm{q}}}-m_{\mathrm{Q}\mathrm{Q}q}=m_{ \mathrm{Q}qq}-m_{\mathrm{Q}\bar{\mathrm{q}}}\,. \tag{5.9}\] It can be derived from heavy quark-diquark symmetry [38] that also assumes that the doubly heavy hadrons are compact. It is intriguing to examine the possible phenomenological implications of (5.8). Since it is derived in the heavy quark limit, it is natural to attempt some estimates of the masses of hidden-bottom pentaquarks. For brevity, we will not discuss all such pentaquarks here, but only provide a simple estimate for the lightest one. In this case, several predictions can be found in the literature [39; 40; 41; 42]. We compare those with our estimate based on (5.8). The result is presented in Table 1. In the process we used the hadron masses from [37] which include \(m_{bb\bar{q}\bar{q}}=10.482\,\mathrm{GeV}\), \(m_{bbq}=5.619\,\mathrm{GeV}\), and \(m_{b\bar{q}}=5.28\,\mathrm{GeV}\). Although the obtained value falls within the range of the recent predictions, only the experiment can definitively ascertain the mass. ## VI Conclusions and outlook The somewhat surprising conclusions regarding the \(QQqq\bar{q}\) and \(Q\bar{Q}qqq\) pentaquark systems provide strong evidence that the ground state B-O potential is described in terms of hadro-quarkonia and hadronic molecules. The transition between the hadro-quarkonium description at small separation distances and molecular description at larger separations occurs due to the phenomenon of string reconnection at a critical separation distance of approximately \(0.8-1.0\,\mathrm{fm}\). Pentaquark states described by genuine five-quark interactions may appear in the case of higher lying B-O potentials, namely \(V_{1}\) or \(V_{2}\), depending on the specific model being used. These states are compact in the sense \begin{table} \begin{tabular}{l c c c c c} \hline \hline State & [41] & Our model & [42] & [40] & [39] \\ \hline \(b\bar{b}qqq\) & 10.605 & 10.821 & 11.062 & 11.080 & 11.137 \\ \hline \hline \end{tabular} \end{table} Table 1: Predictions of different models for the mass (in GeV) of the lightest pentaquark state. of small separations between heavy quark sources. This should be a useful guide when analyzing the nature of the doubly heavy pentaquark systems. There are still several important problems that need to be addressed in order to establish contact with the real world. Among these problems are: (i) The treatment of the off-diagonal elements \(\Theta\) in the model Hamiltonians as model parameters. 
It is highly desirable to develop a string theory technique that enables a direct computation of these elements. (ii) Due to the lack of lattice simulations, we have relied on available data at nearly the double value of the pion mass to fix the model parameters. This issue requires further attention. The recent work [43] gives just one example of needed improvements. The length scale of string junction annihilation is noticeably larger at \(m_{\pi}=146\,\)MeV. (iii) It is important to go beyond the heavy quark limit and estimate the leading \(1/m_{Q}\) corrections, especially in the case of the \(c\) quark. (iv) Finally, aside from the pentaquark systems, the generalized baryon vertices \(V^{(n)}\) and their implications for non-perturbative QCD deserve further study. ###### Acknowledgements. We would like to thank S. Aoki, S. Krippendorf, J. Sonnenschein, and P. Weisz for useful discussions. This work was supported in part by Russian Science Foundation grant 20-12-00200 in association with Steklov Mathematical Institute. ## Appendix A Notation Throughout the paper, heavy and light quarks (antiquarks) are denoted by \(Q(\bar{Q})\) and \(q(\bar{q})\) respectively, and baryon (antibaryon) vertices by \(V(\bar{V})\). Light quarks (antiquarks) are located at \(r=r_{q}(r_{\bar{q}})\), while vertices at \(r=r_{v}(r_{\bar{v}})\) unless otherwise specified. It is convenient to introduce dimensionless variables: \(q=\mbox{\rm{sr}}_{q}^{2}\), \(\bar{q}=\mbox{\rm{sr}}_{\bar{q}}^{2}\), \(v=\mbox{\rm{sr}}_{v}^{2}\), and \(\bar{v}=\mbox{\rm{sr}}_{\bar{v}}^{2}\). These variables range from 0 to 1 and indicate the proximity of the objects to the soft-wall, which is located at 1 in such units. To classify the critical separations related to the string interactions depicted in Figure 4, we use the notation \(l\) for (a), \(\ell\) for (b), and \(\ell\) for (c). In order to express the resulting formulas concisely, we utilize the set of basic functions [19]: \[{\cal L}^{+}(\alpha,x)=\cos\alpha\sqrt{x}\int_{0}^{1}du\,u^{2}\,\mbox{\rm{e}} ^{x(1-u^{2})}\Big{[}1-\cos^{2}\!\alpha\,u^{4}\mbox{\rm{e}}^{2x(1-u^{2})}\Big{]} ^{-\frac{1}{2}}\,,\qquad 0\leq\alpha\leq\frac{\pi}{2}\,,\qquad 0\leq x\leq 1\,.\] (A.1) \({\cal L}^{+}\) is a non-negative function which vanishes if \(\alpha=\frac{\pi}{2}\) or \(x=0\), and has a singular point at \((0,1)\). Assuming that \(\alpha\) is a function of \(x\) such that \(\cos\alpha(x)=\cos\alpha+\cos^{\prime}\!\alpha x+o(x)\) as \(x\to 0\), the small-\(x\) behavior of \({\cal L}^{+}\) is \[{\cal L}^{+}(\alpha,x)=\sqrt{x}\big{(}{\cal L}^{+}_{0}+{\cal L}^{+}_{1}x+o(x) \big{)}\,,\] (A.2) where \[{\cal L}^{+}_{0}=\frac{1}{4}\cos^{-\frac{1}{2}}\!\alpha\,B\big{(}\!\cos^{2}\! \alpha;\tfrac{3}{4},\tfrac{1}{2}\big{)}\,,\qquad{\cal L}^{+}_{1}=\frac{1}{4} \cos^{-\frac{3}{2}}\!\alpha\Big{(}\!\big{(}\!\cos\alpha+\cos^{\prime}\!\alpha \big{)}B\big{(}\!\cos^{2}\!\alpha;\tfrac{3}{4},-\tfrac{1}{2}\big{)}-B\big{(} \!\cos^{2}\!\alpha;\tfrac{5}{4},-\tfrac{1}{2}\big{)}\Big{)}\,,\] and \(B(z;a,b)\) is the incomplete beta function; \[{\cal L}^{-}(y,x)=\sqrt{y}\bigg{(}\int_{0}^{1}du\,u^{2}\,\mbox{\rm{e}}^{y(1-u ^{2})}\Big{[}1-u^{4}\,\mbox{\rm{e}}^{2y(1-u^{2})}\Big{]}^{-\frac{1}{2}}+\int_ {\sqrt{\frac{\pi}{y}}}^{1}du\,u^{2}\,\mbox{\rm{e}}^{y(1-u^{2})}\Big{[}1-u^{4} \,\mbox{\rm{e}}^{2y(1-u^{2})}\Big{]}^{-\frac{1}{2}}\,\bigg{)}\,,\quad 0\leq x \leq y\leq 1\,.\] (A.3) This function is non-negative and equals zero at the origin, but it becomes singular at \(y=1\). 
Notice that near \(y=1\), with \(x\) kept fixed, it behaves as \[{\cal L}^{-}(y,x)=-\ln(1-y)+O(1)\,.\] (A.4) The \({\cal L}\) functions are related as \({\cal L}^{+}(0,x)={\cal L}^{-}(x,x)\); \[{\cal E}^{+}(\alpha,x)=\frac{1}{\sqrt{x}}\int_{0}^{1}\,\frac{du}{u^{2}}\left({\rm e }^{xu^{2}}\Big{[}1-{\cos}^{2}\!\alpha\,u^{4}{\rm e}^{2x(1-u^{2})}\Big{]}^{-\frac {1}{2}}-1-u^{2}\right),\qquad 0\leq\alpha\leq\frac{\pi}{2}\,,\qquad 0\leq x\leq 1\,.\] (A.5) \({\cal E}^{+}\) is singular at \(x=0\) and \((0,1)\). If \(\cos\alpha(x)=\cos\alpha+{\cos}^{\prime}\!\alpha x+o(x)\) as \(x\to 0\), then the small-\(x\) behavior of \({\cal E}^{+}\) is \[{\cal E}^{+}(\alpha,x)=\frac{1}{\sqrt{x}}\Big{(}{\cal E}^{+}_{0}+{\cal E}^{+}_ {1}x+o(x)\Big{)}\,,\] (A.6) where \[{\cal E}^{+}_{0}=\frac{1}{4}\cos^{\frac{1}{2}}\!\alpha\,B\!\left({\cos}^{2}\! \alpha;-\frac{1}{4},\frac{1}{2}\right),\quad{\cal E}^{+}_{1}=\frac{1}{4}\cos ^{-\frac{1}{2}}\!\alpha\!\left(\!\left(\cos\alpha+{\cos}^{\prime}\!\alpha \right)\!B\!\left(\!\cos^{2}\!\alpha;\frac{3}{4},-\frac{1}{2}\right)-3B\! \left(\!\cos^{2}\!\alpha;\frac{5}{4},-\frac{1}{2}\right)+4\frac{\cos^{2}\! \alpha}{\sin\alpha}\right);\] \[{\cal E}^{-}(y,x)=\frac{1}{\sqrt{y}}\!\left(\int_{0}^{1}\,\frac{du}{u^{2}} \left({\rm e}^{yu^{2}}\Big{[}1-u^{4}\,{\rm e}^{2y(1-u^{2})}\Big{]}^{-\frac{1} {2}}-1-u^{2}\right)+\int_{\sqrt{\frac{\pi}{y}}}^{1}\,\frac{du}{u^{2}}\,{\rm e} ^{yu^{2}}\!\left[1-u^{4}\,{\rm e}^{2y(1-u^{2})}\right]^{-\frac{1}{2}}\right), \ 0\leq x\leq y\leq 1\,.\] (A.7) \({\cal E}^{-}\) is singular at \((0,0)\) and at \(y=1\). More specifically, near \(y=1\), with \(x\) kept fixed, it behaves as \[{\cal E}^{-}(y,x)=-{\rm e}\ln(1-y)+O(1)\,.\] (A.8) The \({\cal E}\) functions are also related as \({\cal E}^{+}(0,x)={\cal E}^{-}(x,x)\); \[{\cal Q}(x)=\sqrt{\pi}{\rm erfi}(\sqrt{x})-\frac{{\rm e}^{x}}{\sqrt{x}}\,.\] (A.9) Here \({\rm erfi}(x)\) is the imaginary error function. This is a special case of \({\cal E}^{+}\) with \(\alpha=\frac{\pi}{2}\). A useful fact is that its small-\(x\) behavior is given by \[{\cal Q}(x)=-\frac{1}{\sqrt{x}}+\sqrt{x}+O(x^{\frac{3}{2}})\,;\] (A.10) \[{\cal I}(x)=I_{0}\!-\!\int_{\sqrt{x}}^{1}\frac{du}{u^{2}}{\rm e}^{u^{2}}\Big{[} 1\!-\!u^{4}{\rm e}^{2(1-u^{2})}\Big{]}^{\frac{1}{2}},\quad{\rm with}\quad I_ {0}=\int_{0}^{1}\frac{du}{u^{2}}\Big{(}1\!+\!u^{2}\!-\!{\rm e}^{u^{2}}\Big{[}1 \!-\!u^{4}{\rm e}^{2(1-u^{2})}\Big{]}^{\frac{1}{2}}\Big{)}\,,\qquad 0<x\leq 1\,.\] (A.11) Notice that \(I_{0}\) can be evaluated numerically, with the result 0.751. ## Appendix B The potential \(V_{0}\) of the \(Q\bar{Q}\) system In this Appendix we give a brief summary of the basic results about the ground state B-O potential of a static quark-antiquark pair in the presence of two light flavors of equal mass. These results are pertinent to our discussion in Section III. For standard explanations, see [12; 13] whose conventions we follow, unless otherwise stated. From the perspective of four-dimensional string models [10], the only relevant string configurations are those shown in Figure 20. The first configuration is the simplest connected one, consisting of a valence quark and an antiquark Figure 20: The string configurations which contribute to the potential \(V_{0}\) of the \(Q\bar{Q}\) system. joined by a string. The second configuration is disconnected, and is formed by adding a pair of light quarks and attaching strings to the quarks in a way that results in a pair of heavy-light mesons. 
These configurations have a physical meaning: in the context of string models, a heavy meson decay \(Q\bar{Q}\to Q\bar{q}+q\bar{Q}\) is described as a transition between the two configurations. The transition occurs because of the process of string breaking. The key feature of this process is the creation of a light quark-antiquark pair. In five dimensions, the connected configuration consists of a string that is attached to the heavy quark sources located on the boundary of the five-dimensional space, as depicted in Figure 21(a). For the geometry described by Eq.(2.1), the relation between the quark separation distance along the \(x\)-axis and the string energy is written in parametric form \[\ell=\frac{2}{\sqrt{\sf s}}\mathcal{L}^{+}(0,v)\,,\quad E_{\mbox{\tiny Q \bar{Q}}}=2{\sf g}\sqrt{\sf s}\,\mathcal{E}^{+}(0,v)+2c\,.\] (B.1) Here \(v\) is a dimensionless parameter running from \(0\) to \(1\) and \(c\) is the normalization constant as before. The functions \(\mathcal{L}^{+}\) and \(\mathcal{E}^{+}\) are defined in Appendix A. For future reference, we note that the small-\(\ell\) behavior of \(E_{\mbox{\tiny Q\bar{Q}}}\) is given by \[E_{\mbox{\tiny Q\bar{Q}}}(\ell)=-\frac{\alpha_{\mbox{\tiny Q\bar{Q}}}}{\ell}+ 2c+\mathbf{\sigma}_{\mbox{\tiny Q\bar{Q}}}\ell+o(\ell)\,,\] (B.2) with \[\alpha_{\mbox{\tiny Q\bar{Q}}}=(2\pi)^{3}\Gamma^{-4}\big{(}\tfrac{1}{4}\big{)} \sf g\,,\qquad\mathbf{\sigma}_{\mbox{\tiny Q\bar{Q}}}=\frac{1}{2}(2\pi)^{-2}\Gamma^ {4}\big{(}\tfrac{1}{4}\big{)}\sf gs\,.\] (B.3) On the other hand, the large-\(\ell\) behavior is \[E_{\mbox{\tiny Q\bar{Q}}}(\ell)=\sigma\ell-2{\sf g}\sqrt{\sf s}\,I_{0}+2c+o(1) \,,\qquad\mbox{with}\qquad\sigma={\sf egs}\,.\] (B.4) Here \(I_{0}\) is defined in Appendix A and \(\sigma\) is the physical string tension. It has a larger value than the coefficient \(\mathbf{\sigma}_{\mbox{\tiny Q\bar{Q}}}\). Numerically, the ratio of \(\mathbf{\sigma}_{\mbox{\tiny Q\bar{Q}}}\) to \(\sigma\) is approximately \(0.805\). Figure 21(b) represents the five-dimensional counterpart of the disconnected configuration of Figure 20(b). Since the mesons are non-interacting, the energy is just twice the heavy-light meson mass \(E_{\mbox{\tiny Q\bar{q}}}\). The latter is given by Eq.(3.5). The ground state B-O potential is formally defined by \(V_{0}=\min\bigl{(}E_{\mbox{\tiny Q\bar{Q}}},2E_{\mbox{\tiny Q\bar{q}}}\bigr{)}\). Thus \(V_{0}\) varies between \(E_{\mbox{\tiny Q\bar{Q}}}\) at small quark separations and \(2E_{\mbox{\tiny Q\bar{q}}}\) at larger separations. However, this formal definition does not precisely describe what happens at intermediate quark separations. To address this issue, we can use the same mixing analysis as in lattice gauge theory [9; 18]. Specifically, consider a model Hamiltonian of a two-state system \[\mathcal{H}(\ell)=\begin{pmatrix}E_{\mbox{\tiny Q\bar{Q}}}(\ell)&\Theta_{\mbox {\tiny Q\bar{Q}}}\\ \Theta_{\mbox{\tiny Q\bar{Q}}}&2E_{\mbox{\tiny Q\bar{q}}}\end{pmatrix}\,,\] (B.5) with \(\Theta_{\mbox{\tiny Q\bar{Q}}}\) describing the mixing between the two states. The potential is then obtained as the smallest eigenvalue of the model Hamiltonian. Explicitly, Figure 21: The five-dimensional counterparts to the string configurations of Figure 20. 
\[V_{0}=\frac{1}{2}\Big{(}E_{\rm Q\bar{Q}}+2E_{\rm Q\bar{q}}\Big{)}-\sqrt{\frac{1}{4} \Big{(}E_{\rm Q\bar{Q}}-2E_{\rm Q\bar{q}}\Big{)}^{2}+\Theta^{2}_{\rm Q\bar{Q}}}\,.\] (B.6) Just like in lattice gauge theory [18], the critical separation distance (often called the string breaking distance) is defined by equating the energies of the configurations \[E_{\rm Q\bar{Q}}(\ell_{\rm Q\bar{Q}})=2E_{\rm Q\bar{q}}.\] (B.7) This distance provides a condition for determining which configuration is dominant in the system's ground state at given heavy quark separation. For large quark separations, \(E_{\rm Q\bar{Q}}(\ell)\) becomes a linear function of \(\ell\) and, as a result, the equation drastically simplifies.25 If so, then it follows from Eqs.(3.5) and (B.4) that the string breaking distance is Footnote 25: For the parameter values we use, this is true for \(\ell\gtrsim 0.5\,\)fm, whereas the string breaking distance is about \(1\,\)fm. \[\ell_{\rm Q\bar{Q}}\approx\frac{2}{\rm e\sqrt{5}}\Big{(}{\cal Q}(\rm q)+n\frac {e^{\frac{1}{2}q}}{\sqrt{\rm q}}+I_{0}\Big{)}\,.\] (B.8) Here q is a solution to Eq.(3.6). To illustrate this construction, we will provide a simple example. For the parameter values set in Section III, and with a constant \(\Theta_{\rm Q\bar{Q}}\), the potential is depicted in Figure 22. We see that as \(\ell\) approaches zero, \(V_{0}\) asymptotically approaches \(E_{\rm Q\bar{Q}}\), while as \(\ell\) tends towards infinity, it approaches \(2E_{\rm Q\bar{q}}\). The transition between these two regimes occurs around \(\ell=1.22\,\)fm, which is in line with our expectations [18]. ## Appendix C Details for the pentaquark configuration of Figure 7(c') To get to the specific issues of interest here as quickly as possible, we will use the fact that the action which governs configuration (c') follows from (3.11) at \(v=\bar{v}\). So, we have \[S=3\xi T\bigg{(}\frac{2}{3}\int_{0}^{\tau_{e}}\frac{dr}{r^{2}}\,{\rm e}^{sr^{ 2}}\sqrt{1+(\partial_{r}x)^{2}}\ +\int_{r_{v}}^{r_{q}}\frac{dr}{r^{2}}\,{\rm e}^{sr^{2}}+3{\rm k}\, \frac{{\rm e}^{-2sr_{v}^{2}}}{r_{\bar{v}}}+n\frac{e^{\frac{1}{2}sr_{q}^{2}}}{ r_{q}}\bigg{)}\,.\] (C.1) If we vary this action with respect to the position of the light quarks, this will lead us to Eq.(3.6). But if we vary it with respect to the position of the vertices, then we get the equation \[\sin\alpha=\frac{3}{2}\Big{(}1+3\mathsf{k}(1+4\bar{v})\mathrm{e}^{-3\bar{v}}\Big{)}\,, \tag{104}\] which differs from Eq.(3.14) by the factor \(\frac{3}{2}\). This factor will be crucial for our analysis below. By the same sort of argument given in subsection B of Sec.III, the expressions for the separation distance and energy are given by Eqs. (3.12) and (3.13) with \(\mathrm{v}=\bar{v}\). The latter now takes the form \[E^{\prime}_{\mathrm{QQquq}}=3\mathsf{g}\sqrt{\mathsf{s}}\bigg{(}\frac{2}{3} \mathcal{E}^{+}(\alpha,\bar{v})+\mathcal{Q}(\mathrm{q})-\mathcal{Q}(\bar{v})+ 3\mathsf{k}\frac{\mathrm{e}^{-2\bar{v}}}{\sqrt{\bar{v}}}+\mathsf{n}\frac{ \mathrm{e}^{\frac{1}{2}\mathsf{q}}}{\sqrt{\mathsf{q}}}\bigg{)}+2c\,. \tag{105}\] Here we use the prime to highlight the energy of configuration (c') as opposed to that of configuration (c). Thus the energy of the configuration is given in parametric form by \(E^{\prime}_{\mathrm{Q}\bar{\mathrm{Q}quq}}=E^{\prime}_{\mathrm{Q}\bar{ \mathrm{Q}quq}}(\bar{v})\) and \(\ell=\ell(\bar{v})\), with the parameter varying from \(\mathrm{v}\) to \(\mathrm{q}\). 
A numerical calculation shows that, for \(\mathsf{k}=-\frac{1}{4}\mathrm{e}^{\frac{1}{4}}\), the function \(\ell(\bar{v})\) is not monotonically increasing on the interval \([\mathrm{v},\mathrm{q}]\). Instead, it develops a local maximum close to \(\bar{v}=0.4\). This means that such a configuration exists only if the distance \(\ell\) does not exceed a critical value.26 Figure 23 illustrates the distinct behaviors of \(\ell(\bar{v})\) for the two configurations of Figure 7. Here, we also include the result for \(E^{\prime}_{\mathrm{QQquq}}\) for the sake of completeness. Footnote 26: It is interesting to note that a similar situation arises when calculating the simplest connected string configuration in the AdS-like models at finite temperature [44; 45].
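For readers who wish to reproduce the parametric curves used throughout this paper (including the behaviour of \(\ell(\bar{v})\) just discussed), the basic functions of Appendix A can be evaluated by straightforward numerical quadrature. Below is a minimal sketch, transcribed from (A.1), (A.5), (A.9) and (A.11), together with a check of \(I_{0}\approx 0.751\); it is only a reproduction aid, and the model parameters entering the full formulas must still be supplied.

```python
# Numerical transcription of the Appendix A basis functions.
import numpy as np
from scipy.integrate import quad
from scipy.special import erfi

def L_plus(alpha, x):
    c = np.cos(alpha)
    f = lambda u: u**2 * np.exp(x * (1 - u**2)) / np.sqrt(
        1 - c**2 * u**4 * np.exp(2 * x * (1 - u**2)))
    return c * np.sqrt(x) * quad(f, 0.0, 1.0)[0]          # Eq. (A.1)

def E_plus(alpha, x):
    c = np.cos(alpha)
    f = lambda u: (np.exp(x * u**2) / np.sqrt(
        1 - c**2 * u**4 * np.exp(2 * x * (1 - u**2))) - 1 - u**2) / u**2
    return quad(f, 0.0, 1.0)[0] / np.sqrt(x)               # Eq. (A.5)

def Q_fun(x):
    return np.sqrt(np.pi) * erfi(np.sqrt(x)) - np.exp(x) / np.sqrt(x)   # Eq. (A.9)

I0 = quad(lambda u: (1 + u**2 - np.exp(u**2) *
                     np.sqrt(1 - u**4 * np.exp(2 * (1 - u**2)))) / u**2,
          0.0, 1.0, limit=200)[0]                          # Eq. (A.11)
print(round(I0, 3))   # 0.751, as quoted in Appendix A
```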
2303.12307
Predicting and Enhancing the Fairness of DNNs with the Curvature of Perceptual Manifolds
To address the challenges of long-tailed classification, researchers have proposed several approaches to reduce model bias, most of which assume that classes with few samples are weak classes. However, recent studies have shown that tail classes are not always hard to learn, and model bias has been observed on sample-balanced datasets, suggesting the existence of other factors that affect model bias. In this work, we first establish a geometric perspective for analyzing model fairness and then systematically propose a series of geometric measurements for perceptual manifolds in deep neural networks. Subsequently, we comprehensively explore the effect of the geometric characteristics of perceptual manifolds on classification difficulty and how learning shapes the geometric characteristics of perceptual manifolds. An unanticipated finding is that the correlation between the class accuracy and the separation degree of perceptual manifolds gradually decreases during training, while the negative correlation with the curvature gradually increases, implying that curvature imbalance leads to model bias.Building upon these observations, we propose curvature regularization to facilitate the model to learn curvature-balanced and flatter perceptual manifolds. Evaluations on multiple long-tailed and non-long-tailed datasets show the excellent performance and exciting generality of our approach, especially in achieving significant performance improvements based on current state-of-the-art techniques. Our work opens up a geometric analysis perspective on model bias and reminds researchers to pay attention to model bias on non-long-tailed and even sample-balanced datasets.
Yanbiao Ma, Licheng Jiao, Fang Liu, Maoji Wen, Lingling Li, Wenping Ma, Shuyuan Yang, Xu Liu, Puhua Chen
2023-03-22T04:49:23Z
http://arxiv.org/abs/2303.12307v6
# Curvature-Balanced Feature Manifold Learning for Long-Tailed Classification ###### Abstract To address the challenges of long-tailed classification, researchers have proposed several approaches to reduce model bias, most of which assume that classes with few samples are weak classes. However, recent studies have shown that tail classes are not always hard to learn, and model bias has been observed on sample-balanced datasets, suggesting the existence of other factors that affect model bias. In this work, we systematically propose a series of geometric measurements for perceptual manifolds in deep neural networks, and then explore the effect of the geometric characteristics of perceptual manifolds on classification difficulty and how learning shapes the geometric characteristics of perceptual manifolds. An unanticipated finding is that the correlation between the class accuracy and the separation degree of perceptual manifolds gradually decreases during training, while the negative correlation with the curvature gradually increases, implying that curvature imbalance leads to model bias. Therefore, we propose curvature regularization to facilitate the model to learn curvature-balanced and flatter perceptual manifolds. Evaluations on multiple long-tailed and non-long-tailed datasets show the excellent performance and exciting generality of our approach, especially in achieving significant performance improvements based on current state-of-the-art techniques. Our work opens up a geometric analysis perspective on model bias and reminds researchers to pay attention to model bias on non-long-tailed and even sample-balanced datasets. The code and model will be made public. ## 1 Introduction The imbalance of sample numbers in the dataset gives rise to the challenge of long-tailed visual recognition. Most previous works assume that head classes are always easier to be learned than tail classes, e.g., class re-balancing [8, 14, 24, 34, 37, 52], information augmentation [23, 31, 35, 38, 39, 44, 56, 64, 67], decoupled training [10, 16, 29, 30, 71, 76], and ensemble learning [20, 36, 57, 58, 61, 72, 77] have been proposed to improve the performance of tail classes. However, recent studies [3, 50] have shown that classification difficulty is not always correlated with the number of samples, e.g., the performance of some tail classes is even higher than that of the head classes. Also, [49] observes differences in model performance across classes on non-long-tailed data, and even on balanced data. Therefore, it is necessary to explore the impact of other inherent characteristics of the data on the classification difficulty, and then improve the overall performance by mitigating the model bias under multiple sample number distribution scenarios. Focal loss [37] utilizes the DNN's prediction confidence on instances to evaluate the instance-level difficulty. [50] argues that for long-tailed problems, determining class-level difficulty is more important than determining instance-level difficulty, and therefore defines classification difficulty by evaluating the accuracy of each class in real-time. However, both methods rely on the model output and still can Figure 1: Curvature regularization reduces the model bias present in multiple methods on CIFAR-100-LT and ImageNet-LT. The model bias is measured with the variance of the accuracy of all classes, and it is zero when the accuracy of each class is the same. not explain why the model performs well in some classes and poorly in others. 
Similar to the number of samples, we would like to propose a measure that relies solely on the data itself to model class-level difficulty, which helps to understand how deep neural networks learn from the data. The effective number of samples [14] tries to characterize the diversity of features in each class, but it introduces hyperparameters and would not work in a sample-balanced dataset. Most data distributions obey the manifold distribution law [33, 54], i.e., samples of each class are distributed near a low-dimensional manifold in the high-dimensional space. The manifold consisting of features in the embedding space is called a perceptual manifold [11]. The classification task is equivalent to distinguishing each perceptual manifold, which has a series of geometric characteristics. We speculate that some geometric characteristics may affect the classification difficulty, and therefore conduct an in-depth study. **The main contributions of our work are: (1)** We systematically propose a series of measurements for the geometric characteristics of point cloud perceptual manifolds in deep neural networks (Sec 3). **(2)** The effect of learning on the separation degree (Sec 4.1) and curvature (Sec 4.2) of perceptual manifolds is explored. We find that the correlation between separation degree and class accuracy decreases with training, while the negative correlation between curvature and class accuracy increases with training (Sec 4.3), implying that existing methods can only mitigate the effect of separation degree among perceptual manifolds on model bias, while ignoring the effect of perceptual manifold complexity on model bias. **(3)** Curvature regularization is proposed to facilitate the model to learn curvature-balanced and flatter feature manifolds, thus improving the overall performance (Sec 5). Our approach effectively reduces the model bias on multiple long-tailed (Fig 1) and non-long-tailed datasets (Fig 8), showing excellent performance (Sec 6). ## 3 The Geometry of Perceptual Manifold In this section, we systematically propose a series of geometric measures for perceptual manifolds in deep neural networks, and all the pseudocode is in Appendix C. ### Perceptual Manifold A perceptual manifold is generated when neurons are stimulated by objects with different physical characteristics from the same class. Sampling along the different dimensions of the manifold corresponds to changes in specific physical characteristics. It has been shown [33, 54] that the features extracted by deep neural networks obey the manifold distribution law. That is, features from the same class are distributed near a low-dimensional manifold in the high-dimensional feature space. Given data \(X=[x_{1},\dots,x_{m}]\) from the same class and a deep neural network \(\textit{Model}=\{f(x,\theta_{1}),g(z,\theta_{2})\}\), where \(f(x,\theta_{1})\) represents a feature sub-network with parameters \(\theta_{1}\) and \(g(z,\theta_{2})\) represents a classifier with parameters \(\theta_{2}\). Extract the p-dimensional features \(Z=[z_{1},\dots,z_{m}]\in\mathbb{R}^{p\times m}\) of \(X\) with the trained model, where \(z_{i}=f(x_{i},\theta_{1})\in\mathbb{R}^{p}\). Assuming that the features \(Z\) belong to class \(c\), the \(m\) features form a \(p\)-dimensional point cloud manifold \(M^{c}\), which is called a perceptual manifold [12]. 
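As a concrete illustration (this is a sketch, not the released code of this paper), the per-class feature matrices \(Z_{c}\) that form each perceptual manifold can be collected as follows; `feature_net` and `loader` are placeholder names for the trained feature sub-network \(f(x,\theta_{1})\) and a dataset iterator.

```python
# Collect per-class feature matrices Z_c (shape p x m_c) from a trained
# feature sub-network; each Z_c spans the point-cloud perceptual manifold M^c.
import torch
from collections import defaultdict

@torch.no_grad()
def collect_class_features(feature_net, loader, device="cpu"):
    feature_net.eval().to(device)
    buckets = defaultdict(list)
    for x, y in loader:
        z = feature_net(x.to(device))            # (batch, p) embeddings
        for zi, yi in zip(z.cpu(), y):
            buckets[int(yi)].append(zi)
    return {c: torch.stack(v, dim=1) for c, v in buckets.items()}   # (p, m_c)
```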
### The Volume of Perceptual Manifold We measure the volume of the perceptual manifold \(M^{c}\) by calculating the size of the subspace spanned by the features \(z_{1},\dots,z_{m}\). First, the sample covariance matrix of \(Z\) can be estimated as \(\Sigma_{Z}=\mathbb{E}[\frac{1}{n}\sum_{i=1}^{n}z_{i}z_{i}^{T}]=\frac{1}{n}Z \mathbb{Z}^{T}\in\mathbb{R}^{p\times p}\). Diagonalize the covariance matrix \(\Sigma_{Z}\) as \(\textit{UDU}^{T}\), where \(D=diag(\lambda_{1},\dots,\lambda_{p})\) and \(U=[u_{1},\dots,u_{p}]\in\mathbb{R}^{p\times p}\). \(\lambda_{i}\) and \(u_{i}\) denote the \(i\)-th eigenvalue of \(\Sigma_{Z}\) and its corresponding eigenvector, respectively. Let the singular value of matrix \(Z\) be \(\sigma_{i}=\sqrt{\lambda_{i}}(i=1,\dots,p)\). According to the geometric meaning of singular value [1], the volume of the space spanned by vectors \(z_{1},\dots,z_{m}\) is proportional to the product of the singular values of matrix \(Z\), i.e., \(\textit{Vol}(Z)\propto\prod_{i=1}^{p}\sigma_{i}=\sqrt{\prod_{i=1}^{p}\lambda_ {i}}\). Considering \(\lambda_{1}\lambda_{2}\cdots\lambda_{p}=\det(\Sigma_{Z})\), the volume of the perceived manifold is therefore denoted as \(\textit{Vol}(Z)\propto\sqrt{\det(\frac{1}{m}Z\mathbb{Z}^{T})}\). However, when \(\frac{1}{m}Z\mathbb{Z}^{T}\) is a non-full rank matrix, its determinant is \(0\). For example, the determinant of a planar point set located in three-dimensional space is 0 because its covariance matrix has zero eigenvalues, but obviously the volume of the subspace tensed by the point set in the plane is non-zero. We want to obtain the "area" of the planar point set, which is a generalized volume. We avoid the non-full rank case by adding the unit matrix \(I\) to the covariance matrix \(\frac{1}{m}Z\mathbb{Z}^{T}\). \(I+\frac{1}{m}Z\mathbb{Z}^{T}\) is a positive definite matrix with eigenvalues \(\lambda_{i}+1(i=1,\dots,p)\). The above operation enables us to calculate the volume of a low-dimensional manifold embedded in high-dimensional space. The volume \(\textit{Vol}(Z)\) of the perceptual manifold is proportional to \(\sqrt{\det(I+\frac{1}{m}Z\mathbb{Z}^{T})}\). Considering the numerical stability, we further perform a logarithmic transformation on \(\sqrt{\det(I+\frac{1}{m}Z\mathbb{Z}^{T})}\) and define the volume of the perceptual manifold as \[\textit{Vol}(Z)=\frac{1}{2}\log_{2}\det(I+\frac{1}{m}(Z-Z_{mean})(Z-Z_{mean})^ {T}),\] where \(Z_{mean}\) is the mean of \(Z\). When \(m>1\), \(\textit{Vol}(Z>0\). Since \(I+\frac{1}{m}(Z-Z_{mean})(Z-Z_{mean})^{T}\) is a positive definite matrix, its determinant is greater than 0. In the following, the degree of separation between perceptual manifolds will be proposed based on the volume of perceptual manifolds. ### The Separation Degree of Perceptual Manifold Given the perceptual manifolds \(M^{1}\) and \(M^{2}\), they consist of point sets \(Z_{1}=[z_{1,1},\ldots,z_{1,m_{1}}]\in\mathbb{R}^{p\times m_{1}}\) and \(Z_{2}=[z_{2,1},\ldots,z_{2,m_{2}}]\in\mathbb{R}^{p\times m_{2}}\), respectively. The volumes of \(M^{1}\) and \(M^{2}\) are calculated as \(\mathit{Vol}(Z_{1})\) and \(\mathit{Vol}(Z_{2})\). Consider the following case, assuming that \(M^{1}\) and \(M^{2}\) have partially overlapped, when \(\mathit{Vol}(Z_{1})\ll\mathit{Vol}(Z_{2})\), it is obvious that the overlapped volume accounts for a larger proportion of the volume of \(M^{1}\), when the class corresponding to \(M^{1}\) is more likely to be confused. 
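As a concrete reference for the volume measure defined above (on which the separation degree below is built), here is a minimal sketch using a log-determinant for numerical stability; it is an illustration rather than the authors' implementation.

```python
# Vol(Z) = (1/2) log2 det(I + (1/m)(Z - Z_mean)(Z - Z_mean)^T)
import numpy as np

def manifold_volume(Z):
    """Z: (p, m) matrix whose columns are the m feature vectors of one class."""
    p, m = Z.shape
    Zc = Z - Z.mean(axis=1, keepdims=True)
    sign, logdet = np.linalg.slogdet(np.eye(p) + (Zc @ Zc.T) / m)
    return 0.5 * logdet / np.log(2.0)    # convert natural log to log base 2
```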
Therefore, it is necessary to construct an asymmetric measure for the degree of separation between multiple perceptual manifolds, and we expect this measure to accurately reflect the relative magnitude of the degree of separation. Suppose there are \(C\) perceptual manifolds \(\{M^{i}\}_{i=1}^{C}\), which consist of point sets \(\{Z_{i}=[z_{i,1},\ldots,z_{i,m_{i}}]\in\mathbb{R}^{p\times m_{i}}\}_{i=1}^{C}\). Let \(Z=[Z_{1},\ldots,Z_{C}]\in\mathbb{R}^{p\times\sum_{j=1}^{C}m_{j}}\), \(Z^{\prime}=[Z_{1},\ldots,Z_{i-1},Z_{i+1},\ldots,Z_{C}]\in\mathbb{R}^{p\times( \sum_{j=1}^{C}m_{j})-m_{i})}\), we define the degree of separation between the perceptual manifold \(M^{i}\) and the rest of the perceptual manifolds as \[S(M^{i})=\frac{\mathit{Vol}(Z)-\mathit{Vol}(Z^{\prime})}{\mathit{Vol}(Z_{i})}.\] The following analysis is performed for the case when \(C=2\) and \(\mathit{Vol}(Z_{2})>\mathit{Vol}(Z_{1})\). According to our motivation, the measure of the degree of separation between perceptual manifolds should satisfy \(S(M^{2})>S(M^{1})\). If \(S(M^{2})>S(M^{1})\) holds, then we can get \[\mathit{Vol}(Z)\mathit{Vol}(Z_{1})-\mathit{Vol}(Z_{1})^{2}> \mathit{Vol}(Z)\mathit{Vol}(Z_{2})-\mathit{Vol}(Z_{2})^{2},\] \[\iff\mathit{Vol}(Z)(\mathit{Vol}(Z_{1})-\mathit{Vol}(Z_{2}))> \mathit{Vol}(Z_{1})^{2}-\mathit{Vol}(Z_{2})^{2},\] \[\iff\mathit{Vol}(Z)<\mathit{Vol}(Z_{1})+\mathit{Vol}(Z_{2}).\] We prove that \(\mathit{Vol}(Z)<\mathit{Vol}(Z_{1})+\mathit{Vol}(Z_{2})\) holds when \(\mathit{Vol}(Z_{2})>\mathit{Vol}(Z_{1})\) and the detailed proof is in Appendix B. The above analysis shows that the proposed measure meets our requirements and motivation. The formula for calculating the degree of separation between perceptual manifolds can be further reduced to \[S(M^{i}) =\log_{\delta}\det((I+\frac{Z^{\prime}Z^{\prime T}}{\sum_{j=1,j \neq i}^{C}m_{j}})^{-1}(I+\frac{ZZ^{T}}{\sum_{j=1}^{C}m_{j}})),\] \[\delta =\det(I+\frac{1}{m}Z_{i}Z_{i}^{T}).\] The detailed derivation is in Appendix B. Next, we validate the proposed measure of the separation degree between perceptual manifolds in a 3D spherical point cloud scene. Specifically, we conducted the experiments in two cases: (1) Construct two 3D spherical point clouds of radius \(1\), and then increase the distance between their spherical centers. Since the volumes of the two spherical point clouds are equal, their separation degrees should be symmetric. The variation curves of the separation degrees are plotted in Fig 2, and it can be seen that the experimental results satisfy our theoretical predictions. (2) Change the distance between the centers of two spherical point clouds of radius \(1\) and radius \(1.5\). Observe their separation degrees, the separation degrees of these two spherical point clouds should be asymmetric. Fig 2 shows that their separation degrees increase as the distance between their centers increases. Also, the manifold with a larger radius has a greater separation degree, and this experimental result conforms to our analysis and motivation. The separation degree between perceptual manifolds may affect the model's bias towards classes. In addition, it can also be used as the regularization term of the loss function or applied in contrast learning to keep the different perceptual manifolds away from each other. ### The Curvature of Perceptual Manifold Given a point cloud perceptual manifold \(M\), which consists of a \(p\)-dimensional point set \(\{z_{1},\ldots,z_{n}\}\), our goal is to calculate the Gauss curvature at each point. 
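Before turning to the curvature estimate, here is a compact sketch of the separation degree \(S(M^{i})\) computed directly from its definition above; `class_features` is a placeholder for the per-class feature matrices, and the helper `vol` repeats the volume measure of the previous sketch.

```python
# Asymmetric separation degree S(M^i) = (Vol(Z) - Vol(Z')) / Vol(Z_i).
import numpy as np

def vol(Z):
    p, m = Z.shape
    Zc = Z - Z.mean(axis=1, keepdims=True)
    return 0.5 * np.linalg.slogdet(np.eye(p) + Zc @ Zc.T / m)[1] / np.log(2.0)

def separation_degree(class_features):
    """class_features: list of (p, m_i) arrays, one per class."""
    Z_all = np.concatenate(class_features, axis=1)
    v_all = vol(Z_all)
    scores = []
    for i, Zi in enumerate(class_features):
        Z_rest = np.concatenate(
            [Z for j, Z in enumerate(class_features) if j != i], axis=1)
        scores.append((v_all - vol(Z_rest)) / vol(Zi))
    return scores
```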
First, the normal vector at each point on \(M\) is estimated by the neighbor points. Denote by \(z_{i}^{j}\) the \(j\)-th neighbor point of \(z_{i}\) and \(u_{i}\) the normal vector at \(z_{i}\). We solve for the normal vector by minimizing the inner product of \(z_{i}^{j}-c_{i},j=1,\ldots,k\) and \(u_{i}\)[4], i.e., \[\min\sum_{j=1}^{k}((z_{i}^{j}-c_{i})^{T}u_{i})^{2},\] where \(c_{i}=\frac{1}{k}{\sum_{j=1}^{k}}z_{i}^{j}\) and \(k\) is the number of neighbor points. Let \(y_{j}=z_{i}^{j}-c_{i}\), then the optimization objective is converted to \[\min\sum_{j=1}^{k}(y_{j}^{T}u_{i})^{2} =\min\sum_{j=1}^{k}u_{i}^{T}y_{j}y_{j}^{T}u_{i}\] \[=\min(u_{i}^{T}(\sum_{j=1}^{k}y_{j}y_{j}^{T})u_{i}).\] \(\sum_{j=1}^{k}y_{j}y_{j}^{T}\) is the covariance matrix of \(k\) neighbors of \(z_{i}\). Therefore, let \(Y=[y_{1},\ldots,y_{k}]\in\mathbb{R}^{p\times k}\) and \(\sum_{j=1}^{k}y_{j}y_{j}^{T}=YY^{T}\). The optimization objective is further equated to \[\begin{cases}f(u_{i})=u_{i}^{T}YY^{T}u_{i},YY^{T}\in\mathbb{R}^{p\times p},\\ min(f(u_{i})),\\ s.t.u_{i}^{T}u_{i}=1.\end{cases}\] Figure 2: The variation curve between the separation degree of two spherical point clouds and the distance between spherical centers. Construct the Lagrangian function \(L(u_{i},\lambda)=f(u_{i})-\lambda(u_{i}^{T}u_{i}-1)\) for the above optimization objective, where \(\lambda\) is a parameter. The first-order partial derivatives of \(L(u_{i},\lambda)\) with respect to \(u_{i}\) and \(\lambda\) are \[\frac{\partial L(u_{i},\lambda)}{\partial u_{i}} =\frac{\partial}{\partial u_{i}}f(u_{i})-\lambda\frac{\partial}{ \partial u_{i}}(u_{i}^{T}u_{i}-1)\] \[=2(YY^{T}u_{i}-\lambda u_{i}),\] \[\frac{\partial L(u_{i},\lambda)}{\partial\lambda} =u_{i}^{T}u_{i}-1.\] Let \(\frac{\partial L(u_{i},\lambda)}{\partial u_{i}}\) and \(\frac{\partial L(u_{i},\lambda)}{\partial\lambda}\) be \(0\), we can get \(YY^{T}u_{i}=\lambda u_{i},u_{i}^{T}u_{i}=1\). It is obvious that solving for \(u_{i}\) is equivalent to calculating the eigenvectors of the covariance matrix \(YY^{T}\), but the eigenvectors are not unique. From \(\left\langle YY^{T}u_{i},u_{i}\right\rangle=\left\langle\lambda u_{i},u_{i}\right\rangle\) we can get \(\lambda=\left\langle YY^{T}u_{i},u_{i}\right\rangle=u_{i}^{T}YY^{T}u_{i}\), so the optimization problem is equated to \(\arg\min_{u_{i}}(\lambda)\). Performing the eigenvalue decomposition on the matrix \(YY^{T}\) yields \(p\) eigenvalues \(\lambda_{1},\ldots,\lambda_{p}\) and the corresponding \(p\)-dimensional eigenvectors \([\xi_{1},\ldots,\xi_{p}]\in\mathbb{R}^{p\times p}\), where \(\lambda_{1}\geq\cdots\geq\lambda_{p}\geq 0\), \(\left\|\xi_{i}\right\|_{2}=1,i=1,\ldots,p\), \(\left\langle\xi_{a},\xi_{b}\right\rangle=0(a\neq b)\). The eigenvector \(\xi_{m+1}\) corresponding to the smallest non-zero eigenvalue of the matrix \(YY^{T}\) is taken as the normal vector \(u_{i}\) of \(M\) at \(z_{i}\). Consider an \(m\)-dimensional affine space with center \(z_{i}\), which is spanned by \(\xi_{1},\ldots,\xi_{m}\). This affine space approximates the tangent space at \(z_{i}\) on \(M\). We estimate the curvature of \(M\) at \(z_{i}\) by fitting a quadratic hypersurface in the tangent space utilizing the neighbor points of \(z_{i}\). 
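A sketch of the local-PCA step just described, estimating the tangent basis \(\xi_{1},\ldots,\xi_{m}\) and the normal \(u_{i}\) from the \(k\) neighbors; treating the intrinsic dimension \(m\) as a user-supplied input is an assumption of this sketch.

```python
# For a point z_i: eigendecompose the centred-neighbour covariance YY^T;
# the top-m eigenvectors span the approximate tangent space, and the next
# eigenvector (xi_{m+1}) is taken as the normal direction u_i.
import numpy as np

def tangent_and_normal(Z, i, k=40, m=2):
    """Z: (n, p) point cloud. Returns (tangent (p, m), normal (p,), neighbour idx)."""
    dists = np.linalg.norm(Z - Z[i], axis=1)
    nbr = np.argsort(dists)[1:k + 1]               # k nearest neighbours of z_i
    Y = (Z[nbr] - Z[nbr].mean(axis=0)).T            # (p, k), centred at c_i
    eigval, eigvec = np.linalg.eigh(Y @ Y.T)        # ascending eigenvalues
    order = np.argsort(eigval)[::-1]                # descending, as in the text
    tangent = eigvec[:, order[:m]]                  # xi_1, ..., xi_m
    normal = eigvec[:, order[m]]                    # xi_{m+1}: normal u_i
    return tangent, normal, nbr
```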
The \(k\) neighbors of \(z_{i}\) are projected into the affine space \(z_{i}+\left\langle\xi_{1},\ldots,\xi_{m}\right\rangle\) and denoted as \[o_{j}=[(z_{i}^{j}-z_{i})\cdot\xi_{1},\ldots,(z_{i}^{j}-z_{i})\cdot\xi_{m}]^{T}\in\mathbb{R}^{m},j=1,\ldots,k.\] Denote by \(o_{j}[m]\) the \(m\)-th component \((z_{i}^{j}-z_{i})\cdot\xi_{m}\) of \(o_{j}\). We use \(z_{i}\) and \(k\) neighbor points to fit a quadratic hypersurface \(f(\theta)\) with parameter \(\theta\in\mathbb{R}^{m\times m}\). The hypersurface equation is denoted as \[f(o_{j},\theta)=\frac{1}{2}{\sum}_{a,b}\theta_{a,b}o_{j}\left[a\right]o_{j}\left[b\right],j\in\left\{1,\ldots,k\right\},\] further, minimize the squared error \[E(\theta)={\sum}_{j=1}^{k}(\frac{1}{2}{\sum}_{a,b}\theta_{a,b}o_{j}\left[a\right]o_{j}\left[b\right]-(z_{i}^{j}-z_{i})\cdot u_{i})^{2}.\] Setting \(\frac{\partial E(\theta)}{\partial\theta_{a,b}}=0,a,b\in\left\{1,\ldots,m\right\}\) yields a nonlinear system of equations, but it needs to be solved iteratively. Here, we propose an ingenious method to fit the hypersurface and **give the analytic solution of the parameter \(\theta\)** directly. Expand the parameter \(\theta\) of the hypersurface into the column vector \[\theta=[\theta_{1,1},\ldots,\theta_{1,m},\theta_{2,1},\ldots,\theta_{m,m}]^{T}\in\mathbb{R}^{m^{2}}.\] Organize the \(k\) neighbor points \(\left\{o_{j}\right\}_{j=1}^{k}\) of \(z_{i}\) according to the following form: \[O(z_{i})=\begin{bmatrix}o_{1}\left[1\right]o_{1}\left[1\right]&o_{1}\left[1\right]o_{1}\left[2\right]&\cdots&o_{1}\left[m\right]o_{1}\left[m\right]\\ o_{2}\left[1\right]o_{2}\left[1\right]&o_{2}\left[1\right]o_{2}\left[2\right]&\cdots&o_{2}\left[m\right]o_{2}\left[m\right]\\ \vdots&\vdots&\ddots&\vdots\\ o_{k}\left[1\right]o_{k}\left[1\right]&o_{k}\left[1\right]o_{k}\left[2\right]&\cdots&o_{k}\left[m\right]o_{k}\left[m\right]\end{bmatrix}\in\mathbb{R}^{k\times m^{2}}.\] The target value is \[T=\left[(z_{i}^{1}-z_{i})\cdot u_{i},(z_{i}^{2}-z_{i})\cdot u_{i},\ldots,(z_{i}^{k}-z_{i})\cdot u_{i}\right]^{T}\in\mathbb{R}^{k}.\] We minimize the squared error \[E(\theta)=\frac{1}{2}tr\left[\left(O(z_{i})\theta-T\right)^{T}\left(O(z_{i})\theta-T\right)\right],\] and find the partial derivative of \(E(\theta)\) with respect to \(\theta\): \[\frac{\partial E(\theta)}{\partial\theta}=\frac{1}{2}\left(\frac{\partial tr(\theta^{T}O(z_{i})^{T}O(z_{i})\theta)}{\partial\theta}-\frac{\partial tr(\theta^{T}O(z_{i})^{T}T)}{\partial\theta}\right)=O(z_{i})^{T}O(z_{i})\theta-O(z_{i})^{T}T.\] Setting \(\frac{\partial E(\theta)}{\partial\theta}=0\), we get \[\theta=(O(z_{i})^{T}O(z_{i}))^{-1}O(z_{i})^{T}T.\] Thus, the Gauss curvature of the perceptual manifold \(M\) at \(z_{i}\) can be calculated as \[G(z_{i})=\det(\theta)=\det((O(z_{i})^{T}O(z_{i}))^{-1}O(z_{i})^{T}T).\] Up to this point, we have provided an approximate solution of the Gauss curvature at any point on the point cloud perceptual manifold \(M\). [5] shows that on a high-dimensional dataset, almost all samples lie on convex locations, and thus the complexity of the perceptual manifold is defined as the average \(\frac{1}{n}\sum_{i=1}^{n}G(z_{i})\) of the Gauss curvatures at all points on \(M\). Figure 3: The surface equations in the first and second rows are \(Z=w(X^{2}-Y^{2})\) and \(Z=\sin(\sin(0.5wX))+\cos(\cos(0.5wX))\), respectively. We increase the curvature of the surface by increasing \(w\) and calculate the complexity of the two-dimensional point cloud surface. Also, we investigate the effect of the number of neighbors \(k\) on the complexity of the manifold.
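Putting the pieces together, the curvature estimate admits a short implementation based on the closed-form least-squares solution above. The sketch below reuses `tangent_and_normal` from the previous sketch and is illustrative rather than the paper's Algorithm 5.

```python
# Gauss curvature at z_i via the closed-form fit theta = (O^T O)^{-1} O^T T,
# and the manifold complexity as the average curvature over all points.
import numpy as np

def gauss_curvature(Z, i, k=40, m=2):
    tangent, normal, nbr = tangent_and_normal(Z, i, k=k, m=m)
    diffs = Z[nbr] - Z[i]                           # (k, p)
    O_proj = diffs @ tangent                        # o_j in R^m, shape (k, m)
    O_mat = (O_proj[:, :, None] * O_proj[:, None, :]).reshape(k, m * m)
    T = diffs @ normal                              # (z_i^j - z_i) . u_i
    theta, *_ = np.linalg.lstsq(O_mat, T, rcond=None)   # least-squares solution
    return np.linalg.det(theta.reshape(m, m))       # G(z_i) = det(theta)

def manifold_complexity(Z, k=40, m=2):
    return float(np.mean([gauss_curvature(Z, i, k, m) for i in range(len(Z))]))
```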
Our approach does not require iterative optimization and can be quickly deployed in a deep neural network to calculate the Gauss curvature of the perceptual manifold. Taking the two-dimensional surface in Fig 3 as an example, the surface complexity increases as the surface curvature is artificially increased. This indicates that our proposed complexity measure of perceptual manifold can accurately reflect the changing trend of the curvature degree of the manifold. In addition, Fig 3 shows that the selection of the number of neighboring points hardly affects the monotonicity of the complexity of the perceptual manifold. In our work, we select the number of neighboring points to be \(40\). ## 4 Learning How to Shape Perceptual Manifold The perceptual manifolds in feature space are further decoded by the classification network into predicted probabilities for classification. Intuitively, we speculate that a perceptual manifold is easier to be decoded by the classification network when it is farther away from other perceptual manifolds and flatter. We provide more geometric views on classification and object detection in Appendix I. A model is usually considered to be biased when its performance on classes is inconsistent. In the following, we investigate the effect of the geometry of the perceptual manifold on the model bias and summarize three experimental discoveries. ### Learning Facilitates The Separation Learning typically leads to greater inter-class distance, which equates to greater separation between perceptual manifolds. We trained VGG-16 [48] and ResNet-18 [22] on F-MNIST [62] and CIFAR-10 [32] to explore the effect of the learning process on the separation degree between perceptual manifolds and observed the following phenomenon. As shown in Fig 4, each perceptual manifold is gradually separated from the other manifolds during training. It is noteworthy that the separation is faster in the early stage of training, and the increment of separation degree gradually decreases in the later stage. Separation curves of perceptual manifolds for more classes are presented in Appendix D. ### Learning Reduces The Curvature Experiments are conducted with VGG-16 and ResNet-18 trained on F-MNIST and CIFAR-10, and we find that the perceptual manifold gradually flattens out during training. As shown in Fig 5, the curvature of the perceptual manifold decreases faster in the early stage of training, and it gradually becomes flat with further training. The curvature change curves of perceptual manifolds for more classes are shown in Appendix E. ### Curvature Imbalance and Model Bias Since learning separates perceptual manifolds from each other and also makes perceptual manifolds flatter, it is reasonable to speculate that the separation degree and curvature of perceptual manifolds correlate with class-wise classification difficulty. Experiments are conducted with VGG-16 and ResNet-18 trained on F-MNIST and CIFAR-10. Each class corresponds to a perceptual manifold. As shown in Fig 6, we observe that the negative correlation between the separation degree of the perceptual manifolds and the accuracy of the corresponding class decreases with training, while the correlation between the curvature and the accuracy increases. This implies that existing methods can only mitigate the effect of the separation degree between perceptual manifolds on the model bias, while ignoring the effect of perceptual manifold complexity on the model bias. 
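A minimal sketch of the correlation analysis behind Fig 6 is given below, assuming per-class accuracy, separation degree, and curvature values have already been computed with the measures defined earlier; the function name and the toy numbers are ours.

```python
import numpy as np

def accuracy_geometry_correlations(class_acc, separation, curvature):
    """Pearson correlation of per-class accuracy with the separation degree and
    the curvature of the corresponding perceptual manifolds (cf. Fig 6).
    All arguments are 1-D sequences with one entry per class.
    """
    acc = np.asarray(class_acc, dtype=float)
    pcc_sep = np.corrcoef(acc, np.asarray(separation, dtype=float))[0, 1]
    pcc_curv = np.corrcoef(acc, np.asarray(curvature, dtype=float))[0, 1]
    return pcc_sep, pcc_curv

# toy usage with made-up per-class statistics for 5 classes
print(accuracy_geometry_correlations([0.9, 0.8, 0.7, 0.6, 0.5],
                                     [2.0, 1.8, 1.5, 1.2, 1.0],
                                     [0.2, 0.3, 0.5, 0.7, 0.9]))
```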
Figure 4: The variation curves between the separation degree of perceptual manifolds and training epochs on both datasets. Figure 5: The variation curves between the complexity of perceptual manifolds and training epochs on both datasets. Figure 6: The Pearson correlation coefficients (PCCs) between the accuracy of all classes and the separation degree and complexity of the corresponding perceptual manifolds, respectively. ## 5 Curvature-Balanced Feature Learning The above study shows that it is necessary to focus on the model bias caused by the curvature imbalance among perceptual manifolds. In this section, we propose curvature regularization, which can reduce the model bias and further improve the performance of existing methods. ### Design Principles of The Proposed Approach The proposed curvature regularization needs to satisfy the following three principles to learn curvature-balanced and flat perceptual manifolds. **(1)** The greater the curvature of a perceptual manifold, the stronger the penalty for it. Our experiments show that learning reduces the curvature, so it is reasonable to assume that flatter perceptual manifolds are easier to decode. **(2)** When the curvature is balanced, the penalty strength is the same for each perceptual manifold. **(3)** The sum of the curvatures of all perceptual manifolds tends to decrease. ### Curvature Regularization (CR) Given a \(C\)-class classification task, the \(p\)-dimensional feature embeddings of images from class \(i\) are represented as \(Z_{i}=\left[z_{i}^{1},\ldots,z_{i}^{m_{i}}\right],i=1,\ldots,C\). The mean Gauss curvature \(G_{i},i=1,\ldots,C\) of the corresponding perceptual manifold is calculated from the feature embeddings of each class (Appendix C, Algorithm 5). First, take the inverse of the curvature \(G_{i}\) and perform maximum normalization on it. Then a negative logarithmic transformation is applied to the normalized curvature, and the curvature penalty term of the perceptual manifold \(M^{i}\) is \(-\log(\frac{G_{i}^{-1}}{\max\{G_{1}^{-1},\ldots,G_{C}^{-1}\}})\). Further, the overall curvature regularization term is denoted as \[L_{Curvature}=\sum_{i=1}^{C}-\log(\frac{G_{i}^{-1}}{\max\{G_{1}^{-1},\ldots, G_{C}^{-1}\}}).\] The detailed derivation is shown in Appendix F. In the following, we verify that \(L_{Curvature}\) satisfies the three principles one by one. **(1)**: When the curvature \(G_{i}\) of the perceptual manifold is larger, \(G_{i}^{-1}\) is smaller. Since \(-\log(\cdot)\) is monotonically decreasing, \(-\log(\frac{G_{i}^{-1}}{\max\{G_{1}^{-1},\ldots,G_{C}^{-1}\}})\) increases as \(G_{i}\) increases. \(L_{Curvature}\) is thus consistent with Principle 1. **(2)**: When \(G_{1}=\cdots=G_{C}\), \(\max\{G_{1}^{-1},\ldots,G_{C}^{-1}\}=G_{1}^{-1}=\cdots=G_{C}^{-1}\), so \(-\log(\frac{G_{i}^{-1}}{\max\{G_{1}^{-1},\ldots,G_{C}^{-1}\}})=0,i=1,\ldots,C\). \(L_{Curvature}\) follows Principle 2. **(3)**: The curvature penalty term of the perceptual manifold \(M^{i}\) is \(0\) when \(G_{i}=\min\{G_{1},\ldots,G_{C}\}\). Since the greater the curvature, the greater the penalty, our method aims to bring the curvature of all perceptual manifolds down to \(\min\{G_{1},\ldots,G_{C}\}\). Obviously, \(\sum_{i=1}^{C}G_{i}\geq C\cdot\min\{G_{1},\ldots,G_{C}\}\), so our approach promotes curvature balance while also making all perceptual manifolds flatter, which satisfies Principle 3. The curvature regularization can be combined with any loss function.
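A minimal sketch of the curvature regularization term, assuming per-class curvature estimates \(G_{1},\ldots,G_{C}\) are already available (the function name is ours):

```python
import torch

def curvature_regularization(curvatures):
    """Curvature regularization term L_Curvature from per-class curvatures.

    curvatures: 1-D tensor [G_1, ..., G_C] of (positive) mean curvature
    estimates, one per perceptual manifold.
    """
    inv = 1.0 / curvatures                  # G_i^{-1}
    normalized = inv / inv.max()            # maximum normalization
    return (-torch.log(normalized)).sum()   # sum of per-class penalty terms

# toy check of the design principles
G_imbalanced = torch.tensor([0.5, 1.0, 2.0])
G_balanced = torch.ones(3)
print(curvature_regularization(G_imbalanced))  # positive, largest term for G = 2.0
print(curvature_regularization(G_balanced))    # zero penalty when curvatures are equal
```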
Since the correlation between curvature and accuracy increases with training, we balance the curvature regularization with other losses using a logarithmic function with a hyperparameter \(\tau\), and the overall loss is denoted as \[L=L_{original}+\frac{\log_{\tau}epoch}{(\frac{L_{Curvature}}{L_{original}}). detach()}\times L_{Curvature},\ \tau>1.\] The term \((\frac{L_{Curvature}}{L_{original}}).detach()\) aims to make the curvature regularization loss of the same magnitude as the original loss. We investigate reasonable values of \(\tau\) in experiments (Sec 6.2). The design principle of curvature regularization is compatible with the learning objective of the model, and our experiments show that the effect of curvature imbalance on model bias has been neglected in the past. Thus curvature regularization is not in conflict with \(L_{original}\), as evidenced by our outstanding performance on multiple datasets. ### Dynamic Curvature Regularization (DCR) The curvature of perceptual manifolds varies with the model parameters during training, so it is necessary to update the curvature of each perceptual manifold in real-time. However, there is a challenge: only one batch of features is available at each iteration, and it is not possible to obtain all the features to calculate the curvature of the perceptual manifolds. If the features of all images from the training set are extracted using the current network at each iteration, it will greatly increase the time cost of training. ``` 0: Training set \(D=\{(x_{i},y_{i})\}_{i=1}^{M}\). A CNN \(\{f(x,\theta_{1}),g(z,\theta_{2})\}\), where \(f()\) and \(g(.)\) denote the feature sub-network and classifier, respectively. The training epoch is \(N\). 1: Initialize the storage pool Q 2:for\(epoch=1\) to \(N\)do 3:for\(iteration=0\) to \(\frac{M}{batch\ size}\)do 4: Sample a mini-batch \(\{(x_{i},y_{i})\}_{i=1}^{batch\ size}\) from \(D\). 5: Calculate feature embeddings \(z_{i}=f(x_{i},\theta_{1}),i=1,\ldots,batch\ size\). 6: Store \(z_{i}\) and label \(y_{i}\) into \(Q\). 7:if\(epoch<n\)then 8:if\(epoch>1\)then 9: Dequeue the oldest batch of features from \(Q\). 10:endif 11: Calculate loss \(L_{original}\). 12:else 13: Dequeue the oldest batch of features from \(Q\). 14: Calculate the curvature of each perceptual manifold. 15: Calculate loss: \(L=L_{original}+\frac{\log_{\tau}epoch}{(\frac{L_{Curvature}}{L_{original}}). detach()}\times L_{Curvature}\). 16:endif 17: Perform back propagation: \(L.backward()\). 18:\(optimizer.step()\). 19:endfor 20:endfor ``` **Algorithm 1** End-to-end training with DCR Inspired by [3, 40], we design a first-in-first-out storage pool to store the latest historical features of all images. The slow drift phenomenon of features found by [59] ensures the reliability of using historical features to approximate the current features. We show the training process in Algorithm 1. Specifically, the features of all batches are stored in the storage pool at the first epoch. To ensure that the drift of the features is small enough, it is necessary to train another \(n\) epochs to update the historical features. Experiments of [3] on large-scale datasets show that \(n\) taken as \(5\) is sufficient, so \(n\) is set to \(5\) in this work. When \(epoch>n\), the oldest batch of features in the storage pool is replaced with new features at each iteration, and the curvature of each perceptual manifold is calculated using all features in the storage pool. The curvature regularization term is updated based on the latest curvature. 
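Below is a condensed, non-authoritative PyTorch sketch of the training loop of Algorithm 1. The callables `curvature_fn` and `reg_fn` are assumptions standing in for the per-class curvature estimate and the regularization term above; the gradient flows only through the current batch of features, while historical features are kept detached in the storage pool.

```python
import math
from collections import deque

import torch
import torch.nn.functional as F

def train_with_dcr(model, classifier, loader, optimizer,
                   curvature_fn, reg_fn, epochs, tau=100.0, warmup=5):
    """Condensed sketch of end-to-end training with DCR (cf. Algorithm 1).

    curvature_fn(features, labels) is assumed to return per-class curvatures
    [G_1, ..., G_C]; reg_fn(G) the curvature regularization term.
    """
    pool = deque()  # first-in-first-out storage of (features, labels) batches
    for epoch in range(1, epochs + 1):
        for images, labels in loader:
            feats = model(images)
            logits = classifier(feats)
            loss_orig = F.cross_entropy(logits, labels)

            if epoch > 1:
                pool.popleft()                     # dequeue the oldest batch of features
            pool.append((feats.detach(), labels))  # store the latest features and labels

            if epoch <= warmup:
                loss = loss_orig                   # warm-up: historical features not yet reliable
            else:
                # curvature of each perceptual manifold from pooled features;
                # only the current batch keeps its gradient
                hist = [f for f, _ in list(pool)[:-1]]
                all_feats = torch.cat(hist + [feats])
                all_labels = torch.cat([y for _, y in pool])
                loss_cr = reg_fn(curvature_fn(all_feats, all_labels))
                weight = math.log(epoch, tau)                            # log_tau(epoch)
                scale = (loss_cr / loss_orig).detach().clamp_min(1e-12)  # magnitude matching
                loss = loss_orig + weight / scale * loss_cr

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```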
**It should be noted** that for decoupled training, CR is applied in the feature learning stage. Our method is employed in training only and does not affect the inference speed of the model. ## 6 Experiments ### Datasets and Implementation Details We comprehensively evaluate the effectiveness and generality of curvature regularization on both long-tailed and non-long-tailed datasets. The experiment is divided into two parts, the first part tests curvature regularization on four long-tailed datasets, namely CIFAR-10-LT, CIFAR-100-LT [14], ImageNet-LT [14, 47], and iNaturalist2018 [55]. The second part validates the curvature regularization on two non-long tail datasets, namely CIFAR-100 [32] and ImageNet [47]. For a fair comparison, the training and test images of all datasets are officially split, and the Top-1 accuracy on the test set is utilized as a performance metric. In addition, we train models on CIFAR-100, CIFAR-10/100-LT with a single NVIDIA 2080Ti GPU and ImageNet, ImageNet-LT, and iNaturalist2018 with eight NVIDIA 2080Ti GPUs. Please refer to Appendix G for a detailed description of the dataset and experimental setup. ### Effect of \(\tau\) When \(\tau=epoch\), \(\log_{\tau}epoch=1\), so the selection of \(\tau\) is related to the number of epochs. When the correlation between curvature and accuracy exceeds the correlation between the separation degree and accuracy, we expect \(\log_{\tau}epoch>1\), which means that the curvature regularization loss is greater than the original loss. Following the [45] setting, all models are trained for \(200\) epochs, so \(\tau\) is less than \(200\). To search for the proper value of \(\tau\), experiments are conducted for CE + CR with a range of \(\tau\), and the results are shown in Fig 7. Large-scale datasets require more training epochs to keep the perceptual manifolds away from each other, while small-scale datasets can achieve this faster, so we set \(\tau=100\) on CIFAR-10/100-LT and CIFAR-100, and \(\tau=120\) on ImageNet, ImageNet-LT, and iNaturalist2018. ### Experiments on Long-Tailed Datasets #### 6.3.1 Evaluation on CIFAR-10/100-LT Table 1 summarizes the improvements of CR for several state-of-the-art methods on long-tailed CIFAR-10 and CIFAR-100, and we observe that CR significantly improves all methods. For example, in the setting of IF \(200\), CR results in performance gains of \(2.3\%\), \(2.1\%\), and \(1.5\%\) for CE, Focal loss [37], and CB loss [14], respectively. When CR is applied to feature training, the performance of BBN [77] is improved by more than \(1\%\) on each dataset, which again validates that curvature imbalance negatively affects the learning of classifiers. When CR is applied to several state-of-the-art methods (e.g., RIDE + CMO [45] (2022) and GCL [34] (2022)), CR achieves higher classification accuracy with all IF settings. 
\begin{table} \begin{tabular}{l|c c c c|c c c} \hline \hline Dataset & \multicolumn{4}{c|}{CIFAR-10-LT} & \multicolumn{4}{c}{CIFAR-100-LT} \\ \hline Backbone Net & \multicolumn{4}{c}{ResNet-32} \\ \hline imbalance factor & 200 & 100 & 50 & 10 & 200 & 100 & 50 & 10 \\ \hline MiSLAS [76] & 77.3 & 82.1 & **85.7** & **90.0** & 42.3 & 47.0 & 52.3 & **63.2** \\ LDAM-DRW [8] & - & 77.0 & 81.0 & 88.2 & - & 42.0 & 46.6 & 58.7 \\ \hline Cross Entropy & 65.6 & 70.3 & 74.8 & 86.3 & 34.8 & 38.2 & 43.8 & 55.7 \\ + CR & 67.9 & 72.6 & 76.2 & 89.5 & 36.9 & 40.5 & 45.1 & 57.4 \\ \hline Focal Loss [37] & 65.2 & 70.3 & 76.7 & 86.6 & 35.6 & 38.4 & 44.3 & 55.7 \\ + CR & 67.3 & 71.8 & 79.1 & 88.4 & 37.5 & 40.2 & 45.2 & 58.3 \\ \hline CB Loss [14] & 68.8 & 74.5 & 79.2 & 87.4 & 36.2 & 39.6 & 45.3 & 57.9 \\ + CR & 70.3 & 75.8 & 79.8 & 89.1 & 38.5 & 40.7 & 46.8 & 59.2 \\ \hline BBN [77] & - & 79.8 & 82.1 & 88.3 & - & 42.5 & 47.0 & 59.1 \\ + CR [34] & - & 81.2 & 83.5 & 89.4 & - & 43.7 & 48.1 & 60.0 \\ \hline De-c-TDE [53] & - & 80.6 & 83.6 & 88.5 & - & 44.1 & 50.3 & 59.6 \\ + CR & - & 81.8 & 84.5 & **89.9** & - & 45.7 & 51.4 & 60.3 \\ \hline RIDE (4*) [58] & - & - & - & - & - & 48.7 & **59.0** & 58.4 \\ + CR & - & - & - & - & - & 49.8 & **59.8** & 59.5 \\ \hline RIDE + CMO [45] & - & - & - & - & - & **50.0** & 53.0 & 60.2 \\ + CR & - & - & - & - & - & **50.7** & 54.3 & **61.4** \\ \hline GCL [34] & **79.0** & **82.7** & 85.5 & **-** & **44.9** & 48.7 & 53.6 & - \\ + CR & **79.9** & **83.5** & **86.8** & - & **45.6** & 49.8 & 55.1 & - \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison on CIFAR-10-LT and CIFAR-100-LT. The accuracy (%) of Top-1 is reported. The best and second-best results are shown in **underlined bold** and **bold**, respectively. Figure 7: The effect of \(\tau\) on accuracy for both datasets. #### 6.3.2 Evaluation on ImageNet-LT and iNaturalist2018 The results on ImageNet-LT and iNaturalist2018 are shown in Table 2. We not only report the overall performance of CR, but also additionally add the performance on three subsets of Head (more than 100 images), Middle (20-100 images), and Tail (less than 20 images). From Table 2, we observe the following three conclusions: first, CR results in significant overall performance improvements for all methods, including \(2.9\%\) and \(2.4\%\) improvements on ImageNet-LT for CE and Focal loss, respectively. Second, when CR is combined with feature training, the overall performance of BBN [77] is improved by \(1.5\%\) and \(1.3\%\) on the two datasets, respectively, indicating that curvature-balanced feature learning facilitates classifier learning. Third, our approach still boosts model performance when combined with advanced techniques (RIDE [58] (2021), RIDE + CMO [45] (2022)), suggesting that curvature-balanced feature learning has not yet been considered by other methods. ### Experiments on Non-Long-Tailed Datasets Curvature imbalance may still exist on sample-balanced datasets, so we evaluate CR on non-long-tailed datasets. Table 3 summarizes the improvements of CR on CIFAR-100 and ImageNet for various backbone networks, and we observe that CR results in approximately \(1\%\) performance improvement for all backbone networks. In particular, the accuracy of CE + CR exceeds CE by \(1.5\%\) on CIFAR-100 when using ResNet-18 [22] as the backbone network. The experimental results show that our proposed curvature regularization is applicable to non-long-tailed datasets and compatible with existing backbone networks and methods. 
### Curvature Regularization Reduces Model Bias Here we explore how curvature regularization improves the model performance. Measuring the model bias with the variance of the accuracy of all classes [50], Fig 1 and Fig 8 show that curvature regularization reduces the bias of the models trained on CIFAR-100-LT, ImageNet-LT, CIFAR-100, and ImageNet. By combining Tables 1 and 2, it can be found that curvature regularization reduces the model bias mainly by improving the performance of the tail classes and does not compromise the performance of the head classes, thus improving the overall performance. In addition, in Appendix H we answer the following two questions: (1) Is the curvature more balanced after training with CR? (2) Did the correlation between curvature imbalance and class accuracy decrease after training with CR?
2305.10846
Non-deterministic approximation operators: ultimate operators, semi-equilibrium semantics and aggregates (full version)
Approximation fixpoint theory (AFT) is an abstract and general algebraic framework for studying the semantics of non-monotonic logics. In recent work, AFT was generalized to non-deterministic operators, i.e.\ operators whose range are sets of elements rather than single elements. In this paper, we make three further contributions to non-deterministic AFT: (1) we define and study ultimate approximations of non-deterministic operators, (2) we give an algebraic formulation of the semi-equilibrium semantics by Amendola, et al., and (3) we generalize the characterisations of disjunctive logic programs to disjunctive logic programs with aggregates.
Jesse Heyninck, Bart Bogaerts
2023-05-18T09:59:12Z
http://arxiv.org/abs/2305.10846v1
Non-deterministic approximation operators: ultimate operators, semi-equilibrium semantics and aggregates (full version)1 ###### Abstract Approximation fixpoint theory (AFT) is an abstract and general algebraic framework for studying the semantics of non-monotonic logics. In recent work, AFT was generalized to non-deterministic operators, i.e. operators whose range are sets of elements rather than single elements. In this paper, we make three further contributions to non-deterministic AFT: (1) we define and study ultimate approximations of non-deterministic operators, (2) we give an algebraic formulation of the semi-equilibrium semantics by Amendola, et al., and (3) we generalize the characterisations of disjunctive logic programs to disjunctive logic programs with aggregates. This is an extended version of our paper that will be presented at ICLP 2023 and will appear in the special issue of TPLP with the ICLP proceedings. Approximation fixpoint theory, Disjunctive logic programming, Semi-equilibrium semantics ## 1 Introduction Knowledge representation and reasoning (KRR), by its very nature, is concerned with the study of a wide variety of languages and formalisms. In view of this, _unifying_ frameworks that allow for the language-independent study of aspects of KRR is essential. One framework with strong unifying potential is _approximation fixpoint theory_ (AFT) (Denecker et al. 2000), a purely algebraic theory which was shown to unify the semantics of, among others, logic programming default logic and autoepistemic logic. The central objects of study of AFT are _(approximating) operators_ and their _fixpoints_. For logic programming for instance, it was shown that Fitting's three-valued immediate consequence operator is an approximating operator of Van Emden and Kowalski's two-valued immediate consequence operator and that all major semantics of (normal) logic programming can be derived directly from this approximating operator. Moreover, this observation does not only hold for logic programming: also for a wide variety of other domains, it is straightforward how to derive an approximating operator, and the major semantics can be recovered from that approximator using purely algebraic means (an overview is given by Heyninck et al. (2022)). This has in turn inspired others to define the semantics of non-monotonic formalisms _directly_ using AFT (Bogaerts, 2019), putting AFT forward not only as a framework to study existing semantics, but also as a framework to define them. The advantage is that AFT-based semantics are guaranteed to follow well-established principles. such as _groundedness_(Bogaerts, 2015). Moreover, it is often easier to define a semantic operator, than to define the semantics from scratch. Recently, AFT was generalized to also capture _non-deterministic operators_(Heyninck et al., 2022) which allow for different options or choices in their output. A prime example of the occurrence of non-determinism in KRR is _disjunctive logic programming_, and it was indeed shown that many semantics of disjunctive logic programming (specifically the weakly supported, (partial) stable, and well-founded semantics (Alcantara et al., 2005)) are captured by non-deterministic AFT. In this paper, we make further contributions to the study of non-deterministic AFT, with a particular emphasis on disjunctive logic programs. On the one hand, (in Section 3) we deepen the theory of non-deterministics AFT by investigating so-called _ultimate semantics_. For standard AFT, Denecker et al. 
(2002) have shown that with every two-valued operator, we can uniquely associate a most-precise approximator called the _ultimate approximator_. When defining semantics of new formalisms, this even takes the need of defining an approximator away, since it suffices to define an _exact_ operator and its ultimate approximator comes for free.1 Our first contribution is to show how ultimate approximations can be obtained for non-deterministic AFT, which we later illustrate using disjunctive logic programs with aggregates. This means we give the first constructive method for obtaining non-deterministic approximation operators. On the other hand, we also _apply_ non-deterministic AFT to two areas that have thus far been out of reach of AFT. In Section 4, we use it to define an algebraic generalisation of the _semi-equilibrium semantics_, a semantics originally formulated for disjunctive logic programs (Amendola et al., 2016) but now, thanks to our results, available to any operator-based semantics. In Section 5, we apply the theory of non-deterministic AFT to disjunctive logic programs with _aggregates_ in the body, giving rise to a family of semantics for such programs. Footnote 1: However, ultimate semantics often come at the cost of increased computational complexity compared to their standard counterparts. ## 2 Background and Preliminaries In this section, we recall disjunctive logic programming (Sec. 2.1), approximation fixpoint theory for deterministic operators (Sec. 2.2) and non-deterministic operators (Sec. 2.3). ### Disjunctive Logic Programming In what follows we consider a propositional2 language \(\mathfrak{L}\), whose atomic formulas are denoted by \(p,q,r\) (possibly indexed), and that contains the propositional constants \(\mathsf{T}\) (representing truth), \(\mathsf{F}\) (falsity), \(\mathsf{U}\) (unknown), and \(\mathsf{C}\) (contradictory information). The connectives in \(\mathfrak{L}\) include negation \(\neg\), conjunction \(\wedge\), disjunction \(\vee\), and implication \(\leftarrow\). Formulas are denoted by \(\phi\), \(\psi\), \(\delta\) (again, possibly indexed). Logic programs in \(\mathfrak{L}\) may be divided to different kinds as follows: a (propositional) _disjunctive logic program_\(\mathcal{P}\) in \(\mathfrak{L}\) (a dlp in short) is a finite set of rules of the form \(\bigvee_{i=1}^{n}p_{i}\ \leftarrow\ \psi\), where the head \(\bigvee_{i=1}^{n}p_{i}\) is a non-empty disjunction of atoms, and the body \(\psi\) is a formula not containing \(\leftarrow\). A rule is called _normal_ (nlp), if its body is a conjunction of literals (i.e., atomic formulas or negated atoms), and its head is atomic. A rule is _disjunctively normal_ if its body is a conjunction of literals and its head is a non-empty disjunction of atoms. We will use these denominations for programs if all rules in the program satisfy the denomination, e.g. a program is normal if all its rules are normal. The set of atoms occurring in \(\mathcal{P}\) is denoted \(\mathcal{A}_{\mathcal{P}}\). The semantics of dlps are given in terms of _four-valued interpretations_. A _four-valued interpretation_ of a program \(\mathcal{P}\) is a pair \((x,y)\), where \(x\subseteq\mathcal{A}_{\mathcal{P}}\) is the set of the atoms that are assigned a value in \(\{\mathsf{T},\mathsf{C}\}\) and \(y\subseteq\mathcal{A}_{\mathcal{P}}\) is the set of atoms assigned a value in \(\{\mathsf{T},\mathsf{U}\}\). 
We define \(-\mathsf{T}=\mathsf{F}\), \(-\mathsf{F}=\mathsf{T}\) and \(\mathsf{X}=-\mathsf{X}\) for \(\mathsf{X}=\mathsf{C},\mathsf{U}\). Truth assignments to complex formulas are as follows: * \((x,y)(p)=\begin{cases}\mathsf{T}&\text{if $p\in x$ and $p\in y$},\\ \mathsf{U}&\text{if $p\not\in x$ and $p\in y$},\\ \mathsf{F}&\text{if $p\not\in x$ and $p\not\in y$},\\ \mathsf{C}&\text{if $p\in x$ and $p\not\in y$}.\end{cases}\) A four-valued interpretation of the form \((x,x)\) may be associated with a _two-valued_ (or _total_) interpretation \(x\). \((x,y)\) is a _three-valued_ (or _consistent_) interpretation, if \(x\subseteq y\). Interpretations are compared by two order relations which form a pointwise extension of the structure \(\mathcal{FOUR}\) consisting of \(\mathsf{T},\mathsf{F},\mathsf{C}\) and \(\mathsf{U}\) with \(\mathsf{U}<_{i}\mathsf{F},\mathsf{T}<_{i}\mathsf{C}\) and \(\mathsf{F}<_{t}\mathsf{C},\mathsf{U}<_{t}\mathsf{T}\). The pointwise extension of these orders corresponds to the _information order_, which is equivalently defined as \((x,y)\leq_{i}(w,z)\) iff \(x\subseteq w\) and \(z\subseteq y\), and the _truth order_, where \((x,y)\leq_{t}(w,z)\) iff \(x\subseteq w\) and \(y\subseteq z\). The immediate consequence operator for normal programs (van Emden and Kowalski 1976) is extended to dlp's as follows: **Definition 1** (_Immediate Consquence operator for dlp's_): Given a dlp \(\mathcal{P}\) and a two-valued interpretation \(x\), we define: (1) \(\mathit{HD}_{\mathcal{P}}(x)=\{\Delta\ |\ \bigvee\Delta\leftarrow\psi\in \mathcal{P}\text{ and }(x,x)(\psi)=\mathsf{T}\}\); and (2) \(\mathit{IC}_{\mathcal{P}}(x)=\{y\subseteq\ \bigcup\mathit{HD}_{\mathcal{P}}(x)\ |\ \forall\Delta\in \mathit{HD}_{\mathcal{P}}(x),\ y\cap\Delta\neq\emptyset\}\). Thus, \(\mathit{IC}_{\mathcal{P}}(x)\) consists of sets of atoms that occur in activated rule heads, each set contains at least one representative from every disjuncts of a rule in \(\mathcal{P}\) whose body is \(x\)-satisfied. Denoting by \(\wp(\mathcal{S})\) the powerset of \(\mathcal{S}\), \(\mathit{IC}_{\mathcal{P}}\) is an operator on the lattice \(\langle\wp(\mathcal{A}_{\mathcal{P}}),\subseteq\rangle\).3 Footnote 3: The operator \(\mathit{IC}_{\mathcal{P}}\) is a generalization of the immediate consequence operator from (Fernández and Minker 1995, Definition 3.3), where the minimal sets of atoms in \(\mathit{IC}_{\mathcal{P}}(x)\) are considered. However, this requirement of minimality is neither necessary nor desirable in the consequence operator (Heyminck et al. 2022). Given a dlp \(\mathcal{P}\) a consistent interpretation \((x,y)\) is a _(three-valued) model_ of \(\mathcal{P}\), if for every \(\phi\leftarrow\psi\in\mathcal{P}\), \((x,y)(\phi)\geq_{t}(x,y)(\psi)\). The GL-transformation \(\frac{\mathcal{P}}{(x,y)}\) of a disjunctively normal dlp \(\mathcal{P}\) with respect to a consistent \((x,y)\), is the positive program obtained by replacing in every rule in \(\mathcal{P}\) of the form \(p_{1}\vee\ldots\lor p_{n}\leftarrow\bigwedge_{i=1}^{m}q_{i}\wedge\bigwedge_{j= 1}^{n}\neg r_{j}\) a negated literal \(\neg r_{i}\) (\(1\leq i\leq k\)) by \((x,y)(\neg r_{i})\). \((x,y)\) is a _three-valued stable model_ of \(\mathcal{P}\) iff it is a \(\leq_{t}\)-minimal model of \(\frac{\mathcal{P}}{(x,y)}\).4 Footnote 4: An overview of other semantics for dlp’s can be found in previous work on non-deterministic AFT (Heyminck et al. 2022). 
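As an illustration of Definition 1, the following is a small Python sketch of \(\mathit{HD}_{\mathcal{P}}\) and \(\mathit{IC}_{\mathcal{P}}\) for disjunctively normal programs, with rules encoded as triples of head atoms, positive body atoms and negated body atoms; the encoding and the helper names are ours.

```python
from itertools import chain, combinations

def powerset(atoms):
    atoms = list(atoms)
    return [set(c) for c in chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))]

def hd(program, x):
    """HD_P(x): heads of rules whose body is true in the two-valued interpretation x."""
    return [set(head) for head, pos, neg in program if pos <= x and not (neg & x)]

def ic(program, x):
    """IC_P(x): sets of head atoms meeting every activated head disjunction."""
    heads = hd(program, x)
    universe = set().union(*heads) if heads else set()
    return [y for y in powerset(universe) if all(y & h for h in heads)]

# example dlp, rules encoded as (head_atoms, positive_body, negative_body):
# P = { p v q <- not q }
P = [({'p', 'q'}, set(), {'q'})]
print(ic(P, set()))    # e.g. [{'p'}, {'q'}, {'p', 'q'}]  (order may vary)
print(ic(P, {'q'}))    # [set()]: no rule is activated
```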
### Approximation Fixpoint Theory We now recall basic notions from approximation fixpoint theory (AFT), as described by Denecker, Marek and Truszczynski (2000). We restrict ourselves here to the necessary formal details, and refer to more detailed introductions by Denecker, Marek and Truszczynski (2000) and Bogaerts (2015) for more informal details. AFT introduces constructive techniques for approximating the fixpoints of an operator \(O\) over a lattice \(L=\langle\mathcal{L},\leq\rangle\).5 Approximations are pairs of elements \((x,y)\). Thus, given a lattice \(L=\langle\mathcal{L},\leq\rangle\), the induced _bilattice_ is the structure \(L^{2}=\langle\mathcal{L}^{2},\leq_{i},\leq_{t}\rangle\), in which \(\mathcal{L}^{2}=\mathcal{L}\times\mathcal{L}\), and for every \(x_{1},y_{1},x_{2},y_{2}\in\mathcal{L}\), \((x_{1},y_{1})\leq_{i}(x_{2},y_{2})\) if \(x_{1}\leq x_{2}\) and \(y_{1}\geq y_{2}\), and \((x_{1},y_{1})\leq_{t}(x_{2},y_{2})\) if \(x_{1}\leq x_{2}\) and \(y_{1}\leq y_{2}\).6 Footnote 5: Recall that a lattice is a partially ordered set in which every pair of elements has a least upper bound and greatest lower bound denoted by \(\sqcup\) and \(\sqcap\), respectively. If every set of elements has a least upper bound and greatest lower bound, we call the lattice complete. Footnote 6: Note that we use small letters to denote elements of lattice, capital letters to denote sets of elements, and capital calligraphic letters to denote sets of sets of elements. An _approximating operator_\(\mathcal{O}:\mathcal{L}^{2}\to\mathcal{L}^{2}\) of an operator \(O:\mathcal{L}\to\mathcal{L}\) is an operator that maps every approximation \((x,y)\) of an element \(z\) to an approximation \((x^{\prime},y^{\prime})\) of another element \(O(z)\), thus approximating the behavior of the approximated operator \(O\). In more details, an operator \(\mathcal{O}:\mathcal{L}^{2}\to\mathcal{L}^{2}\) is \(\leq_{i}\)_-monotonic_, if when \((x_{1},y_{1})\leq_{i}(x_{2},y_{2})\), also \(\mathcal{O}(x_{1},y_{1})\leq_{i}\mathcal{O}(x_{2},y_{2})\); \(\mathcal{O}\) is _approximating_, if it is \(\leq_{i}\)-monotonic and for any \(x\in\mathcal{L}\), \(\mathcal{O}_{l}(x,x)=\mathcal{O}_{u}(x,x)\).7\(\mathcal{O}\)_approximates_ of \(O:\mathcal{L}\to\mathcal{L}\), if it is \(\leq_{i}\)-monotonic and \(\mathcal{O}(x,x)=(O(x),O(x))\) (for every \(x\in\mathcal{L}\)). Finally, for a complete lattice \(L\), let \(\mathcal{O}:\mathcal{L}^{2}\to\mathcal{L}^{2}\) be an approximating operator. We denote: \(\mathcal{O}_{l}(\cdot,y)=\lambda x.\mathcal{O}_{l}(x,y)\) and similarly for \(\mathcal{O}_{u}\). The _stable operator for \(\mathcal{O}\)_ is then defined as \(S(\mathcal{O})(x,y)=(\mathrm{Ifp}(\mathcal{O}_{l}(.,y)),\mathrm{Ifp}(\mathcal{ O}_{u}(x,.))\), where \(\mathrm{Ifp}(O)\) denotes the least fixpoint of an operator \(O\). Footnote 7: In some papers (e.g., Denecker et al. (2000)), an approximation operator is defined as a symmetric \(\leq_{i}\)-monotonic operator, i.e. a \(\leq_{i}\)-monotonic operator s.t. for every \(x,y\in\mathcal{L},\mathcal{O}(x,y)=(\mathcal{O}_{l}(x,y),\mathcal{O}_{l}(y,x))\) for some \(\mathcal{O}_{l}:\mathcal{L}^{2}\to\mathcal{L}\). However, the weaker condition we take here (taken from Denecker et al. (2002)) is actually sufficient for most results on AFT. Approximating operators induce a family of _fixpoint semantics_. 
Given a complete lattice \(L=\langle\mathcal{L},\leq\rangle\) and an approximating operator \(\mathcal{O}:\mathcal{L}^{2}\to\mathcal{L}^{2}\), \((x,y)\) is a _Kripke-Kleene fixpoint_ of \(\mathcal{O}\) if \((x,y)=\mathrm{Ifp}_{\leq_{i}}(\mathcal{O}(x,y))\); \((x,y)\) is a _three-valued stable fixpoint_ of \(\mathcal{O}\) if \((x,y)=S(\mathcal{O})(x,y)\); \((x,y)\) is a _two-valued stable fixpoints_ of \(\mathcal{O}\) if \(x=y\) and \((x,x)=S(\mathcal{O})(x,x)\); \((x,y)\) is the _well-founded fixpoint_ of \(\mathcal{O}\) if it is the \(\leq_{i}\)-minimal (three-valued) stable fixpoint of \(\mathcal{O}\). ### Non-deterministic approximation fixpoint theory AFT was generalized to non-deterministic operators, i.e. operators which map elements of a lattice to a set of elements of that lattice (like the operator \(IC_{\mathcal{P}}\) for DLPs) by Heyninck et al. (2022). We recall the necessary details, referring to the original paper for more details and explanations. A _non-deterministic operator on_\(\mathcal{L}\) is a function \(O:\mathcal{L}\to\wp(\mathcal{L})\setminus\{\emptyset\}\). For example, the operator \(IC_{\mathcal{P}}\) from Definition 1 is a non-deterministic operator on the lattice \(\langle\wp(\mathcal{A}_{\mathcal{P}}),\subseteq\rangle\). As the ranges of non-deterministic operators are _sets_ of lattice elements, one needs a way to compare them, such as the _Smyth order_ and the _Hoare order_. Let \(L=\langle\mathcal{L},\leq\rangle\) be a lattice, and let \(X,Y\in\wp(\mathcal{L})\). Then: \(X\preceq_{L}^{\mathcal{S}}Y\) if for every \(y\in Y\) there is an \(x\in X\) such that \(x\leq y\); and \(X\preceq_{L}^{H}Y\) if for every \(x\in X\) there is a \(y\in Y\) such that \(x\leq y\). Given some \(X_{1},X_{2},Y_{1},Y_{2}\subseteq\mathcal{L}\), \(X_{1}\times Y_{1}\preceq_{L}^{A}X_{2}\times Y_{2}\) iff \(X_{1}\preceq_{L}^{\mathcal{S}}X_{2}\) and \(Y_{2}\preceq_{L}^{H}Y_{1}\). Let \(L=\langle\mathcal{L},\leq\rangle\) be a lattice. Given an operator \(\mathcal{O}:\mathcal{L}^{2}\to\mathcal{L}^{2}\), we denote by \(\mathcal{O}_{l}\) the operator defined by \(\mathcal{O}_{l}(x,y)=\mathcal{O}(x,y)_{1}\), and similarly for \(\mathcal{O}_{u}(x,y)=\mathcal{O}(x,y)_{2}\). An operator \(\mathcal{O}:\mathcal{L}^{2}\to\wp(\mathcal{L})\backslash\emptyset\times\wp( \mathcal{L})\backslash\emptyset\) is called a _non-deterministic approximating operator_ (ndao, for short), if it is \(\preceq_{i}^{A}\)-monotonic (i.e. \((x_{1},y_{1})\leq_{i}(x_{2},y_{2})\) implies \(\mathcal{O}(x_{1},y_{1})\preceq_{i}^{A}\mathcal{O}(x_{2},y_{2})\)), and is _exact_ (i.e., for every \(x\in\mathcal{L}\), \(\mathcal{O}(x,x)=\mathcal{O}_{l}(x,x)\times\mathcal{O}_{l}(x,x)\)). We restrict ourselves to ndaos ranging over consistent pairs \((x,y)\). We finally define the stable operator (given an ndao \(\mathcal{O}\)) as follows. The _complete lower stable operator_ is defined by (for any \(y\in\mathcal{L}\)) \(C(\mathcal{O}_{l})(y)\ =\{x\in\mathcal{L}\ |\ x\in\mathcal{O}_{l}(x,y)\ \text{and}\ \neg\exists x^{\prime}<x:x^{ \prime}\in\mathcal{O}_{l}(x^{\prime},y)\}\). The _complete upper stable operator_ is defined by (for any \(x\in\mathcal{L}\)) \(C(\mathcal{O}_{u})(x)\ =\ \{y\in\mathcal{L}\ |\ y\in\mathcal{O}_{u}(x,y)\ \text{and}\ \neg\exists y^{ \prime}<y:y^{\prime}\in\mathcal{O}_{u}(x,y^{\prime})\}\). The _stable operator_: \(S(\mathcal{O})(x,y)\ =\ C(\mathcal{O}_{l})(y)\times C(\mathcal{O}_{u})(x)\). 
\((x,y)\) is a _stable fixpoint_ of \(\mathcal{O}\) if \((x,y)\in S(\mathcal{O})(x,y)\).8 Footnote 8: Notice that we slightly abuse notation and write \((x,y)\in S(\mathcal{O})(x,y)\) to abbreviate \(x\in(S(\mathcal{O})(x,y))_{1}\) and \(y\in(S(\mathcal{O})(x,y))_{2}\), i.e. \(x\) is a lower bound generated by \(S(\mathcal{O})(x,y)\) and \(y\) is an upper bound generated by \(S(\mathcal{O})(x,y)\). Other semantics, e.g. the well-founded state and the Kripke-Kleene fixpoints and state, are defined by Heyninck et al. (2022) and can be immediately obtained once an ndao is formulated. _Example 1_: An example of an ndao approximating \(IC_{\mathcal{P}}\) (Definition 1) is defined as follows (given a dlp \(\mathcal{P}\), an interpretation \((x,y)\), and \(\dagger\in\{l,u\}\)): \(\mathcal{HD}^{l}_{\mathcal{P}}(x,y)=\{\Delta\ |\ \bigvee\Delta\leftarrow\phi\in \mathcal{P},(x,y)(\phi)\geq_{t}\mathsf{C}\}\), \(\mathcal{HD}^{u}_{\mathcal{P}}(x,y)=\{\Delta\ |\ \bigvee\Delta \leftarrow\phi\in\mathcal{P},(x,y)(\phi)\geq_{t}\mathsf{U}\}\), \(\mathcal{IC}^{\dagger}_{\mathcal{P}}(x,y)=\{x_{1}\subseteq\bigcup\mathcal{HD}^{\dagger}_{ \mathcal{P}}(x,y)\ |\ \forall\Delta\in\mathcal{HD}^{\dagger}_{\mathcal{P}}(x,y),\ x_{1}\cap \Delta\neq\emptyset\}\), and \(\mathcal{IC}_{\mathcal{P}}(x,y)=(\mathcal{IC}^{l}_{\mathcal{P}}(x,y),\mathcal{ IC}^{u}_{\mathcal{P}}(x,y))\). Consider the following dlp: \(\mathcal{P}=\{p\lor q\leftarrow\neg q\}\). The operator \(\mathcal{IC}^{l}_{\mathcal{P}}\) behaves as follows: * For any interpretation \((x,y)\) for which \(q\in x\), \(\mathcal{HD}^{l}_{\mathcal{P}}(x,y)=\emptyset\) and thus \(\mathcal{IC}^{l}_{\mathcal{P}}(x,y)=\{\emptyset\}\). * For any interpretation \((x,y)\) for which \(q\not\in x\), \(\mathcal{HD}^{l}_{\mathcal{P}}(x,y)=\{\{p,q\}\}\) and thus \(\mathcal{IC}^{l}_{\mathcal{P}}(x,y)=\{\{p\},\{q\},\{p,q\}\}\). Since \(\mathcal{IC}^{l}_{\mathcal{P}}(x,y)=\mathcal{IC}^{u}_{\mathcal{P}}(y,x)\) (see (Heyninck et al., 2022, Lemma 1)), \(\mathcal{IC}_{\mathcal{P}}\) behaves as follows: * For any \((x,y)\) with \(q\not\in x\) and \(q\not\in y\), \(\mathcal{IC}_{\mathcal{P}}(x,y)=\{\{p\},\{q\},\{p,q\}\}\times\{\{p\},\{q\},\{p,q \}\}\), * For any \((x,y)\) with \(q\not\in x\) and \(q\in y\), \(\mathcal{IC}_{\mathcal{P}}(x,y)=\{\emptyset\}\times\{\{p\},\{q\},\{p,q\}\}\), and * For any \((x,y)\) with \(q\in x\) and \(q\in y\), \(\mathcal{IC}_{\mathcal{P}}(x,y)=\{\emptyset\}\times\{\emptyset\}\). We see e.g. that \(C(\mathcal{IC}^{l}_{\mathcal{P}})(\{p\})=\{\{p\},\{q\}\}\) and thus \((\{p\},\{p\})\) is a stable fixpoint of \(\mathcal{IC}_{\mathcal{P}}\). \((\emptyset,\{q\})\) is the second stable fixpoint of \(\mathcal{IC}_{\mathcal{P}}\). \((\emptyset,\{p,q\})\) is a fixpoint of \(\mathcal{IC}_{\mathcal{P}}\) that is not stable. In general, (total) stable fixpoints of \(\mathcal{IC}_{\mathcal{P}}\) correspond to (total) stable models of \(\mathcal{P}\), and weakly supported models of \(\mathcal{P}\) correspond to fixpoints of \(\mathcal{IC}_{\mathcal{P}}\) (Heyninck et al., 2022). ## 3 Ultimate Operators Approximation fixpoint theory assumes an approximation operator, but does not specify how to construct it. In the literature, one finds various ways to construct a deterministic approximation operator \(\mathcal{O}\) that approximates a deterministic operator \(O\). Of particular interest is the _ultimate_ operator (Denecker et al., 2002), which is the _most precise_ approximation operator.
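The behavior of the ndao in Example 1 can be reproduced with a small Python sketch. It restricts bodies to conjunctions of literals and uses the consistent-pair reading of the four-valued conditions (\(\geq_{t}\mathsf{C}\) and \(\geq_{t}\mathsf{U}\)); the encoding and helper names are ours, and the final call reproduces \(C(\mathcal{IC}^{l}_{\mathcal{P}})(\{p\})=\{\{p\},\{q\}\}\).

```python
from itertools import chain, combinations

def powerset(atoms):
    atoms = list(atoms)
    return [frozenset(c) for c in chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))]

# program P = { p v q <- not q }, rules as (head_atoms, positive_body, negative_body)
P = [({'p', 'q'}, set(), {'q'})]
ATOMS = {'p', 'q'}

def body_holds(pos, neg, x, y, level):
    """Lower level: body >=_t C at a consistent pair (x, y); upper level: >=_t U."""
    if level == 'l':
        return pos <= x and not (neg & y)   # positive atoms certainly true, negated atoms false
    return pos <= y and not (neg & x)       # positive atoms possibly true, negated atoms not certainly true

def ic(x, y, level):
    hd = [frozenset(h) for h, pos, neg in P if body_holds(pos, neg, x, y, level)]
    universe = set().union(*hd) if hd else set()
    return {z for z in powerset(universe) if all(z & h for h in hd)}

def stable_lower(y):
    """C(IC_l)(y): fixpoints x of IC_l(., y) with no smaller fixpoint below them."""
    fps = [x for x in powerset(ATOMS) if x in ic(x, y, 'l')]
    return {x for x in fps if not any(x2 < x for x2 in fps)}

print(stable_lower(frozenset({'p'})))   # {frozenset({'p'}), frozenset({'q'})}
```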
In this section, we show that non-deterministic approximation fixpoint theory admits an ultimate operator, which is, however, different from the ultimate operator for deterministic AFT. We first recall that for a _deterministic_ operator \(O:\mathcal{L}\rightarrow\mathcal{L}\), the ultimate approximation (here denoted \(\mathcal{O}^{\mathsf{DMT}^{d}}\)) is defined by Denecker et al. (2002) as follows:9 Footnote 9: We use the abbreviation DMT\({}^{d}\) for _deterministic_ Denecker, Marek and Truszczynski to denote this operator, so as not to overburden the notation \(\mathcal{IC}^{u}_{\mathcal{P}}\). Indeed, we will later see that the ultimate operator for non-disjunctive logic programs generalizes to an ndao that is different from the ultimate non-deterministic operator \(\mathcal{IC}^{\mathcal{U}}_{\mathcal{P}}\). \[\mathcal{O}^{\mathsf{DMT}^{d}}(x,y)=(\sqcap O[x,y],\sqcup O[x,y]),\] where \(O[x,y]:=\{O(z)\mid x\leq z\leq y\}\). (Recall that \(\sqcap X\) denotes the greatest lower bound of \(X\) and \(\sqcup X\) denotes the least upper bound of \(X\).) This operator is shown to be the most precise operator approximating an operator \(O\) (Denecker et al., 2002). In more detail, for any (deterministic) approximation operator \(\mathcal{O}\) approximating \(O\), and any consistent \((x,y)\), \(\mathcal{O}(x,y)<_{i}\mathcal{O}^{\mathsf{DMT}^{d}}(x,y)\). The ultimate approximator for \(IC_{\mathcal{P}}\) for non-disjunctive logic programs \(\mathcal{P}\) looks as follows: **Definition 2**: _Given a normal logic program \(\mathcal{P}\), we let: \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{d}}(x,y)=(\mathcal{IC}_{\mathcal{P }}^{\mathsf{DMT}^{d},l}(x,y),\ \mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{d},u}(x,y))\) with: \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{d},l}(x,y)=\bigcap_{x\subseteq z \subseteq y}\{\alpha\mid\alpha\leftarrow\phi\in\mathcal{P}\text{ and }z(\phi)=\mathsf{T}\}\), and \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{d},u}(x,y)=\bigcup_{x\subseteq z \subseteq y}\{\alpha\mid\alpha\leftarrow\phi\in\mathcal{P}\text{ and }z(\phi)=\mathsf{T}\}\)._ In this section, we define the ultimate semantics for non-deterministic operators. In more detail, we constructively define an approximation operator that is most precise and has non-empty upper and lower bounds. Its construction is based on the following idea: we are looking for an operator \(\mathcal{O}^{\mathcal{U}}\) s.t. for any ndao \(\mathcal{O}\) that approximates \(O\), \(\mathcal{O}_{l}(x,y)\preceq_{L}^{\mathcal{S}}\mathcal{O}_{l}^{\mathcal{U}}(x,y)\) (and similarly for the upper bound). As we know that \(\mathcal{O}_{l}(x,y)\preceq_{L}^{\mathcal{S}}O(z)\) for any \(x\leq z\leq y\), we can obtain \(\mathcal{O}_{l}^{\mathcal{U}}\) by simply gathering all applications of \(O\) to elements of the interval \([x,y]\), i.e. we define: \[\mathcal{O}_{l}^{\mathcal{U}}(x,y)=\bigcup_{x\leq z\leq y}O(z)\] The upper bound can be defined in the same way as the lower bound. Altogether, we obtain: \[\mathcal{O}^{\mathcal{U}}(x,y)=\mathcal{O}_{l}^{\mathcal{U}}(x,y)\times \mathcal{O}_{l}^{\mathcal{U}}(x,y)\] The following example illustrates this definition for normal logic programs: **Example 2**: _Let \(\mathcal{P}=\{q\leftarrow\neg p;p\gets p\}\). Then \(IC_{\mathcal{P}}(\emptyset)=IC_{\mathcal{P}}(\{q\})=\{\{q\}\}\) and \(IC_{\mathcal{P}}(\{p\})=IC_{\mathcal{P}}(\{p,q\})=\{\{p\}\}\).
Therefore, \(\mathcal{IC}_{\mathcal{P}}^{\mathcal{U}}(\emptyset,\{p,q\})=\{\{p\},\{q\}\} \times\{\{p\},\{q\}\}\) whereas \(\mathcal{IC}_{\mathcal{P}}^{\mathcal{U}}(\emptyset,\{q\})=\{\{q\}\}\times\{\{q\}\}\)._ The ultimate approximation is the most precise ndao approximating the operator \(O\): **Proposition 1**: _Let a non-deterministic operator \(O\) over a lattice \(\langle\mathcal{L},\leq\rangle\) be given. Then \(\mathcal{O}^{\mathcal{U}}\) is an ndao that approximates \(O\). Furthermore, for any ndao \(\mathcal{O}\) that approximates \(O\) and for every \(x,y\in\mathcal{L}\) s.t. \(x\leq y\), it holds that \(\mathcal{O}(x,y)\preceq_{i}^{A}\mathcal{O}^{\mathcal{U}}(x,y)\)._ In conclusion, non-deterministic AFT admits, just like deterministic AFT, an ultimate approximation. However, as we will see in the rest of this section, the ultimate non-deterministic approximation operator \(\mathcal{O}^{\mathcal{U}}\) does _not_ generalize the deterministic ultimate approximation operator defined by Denecker et al. (2002). In more detail, we compare the non-deterministic ultimate operator \(\mathcal{IC}_{\mathcal{P}}^{\mathcal{U}}\) with the deterministic ultimate \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{d}}\) from Definition 2. Somewhat surprisingly, even when looking at normal logic programs, the operator \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{d}}\) does not coincide with the ultimate ndao \(\mathcal{IC}_{\mathcal{P}}^{\mathcal{U}}\) (and thus \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{d}}\) is _not_ the most precise ndao, even for non-disjunctive programs). The intuitive reason is that the additional expressivity of non-deterministic operators, which are not restricted to single lower and upper bounds in their outputs, makes it possible to capture more precisely what is derivable in the "input interval" \((x,y)\). **Example 3** (Example 2 continued): _Consider again \(\mathcal{P}=\{q\leftarrow\neg p;p\gets p\}\). Applying the \(\mathsf{DMT}^{\mathsf{d}}\)-operator gives: \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{\mathsf{d}}}(\emptyset,\{p,q\})=( \emptyset,\{p,q\})\). Intuitively, the ultimate semantics \(\mathcal{IC}_{\mathcal{P}}^{\mathcal{U}}(\emptyset,\{p,q\})=\{ \{p\},\{q\}\}\times\{\{p\},\{q\}\}\) gives us the extra information that we will always either derive \(p\) or \(q\), which is information a deterministic approximator simply cannot capture. Such a "choice" is not expressible within a single interval, hence the deterministic ultimate approximation is \((\emptyset,\{p,q\})\)._ This example also illustrates the fact that, when applying the ultimate ndao-construction to (non-constant) deterministic operators \(O\), \(\mathcal{O}^{\mathcal{U}}\) might be a _non_-deterministic approximation operator. However, one can still generalize the operator \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{\mathsf{d}}}\) to disjunctive logic programs. We first generalize the idea behind \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{\mathsf{d}},l}\) to an operator gathering the heads of rules whose bodies are true in every interpretation \(z\) in the interval \([x,y]\). Similarly, \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{\mathsf{d}},u}\) is generalized by gathering the heads of rules with bodies that are true in at least one interpretation in \([x,y]\): \[\mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},l}(x,y)=\bigcap_{x\subseteq z \subseteq y}\mathit{HD}_{\mathcal{P}}(z)\quad\text{ and }\quad\mathcal{HD}_{ \mathcal{P}}^{\mathsf{DMT},u}(x,y)=\bigcup_{x\subseteq z\subseteq y}\mathit{ HD}_{\mathcal{P}}(z).\] The upper and lower immediate consequence operators are then straightforwardly defined, that is, by taking all interpretations that only contain atoms occurring in \(\mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},\dagger}(x,y)\) and that contain at least one member of every head \(\Delta\in\mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},\dagger}(x,y)\) (for \(\dagger\in\{u,l\}\)): \[\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT},\dagger}(x,y)=\{z\subseteq\bigcup \mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},\dagger}(x,y)\mid\forall\Delta\in \mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},\dagger}(x,y):z\cap \Delta\neq\emptyset\}.\] Finally, the \(\mathsf{DMT}\)-ndao is defined as: \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}}(x,y)=\mathcal{IC}_{\mathcal{P}}^{ \mathsf{DMT},l}(x,y)\times\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT},u}(x,y)\). We have: **Proposition 2** ((Heyninck et al. 2022, Proposition 3)): _For any disjunctive logic program \(\mathcal{P}\), \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}}\) is an ndao that approximates \(IC_{\mathcal{P}}\)._ Notice that for a non-disjunctive program \(\mathcal{P}\), \(\bigcup\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT},\dagger}(x,y)=\bigcup \mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},\dagger}(x,y)=\mathcal{IC}_{ \mathcal{P}}^{\mathsf{DMT}^{\mathsf{d}},\dagger}(x,y)\) (for \(\dagger\in\{u,l\}\)), i.e. the non-deterministic version reduces to the deterministic version when looking at non-disjunctive programs. Notice furthermore that the operators \(\mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},l}(x,y)\) and \(\mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},u}(x,y)\) are only defined for consistent interpretations \((x,y)\). We leave the extension of these operators to inconsistent interpretations for future work.
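The constructions of this section can be checked mechanically on the normal program of Examples 2 and 3. The following Python sketch computes the non-deterministic ultimate bound \(\bigcup_{x\subseteq z\subseteq y}IC_{\mathcal{P}}(z)\) and the deterministic bounds of Definition 2; the encoding and helper names are ours.

```python
from itertools import chain, combinations

def powerset(atoms):
    atoms = list(atoms)
    return [frozenset(c) for c in chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))]

# P = { q <- not p ; p <- p }, rules as (head_atoms, positive_body, negative_body)
P = [({'q'}, set(), {'p'}), ({'p'}, {'p'}, set())]
ATOMS = {'p', 'q'}

def ic(z):
    """Two-valued non-deterministic operator IC_P(z) (a set of sets of atoms)."""
    heads = [frozenset(h) for h, pos, neg in P if pos <= z and not (neg & z)]
    universe = set().union(*heads) if heads else set()
    return {w for w in powerset(universe) if all(w & h for h in heads)}

def interval(x, y):
    return [z for z in powerset(ATOMS) if x <= z <= y]

def ultimate(x, y):
    """Non-deterministic ultimate bound: union of IC_P(z) over z in [x, y]."""
    bound = set()
    for z in interval(x, y):
        bound |= ic(z)
    return bound

def dmt_deterministic(x, y):
    """Deterministic ultimate bounds of Definition 2 (for normal programs)."""
    derived = [next(iter(ic(z))) for z in interval(x, y)]  # IC_P(z) is a singleton here
    return frozenset.intersection(*derived), frozenset.union(*derived)

empty, full = frozenset(), frozenset(ATOMS)
print(ultimate(empty, full))           # {frozenset({'p'}), frozenset({'q'})}, as in Example 2
print(dmt_deterministic(empty, full))  # (frozenset(), frozenset({'p', 'q'})), as in Example 3
```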
_Example 4_: Consider again the program \(\mathcal{P}=\{p\lor q\leftarrow\neg q\}\) from Example 1. \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT},l}\) behaves as follows: * If \(q\in y\) then \(\mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},l}(x,y)=\emptyset\) and thus \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT},l}(x,y)=\{\emptyset\}\). * If \(q\not\in y\) then \(\mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},l}(x,y)=\{\{p,q\}\}\) and \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT},l}(x,y)=\{\{p\},\{q\},\{p,q\}\}\). * If \(q\in x\) then \(\mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},u}(x,y)=\emptyset\) and thus \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT},u}(x,y)=\{\emptyset\}\). * If \(q\not\in x\) then \(\mathcal{HD}_{\mathcal{P}}^{\mathsf{DMT},u}(x,y)=\{\{p,q\}\}\) and thus \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT},u}(x,y)=\{\{p\},\{q\},\{p,q\}\}\). Thus e.g. \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}}(\emptyset,\{p,q\})=\{\emptyset\} \times\{\{p\},\{q\},\{p,q\}\}\) and \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}}(\{p\},\{p\})=\{\{p\},\{q\},\{p,q\}\} \times\{\{p\},\{q\},\{p,q\}\}\). We thus see that \((\{p\},\{p\})\) is a stable fixpoint of \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}}\). A slightly extended program \(\mathcal{P}=\{q\leftarrow\neg q;p\lor q\leftarrow q\}\) shows some particular but unavoidable behavior of this operator. \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT},l}(\emptyset,\{q\})=\{\emptyset\}\) as \(\mathit{HD}_{\mathcal{P}}(\emptyset)=\{\{q\}\}\) and \(\mathit{HD}_{\mathcal{P}}(\{q\})=\{\{p,q\}\}\). Note that the lower bound is _not_ the stronger \(\{\{q\}\}\). This would result in a loss of \(\preceq_{i}^{A}\)-monotonicity, as the lower bound \(\{\{q\}\}\) for the less informative \((\emptyset,\{q\})\) would be \(\preceq_{L}^{S}\)-incomparable to the lower bound \(\{\{p\},\{q\},\{p,q\}\}\) of the more informative \((\{q\},\{q\})\). We have shown in this section that non-deterministic AFT admits an ultimate operator, thus providing a way to construct an ndao based on a non-deterministic operator. We have also shown that the ultimate ndao diverges from the ultimate operator for deterministic AFT, but that this deterministic ultimate operator can be generalized to disjunctive logic programs. Both operators will be used in Section 5 to define semantics for DLPs with aggregates. ## 4 Semi-Equilibrium Semantics To further extend the reach of non-deterministic AFT, we generalize yet another semantics for dlps, namely the _semi-equilibrium semantics_ (Amendola et al. 2016). The semi-equilibrium semantics is a three-valued semantics for disjunctive logic programs that has been studied for disjunctively normal logic programs and that fulfills the following properties deemed desirable by Amendola et al. (2016): (1) Every (total) answer set of \(\mathcal{P}\) corresponds to a semi-equilibrium model; (2) If \(\mathcal{P}\) has a (total) answer set, then all of its semi-equilibrium models are (total) answer sets; (3) If \(\mathcal{P}\) has a classical model, then \(\mathcal{P}\) has a semi-equilibrium model. We notice that these conditions can be seen as a view on approximating the total stable interpretations that is an alternative to the well-founded semantics. We do not aim to have the last word on which semantics is the most intuitive or desirable. Instead, we will show here that semi-equilibrium models can be represented algebraically, and thus can be captured within approximation fixpoint theory.
This leaves the choice of exact semantics to the user once an ndao has been defined, and allows the use of the semi-equilibrium semantics for formalisms other than nlps, such as disjunctive logic programs with aggregates (see below) or conditional ADFs. Semi-equilibrium models are based on the _logic of here-and-there_ (Pearce 2006). An _HT-interpretation_ is a pair \((x,y)\) where \(x\subseteq y\) (i.e. a consistent pair in AFT-terminology). Satisfaction of a formula \(\phi\), denoted \(\models_{\mathsf{HT}}\), is defined recursively as follows: * \((x,y)\models_{\mathsf{HT}}\alpha\) if \(\alpha\in x\) for any \(\alpha\in\mathcal{A}_{\mathcal{P}}\), * \((x,y)\models_{\mathsf{HT}}\neg\phi\) if \((y,y)(\phi)\neq\mathsf{T}\), and \((x,y)\not\models_{\mathsf{HT}}\bot\), * \((x,y)\models_{\mathsf{HT}}\phi\wedge[\vee]\,\psi\) if \((x,y)\models_{\mathsf{HT}}\phi\) and [or] \((x,y)\models_{\mathsf{HT}}\psi\), * \((x,y)\models_{\mathsf{HT}}\phi\rightarrow\psi\) if (a) \((x,y)\not\models_{\mathsf{HT}}\phi\) or \((x,y)\models_{\mathsf{HT}}\psi\), and (b) \((y,y)(\neg\phi\vee\psi)=\mathsf{T}\). The \(\mathsf{HT}\)-models of \(\mathcal{P}\) are defined as \(\mathsf{HT}(\mathcal{P})=\{(x,y)\mid\forall\psi\leftarrow\phi\in\mathcal{P}:(x,y)\models_{\mathsf{HT}}\phi\rightarrow\psi\}\). Semi-equilibrium models are a special class of \(\mathsf{HT}\)-models. They are obtained by performing two minimization steps on the set of \(\mathsf{HT}\)-models of a program. The first step is obtained by minimizing w.r.t. \(\leq_{t}\).11 The second step is obtained by selecting the _maximal canonical models_. For this, the _gap_ of an interpretation is defined as \(gap(x,y)=y\setminus x\),12 and, for any set of interpretations \(\mathbf{X}\), the _maximally canonical interpretations_ are \(mc(\mathbf{X})=\{(x,y)\in\mathbf{X}\mid\neg\exists(w,z)\in\mathbf{X}:gap(x,y) \supset gap(w,z)\}\). The semi-equilibrium models of \(\mathcal{P}\) are then defined as: \(\mathcal{SEQ}(\mathcal{P})=mc\left(\min_{\leq_{t}}(\mathsf{HT}(\mathcal{P}))\right)\). Footnote 11: Amendola et al. (2016) proceeds as follows. First, \(\mathsf{HT}^{\kappa}(\mathcal{P})=\{x\cup\{\mathsf{K}\alpha\mid\alpha\in y\}\mid(x,y)\in\mathsf{HT}(\mathcal{P})\}\) is constructed, and then the \(\subseteq\)-minimal sets in \(\mathsf{HT}^{\kappa}(\mathcal{P})\) are selected. It is straightforward to see that this is equivalent to minimizing the original interpretations w.r.t. \(\leq_{t}\). Footnote 12: Again, Amendola et al. (2016) proceeds in a slightly more convoluted way by defining \(gap(I)=\{\mathsf{K}\alpha\in I\mid\alpha\not\in I\}\) for any \(I\in\mathsf{HT}^{\kappa}(\mathcal{P})\). _Example 5_: We illustrate these semantics with the program \(\mathcal{P}=\{p\leftarrow\neg p;\ s\lor q\leftarrow\neg s;\ s\lor q\leftarrow\neg q\}\). Then \(\mathsf{HT}(\mathcal{P})=\{(x,y)\mid\{p\}\subseteq y\subseteq\{p,q,s\},x\subseteq y,\{q,s\}\cap y\neq\emptyset\}\). Furthermore, \(\min_{\leq_{t}}(\mathsf{HT}(\mathcal{P}))=\{(\emptyset,\{p,q,s\}),(\{q\},\{q,p\} ),(\{s\},\{s,p\})\}\). As \(gap(\emptyset,\{p,q,s\})=\{p,q,s\}\) and \(gap(\{q\},\{q,p\})=gap(\{s\},\{s,p\})=\{p\}\), \(\mathcal{SEQ}(\mathcal{P})=\{(\{q\},\{q,p\}),(\{s\},\{s,p\})\}\). Before we capture the ideas behind this semantics algebraically, we look a bit deeper into the relationship between \(\mathsf{HT}\)-models and the classical notion of three-valued models of a program (see Section 2.1).
We first observe that \(\mathsf{HT}\)-models of a program are a proper superset of the three-valued models of a program: **Proposition 3**: _Let a disjunctively normal logic program \(\mathcal{P}\) and a consistent intepretation \((x,y)\) be given. Then if \((x,y)\) is a model of \(\mathcal{P}\), it is an \(\mathsf{HT}\)-model of \(\mathcal{P}\). However, not every \(\mathsf{HT}\)-model is a model of \(\mathcal{P}\)._ We now define the concept of a \(\mathsf{HT}\)-pair algebraically, inspired by Truszczynski (2006): **Definition 3**: _Given an ndao \(\mathcal{O}\) approximating a non-determnistic operator \(O\), a pair \((x,y)\) is a \(\mathsf{HT}\)-pair (denoted \((x,y)\in\mathsf{HT}(\mathcal{O})\)) if the following three conditions are satisfied: (1) \(x\leq y\), (2) \(O(y)\preceq_{L}^{S}y\), and (3) \(\mathcal{O}_{l}(x,y)\preceq_{L}^{S}x\)._ This simple definition faithfully transposes the ideas behind \(\mathsf{HT}\)-models to an algebraic context. Indeed, applying it to \(\mathcal{IC}_{\mathcal{P}}\) gives use exactly the \(\mathsf{HT}\)-models of \(\mathcal{P}\): **Proposition 4**: _Let some normal disjunctive logic program \(\mathcal{P}\) be given. Then: \(\mathsf{HT}(\mathcal{P})=\mathsf{HT}(\mathcal{IC}_{\mathcal{P}})\)._ We now show that exact \(\leq_{t}\)-minimal \(\mathsf{HT}\)-models of \(\mathcal{O}\) are stable interpretations of \(\mathcal{O}\) in our algebraic setting. The opposite direction holds as well: total stable fixpoints are \(\leq_{t}\)-minimal \(\mathsf{HT}\)-pairs of \(\mathcal{O}\). In fact, _every_ total fixpoint of \(\mathcal{O}\) is a \(\mathsf{HT}\)-pair of \(\mathcal{O}\). We assume that \(\mathcal{O}\) is _upwards coherent_, i.e. for every \(x,y\in\mathcal{L}\), \(\mathcal{O}_{l}(x,y)\preceq_{L}^{S}\mathcal{O}_{u}(x,y)\). In the appendix, we provide more details on upwards coherent operators. Notice that all ndaos in this paper are upwards coherent. **Proposition 5**: _Given an upwards coherent ndao \(\mathcal{O}\), (1) if \((x,x)\in\mathcal{O}(x,x)\) then \((x,x)\in\mathsf{HT}(\mathcal{O})\); and (2) \((x,x)\in\min_{\leq_{t}}(\mathsf{HT}(\mathcal{O}))\) iff \((x,x)\in S(\mathcal{O})(x,x)\)._ The second concept that we have to generalize to an algebraic setting is that of maximal canonical models. Recall that \(gap(x,y)\) consists of the atoms which are neither true nor false, i.e. it can be used as a measure of the informativeness or precision of a pair. For the algebraic generalization of this idea, it is useful to assume that the lattice under consideration admits a difference for every pair of elements.13 In more detail, \(z\in\mathcal{L}\) is the _difference_ of \(y\) w.r.t. \(x\) if \(z\sqcap x=\bot\) and \(x\sqcup y=x\sqcup z\). If the difference is unique we denote it by \(x\oslash y\). As an example, note that any Boolean lattice admits a unique difference for every pair of elements. We can then define \(\mathsf{mc}(\mathbf{X})=\operatorname*{argmin}_{(x,y)\in\mathbf{X}}\{y\oslash x\}\). This allows us to algebraically formulate the semi-equilibrium models of an ndao \(\mathcal{O}\) as \[\mathcal{SEQ}(\mathcal{O})=\mathsf{mc}\left(\min_{\leq_{t}}(\mathsf{HT}(\mathcal{ O}))\right)\] The properties mentioned at the start of this section are preserved, and this definition generalizes the semi-equilibrium models for disjunctive logic programs by Amendola et al. (2016): Let an upwards coherent ndao \(\mathcal{O}\) over a finite lattice be given s.t. every pair of elements admits a unique difference. Then \(\mathcal{SEQ}(\mathcal{O})\neq\emptyset\). 
Furthermore, if there is some \((x,x)\in\mathsf{mc}(\min_{\leq_{t}}(\mathsf{HT}(\mathcal{O})))\) then \(\mathcal{SEQ}(\mathcal{O})=\{(x,x)\in\mathcal{L}^{2}\mid(x,x)\in S(\mathcal{O })(x,x)\}\). Let a disjunctively normal logic program \(\mathcal{P}\) be given. Then \(\mathcal{SEQ}(\mathcal{IC}_{\mathcal{P}})=\mathcal{SEQ}(\mathcal{P})\). In this section, we have shown that semi-equilibrium models can be characterized algebraically. This means semi-equilibrium models can now be obtained for other ndao's (e.g. those from Section 5, as illustrated in Appendix C), thus greatly enlarging the reach of these semantics. We end this section by making a short, informal comparison between the semi-equilibrium models and the well-founded state for ndaos (Heyninck et al., 2022). Both constructions have a similar goal: namely, approximate the (potentially non-existent) total stable interpretations. In the case of the semi-equilibrium models, the set of semi-equilibrium models coincides with the total stable interpretations if they exist, whereas the well-founded state approximates any stable interpretation (and thus in particular the total stable interpretations), but might not coincide with them. When it comes to existence, we have shown here that the semi-equilibrium models exist for any ndao, just like the well-founded state. Thus, the well-founded state and semi-equilibrium models seem to formalize two different notions of approximation. Which notion is most suitable is hard to decide _in abstracto_ but will depend on the exact application context. ## 5 Application to DLPs with Aggregates We apply non-deterministic AFT to disjunctive logic programs with aggregates by studying three ndaos: the ultimate, \(\mathsf{DMT}\) and the trivial operators. We show the latter two generalize the ultimate semantics (Pelov et al., 2007) respectively the semantics by Gelfond and Zhang (2019). ### Preliminaries on aggregates We survey the necessary preliminaries on aggregates and the corresponding programs, restricting ourselves to propositional aggregates and leaving aggregates with variables for future work. A set term \(S\) is a set of pairs of the form \([\vec{t}:\mathit{Conj}]\) with \(t\) a list of constants and \(\mathit{Conj}\) a ground conjunction of standard atoms For example, \([1:p;2:q;-1:r]\) intuitively assigns \(1\) to \(p\), \(2\) to \(q\) and \(-1\) to \(r\). An _aggregate function_ is of the form \(f(S)\) where \(S\) is a set term, and \(f\) is an _aggregate function symbol_ (e.g. \(\#\mathsf{Sum}\), \(\#\mathsf{Count}\) or \(\#\mathsf{Max}\)). An _aggregate atom_ is an expression of the form \(f(S)*w\) where \(f(S)\) is an aggregate function, \(*\in\{<,\leq,\geq,>,=\}\) and \(w\) is a numerical constant. We denote by \(\mathsf{At}(f(S)*w)\) the atoms occuring in \(S\). A _disjunctively normal aggregate program_ consists of rules of the form (where \(\Delta\) is a set of propositional atoms, and \(\alpha_{1},\ldots,\alpha_{n},\beta_{1},\ldots,\beta_{m}\) are aggregate or propositional atoms): \[\bigvee\Delta\leftarrow\alpha_{1},\ldots,\alpha_{n},\neg\beta_{1},\ldots, \neg\beta_{m}\] An aggregate symbol is evaluated w.r.t. a set of atoms as follows. First, let \(x(S)\) denote the multiset \([t_{1}\mid\langle t_{1},\ldots,t_{n}\,:\,Conj\rangle\,\in\,S\) and \(Conj\) is true w.r.t. \(x]\). \(x(f(S))\) is then simply the result of the application of \(f\) on \(x(S)\). 
If the multiset \(x(S)\) is not in the domain of \(f\), \(x(f(s))=\curlyeq\) where \(\curlyeq\) is a fixed symbol not occuring in \(\mathcal{P}\). An aggregate atom \(f(S)*w\) is true w.r.t. \(x\) (in symbols, \(x(f*w)=\mathsf{T}\)) if: (1) \(x(f(S))\neq\curlyeq\) and (2) \(x(f(S))*w\) holds; otherwise, \(f(S)*w\) is false (in symbols, \(x(f*w)=\mathsf{F}\)). \(\neg f(S)*w\) is true if: (1) \(x(f(S))\neq\curlyeq\) and (2) \(x(f(S))*w\) does not hold; otherwise, \(\neg f(S)*w\) is false. Evaluating a conjunction of aggregate atoms is done as usual. We can now straightforwardly generalize the immediate consequence operator for disjunctive logic programs to disjunctive aggregate programs by generalizing \(\mathit{HD}_{\mathcal{P}}\) to take into account aggregate formulas as described above: \(\mathit{HD}_{\mathcal{P}}(x)=\{\Delta\mid\bigvee\Delta\leftarrow\phi\in \mathcal{P},x(\phi)=\mathsf{T}\}\). \(\mathit{IC}_{\mathcal{P}}\) from Definition 1 is then generalized straightforwardly by simply using the generalized \(\mathit{HD}_{\mathcal{P}}\). Thus, the only difference with the immediate consequence operator for dlp's is that the set of activated heads \(\mathit{HD}_{\mathcal{P}}\) now takes into account the truth of aggregates as well. The first semantics we consider is the one formulated by Gelfond and Zhang (2019) (defined there only for logic programs with aggregates occurring positively in the body of a rule): **Definition 4**: Let a disjunctively normal aggregate logic program \(\mathcal{P}\) s.t. for every \(\bigvee\Delta\leftarrow\bigwedge_{i=1}^{n}\alpha_{i}\land\bigwedge_{j=1}^{m} \neg\beta_{j}\in\mathcal{P}\), \(\beta_{j}\) is a normal (i.e. non-aggregate) atom. Then the \(\mathsf{GZ}\)-reduct of \(\mathcal{P}\) w.r.t. \(x\) is defined by doing, for every \(r=\bigvee\Delta\leftarrow\bigwedge_{i=1}^{n}\alpha_{i}\land\bigwedge_{j=1}^{m} \neg\beta_{j}\in\mathcal{P}\), the following: (1) if an aggregate atom \(\alpha_{i}\) is false or undefined for some \(i=1,\ldots,n\), delete \(r\); (2) otherwise, replace every aggregate atom \(\alpha_{i}=f(S)*w\) by \(\bigcup\{Conj\) occurs in \(\mathsf{S}\mid x(Conj)=\mathsf{T}\}\). We denote the \(\mathsf{GZ}\)-reduct of \(\mathcal{P}\) by \(\mathcal{P}_{\mathsf{GZ}}^{\mathsf{x}}\). Notice that this is a disjunctively normal logic program. A set of atoms \(x\subseteq\mathcal{A}_{\mathcal{P}}\) is a \(\mathsf{GZ}\)_-answer set of \(\mathcal{P}\)_ if \((x,x)\) is an answer set of \(\mathcal{P}_{\mathsf{GZ}}^{\mathsf{x}}\). **Example 6**: Consider the program \(\mathcal{P}=\{p\leftarrow\#\mathsf{Sum}[1:p,q]>0;p\leftarrow\#\mathsf{Sum}[1 :q]>0;q\leftarrow\#\mathsf{Sum}[1:s]<1\}\). We check whether \(\{p,q\}\) is a \(\mathsf{GZ}\)-answer set as follows: 1. The \(\mathsf{GZ}\)-reduct is \(\mathcal{P}_{\mathsf{GZ}}^{\{p,q\}}=\{p\gets p,q;\quad p\gets q; \quad q\leftarrow\}\). In more detail, as \(\{p,q\}(\#\mathsf{Sum}[1:p,q]>0)=\mathsf{T}\), we replace \(\#\mathsf{Sum}[1:p,q]>0\) in the first rule by the atoms in the condition of this aggregate atom verified by \(\{p,q\}\), namely \(p\) and \(q\). Similarly for the other rules. 2. As \(\{p,q\}\) (or, to be formally more precise, \((\{p,q\},\{p,q\})\)) is a minimal model of \(\frac{\mathcal{P}_{\mathsf{GZ}}^{\{p,q\}}}{(\{p,q\},\{p,q\})}\), we see \(\{p,q\}\) is a \(\mathsf{GZ}\)-answer set of \(\mathcal{P}\). We now move to the semantics by Denecker et al. (2002). They are defined only for non-disjunctive aggregate programs. 
They are defined on the basis of the ultimate (deterministic) approximator \(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}}\) (Definition 2). In more detail, an interpretation \((x,y)\) is \(\mathsf{DMT}^{\mathsf{d}}\)_-stable_ if and only if \((x,y)\in S(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{\mathsf{d}}})(x,y)\), i.e. \(x\in\mathrm{lfp}(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{\mathsf{d}}}(.,y))\) and \(y\in\mathrm{lfp}(\mathcal{IC}_{\mathcal{P}}^{\mathsf{DMT}^{\mathsf{d}}}(x,.))\). **Example 7**: Consider the program \(\mathcal{P}=\{p\leftarrow\#\mathsf{Sum}[1:p]>0;\quad p\leftarrow\#\mathsf{Sum}[1 :p]<1\}\). \((\{p\},\{p\})\) is an \(\mathsf{DMT}^{\mathsf{d}}\)-stable model of \(\mathcal{P}\), but the program has no \(\mathsf{GZ}\)-stable models. We first explain why \(\{p\}\) is not a \(\mathsf{GZ}\)-stable model. First, we construct \(\mathcal{P}^{\{p\}}_{\mathsf{GZ}}=\{p\gets p\}\). Since \(\{p\}\) is not a stable model of \(\mathcal{P}^{\{p\}}_{\mathsf{GZ}}\), we see that \(\{p\}\) is not a \(\mathsf{GZ}\)-stable model. Likewise, since \(\mathcal{P}^{\emptyset}_{\mathsf{GZ}}=\{p\leftarrow\emptyset\}\), we see that \(\emptyset\) is not a stable model of \(\mathcal{P}^{\emptyset}_{\mathsf{GZ}}\) and therefore not \(\mathsf{GZ}\)-stable. To see \(\{p\}\) is a \(\mathsf{DMT}^{d}\)-stable model, observe that \(\mathcal{IC}^{\mathsf{DMT}^{d},l}_{\mathcal{P}}(\emptyset,\{p\})=\mathcal{IC }^{\mathsf{DMT}^{d},l}_{\mathcal{P}}(\{p\},\{p\})=\{p\}\). Thus, \(\mathrm{lfp}(\mathcal{IC}^{\mathsf{DMT}^{d},l}_{\mathcal{P}}(.,\{p\})=\{p\}\), i.e. \((\{p\},\{p\})=S(\mathcal{IC}^{\mathsf{DMT}^{d}}_{\mathcal{P}})(\{p\},\{p\})\). ### Non-Deterministic Approximation Operators for Disjunctive Aggregate Programs We now proceed to define ndaos for disjunctive aggregate programs. The first ndao we consider generalizes the _trivial_ operator (Pelov et al., 2007), which maps two-valued interpretations to their immediate consequences whereas three-valued interpretations are mapped to the least precise pair \((\emptyset,\mathcal{A}_{\mathcal{P}})\) (or, in the non-deterministic case, \(\{\emptyset\}\times\{\mathcal{A}_{\mathcal{P}}\}\)). We also study the ndao \(\mathcal{IC}^{\mathsf{DMT}}_{\mathcal{P}}\) based on the deterministic ultimate approximation, and the ultimate ndao \(\mathcal{IC}^{\mathcal{U}}_{\mathcal{P}}\). **Definition 5**: _Given a disjunctively normal aggregate program \(\mathcal{P}\) and a (consistent) interpretation \((x,y)\), let_ \[\mathcal{IC}^{\mathsf{GZ}}_{\mathcal{P}}(x,y) =\begin{cases}\mathit{IC}_{\mathcal{P}}(x)\times\mathit{IC}_{ \mathcal{P}}(x)&\text{ if }x=y\\ \{\emptyset\}\times\{\mathcal{A}_{\mathcal{P}}\}&\text{ otherwise}\end{cases}\] The ndaos \(\mathcal{IC}^{\mathsf{DMT}}_{\mathcal{P}}\) and \(\mathcal{IC}^{\mathcal{U}}_{\mathcal{P}}\) are defined exactly the same as in section 3 (recall that \(\mathit{IC}_{\mathcal{P}}(x)\) was generalized for aggregates in Section 5.1). We illustrate these semantics with an example: **Example 8**: _Let \(\mathcal{P}=\{r\lor q\leftarrow\#\mathsf{Sum}[1:s]>0;s\leftarrow\#\mathsf{ Sum}[1:r,1:q]>0\}\) be given._ _We first look at \(\mathcal{IC}^{\mathsf{GZ}}_{\mathcal{P}}\). As an example of a fixpoint, consider \((\{r,s\},\{r,s\})\). Notice first that \(\#\mathsf{Sum}[1:r,1:q]>0\) and \(\#\mathsf{Sum}[1:r,1:q]>0\) are true in \(\{r,s\}\). 
Thus, \(\mathit{HD}_{\mathcal{P}}(\{r,s\})=\{\{r,q\},\{s\}\}\) and \(\mathcal{IC}^{\mathsf{GZ}}_{\mathcal{P}}(\{r,s\},\{r,s\})=\{\{r,s\},\{q,s\},\{r,q,s\}\}\times\{\{r,s\},\{q,s\},\{r,q,s\}\}\)._ _We now look at the \(\mathsf{DMT}\)-semantics. For this, we first calculate \(\mathit{HD}_{\mathcal{P}}\) and \(\mathit{IC}_{\mathcal{P}}\) for all members of \(\wp(\{r,q,s\})\) (with \(\Delta_{1}=\{\{r\},\{q\},\{r,q\}\}\) and \(\Delta_{2}=\{\{s,r\},\{s,q\},\{s,r,q\}\}\)):_

\begin{tabular}{l|l l l l l l l l} \hline \hline \(x\) & \(\emptyset\) & \(\{s\}\) & \(\{q\}\) & \(\{r\}\) & \(\{r,q\}\) & \(\{r,s\}\) & \(\{q,s\}\) & \(\{s,q,r\}\) \\ \hline \(\mathit{HD}_{\mathcal{P}}(x)\) & \(\emptyset\) & \(\{\{r,q\}\}\) & \(\{\{s\}\}\) & \(\{\{s\}\}\) & \(\{\{s\}\}\) & \(\{\{r,q\},\{s\}\}\) & \(\{\{r,q\},\{s\}\}\) & \(\{\{r,q\},\{s\}\}\) \\ \hline \(\mathit{IC}_{\mathcal{P}}(x)\) & \(\{\emptyset\}\) & \(\Delta_{1}\) & \(\{\{s\}\}\) & \(\{\{s\}\}\) & \(\Delta_{2}\) & \(\Delta_{2}\) & \(\Delta_{2}\) & \(\Delta_{2}\) \\ \hline \hline \end{tabular}

We then see that e.g. \(\mathcal{IC}^{\mathsf{DMT}}_{\mathcal{P}}(\{r,s\},\{r,s\})=\{\{r,s\},\{q,s\},\{r,q,s\}\}\times\{\{r,s\},\{r,q,s\}\}\) whereas \(\mathcal{IC}^{\mathsf{DMT}}_{\mathcal{P}}(\emptyset,\{r,s\})=\{\emptyset\}\times\{\{r,s\},\{q,s\},\{r,q,s\}\}\). We see that \(\mathcal{IC}^{\mathcal{U}}_{\mathcal{P}}(\{r,s\},\{r,s\})=\{\{r,s\},\{q,s\},\{r,q,s\}\}\times\{\{r,s\},\{q,s\},\{r,q,s\}\}\) whereas \(\mathcal{IC}^{\mathcal{U}}_{\mathcal{P}}(\emptyset,\{r,s\})=\wp(\{r,s,q\})\times\wp(\{r,s,q\})\). We now show that these operators are approximation operators with increasing orders of precision: \(\mathcal{IC}^{\mathsf{GZ}}_{\mathcal{P}}\) is the least precise, \(\mathcal{IC}^{\mathsf{DMT}}_{\mathcal{P}}\) holds a middle ground and \(\mathcal{IC}^{\mathcal{U}}_{\mathcal{P}}\) is the most precise:

**Proposition 7**: _Let some \(\xi\in\{\mathsf{DMT},\mathsf{GZ},\mathcal{U}\}\) and a disjunctively normal aggregate logic program \(\mathcal{P}\) be given. Then \(\mathcal{IC}^{\xi}_{\mathcal{P}}\) is an ndao approximating \(\mathit{IC}_{\mathcal{P}}\), and for any \((x,y)\), \(\mathcal{IC}^{\mathsf{GZ}}_{\mathcal{P}}(x,y)\preceq^{A}_{i}\mathcal{IC}^{\mathsf{DMT}}_{\mathcal{P}}(x,y)\preceq^{A}_{i}\mathcal{IC}^{\mathcal{U}}_{\mathcal{P}}(x,y)\)._

The following properties follow from the general properties shown by Heyninck et al. (2022):

**Proposition 8**: _Let some \(\xi\in\{\mathsf{DMT},\mathsf{GZ},\mathcal{U}\}\) and a disjunctively normal aggregate logic program \(\mathcal{P}\) be given.
Then: (1) \(S(\mathcal{IC}^{\xi}_{\mathcal{P}})(x,y)\) exists for any \(x,y\subseteq\mathcal{A}_{\mathcal{P}}\), and (2) every stable fixpoint of \(\mathcal{IC}^{\xi}_{\mathcal{P}}\) is a \(\leq_{t}\)-minimal fixpoint of \(\mathcal{IC}^{\xi}_{\mathcal{P}}\)._ The ndao \(\mathcal{IC}^{\mathsf{GZ}}_{\mathcal{P}}\) only admits two-valued stable fixpoints, and these two-valued stable fixpoints generalize the \(\mathsf{GZ}\)-semantics (Gelfond and Zhang, 2019):

**Proposition 9**: _If \((x,y)\in\min_{\leq_{t}}(\mathcal{IC}^{\mathsf{GZ}}_{\mathcal{P}}(x,y))\) then \(x=y\). Let a disjunctively normal aggregate logic program \(\mathcal{P}\) s.t. for every \(\bigvee\Delta\leftarrow\bigwedge_{i=1}^{n}\alpha_{i}\land\bigwedge_{j=1}^{m}\neg\beta_{j}\in\mathcal{P}\), \(\beta_{j}\) is a normal atom be given. \((x,x)\in S(\mathcal{IC}^{\mathsf{GZ}}_{\mathcal{P}})(x,x)\) iff \(x\) is a \(\mathsf{GZ}\)-answer set of \(\mathcal{P}\)._

We finally show that stable semantics based on \(\mathcal{IC}^{\mathsf{DMT}}_{\mathcal{P}}\) generalize those for non-disjunctive logic programs with aggregates by Denecker et al. (2002).

**Proposition 10**: _Let a non-disjunctive logic program \(\mathcal{P}\) be given. Then \((x,y)\) is a stable model according to Denecker et al. (2002) iff \((x,y)\in S(\mathcal{IC}^{\mathsf{DMT}}_{\mathcal{P}})(x,y)\)._

We have shown how semantics for disjunctive aggregate logic programs can be obtained using the framework of non-deterministic AFT, solving the open question (Alviano et al., 2023) of how operator-based semantics for aggregate programs can be generalized to disjunctive programs. This means AFT can be unleashed upon disjunctive aggregate programs, as demonstrated in this section. Other semantics, such as the weakly supported semantics, the well-founded state semantics (Heyninck et al., 2022) and the semi-equilibrium semantics (Section 4, as illustrated in Appendix C), are obtained without any additional effort and while preserving desirable properties shown algebraically for ndaos. None of these semantics have, to the best of our knowledge, been investigated for dlp's with aggregates. Other ndao's, left for future work, can likely be obtained straightforwardly on the basis of deterministic approximation operators for aggregate programs that we did not consider in this paper (e.g. the operator defined by Vanbesien et al. (2021) to characterise the semantics of Marek and Remmel (2004) or the bounded ultimate operator introduced by Pelov and Truszczynski (2004)).

## 6 Conclusion, in view of related work

In this paper, we have made three contributions to the theory of non-deterministic AFT: (1) the definition of the ultimate operator, (2) an algebraic generalization of the semi-equilibrium semantics and (3) an application of non-deterministic AFT to DLPs with aggregates in the body. To the best of our knowledge, there are only a few other semantics that allow for disjunctive rules with aggregates. Among the best-studied is the semantics by Faber et al. (2004) (the so-called \(\mathsf{FLP}\)-semantics). As the semantics we propose generalize the operator-based semantics for aggregate programs without disjunction, the differences between the \(\mathsf{FLP}\)-semantics and the semantics proposed here essentially generalize from the non-disjunctive case (see e.g. [1]).
Among the avenues for future work, an in-depth analysis of the computational complexity of the semantics proposed in this paper seems to be among the most pressing of questions. Other avenues of future work include the generalisation of the constructions in Section 5 to other semantics [21, 1] and defining ndaos for rules with choice constructs in the head [14], which can be seen as aggregates in the head.
2305.01793
JWST UNCOVER: Discovery of $z>9$ Galaxy Candidates Behind the Lensing Cluster Abell 2744
We present the results of a search for high-redshift ($z>9$) galaxy candidates in the JWST UNCOVER survey, using deep NIRCam and NIRISS imaging in 7 bands over $\sim45$ arcmin$^2$ and ancillary HST observations. The NIRCam observations reach a $5-\sigma$ limiting magnitude of $\sim 29.2$ AB. The identification of high$-z$ candidates relies on a combination of a dropout selection and photometric redshifts. We find 16 candidates at $9<z<12$ and 3 candidates at $12<z<13$, eight candidates are deemed very robust. Their lensing amplification ranges from $\mu=1.2$ to 11.5. Candidates have a wide range of (lensing-corrected) luminosities and young ages, with low stellar masses ($6.8<$ log(M$_{\star}$/M$_{\odot}$) $<9.5$) and low star formation rates (SFR=0.2-7 M$_{\odot}$ yr$^{-1}$), confirming previous findings in early JWST observations of $z>9$. A few galaxies at $z\sim9-10$ appear to show a clear Balmer break between the F356W and F444W/F410M bands, which helps constrain their stellar mass. We estimate blue UV continuum slopes between $\beta=-1.8$ and $-2.3$, typical for early galaxies at $z>9$ but not as extreme as the bluest recently discovered sources. We also find evidence for a rapid redshift-evolution of the mass-luminosity relation and a redshift-evolution of the UV continuum slope for a given range of intrinsic magnitude, in line with theoretical predictions. These findings suggest that deeper JWST observations are needed to reach the fainter galaxy population at those early epochs, and follow-up spectroscopy will help better constrain the physical properties and star formation histories of a larger sample of galaxies.
Hakim Atek, Iryna Chemerynska, Bingjie Wang, Lukas Furtak, Andrea Weibel, Pascal Oesch, John R. Weaver, Ivo Labbé, Rachel Bezanson, Pieter van Dokkum, Adi Zitrin, Pratika Dayal, Christina C. Williams, Themiya Nannayakkara, Sedona H. Price, Gabriel Brammer, Andy D. Goulding, Joel Leja, Danilo Marchesini, Erica J. Nelson, Richard Pan, Katherine E. Whitaker
2023-05-02T21:43:35Z
http://arxiv.org/abs/2305.01793v2
# JWST UNOVER: Discovery of \(z>9\) Galaxy Candidates Behind the Lensing Cluster Abell 2744 ###### Abstract We present the results of a search for high-redshift (\(z>9\)) galaxy candidates in the _JWST_ UNOVER survey, using deep NIRCam and NIRISS imaging in 7 bands over \(\sim 45\) arcmin\({}^{2}\) and ancillary _HST_ observations. The NIRCam observations reach a \(5-\sigma\) limiting magnitude of \(\sim 29.2\) AB. The identification of high\(-z\) candidates relies on a combination of a dropout selection and photometric redshifts. We find 16 candidates at \(9<z<12\) and 3 candidates at \(12<z<13\), eight candidates are deemed very robust. Their lensing amplification ranges from \(\mu=12.2\) to 11.5. Candidates have a wide range of (lensing-corrected) luminosities and young ages, with low stellar masses (\(6.8<\log(\mathrm{M_{\star}/M_{\odot}})<9.5\)) and low star formation rates (SFR=0.2-7 \(\mathrm{M_{\odot}~{}yr^{-1}}\)), confirming previous findings in early _JWST_ observations of \(z>9\). A few galaxies at \(z\sim 9-10\) appear to show a clear Balmer break between the F356W and F444W/F410M bands, which helps constrain their stellar mass. We estimate blue UV continuum slopes between \(\beta=-1.8\) and \(-2.3\), typical for early galaxies at \(z>9\) but not as extreme as the bluest recently discovered sources. We also find evidence for a rapid redshift-evolution of the mass-luminosity relation and a redshift-evolution of the UV continuum slope for a given range of intrinsic magnitude, in line with theoretical predictions. These findings suggest that deeper _JWST_ observations are needed to reach the fainter galaxy population at those early epochs, and follow-up spectroscopy will help better constrain the physical properties and star formation histories of a larger sample of galaxies. keywords: galaxies: high-redshift - dark ages, reionization, first stars - galaxies: dwarfs - galaxies: evolution - gravitational lensing: strong - cosmology: observations ## 1 Introduction While the _Hubble Space Telescope_ (_HST_) and ground-based observatories have uncovered more than two thousands galaxies at redshifts greater than \(z\sim 6\)(Atek et al., 2015; Finkelstein et al., 2015; Bouwens et al., 2021), only a handful of galaxies were known at \(z>9\)(Oesch et al., 2018; Bowler et al., 2020; Bagley et al., 2022). This observational frontier is mainly due to the near-infrared (NIR) wavelength coverage of _HST_ which is limited to \(\lambda<2\)\(\mu\)m, whereas the rest-frame ultraviolet (UV) light of early galaxies is increasingly shifted towards longer wavelengths. With its NIRCam (Near-Infrared Camera) instrument covering the \(\sim 1-5\)\(\mu\)m domain, coupled with a significantly higher sensitivity compared to its NIR predecessors (Rigby et al., 2022; Rieke et al., 2023), JWST is poised to revolutionize our views of the early stages of galaxy formation. In the early months of operation, several studies have reported the discovery of \(z>9\) galaxy candidates in the first JWST imaging observations: the Early Release Observations (ERO; Pontoppidan et al., 2022), Early Release Science (ERS) programs CEERS (Bagley et al., 2022) and GLASS (Treu et al., 2022). Among these early results, Naidu et al. (2022) reported candidates at \(z\sim 12-13\), Finkelstein et al. (2022) a candidate at \(z\sim 12\), while samples of \(z\sim 9-16\) candidates have been presented in Atek et al. (2023); Donnan et al. (2023); Harikane et al. (2023); Adams et al. (2023); Austin et al. (2023). 
Many of these galaxy candidates broke the previous distance record held by _HST_ observations (Oesch et al., 2016). More recently, several programs have started to spectroscopically confirm some of these high-redshift candidates (Roberts-Borsani et al., 2022; Morishita et al., 2022), with the highest-redshift galaxy located at \(z\sim 13\)(Robertson et al., 2022; Curtis-Lake et al., 2022). At the same time, the high-redshift solution of some of these candidates have been ruled out by NIRSpec follow-up observations. For example, the highest-redshift candidate at \(z\sim 16.7\)(Donnan et al., 2023) has been confirmed to be a dusty galaxy at \(z\sim 4.9\) with intense rest-frame optical emission lines (Arrabal Haro et al., 2023). However, more than their distance, the most striking aspect perhaps is their combined number density and brightness. Indeed, the inferred number density is significantly larger than theoretical predictions based on galaxy formation models, or extrapolation of lower-redshift luminosity functions (Bouwens et al., 2022; Atek et al., 2023; Mason et al., 2022; Naidu et al., 2022; Donnan et al., 2023). While some of these candidates have been confirmed at \(z\sim 12\) or \(13\)(e.g, Curtis-Lake et al., 2022; Arrabal Haro et al., 2023), others turned out to be low-\(z\) dusty interlopers, which always warrants caution in their interpretation. Also, a sample of red massive galaxies at \(z=7-9\), reported by (Labbe et al., 2022), appear to have stellar masses approaching that of the present-day Milky Way, in potential tension with standard \(\Lambda\)-CDM models (Boylan-Kolchin, 2022). Several studies have attempted to understand these early observations and interpret these surprising results. In particular, the high number density of luminous galaxies at \(z>12\) has been attributed to the decreasing amount of dust attenuation at higher redshift (Ferarra et al., 2022), higher star formation efficiency in early galaxies and/or non-standard initial mass function (IMF; Ziparo et al., 2022; Mason et al., 2022), or even non-\(\Lambda\)-CDM cosmologies (Menci et al., 2022; Boylan-Kolchin, 2022). In the meantime, a larger area of deep _JWST_ surveys and spectroscopic follow-up observations are needed to confirm this claim by increasing the sample size of confirmed \(z>9\) galaxies. During the first cycle of _JWST_ operations, our UNCOVER (Ultradeep NIRSpec and NIRCam Observations before the Epoch of Reionization) Treasury survey has obtained deep multi-wavelength NIRCam imaging of the lensing cluster Abell 2744 (Bezanson et al., 2022). UNCOVER deep NIRCam imaging consists of a mosaic in 7 filters for \(\sim 4-6\) hour per band, reaching a magnitude limit of \(\sim 29.2\) AB. Following on the steps of the Hubble frontier fields (HFF), the program relies on the gravitational lensing boost to push beyond blank fields limits. In fact, assuming an average amplification of 5, UNCOVER is intrinsically the deepest observing program of Cycle 1. In addition, the program will obtain spectra for the intrinsically faintest distant galaxies to date with 5-20 hours of NIRSpec Prism follow-up observations. In addition to our program, NIRISS imaging of A2744 was obtained as part of ERS GLASS program, and NIRCam imaging as part of the DDT program ID 2756. We combined all these imaging data to increase the depth and the area of our survey. 
In this paper, we present the detection of \(z>9\) galaxy candidates in the NIRCam and NIRISS imaging data, determine their physical properties, and compare them to theoretical predictions. Using imaging data in 15 broad-band filters, the identification of galaxy candidates is based on a combination of photometric dropout criteria and photometric redshifts derived from Spectral Energy Distribution (SED) fitting with both BEAGLE and Ezay codes. We describe the imaging dataset in Section 2 and the sample selection in Section 3. We present our estimates of the physical parameters and their redshift-evolution in Section 4. Our conclusions are given in Section 5. We use AB magnitudes (Oke & Gunn, 1983) and a standard cosmology with H\({}_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\Lambda}=0.7\), and \(\Omega_{m}=0.3\). ## 2 Observations The UNCOVER observations are described in the survey article (Bezanson et al., 2022), which is accompanied by our first data release of the imaging mosaics, available at the UNCOVER webpage1. Here we describe briefly the content of the data and their photometric characteristics. Footnote 1: uncover.github The NIRCam data consists of short wavelength SW imaging in 3 broadband filters (F115W, F150W, F200W) and long wavelength LW imaging in 3 broadband filters (F277W, F356W, F444W) and one medium band filter (F410M). The exposure times and the resulting magnitude limits in all filters are listed in Table 2. Simultaneously to the NIRCam observations, NIRISS imaging is obtained in parallel using 5 broadband filters (F115W, F150W, F200W, F356W, and F444W). Our analysis also includes data from the GLASS survey (Treu et al., 2022) obtained with NIRISS, which adds the F090W band in a fraction of the UNCOVER area. In addition, we incorporate NIRCam imaging from the DDT program ID 2756, which uses a similar set of filters to UNCOVER, except the F410M filter, and shorter exposure times. Using the gri2li software (Brammer et al. in prep.), the data were then reduced and drizzled into mosaics with a common pixel scale of 0.4'' pix\({}^{-1}\) and a total field of view of \(\sim 45\) arcmin\({}^{2}\). The cluster core of A2744 is covered by deep _HST_ imaging from the HFF program, and a slightly wider area with shallower observations from the BUFFALO program (Steinhardt et al., 2020). The _HST_ observations include Advanced Camera for Surveys (ACS) imaging in three filters (F435W, F606W, F814W), and Wide-Field Camera Three (WCF3) in four filters (F105W, F125W, F140W, F160W). Furthermore, the UNCOVER NIRISS parallels overlap with deep _HST_ ACS F814W imaging in the A2744 parallel field. All these observations are drizzled to the same pixel scale and aligned to the UNCOVER images. Detailed characterization of the data are presented in Weaver et al. (2023). ## 3 High-redshift sample selection ### Photometric Catalogs For our sample selection and analysis, we compared two photometric catalogs: (i) the general UNCOVER catalog published in Weaver et al. (2023) which has been designed to fit most of the scientific investigations covered by this dataset, (ii) a custom photometric catalog specifically tailored to the detection of high-redshift galaxies. the main differences reside in the aperture size, the deblending parameters, and the aperture corrections. 
#### 3.1.1 General catalog The object detection and photometry are performed in images that were previously corrected for contamination from intra-cluster light (ICL) and bright cluster galaxies, following methods developed in Shipley et al. (2018). All images are matched to the point spread function (PSF) of the longest-wavelength image in the F444W filter. The detection image consists of a co-addition of the three long-wavelength _JWST_ filters F277W, F356W, and F444W. Photometry is measured in 0.32'' apertures using the python version of Source Extractor SEP (Bertin & Arnouts, 1996; Barbary, 2016). We adopted the following parameters for the PSF extraction: a detection threshold of 1.5\(\sigma\), a deblending threshold of 16 m, and deblending contrast of 3e-3. The total fluxes are estimated by applying a correction derived from elliptical Kron apertures (Kron, 1980). Details of the PSF-matching procedure, and additional photometric corrections, are described in Weaver et al. (2023) #### 3.1.2 High\(-z\) catalog In addition, we produce a photometric source catalog using the SExtractor tool (Bertin & Arnouts, 1996) in dual mode on each available image, using the F444W as the detection image. We adopt a detection threshold of 0.9 (relative to the rms image), a minimum detection area of 6 pixels, and a deblending threshold of 3e-4. We measured individual fluxes in 0.24'' circular apertures in each filter. The total fluxes were obtained by using a scaling factor derived from the ratio of the aperture flux to the AUTO_FLUX obtained from SExtractor in the F444W image. To account for the missing flux due to the PSF wings, we measured the aperture flux as a function of the aperture radius in the PSF of the F444W band. For each object, we computed the equivalent circularized radius as \(r=\sqrt{a\times b}\times\) kron_radius, and divided the encircled flux by the flux fraction in the PSF-F444W for this radius. This correction typically increases all the fluxes by \(\sim 10-20\%\). In the end, the comparison between the two catalogs shows that the latter is better suited for high-redshift sources, particularly in deblending the small objects, and in estimating the object and background fluxes. ### Dropout selection Following the color-color criteria defined in Atek et al. (2023) we select \(9<z<11\) galaxies that satisfy: \[\begin{split} M_{115}-M_{150}>1.0\\ M_{115}-M_{150}>1.5&+1.4(M_{150}-M_{200})\\ M_{150}-M_{200}<0.5\end{split} \tag{1}\] This selection window, illustrated in Figure 1, has been designed to minimize potential contamination from low-redshift interlopers and cool stars. To determine the color-color space of these contaminants, we generated quiescent galaxy templates from the SWIRES library (Polletta et al., 2007), applied different dust attenuation values \(A_{V}=[0,0.25,1]\) assuming an SMC dust law, and computed synthetic photometry in the set of broadband filters used in this paper. The resulting color-color tracks are shown in Figure 1. We also compute the color tracks of cold red stars and brown dwarfs using stellar templates from Chabrier et al. (2000) and Allard et al. (2001). In addition to these selection criteria, we require that sources are detected in all LW filters with a minimum SNR\(=5\) and that they remain undetected in F090W, when available, and all _HST_ optical bands at a 2 \(\sigma\) level. 
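The colour-colour criteria above translate directly into catalog cuts. The following schematic sketch assumes per-band AB magnitudes and signal-to-noise ratios have already been measured; the dictionary keys and the function name are illustrative assumptions, not the naming of the released catalogs.

```python
import numpy as np

def select_f115w_dropouts(cat):
    """Schematic implementation of the 9 < z < 11 selection of Eq. (1).

    `cat` is assumed to be a dict of numpy arrays with illustrative keys
    (AB magnitudes m115/m150/m200 and signal-to-noise ratios snr_*).
    Magnitudes of sources undetected in the dropout band are assumed to
    have been replaced by the 1-sigma limiting magnitude beforehand.
    """
    m115, m150, m200 = cat["m115"], cat["m150"], cat["m200"]

    color_cuts = (
        (m115 - m150 > 1.0)
        & (m115 - m150 > 1.5 + 1.4 * (m150 - m200))
        & (m150 - m200 < 0.5)
    )
    # detection at SNR >= 5 in the long-wavelength broad bands ...
    detected_lw = (
        (cat["snr_f277w"] >= 5) & (cat["snr_f356w"] >= 5) & (cat["snr_f444w"] >= 5)
    )
    # ... and no detection (< 2 sigma) blueward of the break, where covered
    clean_blue = (
        (cat["snr_f090w"] < 2)
        & (cat["snr_f435w"] < 2) & (cat["snr_f606w"] < 2) & (cat["snr_f814w"] < 2)
    )
    return color_cuts & detected_lw & clean_blue
```

An analogous cut, with the F150W/F200W/F277W colours replacing the F115W/F150W/F200W ones, applies to the higher-redshift window defined below.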
For sources that are not detected in the dropout filter, we assign a 1\(\sigma\) lower limit corresponding to the filter limiting magnitude to ensure a minimum continuum break of one magnitude. For \(z\sim 12-15\) candidates, we adopt the following criteria: \[\begin{split} M_{150}-M_{200}>1.5\\ M_{150}-M_{200}>1.6+1.8(M_{200}-M_{277})\\ M_{200}-M_{277}<0.5\end{split} \tag{2}\] Similarly, we require high-significance detection in the LW filters and that none of these candidates are detected in the bands blueward of the Lyman break, i.e. in the _HST_ bands and in the _JWST_ F090W and F115W filters.

\begin{table} \begin{tabular}{l c c} Filter & Depth & Area \\ & (5 \(\sigma\) AB) & (arcmin\({}^{2}\)) \\ \hline \multicolumn{3}{c}{HST} \\ \hline F435W & 29.28 & 18.54 \\ F606W & 28.86 & 36.21 \\ F814W & 28.47 & 31.26 \\ F105W & 28.14 & 20.22 \\ F125W & 28.14 & 20.08 \\ F140W & 28.79 & 5.62 \\ F160W & 28.27 & 20.15 \\ \hline \multicolumn{3}{c}{JWST} \\ \hline F090W & 28.93 & 12.91 \\ F115W & 28.85 & 45.12 \\ F150W & 28.87 & 45.50 \\ F200W & 28.92 & 44.71 \\ F277W & 29.34 & 44.98 \\ F356W & 29.45 & 45.67 \\ F410M & 28.85 & 28.73 \\ F444W & 29.10 & 45.11 \\ \end{tabular} \end{table} Table 1: Limiting AB magnitudes at 5\(\sigma\), quoted in 0.32′′ diameter apertures, correspond to the area-weighted 50\({}^{th}\) percentiles. Area reflects the union of the LW detection footprint with that of each band.

### Spectral energy distribution fitting

In parallel, we estimate photometric redshifts by running spectral energy distribution (SED) fitting. We apply a 5% error floor to all photometric measurements to reflect the calibration uncertainties of _JWST_ NIRCam data. To minimize the propagation of lensing uncertainties, both of the following procedures are based on the observed flux densities without correction for magnification. The derived parameters are then unlensed a posteriori. We first use the Eazy software (Brammer et al., 2022), over an allowed redshift range of \(0.01<z<20\), adopting the CORR_SFHZ set of galaxy templates, which include redshift-dependent SFHs informed by the most recent results of _JWST_ observations of high\(-z\) galaxies (Carnall et al., 2023; Larson et al., 2022). As shown in Weaver et al. (2023), these templates perform better than the default FSPS_FULL library in recovering the true redshift. Improvements over the Eazy FSPS templates have also been presented in Larson et al. (2022), which include bluer UV slopes based on BPASS (Stanway & Eldridge, 2018) and nebular emission models. Second, we run the Beagle (BayEsian Analysis of GaLaxy sEds) SED-fitting software (Chevallard & Charlot, 2016) on the same photometric data. The procedure uses stellar population models from Bruzual & Charlot (2003) and nebular emission models from Gutkin et al. (2016). In the first run of the SED fitting, we are mostly interested in the best redshift solution. We adopt a simple star formation history with a constant star formation rate, a uniform distribution of the age priors \(\log(t_{\rm age}/{\rm yr})\in[7,t_{\rm universe}]\), and a fixed metallicity of \(Z=0.1\,Z_{\odot}\). In order to identify spurious objects, or sources affected by artifacts, we flag objects that meet the following criteria: objects whose segmentation apertures overlap with the edge of the detector or are next to bright stars, are affected by bad pixels, whose size is 1 pixel or less, or are likely to be stars.
For the latter, we combine information from both the SExtractor stellarity parameter CLASS_STAR and the \(\chi^{2}\) of the \(\rm{Eazy}\) SED-fitting run using a set of dwarf star templates through the fit_phenix_stars function. In addition, all the sources that pass these filters are visually inspected for potential contamination (diffraction spikes, detector artifacts, etc.) All the candidates have best-fit photometric redshifts consistent with the dropout selection. Conversely, when first selecting candidates using photometric redshift criteria, limiting the sample to best-fit solutions with \(\chi^{2}<30\), 31 candidates satisfy the selection. The dropout selection rejects 13 candidates. For the majority of these rejected candidates, the high\(-z\) solution is due to fitting failures, where the best-fit SED does not match the photometry. Some objects do not show a clear Lyman break, or are clearly detected in the blue bands. Also, in few cases, the signal-to-noise level in the detection bands is simply below our selection threshold. The final sample of high-redshift candidates consists of a total of 16 galaxies in the redshift range \(9<z<11\) and 3 galaxies at \(11<z<15\). The complete list and properties of the high\(-z\) sample are reported in Tab. 2. ### Gravitational lensing model In order to compute the gravitational magnifications of the sample and the effective survey area, we use the new UNCOVER cluster mass model derived by Futrak et al. (2022). This parametric strong lensing model is based on existing and newly-discovered multiple-image systems in the deep NIRCam imaging of UNCOVER. Thanks to the wide NIRCam coverage, the total survey area with an amplification factor \(\mu>2\) is about 3.5 arcmin\({}^{2}\), which is a significant improvement over the HFF-derived area of \(\sim 0.9\) arcmin\({}^{2}\). Figure 2 shows the cumulative surface area as a function of magnification. The amplification factors shown in Table 2 range between \(\mu=1.2\) and \(\mu=11.5\), and were derived using the photometric redshifts. Figure 1: Color-color selection identification of high-redshift dropouts. Each panel shows the selection window (white area) defined by the criteria of equations 1, 2 for the identification of candidates in the redshift range \(9<z<11\) and \(11<z<15\), respectively. Each candidate (magenta diamond) is marked with its associated best-fit photometric redshift. The blue-solid lines are the expected color-color space of typical starburst galaxies at \(z>9\) based on galaxy templates generated using BAGLE. We also show the color-color tracks of quiescent galaxies (dashed lines), which represent potential low-redshift contaminants, generated from GRASIL (Silva et al., 1998). We applied different dust attenuation values in the range \(A_{V}=[0,0.25,1]\) illustrated by different colors (yellow to red) assuming an SMC dust law. In addition, the green stars indicate the colors of cool stars (brown dwarfs and M-class), another source of contamination, based on Chabrier et al. (2000) and Allard et al. (2001) libraries. ## 4 Results ### High-redshift candidates We find 16 candidates in the range \(9<z<11\), among which 8 objects have best-fit photometric redshifts above \(z=10\). Among the total sample, we identify 7 robust candidates that have a narrow high\(-z\) solution with no secondary low\(-z\) solution. These candidates have about 70-90% of their total probability enclosed within \(\Delta z=1\) around the best-fit solution. 
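This robustness criterion can be evaluated directly from the tabulated redshift probability distributions. A minimal sketch (the array names are ours; the p(z) would come from, e.g., the Eazy output) of the probability enclosed in a window of width \(\Delta z\) around the best-fit redshift:

```python
import numpy as np

def enclosed_probability(z_grid, pdf, z_best, dz=1.0):
    """Fraction of the redshift probability within a window of width dz
    centred on the best-fit redshift, assuming a (roughly) uniform z grid."""
    pdf = np.asarray(pdf, dtype=float)
    window = np.abs(np.asarray(z_grid, dtype=float) - z_best) < dz / 2.0
    return pdf[window].sum() / pdf.sum()

# A candidate is deemed robust (Q=1) when ~70-90% of p(z) falls in the
# dz = 1 window around the best fit and no significant low-z peak remains.
```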
Examples of these high-confidence candidates are shown in Figure 4. These are ranked as the best candidates in the sample and are assigned a quality flag Q=1 in Table 2. The lower-quality Q=2 sample includes galaxies that show a significant secondary solution, with a total probability within 50-70% around their high\(-z\) peak. Four galaxies lie in this category. Finally, 5 sources belong to the Q=3 category because of disagreements between their Eazy and BEAGLE best-fit solutions or a total probability of less than 50% around their high\(-z\) peak. Castellano et al. (2022) recently identified 7 candidates at \(z\sim 10\) in the A2744 region. We recover 6 of their candidates in our sample. Five of these overlapping sources are in the Q=1 category. One of their source, GHZ9, is ranked in the Q=2 of our sample. Source GHZ4 in their catalog is not included in our selection because it has SNR=4.8, which is slightly below our color-color selection criteria. In table 3 we compare the photometric redshifts of the sources that have been identified in both studies. Overall, there is a good agreement between these independent determinations, whose differences remain within the 1\(\sigma\) uncertainties, in most cases. We also identify three candidates in the redshift selection range \(12<z<15\). Among these, one candidate is classified as robust Q=1, while the remaining two candidates have a Q=3 score according to the criteria defined earlier. Overall, these candidates are among the highest-redshift candidates identified in recent _JWST_ observations, as can be seen in Figure 6. This sample spans a large dynamic range in intrinsic luminosity from \(M_{\rm UV}\)=-17.6 to -21.7 mag. We examined the recent _JWST_ spectroscopic observations of A2744 as part of the GLASS program, utilizing both NIRSpec and NIRISS grism observations. 
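For reference, the intrinsic luminosities quoted above and listed in Table 2 combine the observed apparent magnitudes, the photometric redshifts, the magnification factors and the adopted cosmology. A simplified sketch of this conversion (using astropy, a flat-\(f_{\nu}\) K-correction and illustrative variable names; not the exact procedure used to build the table) is:

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # adopted cosmology

def absolute_uv_magnitude(m_app, z, mu):
    """Lensing-corrected absolute magnitude from an observed AB magnitude.

    m_app: apparent magnitude in the band probing the rest-frame UV
           (F200W at z ~ 9-11), z: (photometric) redshift, mu: magnification.
    The 2.5 log10(1+z) term is the K-correction for a flat f_nu spectrum.
    """
    d_l_pc = cosmo.luminosity_distance(z).to("pc").value
    distance_modulus = 5.0 * np.log10(d_l_pc / 10.0)
    return m_app - distance_modulus + 2.5 * np.log10(1.0 + z) + 2.5 * np.log10(mu)

# e.g. absolute_uv_magnitude(28.0, 9.7, 5.0) ~ -17.7, comparable to the
# faintest (demagnified) candidates listed in Table 2.
```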
\begin{table} \begin{tabular}{l c c c c c c c c} \hline ID & Q & RA & Dec & \(z_{\rm phot}\) & \(M_{\rm UV}\) & \(\beta\) & \(\log(M_{\star}/\rm M_{\odot})\) & SFR (\(\rm M_{\odot}\,yr^{-1}\)) & \(\mu\) \\ \hline \multicolumn{10}{c}{\(z\sim 9-11\) candidates} \\ \hline 1870 & 3 & 3.648010 & -30.426616 & 9.32\({}^{+0.96}_{-0.95}\) & -19.78 \(\pm\) 0.18 & -2.09 \(\pm\) 0.14 & 8.00\({}^{+0.17}_{-0.21}\) & 1.16\({}^{+0.54}_{-0.45}\) & 1.30\({}^{+0.01}_{-0.01}\) \\ 2065 & 1 & 3.617194 & -30.425536 & 9.50\({}^{+0.34}_{-0.08}\) & -21.67 \(\pm\) 0.12 & -2.03 \(\pm\) 0.04 & 8.57\({}^{+0.44}_{-0.46}\) & 4.27\({}^{+0.37}_{-0.58}\) & 1.65\({}^{+0.02}_{-0.03}\) \\ 3148 & 3 & 3.646481 & -30.421615 & 9.40\({}^{+0.74}_{-0.74}\) & -20.51 \(\pm\) 0.18 & -1.92 \(\pm\) 0.12 & 8.47\({}^{+0.25}_{-0.25}\) & 3.41\({}^{+1.80}_{-0.58}\) & 1.31\({}^{+0.01}_{-0.01}\) \\ 3160 & 2 & 3.591436 & -30.421663 & 9.74\({}^{+0.53}_{-0.45}\) & -19.06 \(\pm\) 0.15 & -1.79 \(\pm\) 0.08 & 8.15\({}^{+0.77}_{-0.13}\) & 1.55\({}^{+0.40}_{-0.58}\) & 2.49\({}^{+0.09}_{-0.08}\) \\ 10619 & 1 & 3.594996 & -30.400738 & 9.69\({}^{+0.03}_{-0.03}\) & -17.57 \(\pm\) 0.13 & -2.16 \(\pm\) 0.05 & 6.78\({}^{+0.03}_{-0.25}\) & 0.68\({}^{+0.30}_{-0.23}\) & 1.50\({}^{+0.40}_{-0.50}\) \\ 17987 & 3 & 3.641572 & -30.382825 & 9.41\({}^{+0.72}_{-1.72}\) & -19.39 \(\pm\) 0.18 & -2.05 \(\pm\) 0.14 & 7.57\({}^{+0.45}_{-0.45}\) & 0.42\({}^{+0.27}_{-0.27}\) & 1.31\({}^{+0.01}_{-0.01}\) \\ 21623 & 1 & 3.567067 & -30.377869 & 10.01\({}^{+0.36}_{-0.26}\) & -19.01 \(\pm\) 0.14 & -2.30 \(\pm\) 0.07 & 7.86\({}^{+0.06}_{-0.06}\) & 0.83\({}^{+0.12}_{-0.09}\) & 3.72\({}^{+0.14}_{-0.18}\) \\ 22360 & 2 & 3.637111 & -30.376780 & 10.73\({}^{+0.34}_{-0.19}\) & -19.85 \(\pm\) 0.18 & -2.08 \(\pm\) 0.12 & 8.33\({}^{+0.11}_{-0.11}\) & 2.47\({}^{+1.71}_{-0.11}\) & 1.33\({}^{+0.01}_{-0.01}\) \\ 26928 & 1 & 3.511925 & -30.371861 & 9.47\({}^{+0.43}_{-0.40}\) & -20.36 \(\pm\) 0.12 & -1.97 \(\pm\) 0.04 & 9.02\({}^{+0.28}_{-0.09}\) & 6.54\({}^{+0.34}_{-0.29}\) & 1.67\({}^{+0.09}_{-0.09}\) \\ 31763 & 3 & 3.591867 & -30.366428 & 11.31\({}^{+0.20}_{-0.20}\) & -18.89 \(\pm\) 0.17 & -2.13 \(\pm\) 0.11 & 7.73\({}^{+0.01}_{-0.03}\) & 0.61\({}^{+0.14}_{-0.16}\) & 1.92\({}^{+0.17}_{-0.11}\) \\ 39074 & 1 & 3.590115 & -30.359743 & 10.60\({}^{+0.80}_{-0.31}\) & -20.03 \(\pm\) 0.14 & -2.21 \(\pm\) 0.07 & 8.16\({}^{+0.13}_{-0.07}\) & 1.65\({}^{+0.24}_{-0.24}\) & 1.89\({}^{+0.06}_{-0.06}\) \\ 46026 & 3 & 3.605690 & -30.352664 & 10.86\({}^{+0.32}_{-0.30}\) & -19.92 \(\pm\) 0.16 & -2.06 \(\pm\) 0.14 & 8.31\({}^{+0.67}_{-0.23}\) & 2.34\({}^{+8.57}_{-0.20}\) & 1.47\({}^{+0.07}_{-0.03}\) \\ 52008 & 2 & 3.478739 & -30.345535 & 10.37\({}^{+0.20}_{-0.21}\) & -19.90 \(\pm\) 0.14 & -2.11 \(\pm\) 0.07 & 7.69\({}^{+0.32}_{-0.23}\) & 0.56\({}^{+0.22}_{-0.22}\) & 1.26\({}^{+0.02}_{-0.02}\) \\ 73667 & 1 & 3.451412 & -30.321807 & 10.68\({}^{+0.21}_{-0.21}\) & -20.55 \(\pm\) 0.13 & -2.24 \(\pm\) 0.05 & 8.37\({}^{+0.01}_{-0.01}\) & 2.73\({}^{+0.07}_{-0.08}\) & 1.17\({}^{+0.01}_{-0.01}\) \\ 81198 & 1 & 3.451367 & -30.320717 & 10.50\({}^{+0.02}_{-0.22}\) & -19.90 \(\pm\) 0.14 & -2.33 \(\pm\) 0.08 & 8 * Candidate ID 2065 in our sample has a spectroscopic confirmation at \(z=9.3\) as reported by Boyett et al. (2023), in good agreement with the estimated photometric redshift of \(z=9.50^{+0.34}_{-0.08}\). The NIRSpec observations of this spatially-resolved galaxy show prominent emission lines of O, Ne, and H, as well as a clear Lyman break. 
By combining the photometric and spectroscopic data, the best-fit SED provides a stellar mass of log(M\({}_{\star}\)/M\({}_{\odot}\)\(\sim\) 9.17). * Candidate ID 10519 is one of the three multiple images of a candidate galaxy previously identified in the HFF data by (Zitrin et al., 2014). It has the highest magnification (\(\mu\sim\) 11.5). It has been spectroscopically confirmed at \(z=9.76\) with NIRSpec prism spectroscopy by Roberts-Borsani et al. (2022). This value is in good agreement with our photometric redshift estimate of \(z=9.69^{+0.33}_{-0.12}\). ### Physical properties In addition to computing photometric redshifts, we perform a second SED fitting run with BEAGLE to refine our estimates of physical parameters, this time using a Gaussian prior for the redshift based on the first-run photo\(-z\) solution. We adopt this time a more flexible SFH using a delayed exponential form SFR \(\propto\) t exp(\(-t/\tau\)), and a potential SF burst episode in the last 10 Myr. We use a constant metallicity of \(Z=0.1Z_{\odot}\), which has been shown to have little effect on the photometry of high-redshift galaxies (Furtak et al., 2021). We also assume an SMC extinction law, which is more appropriate for high-redshift galaxies (Capak et al., 2015; Reddy et al., 2018). We limit the fit to 4 physical parameters using the following priors: * Stellar mass with a log-uniform distribution prior in the range log(M\({}_{\star}\)/M\({}_{\odot}\)) \(\in\) [6-10] * SFR averaged over the last 10 Myr with a log-uniform distribution prior log(\(\psi\) / M\({}_{\odot}\) yr\({}^{-1}\)) \(\in\) [-4,4] * Maximum stellar age for \(t\)=\(\tau\), with a log-uniform prior log(\(t_{age}\)/yr) \(\in\) [6, \(t_{universe}\)], where \(t_{universe}\) is the age of the universe at the redshift of the galaxy. * Dust attenuation as traced by the optical depth measured in the Figure 3: Coordinates of the high-redshift candidates overlaid on the F277W image, which has been corrected for bCG and ICL contamination. The footprint includes UNCOVER, GLASS, DDT observations (see text for details). The field of view of the full frame is about 12 \(\times\) 9 arcmin. \begin{table} \begin{tabular}{c c c c c} \hline ID & GLASS ID & \(z_{phot}\)(Eazy) & \(z_{phot}\)(BEAGLE) & GLASS \(z_{phot}\) \\ \hline 2065 & DHZ1 & \(9.50^{+0.34}_{-0.08}\) & \(9.78^{+1.05}_{-0.34}\) & 9.45 \\ 21623 & UHZ1 & \(10.01^{+0.36}_{-0.26}\) & \(10.17^{+0.77}_{-0.05}\) & 10.32 \\ 26928 & GHZ1 & \(9.47^{+0.42}_{-0.42}\) & \(9.95^{+0.83}_{-0.12}\) & 10.47 \\ 52008 & GHZ9 & \(10.37^{+0.72}_{-1.09}\) & \(9.47^{+0.35}_{-0.35}\) & 9.35 \\ 81198 & GHZ7 & \(10.50^{+0.23}_{-0.46}\) & \(10.17^{+0.66}_{-0.08}\) & 10.62 \\ 73667 & GHZ8 & \(10.68^{+0.40}_{-0.31}\) & \(10.63^{+0.51}_{-0.51}\) & 10.85 \\ \hline \end{tabular} \end{table} Table 3: Comparison between the photometric redshifts derived in our analysis with Eazy and BEAGLE with the results of Castellano et al. (2022) for common objects in the two samples. V band with a uniform prior \(\tau_{V}\in[0,0.5]\). The prior distribution is based on UV continuum slope values measured in Section 4.2.2. Most of the candidates have relatively low stellar masses in the range Log(M\({}_{\star}\)/M\({}_{\odot}\))= \(6.8-9.5\). The data cover the redshifted Balmer break for most galaxies, which helps constrain the older stellar population. Indeed, the best constraints on the stellar mass, but also the age of the stellar population, are obtained for galaxies that show an excess in the F444W band indicative of Balmer break. 
For example, among the robust candidates, IDs 21623, 26928 and 83338, show a significant Balmer break (Fig. 4) and small uncertainties on their derived stellar masses. It is also interesting to note that in comparison to Eazy,BEGALE attribute more broadband flux to strong emission lines, which result in lower stellar masses. Strong [Oii]\(\lambda\lambda 3726,3729\) and [Neiii]\(\lambda 3869\) emission lines can enhance both F410M and F444W fluxes and mimic a Balmer break. Such strong lines have been observed at \(z\sim 10.6\) for instance in the NIRSpec spectrum of GN-\(z11\)(Bunker et al., 2023). This explains the difference observed between the Eazy and BEGALE fits for a few objects, such as ID 21623. Candidates also show small SFR values, and young stellar ages between 10 and 100 Myr, confirming previous _JWST_ results at similar redshifts (Furtak et al., 2021; Topping et al., 2022; Austin et al., 2023; Whirler et al., 2023). Taken at face value and considering a constant star formation, their stellar mass would imply older ages. It is clear that the current SFR derived from our SED fitting is not representative of the entire star formation history of these candidates. Intermittent episodes of intense star formation, or simply a higher SFR, likely occurred in the past in order to build up the estimated stellar mass so quickly. #### 4.2.1 Mass-Luminosity relation In addition to the stellar mass derived from the SED fitting, we computed the absolute rest-frame UV magnitude by combining the observed magnitude in F200W band and the photometric redshift (column 6 of Table 2). The M\({}_{\star}\)-\(M_{\rm UV}\) relation provides insights into the stellar mass build up of galaxies and its evolution with redshift. Figure 7 shows the M\({}_{\star}\)-\(M_{\rm UV}\) best-fit relation determined at \(z\sim 6\) (blue line; Furtak et al., 2021) and at \(z\sim 9\) (black line; Bhatwadekar et al., 2019) using _HST_ and _Spitzer_ observations. The redshift-evolution of this relation can already be observed. Properties of the present sample are shown with squares. The red squares indicate the robust subset of candidates (cf. Table 2). Our results are consistent with a redshift evolution, where galaxies are on average below the established relations at lower redshifts. They are also in agreement with other recent _JWST_ constraints derived by Whitler et al. (2023) represented by purple circles. We note that for galaxy Figure 4: Imaging data and best-fit solutions for 4 of the high-redshift candidates in the range \(9<z<11\). The top row of each panel shows image cutouts in the 7 _JWST_ filters. The bottom panel shows the best-fit SEDs with Eazy (orange curve) and BEGALE (blue curve) together with object ID and the best fit \(\chi^{2}\) from both codes. The purple diamonds represent the observed photometric data points (and their associated \(1\sigma\) uncertainties) measured in _HST_ and _JWST_ images. The orange and blue circles represent the best fit magnitudes in their respective bands for Eazy and BEGALE solutions, respectively. We also show the probability distribution function (PDF) of the photometric redshift solutions (for both codes) on the right, together with the best-fit \(z_{phot}\) and the total probability enclosed in a redshift width of \(\Delta z=1\) around the Eazy best-fit solution. candidates at \(z\gtrsim 11\), the absence of constraints on the Balmer break tend to underestimate the stellar mass we derive from SED fitting. 
Furthermore, the green-shaded region and the red-dashed line are theoretical predictions at \(z=9\) computed from hydrodynamical zoom simulations (Kannan et al., 2022) and semi-analytical models (Yung et al., 2019). Models also predict a rapid evolution with redshift, in line with our results. Figure 5: same as figure 4 but for one of the candidates in the \(12<z<15\) range. Figure 6: Absolute UV magnitude as a function of redshift for the present sample compared to literature results. The gray circles (stars) represent a compilation of known galaxies with photometric (spectroscopic) redshifts at \(z>8\). The rest of the circles represents photometrically-selected galaxies from recent _JWST_ observations (Finkelstein et al., 2022; Atek et al., 2023; Austin et al., 2023; Whitler et al., 2023), the colored stars show galaxies with spectroscopic confirmations by NIRSpec observations (Curtis-Lake et al., 2022; Bunker et al., 2023). Figure 7: The stellar mass-luminosity relation for high-redshift galaxies. The sample of the present study is represented with orange squares. Best-fr relations derived from observational constraints at \(z\sim 6\) are indicated with the blue-shaded region (Furtz et al., 2021), while results at \(z\sim 9\) are indicated with a black line (Bhatuwdekar et al., 2019). Theoretical predictions from semi-analytical models at \(z=10\)(Yung et al., 2019) and hydrodynamical simulations at \(z=9\) are plotted with a red-dashed line and green-shaded region, respectively. #### 4.2.2 UV Continuum slopes Next, we explore the UV continuum slope \(\beta\), which is widely used to infer the dust attenuation in high-redshift galaxies. This parameter also encodes information about the age of the stellar population, where the contribution from young stars will result in a bluer UV slope. The \(\beta\) slope is measured by fitting a power law of the form \(f_{A}\propto\lambda^{\beta}\) to the rest-frame UV photometric measurements below 3000 A in F200W, F277W, F356W bands. We fixed the redshift to the Eazy best-fit phot\(-z\). To estimate the impact of redshift and photometric uncertainties, we performed a Monte Carlo sampling using a normal distribution around these fixed parameters in the fitting procedure. The results are provided in Table 2. Like most of the \(z>9\) galaxies recently uncovered in _JWST_ observations, these candidates show blue UV slopes ranging from \(\beta=-1.8\) to -2.3, which is expected if these candidates have younger stellar populations with low-dust attenuation and metallicities. Absence of dust has also been invoked to explain the high number density of high-z galaxies (Ferrara et al., 2022). We do not find however evidence for extremely blue slopes like those that were reported in early _JWST_ data (Topping et al., 2022; Adams et al., 2023). When compared to literature results (see Figure 8), these candidates show bluer UV slopes at a given luminosity than those observed in \(z\sim 6\) galaxies, for instance (Bouwens et al., 2014) or more recently from _JWST_ observations (Nanayakakara et al., 2022). They follow the general trend of \(z>9\) galaxies (Austin et al., 2023; Cullen et al., 2023; Curtis-Lake et al., 2022; Whitler et al., 2023), for which we show only robust measurements with uncertainties below 0.3 dex. Similarly, the \(\beta\)-\(M_{\rm UV}\) relation derived from the latest numerical simulations, such FLARES(Vijayan et al., 2021; Wilkins et al., 2023) and Thesan(Kannan et al., 2022), show broad agreement with the observed \(\beta\) values. 
In addition to the dependence with age, the UV slope is also affected by the nebular continuum, whose contribution is larger at longer wavelengths, which results in redder \(\beta\) values. Albeit with large uncertainties, these effects, combined with constraints from nebular recombination lines, can be used to indirectly infer the escape fraction of ionizing continuum \(f_{esc}\) in galaxies at the epoch of reionization (e.g., Zackrisson et al., 2017; Plat et al., 2019; Topping et al., 2022). ## 5 Summary In this paper, we presented the results of our search for \(z>9\) galaxy candidates in the _JWST_ UNCOVER survey. We used deep NIRCam and NIRISS imaging from three observing programs UNCOVER (Bezanson et al., 2022), GLASS (Treu et al., 2022), and a DD program ID 2767, in addition to ancillary _HST_ observations. We combined dropout selection and photometric redshift estimates from two independent codes, Eazy and BEAGLE, to identify high-redshift galaxy candidates. We report the detection of 16 candidates at \(9<z<12\) and 3 candidates at \(12<z<13\). According to our quality assessment, a total of 7 candidates are deemed robust among this sample. Candidates span a wide dynamic range in luminosity, from \(M_{\rm UV}\sim\) -22 to -17.6 mag. Some of these sources are among the faintest galaxies discovered at \(z>10\), although most of their magnification factors are still relatively modest, i.e. below \(\mu=5\). Two candidates have spectroscopic confirmation at redshift \(z=9.76\) and \(z=9.3\). In addition to photometric redshift, we ran refined SED fitting with BEAGLE to constrain the physical properties of the sample, fixing the redshift and focusing on four parameters: the stellar mass, the stellar age, the star formation rate averaged over the last 10 Myr, and the attenuation \(\tau_{V}\). We find that galaxies have young ages between 10 and 100 Myr and low star formation rates from \(\sim 0.2\) to \(\sim 7\) M\({}_{\odot}\) yr\({}^{-1}\). These results confirm previous findings in early _JWST_ observations of \(z>9\) galaxies. Most of the candidates have low stellar masses in the range log(M\({}_{\bullet}\)/M\({}_{\odot}\)) \(\sim\) 6.8-9.5. We find evidence for a rapid redshift-evolution of the mass-luminosity relation, in line with recent observational results and theoretical predictions. We also find that these galaxies have blue UV continuum slopes, between \(\beta=-1.8\) and \(\beta=-2.4\), although we do not find extremely blue \(\beta\) values as measured in recent \(z>10\) studies. The young ages we measure are consistent with these blue continuum slopes. We also see a redshift-evolution of the UV slope for a given intrinsic magnitude. The sample aligns with theoretical predictions and similar observational results for the \(\beta\)-\(M_{\rm UV}\) relation at \(z>9\). In the near future, _JWST_ will continue to provide increasingly larger samples of rest-optical observations of galaxies at \(z>9\). Combined with follow-up spectroscopy, these data will help constrain more precisely the physical properties and star formation histories of these galaxies. In particular, ultra-deep observations of additional lensing clusters will help us push to fainter and more representative galaxies at those early epochs. ## Acknowledgements This work is based on observations obtained with the NASA/ESA/CSA _JWST_ and the NASA/ESA _Hubble Space Telescope_ (HST), retrieved from the Mikulski Archive for Space Telescopes (MAST) at the _Space Telescope Science Institute_ (STScI). 
STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. Figure 8: The UV continuum slope \(\beta\) as a function of the UV magnitude. The present sample is represented by orange squares, while recent _JWST_ results at similar redshifts are marked with circles (Austin et al., 2023; Cullen et al., 2023; Whitler et al., 2023) and stars (Curtis-Lake et al., 2022). We also show theoretical predictions for this relation with a shaded green region (Kannan et al., 2022) and a gray region (Vijayan et al., 2021), and two brown curves with (upper curve) and without (lower curve) dust attenuation (Wilkins et al., 2023). The empirical relation established at \(z\sim 6\) by Bouwens et al. (2014) is represented with the blue line. This work has made use of the CANDIDE Cluster at the _Institut d'Astrophysique de Paris_ (IAP), made possible by grants from the PNCG and the region of Ile de France through the program DIM-ACAV+. This work was supported by CNES, focused on the _JWST_ mission. This work was supported by the Programme National Cosmology and Galaxies (PNCG) of CNRS/INSU with INP and IN2P3, co-funded by CEA and CNES. P. Dayal acknowledges support from the NWO grant 016.VID1.189.162 ("ODIN") and the European Commission's and University of Groningen's CO-FUND Rosalind Franklin program. ## Data Availability The data underlying this article are publicly available on the Mikulski Archive for Space Telescopes2 (MAST), under program ID 2561. Reduced and calibrated mosaics are also available on the UNCOVER webpage: [https://just-uncover.github.io/](https://just-uncover.github.io/) Footnote 2: [https://archive.stsci.edu/](https://archive.stsci.edu/)
2307.08870
Modeling Data Analytics Architecture for Smart Cities Data-Driven Applications using DAT
Extracting valuable insights from vast amounts of information is a critical process that involves acquiring, storing, managing, analyzing, and visualizing data. Providing an abstract overview of data analytics applications is crucial to ensure that collected data is transformed into meaningful information. One effective way of achieving this objective is through Data Architecture. This article shares our experiences in developing a Data Analytics Architecture (DAA) using model-driven engineering for Data-Driven Smart Cities applications utilizing DAT.
Moamin Abughazala, Henry Muccini
2023-07-17T21:52:57Z
http://arxiv.org/abs/2307.08870v2
# Modeling Data Analytics Architecture for Smart Cities Data-Driven Applications using DAT ###### Abstract Extracting valuable insights from vast amounts of information is a critical process that involves acquiring, storing, managing, analyzing, and visualizing data. Providing an abstract overview of data analytics applications is crucial to ensure that collected data is transformed into meaningful information. One effective way of achieving this objective is through Data Architecture. This article shares our experiences in developing a Data Analytics Architecture (DAA) using model-driven engineering for Data-Driven Smart Cities applications utilizing DAT. Analytics Architecture, Big Data Architecture, IoT, Smart Cities, Data-Driven ## I Introduction Analyzing data for Smart Cities requires utilizing various methods and technologies to collect, store, and examine the information produced by connected devices and sensors. This aids in comprehending the functioning and conduct of Smart Cities systems and identifying patterns and trends in the data that can enhance their efficacy [1][2]. Smart Cities data analytics comprises three primary approaches: real-time, predictive, and prescriptive analytics [3][4]. Real-time analytics [5] utilizes algorithms and software to swiftly analyze data generated by IoT devices, promptly identifying and responding to data trends and patterns. Predictive analytics, on the other hand, harnesses historical data and machine learning algorithms to anticipate future outcomes and behaviors. Lastly, prescriptive analytics employs optimization algorithms to suggest actions and decisions that can assist businesses in reaching their objectives. Data analytics has numerous applications in the Smart Cities domain, such as predictive maintenance, energy management, supply chain optimization, and customer behavior analysis [4]. By utilizing data analytics, we can obtain valuable insights into the performance and behavior of Smart Cities systems, which can help improve their operations, reduce costs, and drive innovation. Analytics architectures refer to the systems and technologies utilized for collecting, storing, processing, and analyzing data to obtain insights and make data-driven decisions. These architectures commonly involve using distributed computing systems, such as server clusters or cloud-based platforms, to manage data storage and processing. Along with these systems, analytics architectures also comprise specialized software tools and algorithms intended for analyzing and acquiring insights from data. These tools may comprise data mining, machine learning algorithms, and visualization and reporting tools for presenting the analysis results. An analytics architecture's specific components and design may vary depending on the organization's objectives and requirements. This paper showcases the efficacy of modeling languages in crafting an Analytics Data Warehouse for Data-Driven Smart Cities applications, substantiated by a detailed case study. The paper is organized as follows. The background is presented in Section II. The applied real case study is in Section III, and the conclusions are finally drawn in Section IV. ## II Background ### _Big Data Analytics_ Big Data analytics (BDA) and Business Intelligence (BI) are two crucial fields that empower businesses to make more informed decisions. BDA involves analyzing large amounts of raw data to uncover valuable insights, such as unidentified relationships and market trends. 
Skilled data scientists carry out this process to provide businesses with the information they need for making strategic decisions. BI, in turn, is a highly specialized field that uses various technologies, applications, and practices to gather, store, access, and analyze data. With BI, businesses can identify trends and data patterns, and forecast future outcomes, enabling them to make informed decisions. ### _DAT: Data Architecture Framework_ The DAT tool [6] is ideal for modeling data architecture in IoT applications [7]. It clearly explains how data flows through the system and provides a blueprint for it. Stakeholders can describe two levels of data architecture: high-level architecture (HLA) and low-level architecture (LLA). It represents the data from source to destination, including formats, processing, storage, analysis, and consumption methods. The tool is built on a structural and behavioral meta-model to support the documentation of Data-Driven applications. The data-view architecture modeling approach follows the IEEE/ISO/IEC 42010 standard [8] and uses the Data architecture structural and behavioral view (DAML) modeling language. DAT is considered the fourth view for CAPS [9][10][11][12]. ## III Application of DAT Models to the Analytics Data Warehouse Case Study This section introduces the Analytics Data Warehouse (ADW) case study and its DAT. ### _The Analytics Data Warehouse_ The data sources of this Analytics Data Warehouse are the AWS RDS (Relational Database Service) instances and the duplicate staging ORC files saved on S3 that are generated from the data streams. In addition, this data warehouse consumes data from CSV files, worksheets, and JSON objects that augment the data in the warehouse to make it rich for analytics purposes. The ETL process in this data warehouse is developed and run using a cloud-based third-party tool called Keboola Connectors. This tool provides all the needed connectors for data sources like AWS MySQL RDSs, ORC (Optimized Row Columnar) file extractors, Google Drive sheets, CSV readers, etc. Keboola also offers the required environment and tools to perform data transformations on the extracted data. Moreover, it provides the needed capabilities to schedule the execution of all extractions and transformations and the orchestration between them. In this implementation of the data warehouse, the data prepared using Keboola is stored in the cloud-based Snowflake data warehouse. The data in the Snowflake data warehouse is consumed by another cloud-based advanced BI and analytics framework called ThoughtSpot. This framework allows the customers to build analytical reports, dashboards, and KPIs. Through ThoughtSpot, the Company provides customers with an out-of-the-box set of reports, dashboards, and KPIs. Also, customers can build reports and dashboards based on their needs and business inquiries and perform advanced AI-driven data insights and searches. ThoughtSpot reports and dashboards are integrated with the Company's system (the front-end portal and mobile) applications through the SSO mechanism with white-labeled embedding capabilities to offer the customers and end users one integrated and unified working environment and experience. Fig. 1: Modeling of the Analytics Data Warehouse. ### _The DAT model applied to the ADW Case Study_ This section shows the modeling of the ADW case study using DAT.
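Before walking through the DAT structural view, the pipeline just described can be summarized in a minimal, code-level sketch. The Python dataclasses below are purely illustrative and are not the DAT/DAML notation; the node names and formats simply mirror the case-study description above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataNode:
    name: str
    role: str                          # "source", "processing", "storage" or "analytics"
    formats: List[str] = field(default_factory=list)

@dataclass
class DataFlow:
    source: DataNode
    target: DataNode
    mode: str                          # e.g. "batch" or "stream"

# High-level structure of the Analytics Data Warehouse described above
rds         = DataNode("AWS MySQL RDS", "source",   ["relational tables"])
s3_orc      = DataNode("S3 staging",    "source",   ["ORC"])
files       = DataNode("Flat files",    "source",   ["CSV", "JSON", "XLSX"])
keboola     = DataNode("Keboola (ETL)", "processing")
snowflake   = DataNode("Snowflake DW",  "storage")
thoughtspot = DataNode("ThoughtSpot",   "analytics", ["reports", "dashboards", "KPIs"])

flows = [
    DataFlow(rds, keboola, "batch"),
    DataFlow(s3_orc, keboola, "batch"),
    DataFlow(files, keboola, "batch"),
    DataFlow(keboola, snowflake, "batch"),
    DataFlow(snowflake, thoughtspot, "batch"),
]

for f in flows:
    print(f"{f.source.name} --({f.mode})--> {f.target.name}")
```

Such a lightweight representation is convenient for cross-checking that every flow in the DAT diagram has a concrete counterpart (connector, schedule, and format) in the deployed pipeline.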
From a structural point of view, Figure 1 shows the four primary data nodes: Data Sources, Processing Node (ETL - Keboola), Data Warehouse (Snowflake), and Analytics Platform (ThoughtSpot). The Data Sources node shows how the data is collected from different sources in different formats, integrated with time, financial, and sales data, and ingested to be transformed into a column-oriented format saved on a file system (Amazon S3). Other data sources (Excel, JSON, and CSV files) are sent directly to the Keboola node to be processed in the cloud. This node provides all the needed connectors to the other data sources, as well as the capabilities to schedule (batch) extractions and transformations. The extracted and prepared data is stored in the cloud-based Snowflake data warehouse and is then consumed by a cloud-based advanced BI and analytics framework. The Analytics node shows the ability of this framework to provide analytical reports, dashboards, and KPIs to the customers. ## IV Conclusion In this work, we presented the Data Architecture Tool (DAT) to model the Data Analytics Architecture of Smart Cities Data-Driven Applications as a part of the VASARI project.
2304.11871
Charmless Semileptonic Baryonic $B_{u,d,s}$ Decays
We study $\bar B_q\to {{\rm\bf B}\bar{\rm\bf B}}' l \bar\nu$ and $\bar B_q\to {{\rm\bf B}\bar{\rm\bf B}}' \nu \bar\nu$ decays with all low lying octet and decuplet baryons using a topological amplitude approach. In tree induced $\bar B_q\to {{\rm\bf B}\bar{\rm\bf B}}' l \bar\nu$ decay modes, we need 2 tree and 1 annihilation amplitudes in octet-anti-octet decay modes, 1 tree amplitude in octet-anti-decuplet decay modes, 1 tree amplitude in decuplet-anti-octet decay modes and 1 tree and 1 annihilation amplitudes in decuplet-anti-decuplet decay modes. In loop induced $\bar B_q\to {{\rm\bf B}\bar{\rm\bf B}}' \nu \bar\nu$ decay modes, similar numbers of penguin-box and penguin-box-annihilation amplitudes are needed. Relations on these semileptonic baryonic $B_q$ decay amplitudes are found. Furthermore, the ratios of loop topological amplitudes and tree topological amplitudes are fixed by known CKM factors and loop functions. The observed $B^-\to p\bar p \mu^-\bar\nu$ differential rate exhibits threshold enhancement, which is expected to hold in all other semileptonic baryonic modes. The threshold enhancement squeezes the phase space and leads to very large SU(3) breaking effects in the decay rates. They are estimated using the measured $B^-\to p\bar p \mu^-\bar\nu$ differential rate and model calculations. Modes with relatively unsuppressed rates and good detectability are identified. These modes can be searched experimentally in near future and the rate estimations can be improved when more modes are discovered. Ratios of rates of some loop induced $\bar B_q\to {{\rm\bf B}\bar{\rm\bf B}}' \nu \bar\nu$ decays and tree induced $\bar B_q\to {{\rm\bf B}\bar{\rm\bf B}}' l \bar\nu$ decays are predicted and can be checked experimentally. They can be tests of the SM. Some implications on $\bar B_{q}\to {{\rm\bf B}\bar{\rm\bf B}}' l^+ l^-$ decays are also given.
Chun-Khiang Chua
2023-04-24T07:33:40Z
http://arxiv.org/abs/2304.11871v2
# Charmless Semileptonic Baryonic \(B_{u,d,s}\) Decays ###### Abstract We study \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays with all low lying octet (\({\cal B}\)) and decuplet (\({\cal D}\)) baryons using a topological amplitude approach. In tree induced \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decay modes, we need two tree amplitudes and one annihilation amplitude in \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) decays, one tree amplitude in \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) decays, one tree amplitude in \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays and one tree amplitude and one annihilation amplitude in \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decays. In loop induced \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decay modes, similar numbers of penguin-box and penguin-box-annihilation amplitudes are needed. As the numbers of independent topological amplitudes are highly limited, there are plenty of relations on these semileptonic baryonic \(B_{q}\) decay amplitudes. Furthermore, the loop topological amplitudes and tree topological amplitudes have simple relations, as their ratios are fixed by known CKM factors and loop functions. It is observed that the \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) differential rate exhibits threshold enhancement, which is expected to hold in all other semileptonic baryonic modes. The threshold enhancement effectively squeezes the phase space toward the threshold region and leads to very large SU(3) breaking effects in the decay rates. They are estimated using the measured \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) differential rate and model calculations. From the model calculations, we find that branching ratios of non-annihilation \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) modes are of the orders of \(10^{-9}\sim 10^{-6}\), while branching ratios of non-penguin-box-annihilation \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) modes are of the orders of \(10^{-12}\sim 10^{-8}\). Modes with relatively unsuppressed rates and good detectability are identified. These modes can be searched experimentally in near future and the rate estimations can be improved when more modes are discovered. Ratios of rates of some loop induced \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays and tree induced \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decays are predicted and can be checked experimentally. They can be tests of the SM. 
pacs: 12.30.-v, 12.30.-k, 12.30.-k ###### Contents * I Introduction * II Formalism * II.1 Topological amplitudes * II.2 Modeling the topological amplitudes * III Results on amplitudes * III.1 Decay amplitudes in terms of topological amplitudes * III.2 Relations of decay amplitudes * IV Results on rates * IV.1.1 \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}\nu\bar{\nu}\) decay rates * IV.2 \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\), \(\overline{B}_{q}\to{\cal B}\overline{\cal D}\nu\bar{\nu}\), \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}\nu\bar{\nu}\) decay rates * IV.2 \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decay rates * V Discussions and Conclusion * Acknowledgments * A \(\overline{B}_{q}\to{\bf B}\bar{\bf B}^{\prime}\) matrix elements in the asymptotic limit * B Formulas of decay rates for \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays Introduction Recently, there are some experimental activities on \(\overline{B}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) and \(\overline{B}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays, where \({\bf B}\overline{\bf B}^{\prime}\) are baryon anti-baryon pairs [1; 2; 3; 4; 5]. The present experimental results are summarized in Table 1. In particular, the branching ratio of \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}_{\mu}\) decay is measured to be \((5.27^{+0.23}_{-0.24}\pm 0.21\pm 0.15)\times 10^{-6}\) by LHCb [3] and \(Br(B^{-}\to p\bar{p}l\bar{\nu})=(5.8^{+2.6}_{-2.3})\times 10^{-6}\) by Belle [2] (see also [4]), while only upper limit of \(Br(B^{-}\to\Lambda\bar{p}\nu\bar{\nu})<3.0\times 10^{-5}\) was reported by BaBar [5]. Theoretically, the branching ratios of \(\overline{B}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decays were estimated and predicted to be of the order of \(10^{-6}\) to \(10^{-4}\)[6; 7]. Some recent studies are devoted to understand the rate of the \(B^{-}\to p\bar{p}l\bar{\nu}\) decay [8; 9] as the measured rate is roughly 20 times smaller than a previous theoretical prediction [7], while the shape of the predicted differential rate using QCD counting rules agrees well with data, which exhibits threshold enhancement [3; 7]. In this work we will employ the approach of refs. [10; 11; 12; 13], which was used to study two-body baryonic \(B\) decays, \(\overline{B}\to{\bf B}\overline{\bf B}^{\prime}\), making use of the well established topological amplitude formalism [14; 15; 16; 17; 18; 19; 20; 21]. The decay amplitudes of \(\overline{B}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) and \(\overline{B}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays with all low lying octet (\({\cal B}\)) and decuplet (\({\cal D}\)) baryons will be decomposed into combinations of several topological amplitudes. As the numbers of topological amplitudes are highly limited, there are many relations of decay amplitudes. It is well known that a decay rate strongly depends on the masses of the final state particles when the decay is just above the threshold. The rates may vary in orders of magnitudes even if the amplitudes are of similar sizes. One normally does not expect such behavior in \(B_{q}\) decays when large phase spaces are available. 
From the experimental differential rate \(dBr/dm_{p\bar{p}}\) of the \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) decay from LHCb [3], as shown in Fig. 1, one can easily see that the spectrum exhibits prominent threshold enhancement, which is a common feature in three- or more-body baryonic \(B_{q}\) decays [6; 7; 8; 9; 22; 23; 24; 25; 26]. Threshold enhancement is expected to hold in all other semileptonic baryonic modes considered in this work as well. The threshold enhancement effectively squeezes the phase space to the threshold region, see Fig. 1, and thus mimics the situation of a decay just above threshold. Consequently, it amplifies the effects of SU(3) breaking in final state baryon masses and can lead to very large SU(3) breaking effects in the decay rates. The SU(3) breakings in the decay rates from threshold enhancements will be estimated using the measured \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) differential rate and model calculations with available theoretical inputs from refs. [8; 9], which can reproduce the measured \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) differential rate. We will try to identify modes with relatively unsuppressed rates and good detectability. The rate estimations can be improved when more modes are discovered. Recently, hints of new physics effects in rare \(B\) decays have been accumulating; see, for example, [27; 28; 29]. Given the present situation and the fact that \(\overline{B}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decays are tree induced decay modes, while \(\overline{B}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays are loop induced decay modes, it will be interesting and useful to identify \(\overline{B}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) and \(\overline{B}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decay modes which have good detectability. Their rate ratios, especially those insensitive to the modeling of SU(3) breaking from threshold enhancement, can be tests of the Standard Model (SM). The layout of this paper is as follows. We give the formalism for decomposing amplitudes in terms of topological amplitudes and the modeling of the topological amplitudes in Sec. II. In Secs. III and IV, results on decay amplitudes in terms of topological amplitudes, relations of decay amplitudes, and decay rates are provided. Discussions and conclusion are given in Sec. V. Appendix A, concerning the transition matrix elements in the asymptotic limit, and Appendix B, with some useful formulas for calculating 4-body decay rates, are added at the end of the paper.
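To make the last point concrete, the toy calculation below integrates a schematic dibaryon spectrum in which the squared amplitude falls off as implied by \(1/t^{3}\) form factors (cf. the modeling in Sec. II.2), multiplied by the baryon momentum in the dibaryon rest frame and a crude \((m_{B}-m_{{\bf B}\overline{\bf B}^{\prime}})^{5}\) stand-in for the lepton-pair phase space. It is only meant to illustrate how the steep fall-off amplifies the sensitivity of the rate to the final-state baryon masses; the actual estimates in this work use the measured spectrum and the models of Sec. II.2, not this toy.

```python
import numpy as np

M_B = 5.279   # B meson mass [GeV] (B^- for illustration)

def toy_spectrum(m, m1, m2):
    """Schematic dBr/dm for a B -> (baryon pair) l nu toy: 1/t^3 form
    factors (|amp|^2 ~ 1/t^6 with t = m^2), the baryon momentum p* in the
    dibaryon rest frame, and a crude (M_B - m)^5 stand-in for the
    lepton-pair phase space.  Purely illustrative, no spin structure."""
    t = m * m
    lam = np.maximum((t - (m1 + m2) ** 2) * (t - (m1 - m2) ** 2), 0.0)
    p_star = np.sqrt(lam) / (2.0 * m)
    return p_star * (M_B - m) ** 5 / t**6

def toy_rate(m1, m2, n=20000):
    m = np.linspace(m1 + m2 + 1e-4, M_B, n)
    return np.trapz(toy_spectrum(m, m1, m2), m)

r_pp   = toy_rate(0.938, 0.938)   # p pbar
r_xixi = toy_rate(1.318, 1.318)   # Xi Xibar-like masses
print(f"toy rate ratio (Xi Xibar)/(p pbar) = {r_xixi / r_pp:.3g}")
```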
## II Formalism ### Topological amplitudes The decay amplitudes of \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays are given by [7; 30] \[A(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}) = \frac{G_{F}}{\sqrt{2}}V_{ub}\langle{\bf B}\overline{\bf B}^{ \prime}|\bar{u}_{L}\gamma_{\mu}b_{L}|\overline{B}_{q}\rangle\bar{l}_{L}\gamma^ {\mu}\nu_{L},\] \[A(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}) = \frac{G_{F}}{\sqrt{2}}\frac{\alpha_{\rm em}}{2\pi\sin^{2}\theta_ {W}}V_{ts}^{*}V_{tb}D(m_{t}^{2}/m_{W}^{2})\langle{\bf B}\overline{\bf B}^{ \prime}|\bar{s}_{L}\gamma_{\mu}b_{L}|\overline{B}_{q}\rangle\bar{\nu}_{L} \gamma^{\mu}\nu_{L}, \tag{1}\] where \(V_{ub}\), \(V_{ts}\) and \(V_{tb}\) are Cabibbo-Kobayashi-Maskawa (CKM) matrix elements, \(D(x)\), \(D_{0}(x)\) and \(D_{1}(x)\) are loop functions with [31] \[D(x) = D_{0}(x)+\frac{\alpha_{s}}{4\pi}D_{1}(x),\] \[D_{0}(x) = \frac{x}{8}\bigg{(}-\frac{2+x}{1-x}+\frac{3x-6}{(1-x)^{2}}\ln x \bigg{)},\] \[D_{1}(x) = -\frac{23x+5x^{2}-4x^{3}}{3(1-x)^{2}}+\frac{x-11x^{2}+x^{3}+x^{4 }}{(1-x)^{3}}\ln x+\frac{8x+4x^{2}+x^{3}-x^{4}}{2(1-x)^{3}}\ln^{2}x \tag{2}\] \[-\frac{4x-x^{3}}{(1-x)^{2}}Li_{2}(1-x)+8x\frac{dD_{0}(x)}{dx}\ln \frac{\mu^{2}}{m_{W}^{2}}.\] Note that the \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decay is governed by the matrix element, \(\langle{\bf B}\overline{\bf B}^{\prime}|\bar{u}_{L}\gamma_{\mu}b_{L}|\overline {B}_{q}\rangle\), while the \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decay is governed by the matrix element, \(\langle{\bf B}\overline{\bf B}^{\prime}|\bar{s}_{L}\gamma_{\mu}b_{L}| \overline{B}_{q}\rangle\). These two matrix elements are difficult to calculate as they involve baryon pairs \({\bf B}\overline{\bf B}^{\prime}\) in the final Figure 1: The experimental differential rate \(dBr/dm_{p\bar{p}}\) of \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) decay from LHCb [3] exhibits threshold enhancement. The threshold enhancement effectively squeezes the phase space toward the threshold region. state. Nevertheless they are related by interchanging \(u\) and \(s\) and, hence, can be related by SU(3) transformations. It is known that topological amplitude approach is related to SU(3) approach [14; 16; 18]. We follow the approach similar to the one employed in the study of \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\) decays [10; 11; 12; 13] to decompose \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) and \({\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decay amplitudes into topological amplitudes. From Eq. (1), we see that the Hamiltonian governing \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decays has the following flavor structure, \[(\bar{u}b)=H^{i}_{T}(\bar{q}_{i}b), \tag{3}\] with \[H^{1}_{T}=1,\quad{\rm otherwise}\;\;H^{i}_{T}=0, \tag{4}\] where we take \(q_{1,2,3}=u,d,s\) as usual. Similarly, the Hamiltonian governing \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays has the following flavor structure, \[(\bar{s}b)=H^{k}_{PB}(\bar{q}_{k}b), \tag{5}\] with \[H^{3}_{PB}=1,\quad{\rm otherwise}\;\;H^{k}_{PB}=0. \tag{6}\] These \(H_{T}\) and \(H_{PB}\) will be used as spurion fields in the following constructions of effective Hamiltonian, \(H_{\rm eff}\). 
We shall start with \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decays with \({\cal D}\) the low-lying decuplet baryon. The flavor flow diagram for a \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decay is given in Fig. 2. Note that in the case of a \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decay, the \(q_{i}q_{j}q_{l}\) and \(\bar{q}^{l}\bar{q}^{j}\bar{q}^{m}\) flavors as shown in Fig. 2 correspond to the following fields, \[q_{i}q_{j}q_{l}\to\overline{\cal D}_{ijl},\quad\bar{q}^{l}\bar{q}^{j}\bar{q}^{ m}\to{\cal D}^{jlm}, \tag{7}\] in the Hamiltonian, respectively, where \({\cal D}^{jlm}\) denotes the familiar decuplet field, and, explicitly, we have \({\cal D}^{111}=\Delta^{++}\), \({\cal D}^{112}=\Delta^{+}/\sqrt{3}\), \({\cal D}^{122}=\Delta^{0}/\sqrt{3}\), \({\cal D}^{222}=\Delta^{-}\), \({\cal D}^{113}=\Sigma^{*-}/\sqrt{3}\), \({\cal D}^{123}=\Sigma^{*-}/\sqrt{3}\), \({\cal D}^{223}=\Sigma^{*0}/\sqrt{3}\), \({\cal D}^{233}=\Xi^{*-}/\sqrt{3}\) and \({\cal D}^{333}=\Omega^{-}\) (see, for example [32]). By using the above correspondent rule, we obtain the following effective Hamiltonian for \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decays, \[H_{\rm eff}(\overline{B}_{q}\to{\cal D}\overline{\cal D}l\bar{\nu})\;=\;6\,T_{ {\cal D}\overline{\cal D}}\,\overline{B}_{m}H^{i}_{T}\overline{\cal D}_{ijl}{ \cal D}^{jjm}+A_{{\cal D}\overline{\cal D}}\,\overline{B}_{i}H^{i}_{T} \overline{\cal D}_{mjl}{\cal D}^{ljm}, \tag{8}\] with \(\overline{B}_{m}=\left(B^{-},\overline{B}^{0},\overline{B}^{0}_{s}\right)\). Without lost of generality, the pre-factors are assigned for latter purpose. For the \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays, we note that the anti-octet final state is produced by the \({\cal B}^{j}_{k}\) field with [32] \[{\cal B}=\left(\begin{array}{ccc}\frac{\Sigma^{0}}{\sqrt{2}}+\frac{\Lambda} {\sqrt{6}}&\Sigma^{+}&p\\ \Sigma^{-}&-\frac{\Sigma^{0}}{\sqrt{2}}+\frac{\Lambda}{\sqrt{6}}&n\\ \Xi^{-}&\Xi^{0}&-\sqrt{\frac{2}{3}}\Lambda\end{array}\right), \tag{9}\] where \({\cal B}^{j}_{k}\) has the following flavor structure \(q^{j}q^{a}q^{b}\epsilon_{abk}-\frac{1}{3}\,\delta^{j}_{k}q^{c}q^{a}q^{b}\)[32]. To match the flavor of \(\bar{q}^{l}\bar{q}^{j}\bar{q}^{m}\) in the final state as shown in Fig. 2, we use \[\bar{q}^{l}\bar{q}^{j}\bar{q}^{m}\to\epsilon^{ljb}{\cal B}^{m}_{b},\;\;\epsilon^ {lbm}{\cal B}^{j}_{b},\;\;\epsilon^{bjm}{\cal B}^{l}_{b}, \tag{10}\] which are, however, not totally independent, as it can be easily shown that they are subjected to the following relation, \[\epsilon^{ljb}{\cal B}^{m}_{b}+\epsilon^{lbm}{\cal B}^{j}_{b}+ \epsilon^{bjm}{\cal B}^{l}_{b}=0. \tag{11}\] Hence we only need two of the terms in the right-hand-side of Eq. (10), and, without loss of generality, the first two terms are chosen. The effective Hamiltonian of the \(\overline{B}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays can be obtained by replacing \({\cal D}^{ljm}\) in Eq. 
(8) by \(({\cal B}_{1})^{ljm}\equiv\epsilon^{ljb}{\cal B}^{m}_{b}\) and \(({\cal B}_{2})^{ljm}\equiv\epsilon^{bjm}{\cal B}^{l}_{b}\), and, consequently, we have \[H_{\rm eff}(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{ \nu}) = \sqrt{6}\,T_{1{\cal D}\overline{\cal B}}\,\overline{B}_{m}H^{i}_{T} \overline{\cal D}_{ijl}\epsilon^{ljb}{\cal B}^{m}_{b}+\sqrt{6}\,T_{2{\cal D} \overline{\cal B}}\,\overline{B}_{m}H^{i}_{T}\overline{\cal D}_{ijl}\epsilon ^{bjm}{\cal B}^{l}_{b} \tag{12}\] \[+\sqrt{6}\,A_{1{\cal D}\overline{\cal B}}\,\overline{B}_{i}H^{i} _{T}\overline{\cal D}_{mjl}\epsilon^{ljb}{\cal B}^{m}_{b}+\sqrt{6}\,A_{2{\cal D }\overline{\cal B}}\,\overline{B}_{i}H^{i}_{T}\overline{\cal D}_{mjl}\epsilon ^{bjm}{\cal B}^{l}_{b},\] where some pre-factors are introduced without lost of generality. Note that the \(T_{1{\cal D}\overline{\cal B}}\), \(A_{1{\cal D}\overline{\cal B}}\) and \(A_{2{\cal D}\overline{\cal B}}\) terms are vanishing and we only have \[H_{\rm eff}(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{ \nu})=\sqrt{6}\,T_{{\cal D}\overline{\cal B}}\,\overline{B}_{m}H^{i}_{T} \overline{\cal D}_{ijl}\epsilon^{bjm}{\cal B}^{l}_{b}, \tag{13}\] with \(T_{2{\cal D}\overline{\cal B}}\) relabeled to \(T_{{\cal D}\overline{\cal B}}\). Similarly for \(\overline{B}\to{\cal B}\overline{\cal D}l\bar{\nu}\) decays, the \(q_{i}q_{k}q_{l}\) flavor in the final state corresponds to \(\epsilon_{ika}\overline{\cal B}^{a}_{l}\), \(\epsilon_{ial}\overline{\cal B}^{a}_{k}\) and \(\epsilon_{akl}\overline{\cal B}^{a}_{i}\), while the last one is redundant, since it can be expressed by the formers using the following relation, \(\epsilon_{ika}\overline{\cal B}^{a}_{l}+\epsilon_{ial}\overline{\cal B}^{a}_{k }+\epsilon_{akl}\overline{\cal B}^{a}_{i}=0\). Hence we replace the \(\overline{\cal D}_{ijl}\) in Eq. (8) by \((\overline{\cal B}_{1})_{ijl}\equiv\epsilon_{ija}\overline{\cal B}^{a}_{l}\) and \((\overline{\cal B}_{2})_{ijl}\equiv\epsilon_{ajl}\overline{\cal B}^{a}_{i}\) and obtain \[H_{\rm eff}(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{ \nu}) = -\sqrt{6}\,T_{1{\cal B}\overline{\cal D}}\,\overline{B}_{m}H^{i}_{T} \epsilon_{ija}\overline{\cal B}^{a}_{l}{\cal D}^{ljm}-\sqrt{6}\,T_{2{\cal B} \overline{\cal D}}\,\overline{B}_{m}H^{i}\epsilon_{ajl}\overline{\cal B}^{a}_{i }{\cal D}^{ljm} \tag{14}\] \[-\sqrt{6}\,A_{1{\cal B}\overline{\cal D}}\,\overline{B}_{i}H^{i}_ {T}\epsilon_{ija}\overline{\cal B}^{a}_{l}{\cal D}^{ljm}-\sqrt{6}\,A_{2{\cal B} \overline{\cal D}}\,\overline{B}_{i}H^{i}\epsilon_{ajl}\overline{\cal B}^{a}_{ m}{\cal D}^{ljm}\] \[= -\sqrt{6}\,T_{{\cal B}\overline{\cal D}}\,\overline{B}_{m}H^{i}_ {T}\epsilon_{ija}\overline{\cal B}^{a}_{l}{\cal D}^{ljm}\] where the \(T_{2{\cal B}\overline{\cal D}}\), \(A_{1{\cal B}\overline{\cal D}}\) and \(A_{2{\cal B}\overline{\cal D}}\) terms in the equation are vanishing as \(\epsilon_{ajl}{\cal D}^{ljm}=\epsilon_{ajl}{\cal D}^{ljm}=0\), and \(T_{1{\cal B}\overline{\cal D}}\) is relabeled to \(T_{{\cal B}\overline{\cal D}}\) in the last step. To obtain the effective Hamiltonian of \(\overline{B}_{q}\to{\cal B}\overline{\cal B}l\bar{\nu}\) decays, we first replace \(\overline{\cal D}_{ijl}\) and \({\cal D}^{ljm}\) in Eq. 
(8) by \((\overline{\cal B}_{1})_{ijl}\equiv\epsilon_{ija}\overline{\cal B}^{a}_{l}\), \((\overline{\cal B}_{2})_{ijl}\equiv\epsilon_{akl}\overline{\cal B}^{a}_{i}\) and \(({\cal B}_{1})^{lim}\equiv\epsilon^{ljb}{\cal B}^{m}_{b}\), \(({\cal B}_{2})^{lim}\equiv\epsilon^{bjm}{\cal B}^{l}_{b}\), respectively, and obtain \[H_{\rm eff}(\overline{B}_{q}\to{\cal B}\overline{\cal B}l\bar{ \nu}) = -T_{11{\cal B}\overline{\cal B}}\overline{B}_{m}H^{i}_{T}(\overline{ \cal B}_{1})_{ijl}({\cal B}_{1})^{ljm}-T_{12{\cal B}\overline{\cal B}}\overline{B }_{m}H^{i}_{T}(\overline{\cal B}_{1})_{ijl}({\cal B}_{2})^{ljm} \tag{15}\] \[-T_{21{\cal B}\overline{\cal B}}\overline{B}_{m}H^{i}_{T}(\overline {\cal B}_{2})_{ijl}({\cal B}_{1})^{ljm}-T_{22{\cal B}\overline{\cal B}}\overline{B }_{m}H^{i}_{T}(\overline{\cal B}_{2})_{ijl}({\cal B}_{2})^{ljm}\] \[-A_{11{\cal B}\overline{\cal B}}\overline{B}_{i}H^{i}_{T}(\overline {\cal B}_{1})_{mjl}({\cal B}_{1})^{ljm}-A_{12{\cal B}\overline{\cal B}}\overline{B }_{i}H^{i}_{T}(\overline{\cal B}_{1})_{mjl}({\cal B}_{2})^{ljm}\] \[-A_{21{\cal B}\overline{\cal B}}\overline{B}_{i}H^{i}_{T}(\overline {\cal B}_{2})_{mjl}({\cal B}_{1})^{ljm}-A_{22{\cal B}\overline{\cal B}}\overline{B }_{i}H^{i}_{T}(\overline{\cal B}_{2})_{mjl}({\cal B}_{2})^{ljm}.\] Using the following identity \[-2(\overline{\cal B}_{1})_{ijl}({\cal B}_{1})^{ljm} = (\overline{\cal B}_{2})_{ijl}({\cal B}_{1})^{ljm}=-2(\overline{\cal B }_{2})_{ijl}({\cal B}_{2})^{ljm},\] \[-2(\overline{\cal B}_{1})_{mjl}({\cal B}_{1})^{ljm} = (\overline{\cal B}_{1})_{mjl}({\cal B}_{2})^{ljm}=(\overline{\cal B }_{2})_{mjl}({\cal B}_{1})^{ljm}=-2(\overline{\cal B}_{2})_{mjl}({\cal B}_{2}) ^{ljm}, \tag{16}\] the above Hamiltonian can be expressed as \[H_{\rm eff}(\overline{B}_{q}\to{\cal B}\overline{\cal B}l\bar{ \nu}) = (-T_{11\mathcal{B}\overline{\cal B}}+2T_{21\mathcal{B}\overline{ \cal B}}-T_{22\mathcal{B}\overline{\cal B}})\overline{B}_{m}H_{T}^{i}( \overline{\cal B}_{1})_{ijl}({\cal B}_{1})^{ljm} \tag{17}\] \[-T_{12\mathcal{B}\overline{\cal B}}\overline{B}_{m}H_{T}^{i}( \overline{\cal B}_{1})_{ijl}({\cal B}_{2})^{ljm}\] \[+(A_{11\mathcal{B}\overline{\cal B}}-2A_{12\mathcal{B}\overline{ \cal B}}-2A_{21\mathcal{B}\overline{\cal B}}+A_{22\mathcal{B}\overline{\cal B}} )\overline{B}_{i}H_{T}^{i}(\overline{\cal B}_{1})_{mjl}({\cal B}_{1})^{ljm}\] \[= -T_{1\mathcal{B}\overline{\cal B}}\overline{B}_{m}H_{T}^{i} \epsilon_{ija}\overline{\cal B}_{i}^{a}\epsilon^{bjm}{\cal B}_{b}^{l}+T_{2 \mathcal{B}\overline{\cal B}}\overline{B}_{m}H_{T}^{i}\epsilon_{ija}\overline{ \cal B}_{l}^{a}\epsilon^{ljb}{\cal B}_{b}^{m}\] \[+A_{\mathcal{B}\overline{\cal B}}\overline{B}_{i}H_{T}^{i} \epsilon_{mja}\overline{\cal B}_{l}^{a}\epsilon^{ljb}{\cal B}_{b}^{m},\] where the topological amplitudes are redefined as following \[T_{1\mathcal{B}\overline{\cal B}} \equiv T_{12\mathcal{B}\overline{\cal B}},\] \[T_{2\mathcal{B}\overline{\cal B}} \equiv -T_{11\mathcal{B}\overline{\cal B}}+2T_{21\mathcal{B}\overline{ \cal B}}-T_{22\mathcal{B}\overline{\cal B}},\] \[A_{\mathcal{B}\overline{\cal B}} \equiv A_{11\mathcal{B}\overline{\cal B}}-2A_{12\mathcal{B}\overline{ \cal B}}-2A_{21\mathcal{B}\overline{\cal B}}+A_{22\mathcal{B}\overline{\cal B}}. \tag{18}\] With this all effective Hamiltonians of \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decays with low-lying octet and decuplet baryons are obtained. The effective Hamiltonian of the \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays can be obtained similarly. 
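The completeness relation of Eq. (11), which was used to discard redundant contractions above, is simply the statement that antisymmetrizing four indices in three dimensions gives zero once the tracelessness of the octet field is used. A quick numerical check with a random traceless matrix, sketched below, is a convenient sanity test when reproducing the contractions leading to Eqs. (13)-(17).

```python
import numpy as np

# Levi-Civita tensor in three dimensions
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# Random traceless 3x3 matrix standing in for the octet field B^j_k
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
B -= np.trace(B) / 3.0 * np.eye(3)

# Left-hand side of Eq. (11): eps^{ljb} B^m_b + eps^{lbm} B^j_b + eps^{bjm} B^l_b
lhs = (np.einsum('ljb,mb->ljm', eps, B)
       + np.einsum('lbm,jb->ljm', eps, B)
       + np.einsum('bjm,lb->ljm', eps, B))

print(np.max(np.abs(lhs)))   # ~1e-16, i.e. zero up to rounding
```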
We simply give the results in the following equation, \[H_{\rm eff}(\overline{B}_{q}\to{\cal D}\overline{\cal D}\nu \bar{\nu}) = 6PB_{{\cal D}\overline{\cal D}}\,\overline{B}_{m}H_{PB}^{k} \overline{\cal D}_{kjl}{\cal D}^{ljm}+PBA_{{\cal D}\overline{\cal D}}\, \overline{B}_{k}H_{PB}^{k}\overline{\cal D}_{mjl}{\cal D}^{ljm},\] \[H_{\rm eff}(\overline{B}_{q}\to{\cal D}\overline{\cal B}\nu\bar{ \nu}) = \sqrt{6}PB_{{\cal D}\overline{\cal B}}\,\overline{B}_{m}H_{PB}^{k} \overline{\cal D}_{kjl}\epsilon^{bjm}{\cal B}_{b}^{l},\] \[H_{\rm eff}(\overline{B}_{q}\to{\cal B}\overline{\cal D}\nu\bar{ \nu}) = -\sqrt{6}PB_{{\cal B}\overline{\cal B}}\,\overline{B}_{m}H_{PB}^{k} \epsilon_{kja}\overline{\cal B}_{l}^{a}{\cal D}^{ljm},\] \[H_{\rm eff}(\overline{B}_{q}\to{\cal B}\overline{\cal B}\nu\bar{ \nu}) = -PB_{1\mathcal{B}\mathcal{B}}\,\overline{B}_{m}H_{PB}^{k}\epsilon_{ kja}\overline{\cal B}_{l}^{a}{\epsilon^{bjm}{\cal B}_{b}^{l}}+PB_{2 \mathcal{B}\overline{\cal B}}\,\overline{B}_{m}H_{PB}^{k}\epsilon_{kja} \overline{\cal B}_{l}^{a}\epsilon^{ljb}{\cal B}_{b}^{m} \tag{19}\] \[+PBA_{\mathcal{B}\overline{\cal B}}\,\overline{B}_{k}H_{PB}^{k} \epsilon_{mja}\overline{\cal B}_{l}^{a}{\epsilon^{ljb}{\cal B}_{b}^{m}}.\] In summary the effective Hamiltonians of \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) and \({\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays for low-lying octet and decuplet baryons are obtained and are shown in Eqs. (8), (13), (14), (17) and (19). The decay amplitudes can be obtained readily by using these effective Hamiltonians. The results of decay amplitudes in terms of these topological amplitudes and relations on the amplitudes will be given explicitly in the next section. Before we end this section it is important to note that, as shown in Eq. (1), the topological amplitudes \(PB\) and \(T\) and the topological amplitudes \(PBA\) and \(A\) should be related in the following manner, \[\zeta \equiv \frac{PB_{i\overline{\mathcal{B}}\overline{\mathcal{B}}}}{T_{i \overline{\mathcal{B}}\overline{\mathcal{B}}}}=\frac{PBA_{\mathcal{B} \overline{\mathcal{B}}}}{A_{\mathcal{B}\overline{\mathcal{B}}}}=\frac{PB_{ \mathcal{B}\overline{\mathcal{D}}}}{T_{\mathcal{B}\overline{\mathcal{D}}}}= \frac{PB_{\mathcal{D}\overline{\mathcal{B}}}}{T_{\mathcal{D}\overline{ \mathcal{B}}}}=\frac{PB_{\mathcal{D}\overline{\mathcal{D}}}}{T_{\mathcal{D} \overline{\mathcal{D}}}}=\frac{PBA_{\mathcal{D}\overline{\mathcal{D}}}}{A_{ \mathcal{D}\overline{\mathcal{D}}}} \tag{20}\] \[= \frac{\alpha_{\rm em}}{2\pi\sin^{2}\theta_{W}}\frac{V_{ts}^{*}V_ {tb}}{V_{ub}}D(m_{t}^{2}/m_{W}^{2}),\] where numerically we use \(|V_{ub}|=0.0036\) and have \(\zeta=-0.037e^{i\phi_{3}}\), with \(\phi_{3}=(65.5^{+1.3}_{-1.1})^{\circ}\) one of the unitary angle in the CKM matrix [33]. ### Modeling the topological amplitudes In addition to the above decompositions of amplitudes in terms of topological amplitudes, it will be useful to have some numerical results on rates. We will use the available theoretical inputs from refs [8] and [9] in our modeling of the topological amplitudes and we denote them as Model 1 and Model 2, respectively. They are used as illustration and can be improved when more data are available. 
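As a small numerical aside, the leading-order loop function \(D_{0}(x)\) of Eq. (2), which enters \(\zeta\) in Eq. (20), can be evaluated directly; the top and \(W\) masses below are illustrative pole-mass values (the mass scheme is an assumption here), and the \({\cal O}(\alpha_{s})\) correction \(D_{1}\) is omitted.

```python
import numpy as np

def D0(x):
    """Leading-order loop function of Eq. (2)."""
    return x / 8.0 * (-(2.0 + x) / (1.0 - x)
                      + (3.0 * x - 6.0) / (1.0 - x) ** 2 * np.log(x))

m_t, m_W = 172.6, 80.4   # GeV; illustrative pole-mass values
x_t = (m_t / m_W) ** 2
print(f"x_t = {x_t:.2f},  D0(x_t) = {D0(x_t):.2f}")   # D0 ~ 1.5-1.6
```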
In general the topological amplitudes \(T_{1\mathcal{B}\overline{\mathcal{B}}}\), \(T_{2\mathcal{B}\overline{\mathcal{B}}}\) and \(A_{\mathcal{B}\overline{\mathcal{B}}}\) in \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}l\bar{\nu}\) decays can be expressed as \[T_{i\mathcal{B}\overline{\mathcal{B}}} = i\frac{G_{F}}{\sqrt{2}}V_{ub}\bar{l}_{L}\gamma^{\mu}\nu_{L}\bar{u }(p_{\mathcal{B}})\{[g_{1}^{(i)}\gamma_{\mu}+ig_{2}^{(i)}\sigma_{\mu\nu}q^{ \nu}+g_{3}^{(i)}q_{\mu}+g_{4}^{(i)}(p_{\mathcal{B}}+p_{\overline{\mathcal{B}} ^{\prime}})_{\mu}+g_{5}^{(i)}(p_{\mathcal{B}}-p_{\overline{\mathcal{B}}^{ \prime}})_{\mu}]\gamma_{5}\] \[-[f_{1}^{(i)}\gamma_{\mu}+if_{2}^{(i)}\sigma_{\mu\nu}q^{\nu}+f_{3 }^{(i)}q_{\mu}+f_{4}^{(i)}(p_{\mathcal{B}}+p_{\overline{\mathcal{B}}^{\prime}} )_{\mu}+f_{5}^{(i)}(p_{\mathcal{B}}-p_{\overline{\mathcal{B}}^{\prime}})_{\mu }]\}v_{R}(p_{\overline{\mathcal{B}}^{\prime}}),\] \[A_{\mathcal{B}\overline{\mathcal{B}}} = i\frac{G_{F}}{\sqrt{2}}V_{ub}\bar{l}_{L}\gamma^{\mu}\nu_{L}\bar{u }(p_{\mathcal{B}})\{[g_{1}^{(a)}\gamma_{\mu}+ig_{2}^{(a)}\sigma_{\mu\nu}q^{ \nu}+g_{3}^{(a)}q_{\mu}+g_{4}^{(a)}(p_{\mathcal{B}}+p_{\overline{\mathcal{B}} ^{\prime}})_{\mu}+g_{5}^{(a)}(p_{\mathcal{B}}-p_{\overline{\mathcal{B}}^{ \prime}})_{\mu}]\gamma_{5} \tag{21}\] \[-[f_{1}^{(a)}\gamma_{\mu}+if_{2}^{(a)}\sigma_{\mu\nu}q^{\nu}+f_{3 }^{(a)}q_{\mu}+f_{4}^{(a)}(p_{\mathcal{B}}+p_{\overline{\mathcal{B}}^{\prime} })_{\mu}+f_{5}^{(a)}(p_{\mathcal{B}}-p_{\overline{\mathcal{B}}^{\prime}})_{ \mu}]\}v_{R}(p_{\overline{\mathcal{B}}}),\] with \(q\equiv p_{B_{q}}-p_{\mathbf{B}}-p_{\overline{\mathbf{B}}^{\prime}}\), \(i=1,2\), \(j=1,\ldots,5\), and \(f_{j}^{(i)}\), \(g_{j}^{(i)}\), \(f_{j}^{(a)}\) and \(g_{j}^{(a)}\) denoting form factors. Similarly the topological amplitudes of \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{D}}l\bar{\nu}\) and \(\overline{B}_{q}\to\mathcal{D}\overline{\mathcal{B}}l\bar{\nu}\) decays can be expressed as \[T_{\mathcal{B}\overline{\mathcal{D}}} = i\frac{G_{F}}{\sqrt{2}}V_{ub}\bar{l}_{L}\gamma_{\mu}\nu_{L} \tag{22}\] \[\times\bar{u}(p_{\mathcal{D}},\lambda_{\mathcal{B}})\Big{\{} \Big{[}g_{1}^{\prime}p_{\mathcal{B}\nu}\gamma_{\mu}+ig_{2}^{\prime}\sigma_{ \mu\rho}p_{\mathcal{B}\nu}q^{\rho}+g_{3}^{\prime}p_{\mathcal{B}\nu}q_{\mu}+ g_{4}^{\prime}p_{\mathcal{B}\nu}p_{\mathcal{B}\mu}+g_{5}^{\prime}g_{\nu\mu}+g_{6}^{ \prime}q_{\nu}\gamma_{\mu}\] \[+ig_{7}^{\prime}\sigma_{\mu\rho}q_{\nu}q^{\rho}+g_{8}^{\prime}q_{ \nu}q_{\mu}+g_{9}^{\prime}q_{\nu}p_{\mathcal{B}\mu}\Big{]}\gamma_{5}-\Big{[} f_{1}^{\prime}p_{\mathcal{B}\nu}\gamma_{\mu}+if_{2}^{\prime}\sigma_{\mu\rho}p_{ \mathcal{B}\nu}q^{\rho}+f_{3}^{\prime}p_{\mathcal{B}\nu}q_{\mu}\] \[+f_{4}^{\prime}p_{\mathcal{B}\nu}p_{\mathcal{B}\mu}+f_{5}^{\prime }g_{\nu\mu}+f_{6}^{\prime}q_{\nu}\gamma_{\mu}+if_{7}^{\prime}\sigma_{\mu\rho}q _{\nu}q^{\rho}+f_{8}^{\prime}q_{\nu}q_{\mu}+f_{9}^{\prime}q_{\nu}p_{\mathcal{ B}\mu}\Big{]}\Big{\}}v^{\nu}(p_{\overline{\mathcal{D}}},\lambda_{ \overline{\mathcal{D}}}),\] and \[T_{\cal D\overline{B}} = i\frac{G_{F}}{\sqrt{2}}V_{ub}\bar{l}_{L}\gamma_{\mu}\nu_{L} \tag{23}\] \[\times\bar{u}^{\nu}(p_{\cal D},\lambda_{\cal D})\Big{\{}\Big{[}g^{ \prime\prime\prime}_{1}\overline{p}_{\overline{B}\nu}\gamma_{\mu}+ig^{\prime \prime}_{2}\sigma_{\mu\rho}p_{\overline{B}\nu}q^{\rho}+g^{\prime\prime}_{3}p_{ \overline{B}\nu}q_{\mu}+g^{\prime\prime}_{4}p_{\overline{B}\nu}p_{\overline{B} \mu}+g^{\prime\prime}_{5}g_{\nu\mu}+g^{\prime\prime}_{6}q_{\nu}\gamma_{\mu}\] 
\[+ig^{\prime\prime}_{7}\sigma_{\mu\rho}q_{\nu}q^{\rho}+g^{\prime \prime}_{8}q_{\nu}q_{\mu}+g^{\prime\prime}_{9}q_{\nu}p_{\overline{B}\mu}\Big{]} \gamma_{5}-\Big{[}f^{\prime\prime}_{1}p_{\overline{B}\nu}\gamma_{\mu}+if^{ \prime\prime}_{2}\sigma_{\mu\rho}p_{\overline{B}\nu}q^{\rho}+f^{\prime\prime }_{3}p_{\overline{B}\nu}q_{\mu}\] \[+f^{\prime\prime}_{4}p_{\overline{B}\nu}p_{\overline{B}\mu}+f^{ \prime\prime}_{5}g_{\nu\mu}+f^{\prime\prime}_{6}q_{\nu}\gamma_{\mu}+if^{ \prime\prime}_{7}\sigma_{\mu\rho}q_{\nu}q^{\rho}+f^{\prime\prime}_{8}q_{\nu}q _{\mu}+f^{\prime\prime}_{9}q_{\nu}p_{\overline{B}\mu}\Big{]}\Big{\}}v(p_{ \overline{B}},\lambda_{\overline{B}}),\] where \(u^{\mu}\), \(v^{\mu}\) are the Rarita-Schwinger vector spinors. Finally the tree topological amplitude for \(\overline{B}_{q}\to{\cal D\overline{D}}^{\prime}l\bar{\nu}\) decay is given by \[T_{\cal D\overline{D}} = i\frac{G_{F}}{\sqrt{2}}V_{ub}\bar{l}_{L}\gamma_{\mu}\nu_{L} \tag{24}\] \[\times\bar{u}_{\nu}(p_{\cal D},\lambda_{\cal D})\Big{\{}\Big{[}g^ {\prime\prime\prime}_{1}\gamma_{\mu}+ig^{\prime\prime\prime}_{2}\sigma_{\mu \rho}q^{\rho}+g^{\prime\prime\prime}_{3}q_{\mu}+g^{\prime\prime\prime}_{4}(p_{ \cal D}+p_{\overline{D}^{\prime}})_{\mu}+g^{\prime\prime\prime}_{5}(p_{\cal D }-p_{\overline{D}^{\prime}})_{\mu}]\gamma_{5}\] \[-[f^{\prime\prime\prime}_{1}\gamma_{\mu}+if^{\prime\prime\prime}_ {2}\sigma_{\mu\nu}q^{\nu}+f^{\prime\prime\prime}_{3}q_{\mu}+f^{\prime\prime \prime}_{4}(p_{\cal D}+p_{\overline{D}^{\prime}})_{\mu}+f^{\prime\prime\prime }_{5}(p_{\cal D}-p_{\overline{D}^{\prime}})_{\mu}]\Big{\}}\Big{\}}v^{\nu}(p_{ \overline{\cal D}},\lambda_{\overline{D}})\] \[+\ldots,\] where terms such as \(\bar{u}_{\nu}p^{\nu}_{\overline{\cal D}}\{\ldots\}p_{{\cal D}\sigma}u^{\sigma}\), \(\bar{u}_{\nu}q^{\nu}\{\ldots\}p_{{\cal D}\sigma}u^{\sigma}\), \(\bar{u}_{\nu}p^{\nu}_{\overline{\cal D}}\{\ldots\}q_{\sigma}u^{\sigma}\), \(\bar{u}_{\nu}q^{\nu}\{\ldots\}q_{\sigma}u^{\sigma}\) are not shown explicitly in the above equation. The annihilation amplitude \(A_{\cal D\overline{D}}\) can be expressed similarly. Topological amplitudes for loop induced \(\overline{B}_{q}\to{\bf B\overline{B}^{\prime}}\nu\bar{\nu}\) decays can be obtained using the above equations and Eq. (20). The topological amplitudes for \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}l\bar{\nu}\) decays are given in Eq. (21). Fo illustration we follow refs [8; 9] to use \[g^{(i)}_{j}=\frac{G^{(i)}_{j}}{t^{3}},\quad f^{(i)}_{j}=\frac{F^{(i)}_{f_{j}}} {t^{3}},\quad g^{(a)}_{j}=f^{(a)}_{j}=0, \tag{25}\] where \(G^{(i)}_{j}\) and \(F^{(i)}_{j}\) are some constants to be specified later, \(t\equiv m^{2}_{{\bf B\overline{B}^{\prime}}}\) and the last equation corresponds to the \(A_{\cal B\overline{B}}=PBA_{\cal B\overline{B}}=0\) case. The values of the constants \(G^{(i)}_{j}\) and \(F^{(i)}_{j}\) are extracted from refs [8; 9] but slightly modified to match the asymptotic relations in Appendix A, where it is known that there are asymptotic relations [34] in the matrix elements of octet and decuplet baryons in the large momentum transfer region, and to match the \(B^{-}\to p\bar{p}l\bar{\nu}\) data. In fact we find that the corresponding \(F^{(i)}_{3,4,5}\) used in ref. [8] do not satisfy the correct asymptotic relations, which can however be satisfied by adding a minus sign to their \(F^{(i)}_{3,4,5}\). Nevertheless as we shall see that the modification do not significantly affect the \(B^{-}\to{\cal B\overline{B}^{\prime}}l\bar{\nu}\) rates. 
The values of \(G_{j}^{(i)}\) and \(F_{j}^{(i)}\) are shown in Table 2. Explicitly we use \(G_{1}^{(i)}=\eta_{1}m_{B}(e_{\parallel}^{(i)}C_{LL}-e_{\overline{\parallel}}^{(i )}C_{RR})/3\), \(F_{1}^{(i)}=\eta_{1}m_{B}(e_{\parallel}^{(i)}C_{LL}+e_{\overline{\parallel}}^{ (i)}C_{RR})/3\), \(G_{2,3,4,5}^{(i)}=-F_{2,3,4,5}^{(i)}=-\eta_{1}\times 2e_{F}^{(i)}C_{LR}/3\) with \((C_{LL},C_{RR},C_{LR})=(17.78,-11.67,6.41)\) GeV\({}^{4}\)[8] for Model 1, and \(G_{1}^{(i)}=\eta_{2}m_{B}(e_{\parallel}^{(i)}D_{\parallel}-e_{\overline{ \parallel}}^{(i)}D_{\overline{\parallel}})/3\), \(F_{1}^{(i)}=\eta_{2}m_{B}(e_{\parallel}^{(i)}D_{\parallel}+e_{\overline{ \parallel}}^{(i)}D_{\overline{\parallel}})/3\), \(G_{2,3,4,5}^{(i)}=-F_{2,3,4,5}^{(i)}=-\eta_{2}\times 2e_{F}^{(i)}D_{2,3,4,5}/3\) with \((D_{\parallel},D_{\overline{\parallel}})=(11.2,323.3)\) GeV\({}^{5}\) and \(D_{2,3,4,5}=(47.7,442.2,-38.7,-80.7)\) GeV\({}^{4}\)[9] for Model 2, where the factors \(\eta_{1}=0.93\) and \(\eta_{2}=0.75\) are introduced to match the central value of the \(B^{-}\to p\bar{p}l\bar{\nu}\) data and \(e_{\parallel,\overline{\parallel},F}^{(i)}\) are given in Eq. (100). Note that the sign of \(D_{5}\) is flipped from the one from ref. [9], to match the definitions of form factors \(f_{5}^{(i)}\) and \(g_{5}^{(i)}\) in Eq. (21). The topological amplitudes of \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays are given in Eqs. (22) and (23). For simplicity, we only concentrate on the contributions from \(g_{1,2,3,4,5}^{\prime}\), \(f_{1,2,3,4,5}^{\prime\prime}\), \(g_{1,2,3,4,5}^{\prime\prime}\) and \(f_{1,2,3,4,5}^{\prime\prime}\), by assuming that their contributions are dominant. This working assumption can be checked or relaxed when data of \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays become available. It is known that in the asymptotic limit form factors of octet-octet and octet decuplet are related [34]. As shown in Appendix A in the asymptotic limit \(T_{\mathcal{B}\overline{\mathcal{D}}}\), \(T_{\mathcal{B}\overline{\mathcal{D}}}\) and \(T_{i\mathcal{B}\overline{\mathcal{B}}}\) are related and have similar structure. These impose constrains on the form factors. For simplicity we assume that these form factors have similar forms as the form factors in Eq. (25). Using Eqs. 
(25), (A12) and (A8), we have \[g^{\prime}_{1,2,3,4} = \frac{m_{\overline{\mathcal{D}}}G^{\prime}_{1,2,3,4}}{t^{4}},\ g^ {\prime}_{5}=\frac{m_{\overline{\mathcal{D}}}G^{\prime}_{5}}{t^{3}},\quad f^{ \prime}_{1,2,3,4}=\frac{m_{\overline{\mathcal{D}}}F^{\prime}_{1,2,3,4}}{t^{4}},\ f^{\prime}_{5}=\frac{m_{\overline{\mathcal{D}}}F^{\prime}_{5}}{t^{3}},\] \[g^{\prime\prime}_{1,2,3,4} = \frac{m_{\mathcal{D}}G^{\prime\prime}_{1,2,3,4}}{t^{4}},\ g^{ \prime\prime}_{5}=\frac{m_{\mathcal{D}}G^{\prime\prime}_{5}}{t^{3}},\quad f^{ \prime\prime}_{1,2,3,4}=\frac{m_{\mathcal{D}}F_{1,2,3,4}}{t^{4}},\ f^{\prime \prime}_{5}=\frac{m_{\mathcal{D}}F^{\prime\prime}_{5}}{t^{3}}, \tag{26}\] with \[G^{\prime}_{1,2,3}=-\sqrt{6}G^{(i)}_{1,2,3},\quad G^{\prime}_{4}=- \sqrt{6}(G^{(i)}_{4}+G^{(i)}_{5}),\quad G^{\prime}_{5}=\sqrt{\frac{3}{2}}(G^{(i )}_{4}-G^{(i)}_{5}),\] \[F^{\prime}_{1,2,3}=-\sqrt{6}F^{(i)}_{1,2,3},\quad F^{\prime}_{4} =-\sqrt{6}(F^{(i)}_{4}+F^{(i)}_{5}),\quad F^{\prime}_{5}=\sqrt{\frac{3}{2}}(F^ {(i)}_{4}-F^{(i)}_{5}), \tag{27}\] but with \((e^{(i)}_{\parallel},e^{(i)}_{\parallel},e^{(i)}_{F})\) in \(G^{(i)}_{j},F^{(i)}_{j}\) replaced by \((e^{\prime}_{\parallel},e^{\prime}_{\parallel},e^{\prime}_{F})\), and \[G^{\prime\prime}_{1,2,3}=-\sqrt{6}G^{(i)}_{1,2,3},\quad G^{ \prime\prime}_{4}=\sqrt{6}(G^{(i)}_{5}-G^{(i)}_{4}),\quad G^{\prime\prime}_{5 }=\sqrt{\frac{3}{2}}(G^{(i)}_{4}+G^{(i)}_{5}),\] \[F^{\prime\prime}_{1,2,3}=-\sqrt{6}F^{(i)}_{1,2,3},\quad F^{ \prime\prime}_{4}=\sqrt{6}(F^{(i)}_{5}-F^{(i)}_{4}),\quad F^{\prime\prime}_{5 }=\sqrt{\frac{3}{2}}(F^{(i)}_{4}+F^{(i)}_{5}), \tag{28}\] but with \((e_{\parallel}^{(i)},e_{\parallel}^{(i)},e_{F}^{(i)})\) in \(G_{j}^{(i)},F_{j}^{(i)}\) replaced by \((e_{\parallel}^{\prime\prime},e_{\overline{\parallel}}^{\prime\prime},e_{F}^{ \prime\prime})\). Note that the above constants are related in the asymptotic limit and, consequently, inputs from Model 1 and 2 have been used in the above relations. The values of these constants in Model 1 and 2 are given in Table 3. In the model calculations of \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\nu\) and \({\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decay rates, we use Eq. (24) for the tree topological amplitude, where we neglect terms, such as \(\bar{u}_{\nu}p_{\overline{\cal D}}^{\nu}\{\ldots\}p_{{\cal D}\sigma}u^{\sigma}\), \(\bar{u}_{\nu}q^{\nu}\{\ldots\}p_{{\cal D}\sigma}u^{\sigma}\), \(\bar{u}_{\nu}p_{\overline{\cal D}}^{\nu}\{\ldots\}q_{\sigma}u^{\sigma}\), \(\bar{u}_{\nu}q^{\nu}\{\ldots\}q_{\sigma}u^{\sigma}\), for simplicity. This working assumption can be checked or modified once data is available. As in \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) decays, we neglect the contribution from the annihilation topological amplitude, \(A_{{\cal D}\overline{\cal D}}\). Using Eqs. (25), (100) and (113), the form factors are given by \[g_{j}^{\prime\prime\prime}=m_{\cal D}m_{\overline{\cal D}}\frac{G_{j}^{\prime \prime\prime}}{t^{4}},\quad f_{j}^{\prime\prime\prime}=m_{\cal D}m_{\overline {\cal D}}\frac{F_{j}^{\prime\prime\prime}}{t^{4}}, \tag{29}\] with \[G_{j}^{\prime\prime\prime}=-3G_{j}^{(i)},\quad F_{j}^{\prime\prime\prime}=-3F_ {j}^{(i)}, \tag{30}\] but with \((e_{\parallel}^{(i)},e_{\overline{\parallel}}^{(i)},e_{F}^{(i)})\) in \(G_{j}^{(i)},F_{j}^{(i)}\) replaced by \((e_{\parallel}^{\prime\prime\prime},e_{\overline{\parallel}}^{\prime\prime \prime},e_{F}^{\prime\prime\prime})\). 
Note that in the asymptotic limit the above form factors are related to those in \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) decays via Eq. (100), and, consequently, inputs from Model 1 and 2 have been used. The values of these constants in Model 1 and 2 are given in Table 4. ## III Results on amplitudes ### Decay amplitudes in terms of topological amplitudes Using the above Hamiltonian the decompositions of amplitudes for \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\), \({\cal B}\overline{\cal D}l\bar{\nu}\), \({\cal D}\overline{\cal B}l\bar{\nu}\), \({\cal D}\overline{\cal B}l\bar{\nu}\), \({\cal D}\overline{\cal B}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}\nu\bar{\nu}\), \({\cal B}\overline{\cal D}\nu\bar{\nu}\), \({\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decays are shown in Tables 5, 6, 7 and 8. These tables are some of the main results of this work. As shown in Table 5 we have three topological amplitudes, \(T_{2\cal B}\overline{\cal B}\), \(T_{1\cal B}\overline{\cal B}\) and \(A_{\cal B}\overline{\cal B}\), in \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) decays, and three topological amplitudes, \(PB_{2\cal B}\overline{\cal B}\), \(PB_{1\cal B}\overline{\cal B}\) and \(PBA_{\cal B}\overline{\cal B}\), in \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}\nu\bar{\nu}\) decays. As shown in Table 6 we need one topological amplitude, \(T_{{\cal B}\overline{\cal B}}\), in \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) decays, and one topological amplitude, \(PB_{\cal B}\overline{\cal B}\), in \(\overline{B}_{q}\to{\cal B}\overline{\cal D}\nu\bar{\nu}\) decays. Similarly, as shown in Table 7 we have one topological amplitude, \(T_{{\cal D}\overline{\cal B}}\), in \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays, and one topological amplitude, \(PB_{{\cal D}\overline{\cal B}}\), in \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays, and one topological amplitude, \(PB_{{\cal D}\overline{\cal B}}\), in \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decays. As the numbers of independent topological amplitudes are highly limited comparing to the numbers of the decay modes, there are plenty of relations on \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) and \({\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decay amplitudes. These relations will be given in the following discussion. ### Relations of decay amplitudes As noted previously since the number of topological amplitudes are quite limited, relations of decay amplitudes are expected. The following relations are obtained by using the decomposition of amplitudes shown in Tables 5, 6, 7 and 8. 
In \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) decays, we have the following relations on amplitudes, \[A(\overline{B}^{0}\to p\bar{n}l\bar{\nu})\;=\;A(\overline{B}^{0}_{s}\to\Sigma ^{+}\overline{\Xi^{0}}l\bar{\nu})=\sqrt{2}A(\overline{B}^{0}_{s}\to\Sigma^{0} \overline{\Xi^{-}}l\bar{\nu}), \tag{31}\] \[A(B^{-}\to n\bar{n}l\bar{\nu})\;=\;A(B^{-}\to\Xi^{0}\overline{\Xi^{0}}l\bar{ \nu}), \tag{32}\] \[A(\overline{B}^{0}\to\Xi^{0}\overline{\Xi^{-}}l\bar{\nu})\;=\;\sqrt{2}A( \overline{B}^{0}_{s}\to p\overline{\Sigma^{0}}l\bar{\nu})=A(\overline{B}^{0 }_{s}\to n\overline{\Sigma^{-}}l\bar{\nu}), \tag{33}\] \[A(B^{-}\to p\bar{p}l\bar{\nu})\;=\;A(B^{-}\to\Sigma^{+}\overline{\Sigma^{+}} l\bar{\nu}), \tag{34}\] \[A(\overline{B}^{0}\to\Sigma^{+}\overline{\Sigma^{0}}l\bar{\nu}) = -A(\overline{B}^{0}\to\Sigma^{0}\overline{\Sigma^{-}}l\bar{\nu})= \sqrt{3}A(\overline{B}^{0}_{s}\to p\overline{\Lambda}l\bar{\nu}), \tag{35}\] \[\sqrt{2}A(B^{-}\to\Sigma^{0}\overline{\Lambda}l\bar{\nu}) = \sqrt{2}A(B^{-}\to\Lambda\overline{\Sigma^{0}}l\bar{\nu})=A( \overline{B}^{0}\to\Sigma^{+}\overline{\Lambda}l\bar{\nu})=A(\overline{B}^{0} \to\Lambda\overline{\Sigma^{-}}l\bar{\nu}), \tag{36}\] \[A(B^{-}\to\Sigma^{-}\overline{\Sigma^{-}}l\bar{\nu}) = A(B^{-}\to\Xi^{-}\overline{\Xi^{-}}l\bar{\nu}), \tag{37}\] and \[A(B^{-}\to p\bar{p}l\bar{\nu}) = A(\overline{B}^{0}\to p\bar{n}l\bar{\nu})+A(B^{-}\to n\bar{n}l\bar{ \nu})\] \[= 2A(B^{-}\rightarrow\Sigma^{0}\overline{\Sigma}^{0}l\bar{\nu})+A(B^{ -}\rightarrow\Sigma^{-}\overline{\Sigma^{-}}l\bar{\nu}),\] \[2\sqrt{3}A(B^{-}\rightarrow\Sigma^{0}\overline{\Lambda}l\bar{ \nu}) = A(\overline{B}^{0}\to p\bar{n}l\bar{\nu})+A(\overline{B}^{0} \rightarrow\Xi^{0}\overline{\Xi^{-}}l\bar{\nu}),\] \[\sqrt{6}A(B^{-}\rightarrow\Lambda\overline{\Lambda}l\bar{\nu}) = A(\overline{B}^{0}\rightarrow\Sigma^{+}\overline{\Lambda}l\bar{ \nu})+\sqrt{6}A(B^{-}\to n\bar{n}l\bar{\nu}),\] \[-\sqrt{6}A(\overline{B}^{0}_{s}\to p\overline{\Lambda}l\bar{ \nu}) = 2A(\overline{B}^{0}_{s}\rightarrow\Sigma^{+}\overline{\Xi}^{0}l \bar{\nu})-A(\overline{B}^{0}_{s}\to n\overline{\Sigma^{-}}l\bar{ \nu}),\] \[\sqrt{6}A(\overline{B}^{0}_{s}\rightarrow\Lambda\overline{\Xi^{- }}l\bar{\nu}) = A(\overline{B}^{0}_{s}\rightarrow\Sigma^{+}\overline{\Xi^{0}}l \bar{\nu})-2A(\overline{B}^{0}_{s}\to n\overline{\Sigma^{-}}l\bar{ \nu}). 
\tag{38}\] Similarly, for \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}\nu\bar{\nu}\) decays, we have \[A(B^{-}\to\Xi^{0}\overline{\Sigma^{+}}\nu\bar{\nu}) = \sqrt{2}A(B^{-}\to\Xi^{-}\overline{\Sigma^{0}}\nu\bar{\nu})=-\sqrt {2}A(\overline{B}^{0}\to\Xi^{0}\overline{\Sigma^{0}}\nu\bar{\nu}) \tag{39}\] \[= A(\overline{B}^{0}\to\Xi^{-}\overline{\Sigma^{-}}\nu\bar{\nu}),\] \[-\sqrt{2}A(B^{-}\to\Sigma^{0}\bar{p}\nu\bar{\nu}) = -A(B^{-}\to\Sigma^{-}\bar{n}\nu\bar{\nu})=-A(\overline{B}^{0}\to \Sigma^{+}\bar{p}\nu\bar{\nu}) \tag{40}\] \[= \sqrt{2}A(\overline{B}^{0}\to\Sigma^{0}\bar{n}\nu\bar{\nu}),\] \[A(\overline{B}^{0}_{s}\to\Sigma^{+}\overline{\Sigma^{+}}\nu \bar{\nu}) = A(\overline{B}^{0}_{s}\to\Sigma^{0}\overline{\Sigma^{0}}\nu\bar{ \nu})=A(\overline{B}^{0}_{s}\to\Sigma^{-}\overline{\Sigma^{-}}\nu\bar{\nu}), \tag{41}\] \[A(\overline{B}^{0}_{s}\to\Xi^{0}\overline{\Xi^{0}}\nu\bar{\nu}) = A(\overline{B}^{0}_{s}\to\Xi^{-}\overline{\Xi^{-}}\nu\bar{\nu}), \tag{42}\] \[A(B^{-}\to\Lambda\bar{\nu}\nu\bar{\nu})=A(\overline{B}^{0}\to \Lambda\bar{n}\nu\bar{\nu}), \tag{43}\] \[A(\overline{B}^{0}_{s}\to p\bar{p}\nu\bar{\nu})=A(\overline{B}^{0}_{s}\to n\bar{n} \nu\bar{\nu}), \tag{44}\] and \[\sqrt{6}A(\overline{B}^{0}\to\Xi^{0}\overline{\Lambda}\nu\bar{\nu}) = A(\overline{B}^{0}\to\Xi^{-}\overline{\Sigma^{-}}\nu\bar{\nu})-A( \overline{B}^{0}\to\Sigma^{+}\bar{p}\nu\bar{\nu}),\] \[\sqrt{6}A(B^{-}\to\Xi^{-}\overline{\Lambda}\nu\bar{\nu}) = A(B^{-}\to\Xi^{0}\overline{\Sigma^{+}}\nu\bar{\nu})-2A(B^{-}\to \Sigma^{-}\bar{n}\nu\bar{\nu}),\] \[-\sqrt{6}A(B^{-}\to\Lambda\bar{p}\nu\bar{\nu}) = 2A(B^{-}\to\Xi^{0}\overline{\Sigma^{+}}\nu\bar{\nu})-A(B^{-}\to \Sigma^{-}\bar{n}\nu\bar{\nu}),\] \[\sqrt{3}A(\overline{B}^{0}_{s}\to\Lambda\overline{\Lambda}\nu \bar{\nu}) = -\sqrt{2}A(B^{-}\to\Lambda\bar{p}\nu\bar{\nu})+\sqrt{3}A( \overline{B}^{0}_{s}\to p\bar{p}\nu\bar{\nu}). \tag{45}\] For \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) decays, there is only one topological amplitude, namely \(T_{{\cal B}\overline{\cal D}}\). Therefore, all decay amplitudes are related, \[-A(B^{-}\to p\overline{\Delta^{+}}l\bar{\nu}) = -A(B^{-}\to n\overline{\Delta^{0}}l\bar{\nu})=A(B^{-}\to\Sigma^{+} \overline{\Sigma^{*}}l\bar{\nu})=2A(B^{-}\to\Sigma^{0}\overline{\Sigma^{*0}}l \bar{\nu}) \tag{46}\] \[= A(B^{-}\to\Xi^{0}\overline{\Xi^{*0}}l\bar{\nu})=\frac{2}{\sqrt{ 3}}A(B^{-}\to\Lambda\overline{\Sigma^{*0}}l\bar{\nu})=-A(\overline{B}^{0}\to p \overline{\Delta^{0}}l\bar{\nu})\] \[= -\frac{1}{\sqrt{3}}A(\overline{B}^{0}\to n\overline{\Delta^{-}}l \bar{\nu})=\sqrt{2}A(\overline{B}^{0}\to\Sigma^{+}\overline{\Sigma^{*0}}l\bar {\nu})\] \[= -\sqrt{2}A(\overline{B}^{0}\to\Sigma^{0}\overline{\Xi^{*-}}l\bar {\nu})=A(\overline{B}^{0}\to\Xi^{0}\overline{\Xi^{*-}}l\bar{\nu})=\sqrt{\frac{ 2}{3}}A(\overline{B}^{0}\to\Lambda\overline{\Sigma^{*-}}l\bar{\nu})\] \[= -\sqrt{2}A(\overline{B}^{0}_{s}\to p\overline{\Sigma^{*0}}l\bar {\nu})=-A(\overline{B}^{0}_{s}\to n\overline{\Sigma^{*-}}l\bar{\nu})=A( \overline{B}^{0}_{s}\to\Sigma^{+}\overline{\Xi^{*0}}l\bar{\nu})\] \[= -\sqrt{2}A(\overline{B}^{0}_{s}\to\Sigma^{0}\overline{\Xi^{*-}}l \bar{\nu})=\frac{1}{\sqrt{3}}A(\overline{B}^{0}_{s}\to\Xi^{0}\overline{\Omega^ {-}}l\bar{\nu})\] \[= \sqrt{\frac{2}{3}}A(\overline{B}^{0}_{s}\to\Lambda\overline{\Xi^{ *-}}l\bar{\nu}).\] Similarly, for \(\overline{B}_{q}\to{\cal B}\overline{\cal D}\nu\bar{\nu}\) decays, there is only one topological amplitude, namely \(PB_{{\cal B}\overline{\cal D}}\). Hence, all decay amplitudes are related. 
Explicitly, we have the following relations, \[-\frac{1}{\sqrt{6}}A(B^{-}\to\Sigma^{+}\overline{\Delta^{++}}\nu \bar{\nu}) = \frac{1}{2}A(B^{-}\to\Sigma^{0}\overline{\Delta^{+}}\nu\bar{\nu} )=\frac{1}{\sqrt{2}}A(B^{-}\to\Sigma^{-}\overline{\Delta^{0}}\nu\bar{\nu}) \tag{47}\] \[= -\frac{1}{\sqrt{2}}A(B^{-}\to\Xi^{0}\overline{\Sigma^{*+}}\nu \bar{\nu})=A(B^{-}\to\Xi^{-}\overline{\Sigma^{*0}}\nu\bar{\nu})\] \[= -\frac{1}{\sqrt{2}}A(\overline{B}^{0}\to\Sigma^{+}\overline{ \Delta^{+}}\nu\bar{\nu})=\frac{1}{2}A(\overline{B}^{0}\to\Sigma^{0}\overline{ \Delta^{0}}\nu\bar{\nu})\] \[= \frac{1}{\sqrt{6}}A(\overline{B}^{0}\to\Sigma^{-}\overline{\Delta ^{-}}\nu\bar{\nu})=-A(\overline{B}^{0}\to\Xi^{0}\overline{\Sigma^{*0}}\nu \bar{\nu})\] \[= \frac{1}{\sqrt{2}}A(\overline{B}^{0}\to\Xi^{-}\overline{\Sigma^{*- }}\nu\bar{\nu})=-\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Sigma^{+} \overline{\Sigma^{*+}}\nu\bar{\nu})\] \[= \frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Sigma^{0}\overline{ \Sigma^{*0}}\nu\bar{\nu})=\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Sigma^{-} \overline{\Sigma^{*-}}\nu\bar{\nu})\] \[= -\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Xi^{0}\overline{\Xi^{ *0}}\nu\bar{\nu})=\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Xi^{-}\overline{ \Xi^{*-}}\nu\bar{\nu}).\] For \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays, there is only one topological amplitude (\(T_{{\cal D}\overline{\cal B}}\)), while for \(\overline{B}_{q}\to{\cal D}\overline{\cal B}\nu\bar{\nu}\) decays, there is also only one topological amplitude (\(PB_{{\cal D}\overline{\cal B}}\)). Hence, the decay amplitudes are highly related and we have the following relations for \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays, \[-\frac{1}{\sqrt{2}}A(B^{-}\to\Delta^{+}\bar{p}l\bar{\nu}) = -\frac{1}{\sqrt{2}}A(B^{-}\to\Delta^{0}\bar{n}l\bar{\nu})=\frac{1 }{\sqrt{2}}A(B^{-}\to\Sigma^{*+}\overline{\Sigma^{+}}l\bar{\nu}) \tag{48}\] \[= -\sqrt{2}A(B^{-}\to\Sigma^{*0}\overline{\Sigma^{0}}l\bar{\nu})= \frac{1}{\sqrt{2}}A(B^{-}\to\Xi^{*0}\overline{\Xi^{0}}l\bar{\nu})\] \[= \sqrt{\frac{2}{3}}A(B^{-}\to\Sigma^{*0}\overline{\Lambda}l\bar{ \nu})=\frac{1}{\sqrt{6}}A(\overline{B}^{0}\to\Delta^{++}\bar{p}l\bar{\nu})\] \[= \frac{1}{\sqrt{2}}A(\overline{B}^{0}\to\Delta^{+}\bar{n}l\bar{ \nu})=-A(\overline{B}^{0}\to\Sigma^{*+}\overline{\Sigma^{0}}l\bar{\nu})\] \[= -A(\overline{B}^{0}\to\Sigma^{*0}\overline{\Sigma^{-}}l\bar{\nu} )=-\frac{1}{\sqrt{2}}A(\overline{B}^{0}\to\Xi^{*0}\overline{\Xi^{-}}l\bar{\nu})\] \[= \frac{1}{\sqrt{3}}A(\overline{B}^{0}\to\Sigma^{*+}\overline{ \Lambda}l\bar{\nu})=-\frac{1}{\sqrt{6}}A(\overline{B}^{0}_{s}\to\Delta^{++} \overline{\Sigma^{+}}l\bar{\nu})\] \[= \frac{1}{2}A(\overline{B}^{0}_{s}\to\Delta^{+}\overline{\Sigma^{0 }}l\bar{\nu})=\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Delta^{0}\overline {\Sigma^{-}}l\bar{\nu})\] \[= -\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{ \Xi^{0}}l\bar{\nu})=A(\overline{B}^{0}_{s}\to\Sigma^{*0}\overline{\Xi^{-}}l \bar{\nu}),\] and \[-A(B^{-}\to\Sigma^{*0}\bar{p}\nu\bar{\nu}) = -\frac{1}{\sqrt{2}}A(B^{-}\to\Sigma^{*-}\bar{n}\nu\bar{\nu})= \frac{1}{\sqrt{2}}A(B^{-}\to\Xi^{*0}\overline{\Sigma^{+}}\nu\bar{\nu}) \tag{49}\] \[= -A(B^{-}\to\Xi^{*-}\overline{\Sigma^{0}}\nu\bar{\nu})=\frac{1}{ \sqrt{6}}A(B^{-}\to\Omega^{-}\overline{\Xi^{0}}\nu\bar{\nu})\] \[= \frac{1}{\sqrt{3}}A(B^{-}\to\Xi^{*-}\overline{\Lambda}\nu\bar{ \nu})=\frac{1}{\sqrt{2}}A(\overline{B}^{0}\to\Sigma^{*+}\bar{\nu}\nu\bar{\nu})\] \[= 
A(\overline{B}^{0}\to\Sigma^{*0}\bar{n}\nu\bar{\nu})=-A( \overline{B}^{0}\to\Xi^{*0}\overline{\Sigma^{0}}\nu\bar{\nu}\] \[= -\frac{1}{\sqrt{2}}A(\overline{B}^{0}\to\Xi^{*-}\overline{\Sigma ^{-}}\nu\bar{\nu})=-\frac{1}{\sqrt{6}}A(\overline{B}^{0}\to\Omega^{-}\overline {\Xi^{-}}\nu\bar{\nu})\] \[= -\frac{1}{\sqrt{3}}A(\overline{B}^{0}\to\Xi^{*0}\overline{\Lambda }\nu\bar{\nu})=-\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline {\Sigma^{+}}\nu\bar{\nu})\] \[= \frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Sigma^{*0}\overline{ \Sigma^{0}}\nu\bar{\nu})=\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Sigma^{*-} \overline{\Sigma^{-}}\nu\bar{\nu})\] \[= -\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Xi^{*0}\overline{ \Xi^{0}}\nu\bar{\nu})=\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Xi^{*-} \overline{\Xi^{-}}\nu\bar{\nu}),\] for \(\overline{B}_{q}\to{\cal D}\overline{\cal B}\nu\bar{\nu}\) decays. For \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decays, we have two topological amplitudes, namely \(T_{{\cal D}\overline{\cal D}}\) and \(A_{{\cal D}\overline{\cal D}}\). The decay amplitudes are related as following, \[\sqrt{3}A(\overline{B}^{0}\to\Delta^{++}\overline{\Delta^{+}}l\bar{ \nu}) = \frac{1}{2}A(\overline{B}^{0}\to\Delta^{+}\overline{\Delta^{0}}l\bar{ \nu})=\sqrt{3}A(\overline{B}^{0}\to\Delta^{0}\overline{\Delta^{-}}l\bar{\nu}) \tag{50}\] \[= \frac{1}{\sqrt{2}}A(\overline{B}^{0}\to\Sigma^{*+}\overline{\Sigma ^{*0}}l\bar{\nu})=\frac{1}{\sqrt{2}}A(\overline{B}^{0}\to\Sigma^{*0}\overline{ \Sigma^{*-}}l\bar{\nu})\] \[= A(\overline{B}^{0}\to\Xi^{*0}\overline{\Xi^{*-}}l\bar{\nu})= \frac{1}{\sqrt{3}}A(\overline{B}^{0}_{s}\to\Delta^{++}\overline{\Sigma^{*+}}l \bar{\nu})\] \[= \frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Delta^{+}\overline{ \Sigma^{*0}}l\bar{\nu})=A(\overline{B}^{0}_{s}\to\Delta^{0}\overline{\Sigma^{*- }}l\bar{\nu})\] \[= \frac{1}{2}A(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Xi^{0} }l\bar{\nu})=\frac{1}{\sqrt{2}}A(\overline{B}^{0}_{s}\to\Sigma^{*0}\overline{ \Xi^{*-}}l\bar{\nu})\] \[= \frac{1}{\sqrt{3}}A(\overline{B}^{0}_{s}\to\Xi^{*0}\overline{ \Omega^{-}}l\bar{\nu})=\frac{1}{2}A(B^{-}\to\Delta^{+}\overline{\Delta^{+}}l \bar{\nu}),\] \[A(B^{-}\to\Delta^{0}\overline{\Delta^{0}}l\bar{\nu})=A(B^{-}\to \Sigma^{*0}\overline{\Sigma^{*0}}l\bar{\nu})=A(B^{-}\to\Xi^{*0}\overline{\Xi^{* 0}}l\bar{\nu}), \tag{51}\] \[A(B^{-}\to\Delta^{-}\overline{\Delta^{-}}l\bar{\nu}) = A(B^{-}\to\Sigma^{*-}\overline{\Sigma^{*-}}l\bar{\nu})=A(B^{-} \to\Xi^{*-}\overline{\Xi^{*-}}l\bar{\nu}) \tag{52}\] \[= A(B^{-}\to\Omega^{-}\overline{\Omega^{-}}l\bar{\nu}),\] and \[A(B^{-}\to\Delta^{++}\overline{\Delta^{++}}l\bar{\nu}) = A(B^{-}\to\Delta^{0}\overline{\Delta^{0}}l\bar{\nu})+A(B^{-}\to \Delta^{-}\overline{\Delta^{-}}l\bar{\nu}),\] \[A(B^{-}\to\Sigma^{*+}\overline{\Sigma^{*+}}l\bar{\nu}) = A(B^{-}\to\Delta^{+}\overline{\Delta^{+}}l\bar{\nu})+A(B^{-}\to \Sigma^{*-}\overline{\Sigma^{*-}}l\bar{\nu}). 
\tag{53}\] Finally, for \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decays, we have two topological amplitudes, namely \(PB_{{\cal D}\overline{\cal D}}\) and \(PBA_{{\cal D}\overline{\cal D}}\), giving the following relations on the amplitudes, \[\frac{1}{\sqrt{3}}A(B^{-}\to\Sigma^{*+}\overline{\Delta^{++}}\nu \bar{\nu}) = \frac{1}{\sqrt{2}}A(B^{-}\to\Sigma^{*0}\overline{\Delta^{+}}\nu \bar{\nu})=A(B^{-}\to\Sigma^{*-}\overline{\Delta^{0}}\nu\bar{\nu}) \tag{54}\] \[= \frac{1}{2}A(B^{-}\to\Xi^{*0}\overline{\Sigma^{*+}}\nu\bar{\nu})= \frac{1}{\sqrt{2}}A(B^{-}\to\Xi^{*-}\overline{\Sigma^{*0}}\nu\bar{\nu})\] \[= \frac{1}{\sqrt{3}}A(B^{-}\to\Omega^{-}\overline{\Xi^{*0}}\nu\bar{ \nu})=A(\overline{B}^{0}\to\Sigma^{*+}\overline{\Delta^{+}}\nu\bar{\nu})\] \[= \frac{1}{\sqrt{2}}A(\overline{B}^{0}\to\Sigma^{*0}\overline{\Delta ^{0}}\nu\bar{\nu})=\frac{1}{\sqrt{3}}A(\overline{B}^{0}\to\Sigma^{*-}\overline {\Delta^{-}}\nu\bar{\nu})\] \[= \frac{1}{\sqrt{2}}A(\overline{B}^{0}\to\Xi^{*0}\overline{\Sigma^{ *0}}\nu\bar{\nu})=\frac{1}{2}A(\overline{B}^{0}\to\Xi^{*-}\overline{\Sigma^{* -}}\nu\bar{\nu})\] \[= \frac{1}{\sqrt{3}}A(\overline{B}^{0}\to\Omega^{-}\overline{\Xi^{* -}}\nu\bar{\nu}),\] \[A(\overline{B}_{s}^{0}\rightarrow\Sigma^{*+}\overline{\Sigma^{*+}} \nu\bar{\nu}) = A(\overline{B}_{s}^{0}\rightarrow\Sigma^{*0}\overline{\Sigma^{*0}} \nu\bar{\nu})=A(\overline{B}_{s}^{0}\rightarrow\Sigma^{*-}\overline{\Sigma^{*-}} \nu\bar{\nu}),\] \[A(\overline{B}_{s}^{0}\rightarrow\Xi^{*0}\overline{\Xi^{*0}} \nu\bar{\nu}) = A(\overline{B}_{s}^{0}\rightarrow\Xi^{*-}\overline{\Xi^{*-}} \nu\bar{\nu}), \tag{55}\] \[A(\overline{B}_{s}^{0}\rightarrow\Sigma^{*+}\overline{\Sigma^{*+} }\nu\bar{\nu}) = A(B^{-}\rightarrow\Sigma^{*-}\overline{\Delta^{0}}\nu\bar{\nu})+A( \overline{B}_{s}^{0}\rightarrow\Delta^{0}\overline{\Delta^{0}}\nu\bar{\nu}),\] \[A(\overline{B}_{s}^{0}\rightarrow\Xi^{*0}\overline{\Xi^{*0}} \nu\bar{\nu}) = A(\overline{B}^{0}\rightarrow\Xi^{*-}\overline{\Sigma^{*-}} \nu\bar{\nu})+A(\overline{B}_{s}^{0}\rightarrow\Delta^{-}\overline{\Delta^{ -}}\nu\bar{\nu}),\] \[A(\overline{B}_{s}^{0}\rightarrow\Omega^{-}\overline{\Omega^{-} }\nu\bar{\nu}) = A(\overline{B}^{0}\rightarrow\Xi^{*-}\overline{\Sigma^{*-}} \nu\bar{\nu})+A(\overline{B}_{s}^{0}\rightarrow\Sigma^{*+}\overline{\Sigma^{ *+}}\nu\bar{\nu}), \tag{56}\] \[A(\overline{B}_{s}^{0}\rightarrow\Delta^{++}\overline{\Delta^{ ++}}\nu\bar{\nu}) = A(\overline{B}_{s}^{0}\rightarrow\Delta^{+}\overline{\Delta^{+} }\nu\bar{\nu})=A(\overline{B}_{s}^{0}\rightarrow\Delta^{0}\overline{\Delta^{0} }\nu\bar{\nu}) \tag{57}\] \[= A(\overline{B}_{s}^{0}\rightarrow\Delta^{-}\overline{\Delta^{-} }\nu\bar{\nu}).\] The above relations on amplitudes impose relations on rates. For example, we may have three decay modes, where their rates and amplitudes are related as following \[\Gamma_{1}=\sum_{i}|A_{1}(i)|^{2},\quad\Gamma_{2}=\sum_{i}|A_{2}(i)|^{2},\quad \Gamma_{3}=\sum_{i}|A_{1}(i)+A_{2}(i)|^{2}, \tag{58}\] with \(i\) representing the allowed momentum and helicities of final state particles, summing over \(i\) indicating integrating over phase space and summing over final state helicities. Note that the following discussion only applies to the SU(3) symmetric case, i.e. we are considering the relation on rates in the SU(3) symmetric limit. Using the triangle inequality in the complex plane, we obtain \[|A_{1}(i)|^{2}+|A_{2}(i)|^{2}-2|A_{1}(i)||A_{2}(i)|\] \[\qquad\leq|A_{1}(i)+A_{2}(i)|^{2}\leq|A_{1}(i)|^{2}+|A_{2}(i)|^{2 }+2|A_{1}(i)||A_{2}(i)|. 
\tag{59}\] Summing over \(i\) in the above equation and making use of the following inequality, \[0\leq\sum_{i}|A_{1}(i)||A_{2}(i)|\leq\sqrt{\sum_{i}|A_{1}(i)|^{2}}\sqrt{\sum_{j}|A_{2}(j)|^{2}}, \tag{60}\] we finally obtain the triangle inequality on rates in the SU(3) symmetric limit, \[(\Gamma_{1}^{1/2}-\Gamma_{2}^{1/2})^{2}\leq\Gamma_{3}\leq(\Gamma_{1}^{1/2}+\Gamma_{2}^{1/2})^{2}. \tag{61}\]

## IV Results on rates

Before we start the discussion on rates, it will be useful to recall the detectability of the final state baryons. In Table 9, we identify some octet and decuplet baryons that can decay to all charged final states with unsuppressed branching ratios. Note that modes with an anti-neutron are also detectable, while \(\Delta^{+}\), \(\Sigma^{+,0}\), \(\Xi^{0}\), \(\Sigma^{*0}\) and \(\Xi^{*-}\) can be detected via a \(\pi^{0}\) or \(\gamma\). For example, \(\Delta^{+}\) mainly decays to \(p\pi^{0}\) and \(n\pi^{+}\), while \(\Sigma^{0}\) decays to \(\Lambda\gamma\). We should pay close attention to the modes that involve these baryons and have large decay rates in the \(\overline{B}_{q}\) decays.

### \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}\nu\bar{\nu}\) decay rates

In this part, we will first give a generic discussion on \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}\nu\bar{\nu}\) decays, and the results will be compared to model calculations, where masses of hadrons and lifetimes are taken from ref. [4]. For \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}l\bar{\nu}\) decays, the decay amplitudes can be decomposed in terms of three independent topological amplitudes, namely \(T_{2\mathcal{B}\overline{\mathcal{B}}}\), \(T_{1\mathcal{B}\overline{\mathcal{B}}}\) and \(A_{\mathcal{B}\overline{\mathcal{B}}}\), as shown in Table 5. As the amplitudes of \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}l\bar{\nu}\) decays have different combinations of these topological amplitudes, the corresponding branching ratios are denoted with different parameters. Specifically, we use \(a\) for the rate with \(A=T_{1\mathcal{B}\overline{\mathcal{B}}}+A_{\mathcal{B}\overline{\mathcal{B}}}\), \(b\) for the rate with \(A=T_{2\mathcal{B}\overline{\mathcal{B}}}\), \(c\) for the rate with \(A=\frac{1}{2}(T_{1\mathcal{B}\overline{\mathcal{B}}}+T_{2\mathcal{B}\overline{\mathcal{B}}})+A_{\mathcal{B}\overline{\mathcal{B}}}\), \(d\) for the rate with \(A=(T_{1\mathcal{B}\overline{\mathcal{B}}}-T_{2\mathcal{B}\overline{\mathcal{B}}})/2\), \(e\) for the rate with \(A=A_{\mathcal{B}\overline{\mathcal{B}}}\), \(f\) for the rate with \(A=\frac{1}{6}(5T_{1\mathcal{B}\overline{\mathcal{B}}}+T_{2\mathcal{B}\overline{\mathcal{B}}})+A_{\mathcal{B}\overline{\mathcal{B}}}\), \(g\) for the rate with \(A=(2T_{1\mathcal{B}\overline{\mathcal{B}}}+T_{2\mathcal{B}\overline{\mathcal{B}}})/3\), and \(h\) for the rate with \(A=\frac{1}{3}(T_{1\mathcal{B}\overline{\mathcal{B}}}+2T_{2\mathcal{B}\overline{\mathcal{B}}})+A_{\mathcal{B}\overline{\mathcal{B}}}\). In addition, we add tildes for rates with similar amplitudes but without the \(A_{\mathcal{B}\overline{\mathcal{B}}}\) terms. For example, \(\tilde{a}\) corresponds to the rate with \(A=T_{1\mathcal{B}\overline{\mathcal{B}}}\).
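As a quick numerical illustration of Eq. (61), the toy sketch below builds two sets of random complex "amplitudes" over a discretized phase space, forms the three rates of Eq. (58), and checks that the triangle inequality is respected; the amplitudes are synthetic and not tied to any particular decay mode.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy check of the triangle inequality on rates, Eq. (61). The "amplitudes"
# A1(i), A2(i) are random complex numbers standing in for two topological
# amplitudes summed over phase space and helicities; they are not the
# physical amplitudes of any specific mode.
n = 2000
A1 = rng.normal(size=n) + 1j * rng.normal(size=n)
A2 = 0.4 * (rng.normal(size=n) + 1j * rng.normal(size=n))

Gamma1 = np.sum(np.abs(A1) ** 2)
Gamma2 = np.sum(np.abs(A2) ** 2)
Gamma3 = np.sum(np.abs(A1 + A2) ** 2)     # rate of the mode with A = A1 + A2

lower = (np.sqrt(Gamma1) - np.sqrt(Gamma2)) ** 2
upper = (np.sqrt(Gamma1) + np.sqrt(Gamma2)) ** 2
print(lower <= Gamma3 <= upper)           # always True, by Eqs. (59)-(61)
```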
The same set of labels is also used in \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}\nu\bar{\nu}\) decays as \(PB_{i\,\mathcal{B}\overline{\mathcal{B}}}\) and \(PBA_{\mathcal{B}\overline{\mathcal{B}}}\) are proportional to \(T_{i\,\mathcal{B}\overline{\mathcal{B}}}\) and \(A_{\mathcal{B}\overline{\mathcal{B}}}\) with a common proportionality constant \(\zeta\) as shown in Eq. (20). Note that the above parameters correspond to the rates in the SU(3) symmetric limit.

\begin{table} \begin{tabular}{c c c} Octet/Decuplet & Baryons & All charged final states \\ \hline Octet, \(\mathcal{B}\) & \(p\), \(\Lambda\), \(\Xi^{-}\) & \(\Lambda\to p\pi^{-}\), \(\Xi^{-}\to\Lambda\pi^{-}\to p\pi^{-}\pi^{-}\) \\ Decuplet, \(\mathcal{D}\) & \(\Delta^{++,0}\), \(\Sigma^{*\pm}\), \(\Xi^{*0}\), \(\Omega^{-}\) & \(\Delta^{++,0}\to p\pi^{\pm}\), \(\Sigma^{*\pm}\to\Lambda\pi^{\pm}\to p\pi^{-}\pi^{\pm}\), \\ & & \(\Xi^{*0}\to\Xi^{-}\pi^{+}\to\Lambda\pi^{-}\pi^{+}\to p\pi^{-}\pi^{-}\pi^{+}\), \\ & & \(\Omega^{-}\to\Lambda K^{-}\to p\pi^{-}K^{-}\) \\ \end{tabular} \end{table} Table 9: Octet and decuplet baryons decaying to all charged final states with unsuppressed branching ratios [4].

Experimentally, not only has the branching ratio of the \(B^{-}\to p\bar{p}l\bar{\nu}\) decay been measured, but information on the differential rate is also available. The experimental differential rate \(dBr/dm_{p\bar{p}}\) of the \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) decay from LHCb [3] is shown in Fig. 1. The differential rate in Fig. 1 can be well fitted with \[\frac{dBr}{dm_{\bf B\overline{B}^{\prime}}}=\frac{N}{(m_{\bf B\overline{B}^{\prime}}^{2})^{\gamma}}(m_{\bf B\overline{B}^{\prime}}-m_{\bf B}-m_{\bf\overline{B}^{\prime}}), \tag{62}\] where \(\gamma\) and \(N\) are constants. In particular, \(\gamma=9\) is used in Fig. 1 for the plotted blue dashed line (see also Fig. 3). As noted in the Introduction, the threshold enhancement is sensitive to the position of the threshold, and hence the SU(3) breaking from baryon masses is amplified, producing very large SU(3) breaking effects on the integrated decay rates. In this work we use Eq. (62) to estimate the SU(3) breaking effect from threshold enhancement. Take \(B^{-}\to p\bar{p}l\bar{\nu}\) and \(B^{-}\to\Sigma^{+}\overline{\Sigma^{+}}l\bar{\nu}\) decays as examples. As shown in Table 5, their amplitudes are both equal to \(A=T_{1\cal B\overline{B}}+T_{2\cal B\overline{B}}+A_{\cal B\overline{B}}\). Consequently, without SU(3) breaking, their rates should be identical. However, we expect large SU(3) breaking from the threshold enhancement, as the masses of \(p\) and \(\Sigma^{+}\) are different. Using Eq. (62), the ratio of their branching ratios is given by \[\frac{Br(B^{-}\to\Sigma^{+}\overline{\Sigma^{+}}l\bar{\nu})}{Br(B^{-}\to p\bar{p}l\bar{\nu})}=\frac{\int_{2m_{\Sigma^{+}}}^{m_{B}}dm_{\cal B\overline{B}^{\prime}}\frac{N^{\prime}}{(m_{\cal B\overline{B}^{\prime}}^{2})^{9}}(m_{\cal B\overline{B}^{\prime}}-2m_{\Sigma^{+}})}{\int_{2m_{p}}^{m_{B}}dm_{\cal B\overline{B}^{\prime}}\frac{N}{(m_{\cal B\overline{B}^{\prime}}^{2})^{9}}(m_{\cal B\overline{B}^{\prime}}-2m_{p})}=0.022\frac{N^{\prime}}{N}=0.022\sigma, \tag{63}\] where we define \(N^{\prime}/N\equiv\sigma\). We see that the SU(3) breaking from the threshold enhancement is very large: the decay rates differ by orders of magnitude.
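The suppression factor \(0.022\) quoted in Eq. (63) can be reproduced by a simple numerical integration of the fitted shape in Eq. (62); the sketch below uses \(\gamma=9\) and rounded PDG masses, and the absolute normalization \(N\) drops out of the ratio.

```python
from scipy.integrate import quad

# Numerical sketch of the SU(3)-breaking estimate in Eq. (63): integrate the
# fitted shape of Eq. (62), dBr/dm = N (1/m^2)^gamma (m - m_B1 - m_B2), with
# gamma = 9, over the allowed range and take the ratio for Sigma+ vs p.
m_B, m_p, m_Sigma = 5.279, 0.938, 1.189   # GeV (PDG values, rounded)
gamma = 9

def integrated_rate(m1, m2):
    shape = lambda m: (1.0 / m**2) ** gamma * (m - m1 - m2)
    val, _ = quad(shape, m1 + m2, m_B)
    return val

ratio = integrated_rate(m_Sigma, m_Sigma) / integrated_rate(m_p, m_p)
print(f"threshold suppression factor ~ {ratio:.3f}")   # ~ 0.022, as in Eq. (63)
```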
On the other hand, although \(N^{\prime}/N=\sigma\) may contain additional SU(3) breaking from mass differences, it represents a milder SU(3) breaking effect, since the SU(3) breaking from threshold enhancement is already extracted out, the value of \(\sigma\) is expected to be of order one. Consequently, using \(Br(B^{-}\to p\bar{p}\mu\bar{\nu})=(5.32\pm 0.34)\times 10^{-6}\)[4], we expect \(Br(B^{-}\to\Sigma^{+}\overline{\Sigma^{+}}l\bar{\nu})=(5.32\times 0.022\sigma) \times 10^{-6}\) with \(\sigma\) an order one parameter. As we shall see later the above estimation agrees well with some recent theoretical calculations [8; 9]. With these considerations, the branching ratios of \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}\nu\bar{\nu}\) decays are parametrized and are shown in Table 10. SU(3) breaking effects from \(B_{q}\) meson widths and threshold enhancement are included. The order one parameters \(\alpha,\beta,\eta,\tilde{\eta},\tilde{\bar{\eta}},\kappa,\tilde{\kappa},\sigma, \tilde{\sigma},\tilde{\bar{\sigma}},\xi,\tilde{\xi},\tilde{\bar{\xi}}\) and \(\bar{\alpha},\bar{\beta},\bar{\kappa},\tilde{\bar{\kappa}}\) denote milder SU(3) breaking, where different parameters are used when the baryon masses are different, tilde are used when the combinations of topological amplitudes are different, bar are use when the masses of baryon and anti-baryon are switched. From the above example, we expect these parameters to be of order one. We also expect them to be of similar size, and those with bar or tilde be close to those without bar or tilde. We will come back to these later. There are many parameters in Table 1. They are not totally independent, since we only have three independent topological amplitudes. Using the triangle inequality, Eq. (61), the amplitude decomposition in \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decays and the decay rates as shown in Tables 5 and 1, we obtain the following inequalities, \[\Big{(}\frac{\sqrt{5.32}-\sqrt{e}}{2}\Big{)}^{2}\lesssim\ c\ \lesssim\Big{(}\frac{\sqrt{5.32}+\sqrt{e}}{2}\Big{)}^{2},\quad\Big{(}\frac{ \sqrt{5.32}-\sqrt{e}}{2}\Big{)}^{2}\lesssim\tilde{c}\lesssim\Big{(}\frac{ \sqrt{5.32}+\sqrt{e}}{2}\Big{)}^{2},\] \[(\sqrt{a}-\sqrt{e})^{2}\lesssim\ \tilde{a}\ \lesssim(\sqrt{a}+\sqrt{e})^{2},\quad(\sqrt{h}- \sqrt{e})^{2}\lesssim\tilde{h}\lesssim(\sqrt{h}+\sqrt{e})^{2}, \tag{64}\] \[(\sqrt{5.32}-\sqrt{a})^{2}\lesssim\ b\ \lesssim(\sqrt{5.32}+\sqrt{a})^{2}, (\sqrt{\tilde{c}}-\sqrt{\tilde{a}})^{2}\lesssim d\lesssim(\sqrt{\tilde{c}}+ \sqrt{\tilde{a}})^{2},\] \[\Big{(}\frac{\sqrt{c}-2\sqrt{a}}{3}\Big{)}^{2}\lesssim\ f\ \lesssim\Big{(}\frac{\sqrt{c}+2\sqrt{a}}{3}\Big{)}^{2}, \Big{(}\frac{2\sqrt{\tilde{c}}-\sqrt{\tilde{a}}}{3}\Big{)}^{2} \lesssim g\lesssim\Big{(}\frac{2\sqrt{\tilde{c}}+\sqrt{\tilde{a}}}{3}\Big{)} ^{2},\] \[\Big{(}\frac{4\sqrt{c}-\sqrt{a}}{3}\Big{)}^{2}\lesssim\ h\ \lesssim\Big{(}\frac{4\sqrt{c}+\sqrt{a}}{3}\Big{)}^{2}, \Big{(}\frac{4\sqrt{\tilde{c}}-\sqrt{\tilde{a}}}{3}\Big{)}^{2} \lesssim\tilde{h}\lesssim\Big{(}\frac{4\sqrt{\tilde{c}}+\sqrt{\tilde{a}}}{3} \Big{)}^{2}, \tag{65}\] Figure 3: The experimental differential rate \(dBr/dm_{p\bar{p}}\) of \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) decay from LHCb [3] can be well fitted with \(dBr/dm_{p\bar{p}}=N(1/m_{p\bar{p}}^{2})^{9}(m_{p\bar{p}}-m_{p}-m_{\bar{p}})\) with blue dashed line. 
Orange and green solid lines correspond to the differential rates from Model 1 and Model 2 with inputs basically from refs [8] and [9], respectively. See text for details. \begin{table} \begin{tabular}{c c c c} Mode & \(Br(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu})(10^{-6})\) & Mode & \(Br(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu})(10^{-6})\) \\ \hline \(B^{-}\to p\bar{p}l\bar{\nu}\) & \(5.32\pm 0.34\)[4] & \(B^{-}\to n\bar{n}l\bar{\nu}\) & \(a\times(0.978)\) \\ \(B^{-}\to\Sigma^{+}\overline{\Sigma^{+}}l\bar{\nu}\) & \(5.32\times(0.0225\sigma)\) & \(B^{-}\to\Sigma^{0}\overline{\Sigma^{0}}l\bar{\nu}\) & \(c\times(0.0215\sigma)\) \\ \(B^{-}\to\Sigma^{-}\overline{\Sigma^{-}}l\bar{\nu}\) & \(e\times(0.0202\tilde{\tilde{\sigma}})\) & \(B^{-}\to\Xi^{-}\overline{\Xi^{-}}l\bar{\nu}\) & \(e\times(0.00416\tilde{\tilde{\xi}})\) \\ \(B^{-}\to\Sigma^{0}\overline{\Lambda}l\bar{\nu}\) & \(\frac{d}{3}\times(0.0364\eta)\) & \(B^{-}\to\Xi^{0}\overline{\Xi^{0}}l\bar{\nu}\) & \(a\times(0.00452\tilde{\xi})\) \\ \(B^{-}\to\Lambda\overline{\Sigma^{0}}l\bar{\nu}\) & \(\frac{d}{3}\times(0.0364\eta)\) & \(B^{-}\to\Lambda\overline{\Lambda}l\bar{\nu}\) & \(f\times(0.0626\tilde{\eta})\) \\ \hline \(\overline{B}^{0}\to p\bar{n}l\bar{\nu}\) & \(0.93b\times(0.989)\) & \(\overline{B}^{0}\to\Sigma^{+}\overline{\Sigma^{0}}l\bar{\nu}\) & \(1.85\tilde{c}\times(0.0220\sigma)\) \\ \(\overline{B}^{0}\to\Sigma^{+}\overline{\Lambda}l\bar{\nu}\) & \(0.62d\times(0.0372\eta)\) & \(\overline{B}^{0}\to\Sigma^{0}\overline{\Sigma^{-}}l\bar{\nu}\) & \(1.85\tilde{c}\times(0.0208\sigma)\) \\ \(\overline{B}^{0}\to\Lambda\overline{\Sigma^{-}}l\bar{\nu}\) & \(0.62d\times(0.0352\eta)\) & \(\overline{B}^{0}\to\Xi^{0}\overline{\Xi^{-}}l\bar{\nu}\) & \(0.9\tilde{a}\times(0.00434\tilde{\xi})\) \\ \hline \(\overline{B}^{0}_{s}\to p\overline{\Sigma^{0}}l\bar{\nu}\) & \(0.47\tilde{a}\times(0.131\beta)\) & \(\overline{B}^{0}_{s}\to p\overline{\Lambda}l\bar{\nu}\) & \(1.40\tilde{h}\times(0.236\alpha)\) \\ \(\overline{B}^{0}_{s}\to n\overline{\Sigma^{-}}l\bar{\nu}\) & \(0.93\tilde{a}\times(0.125\beta)\) & \(\overline{B}^{0}_{s}\to\Sigma^{+}\overline{\Xi^{0}}l\bar{\nu}\) & \(0.93b\times(0.00988\kappa)\) \\ \(\overline{B}^{0}_{s}\to\Sigma^{0}\overline{\Xi^{-}}l\bar{\nu}\) & \(0.47b\times(0.00927\kappa)\) & \(\overline{B}^{0}_{s}\to\Lambda\overline{\Xi^{-}}l\bar{\nu}\) & \(1.40g\times(0.0152\tilde{\kappa})\) \\ \hline \hline Mode & \(\sum_{\nu}Br(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}\nu\bar{ \nu})(10^{-8})\) & Mode & \(\sum_{\nu}Br(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}\nu\bar{ \nu})(10^{-8})\) \\ \hline \(B^{-}\to\Sigma^{0}\bar{p}\nu\bar{\nu}\) & \(0.20\tilde{a}\times(0.131\bar{\beta})\) & \(B^{-}\to\Sigma^{-}\bar{n}\nu\bar{\nu}\) & \(0.40\tilde{a}\times(0.125\bar{\beta})\) \\ \(B^{-}\to\Xi^{0}\overline{\Sigma^{+}}\nu\bar{\nu}\) & \(0.40b\times(0.00988\bar{\kappa})\) & \(B^{-}\to\Xi^{-}\overline{\Sigma^{0}}\nu\bar{\nu}\) & \(0.20b\times(0.00927\bar{\kappa})\) \\ \(B^{-}\to\Xi^{-}\overline{\Lambda}\nu\bar{\nu}\) & \(0.60g\times(0.0152\bar{\kappa})\) & \(B^{-}\to\Lambda\bar{p}\nu\bar{\nu}\) & \(0.60\tilde{h}\times(0.236\bar{\alpha})\) \\ \hline \(\overline{B}^{0}\to\Sigma^{+}\bar{p}\nu\bar{\nu}\) & \(0.37\tilde{a}\times(0.134\bar{\beta})\) & \(\overline{B}^{0}\to\Sigma^{0}\bar{n}\nu\bar{\nu}\) & \(0.19\tilde{a}\times(0.130\bar{\beta})\) \\ \(\overline{B}^{0}\to\Xi^{0}\overline{\Sigma^{0}}\nu\bar{\nu}\) & \(0.19b\times(0.00968\bar{\kappa})\) & 
\(\overline{B}^{0}\to\Xi^{0}\overline{\Lambda}\nu\bar{\nu}\) & \(0.56g\times(0.0159\tilde{\kappa})\) \\ \(\overline{B}^{0}\to\Xi^{-}\overline{\Sigma^{-}}\nu\bar{\nu}\) & \(0.37b\times(0.00899\bar{\kappa})\) & \(\overline{B}^{0}\to\Lambda\bar{n}\nu\bar{\nu}\) & \(0.56\tilde{h}\times(0.233\bar{\alpha})\) \\ \hline \(\overline{B}^{0}_{s}\to p\bar{p}\nu\bar{\nu}\) & \(0.37e\) & \(\overline{B}^{0}_{s}\to n\bar{n}\nu\bar{\nu}\) & \(0.37e\times(0.978)\) \\ \(\overline{B}^{0}_{s}\to\Sigma^{+}\overline{\Sigma^{+}}\nu\bar{\nu}\) & \(0.37a\times(0.0225\tilde{\sigma})\) & \(\overline{B}^{0}_{s}\to\Sigma^{0}\overline{\Sigma^{0}}\nu\bar{\nu}\) & \(0.37a\times(0.0215\tilde{\sigma})\) \\ \(\overline{B}^{0}_{s}\to\Sigma^{-}\overline{\Sigma^{-}}\nu\bar{\nu}\) & \(0.37a\times(0.0202\tilde{\sigma})\) & \(\overline{B}^{0}_{s}\to\Xi^{0}\overline{\Xi^{0}}\nu\bar{\nu}\) & \(1.98\times(0.00452\xi)\) \\ \(\overline{B}^{0}_{s}\to\Xi^{-}\overline{\Xi^{-}}\nu\bar{\nu}\) & \(1.98\times(0.00416\xi)\) & \(\overline{B}^{0}_{s}\to\Lambda\overline{\Lambda}\nu\bar{\nu}\) & \(0.37h\times(0.0626\tilde{\tilde{\eta}})\) \\ \end{tabular} \end{table} Table 1: Branching ratios of \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}\nu\bar{\nu}\) decays. The \(B^{-}\to p\bar{p}l\bar{\nu}\) rate is from experimental data [2; 4]. Most of the parameters are expected to be of order 1. In particular, we expect \(c\simeq\tilde{c}\simeq\sqrt{5.32}/2\), \(a\simeq\tilde{a}\), \(h\simeq\tilde{h}\) and \(e\ll 5.32\), satisfying Eqs.(67), (68) and (69). The last factors are from the SU(3) breaking from threshold enhancement, and we expect \(\alpha,\beta,\eta,\tilde{\eta},\tilde{\bar{\eta}},\kappa,\tilde{\kappa}, \sigma,\tilde{\sigma},\tilde{\tilde{\sigma}},\xi,\tilde{\tilde{\xi}}\) being of order unity. See text for details. \[(\sqrt{5.32}-\sqrt{b})^{2}\lesssim\] \[\Big{(}\frac{4\sqrt{\tilde{c}}-\sqrt{b}}{3}\Big{)}^{2}\lesssim \tag{66}\] Although the above inequalities can constrain the sizes of these parameters, it will be useful to reduce the number of the parameters. Note that the rates proportional to \(e\) are governed by annihilation \(A_{\mathcal{B}\overline{\mathcal{B}}}\) or penguin-box-annihilation \(PBA_{\mathcal{B}\overline{\mathcal{B}}}\) diagrams. It is known that these contributions are usually much suppressed than tree and penguin contributions. For example, in two-body baryonic \(B_{q}\) decays, \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}\) decays, the tree dominated mode \(B^{-}\to p\bar{p}\) and penguin dominated mode \(B^{-}\to\Lambda\bar{p}\) was observed with branching ratios at \(10^{-8}\) and \(10^{-6}\) levels, respectively [35; 36; 37], while \(\overline{B}_{s}\to p\bar{p}\) decay, which is an exchange and penguin-annihilation mode is not yet observed with the upper limit pushed down to \(10^{-9}\) level [37]. It is therefore reasonable to consider the case where the annihilation \(A_{\mathcal{B}\overline{\mathcal{B}}}\) and penguin-box-annihilation \(PBA_{\mathcal{B}\overline{\mathcal{B}}}\) contributions are highly suppressed, i.e. \(e\ll\mathcal{O}(1)\). Nevertheless this assumption can be checked by searching pure annihilation (penguin-box-annuhalation) modes, \(B^{-}\to\Sigma^{-}\overline{\Sigma^{-}}l\bar{\nu}\), \(B^{-}\to\Xi^{-}\overline{\Xi^{-}}l\bar{\nu}\), \(\overline{B}^{0}_{s}\to p\bar{p}\nu\bar{\nu}\) and \(\overline{B}^{0}_{s}\to n\bar{n}\nu\bar{\nu}\) decays, as their rates are proportional to \(e\). 
In particular, as the \(\overline{B}^{0}_{s}\to p\bar{p}\nu\bar{\nu}\) mode has good detectability, see Table 9, it is a good place to verify the above assumption. Applying the above assumption to the relations Eq. (68), we obtain, \[e\ll 5.32,\quad c\simeq\tilde{c}\simeq\frac{5.32}{4},\quad\tilde {a}\simeq a,\quad\tilde{h}\simeq h, \tag{67}\] \[(\sqrt{5.32}-\sqrt{a})^{2}\lesssim\] \[\Big{(}\frac{\sqrt{5.32}-4\sqrt{a}}{6}\Big{)}^{2}\lesssim\] \[\Big{(}\frac{2\sqrt{5.32}-\sqrt{a}}{3}\Big{)}^{2}\lesssim \tag{68}\] and \[(\sqrt{5.32}-\sqrt{b})^{2}\lesssim\] \[\Big{(}\frac{5\sqrt{5.32}-4\sqrt{b}}{6}\Big{)}^{2}\lesssim \] \[\Big{(}\frac{\sqrt{5.32}-\sqrt{b}}{3}\Big{)}^{2}\lesssim \tag{69}\] These are the inequalities we shall employed in this work. The parameters \(a\), \(b\), \(c\) and so on in Table 10 need to satisfy the above triangular inequalities, Eqs.(67), (68) and (69). At this moment we do not have enough data to verify them. Nevertheless, it will be useful to make use of model calculations in Sec. II.2 for illustration. \begin{table} \begin{tabular}{c c c c c} Parameters & Values (Model 1) & Bounds (Model 1) & Values (Model 2) & Bounds (Model 2) \\ \hline \(a\) & \(0.42\) & \(0.15\sim 17.90\) & \(7.99\) & \(0.80\sim 30.34\) \\ \(b\) & \(3.70\) & \(2.75\sim 8.74\) & \(10.25\) & \(0.27\sim 26.34\) \\ \(d\eta\) & \(1.56\) & \((0.59\sim 3.25)\eta\) & \(14.89\) & \((4.20\sim 15.83)\eta\) \\ \(f\tilde{\eta}\) & \(0.99\) & \((0.41\sim 0.67)\tilde{\eta}\) & \(7.44\) & \((2.25\sim 5.14)\tilde{\eta}^{\prime}\) \\ \(g\tilde{\kappa}\) & \(2.24\) & \((0.80\sim 0.97)\tilde{\kappa}\) & \(4.81\) & \((0.22\sim 2.93)\tilde{\kappa}\) \\ \(g\bar{\tilde{\kappa}}\) & \(1.96\) & \((0.80\sim 0.97)\tilde{\kappa}\) & \(4.45\) & \((0.22\sim 2.93)\tilde{\kappa}\) \\ \(\tilde{h}\alpha\) & \(3.08\) & \((1.75\sim 1.99)\alpha\) & \(3.88\) & \((0.35\sim 3.37)\alpha\) \\ \(\tilde{h}\bar{\alpha}\) & \(2.77\) & \((1.75\sim 1.99)\bar{\alpha}\) & \(3.67\) & \((0.35\sim 3.37)\bar{\alpha}\) \\ \(h\tilde{\bar{\eta}}\) & \(4.17\) & \((1.75\sim 1.99)\tilde{\bar{\eta}}\) & \(5.26\) & \((0.35\sim 3.37)\tilde{\bar{\eta}}\) \\ \hline \(\alpha\) & \(-\) & \(1.55\sim 1.76\) & \(-\) & \(1.15\sim 10.93\) \\ \(\bar{\alpha}\) & \(-\) & \(1.39\sim 1.59\) & \(-\) & \(1.09\sim 10.34\) \\ \(\beta\) & \(1.72\) & \(-\) & \(1.79\) & \(-\) \\ \(\bar{\beta}\) & \(1.47\) & \(-\) & \(1.54\) & \(-\) \\ \(\eta\) & \(-\) & \(0.48\sim 2.62\) & \(-\) & \(0.94\sim 3.55\) \\ \(\tilde{\eta}\) & \(-\) & \(1.49\sim 2.43\) & \(-\) & \(1.45\sim 3.31\) \\ \(\tilde{\bar{\eta}}\) & \(-\) & \(2.10\sim 2.39\) & \(-\) & \(1.56\sim 14.83\) \\ \(\sigma\) & \(2.21\) & \(-\) & \(2.13\) & \(-\) \\ \(\tilde{\sigma}\) & \(2.30\) & \(-\) & \(2.49\) & \(-\) \\ \(\kappa\) & \(2.96\) & \(-\) & \(2.55\) & \(-\) \\ \(\bar{\kappa}\) & \(2.59\) & \(-\) & \(2.21\) & \(-\) \\ \(\tilde{\kappa}\) & \(-\) & \(2.31\sim 2.79\) & \(-\) & \(1.64\sim 21.73\) \\ \(\bar{\tilde{\kappa}}\) & \(-\) & \(2.02\sim 2.45\) & \(-\) & \(1.52\sim 20.11\) \\ \(\xi\) & \(3.19\) & \(-\) & \(3.13\) & \(-\) \\ \(\tilde{\xi}\) & \(2.32\) & \(-\) & \(2.61\) & \(-\) \\ \hline \((\frac{\bar{\alpha}}{\alpha},\frac{\bar{\beta}}{\beta},\frac{\bar{\kappa}}{ \kappa},\frac{\bar{\tilde{\kappa}}}{\bar{\kappa}})\) & \((0.90,0.85,0.87,0.88)\) & \(-\) & \((0.95,0.86,0.87,0.93)\) & \(-\) \\ \end{tabular} \end{table} Table 11: Values of various parameters in Model 1 and Model 2. The bounds of the parameters \(d\eta,\ldots h\tilde{\bar{\eta}}\) are obtained using triangular inequalities, Eqs. 
(68) and (69), while the bounds of the parameters \(\alpha,\bar{\alpha},\eta,\tilde{\eta},\tilde{\bar{\eta}},\tilde{\kappa},\bar{\tilde{\kappa}}\) are obtained using the values and bounds of the parameters \(d\eta,\ldots h\tilde{\bar{\eta}}\). Branching ratios of \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}\,l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}\,\nu\bar{\nu}\) decays in Model 1 and Model 2 can be obtained by using \(T_{i{\cal B\overline{B}}}\) and \(A_{{\cal B\overline{B}}}\), as shown in Eq. (21), with inputs as shown in Table 2, and formulas of decay rates collected in Appendix B. The results are shown in Table 1. They can be compared to the results given in refs. [8; 9], where \(Br(B^{-}\to p\bar{p}l\bar{\nu})=(5.21\pm 0.34)\times 10^{-6}\)[8], \((5.3\pm 0.2)\times 10^{-6}\)[9], \(Br(B^{-}\to n\bar{n}l\bar{\nu})=(0.68\pm 0.10)\times 10^{-6}\), \(Br(B^{-}\to\Sigma^{+}\overline{\Sigma^{+}}l\bar{\nu})=(0.24\pm 0.02)\times 10^{-6}\), \(Br(B^{-}\to\Sigma^{0}\overline{\Sigma^{0}}l\bar{\nu})=(0.06\pm 0.01)\times 10^{-6}\), \(Br(B^{-}\to\Sigma^{0}\overline{\Lambda}l\bar{\nu})=(0.014\pm 0.004)\times 10^{-6}\), \(Br(B^{-}\to\Lambda\overline{\Sigma^{0}}l\bar{\nu})=(0.014\pm 0.004)\times 10^{-6}\), \(Br(B^{-}\to\Xi^{0}\overline{\Xi^{0}}l\bar{\nu})=(0.008\pm 0.001)\times 10^{-6}\), \(Br(B^{-}\to\Lambda\overline{\Lambda}l\bar{\nu})=(0.08\pm 0.01)\times 10^{-6}\)[8], \(Br(\overline{B}_{s}^{0}\to p\overline{\Lambda}l\bar{\nu})=(2.1\pm 0.6)\times 10^{-6}\), \(\sum_{\nu}Br(B^{-}\to\Lambda\bar{p}\nu\bar{\nu})=(3.5\pm 1.0)\times 10^{-8}\) and \(\sum_{\nu}Br(\overline{B}_{s}\to\Lambda\bar{\Lambda}\nu\bar{\nu})=(0.8\pm 0.2)\times 10^{-8}\)[9] are reported. We find that the results in Model 1 agree with or are close to those in ref. [8], while the results on \(\sum_{\nu}Br(B^{-}\to\Lambda\bar{p}\nu\bar{\nu})\) and \(\sum_{\nu}Br(\overline{B}_{s}\to\Lambda\bar{\Lambda}\nu\bar{\nu})\) in Model 2 differ from those in ref. [9] by factors of 7. Results on all other modes in Table 1 are new. Model 1 and Model 2 have similar results on some modes, but very different results on some other modes. For example, their rates in the \(B^{-}\to p\bar{p}l\bar{\nu}\) decay are identical by construction, and the \(B^{-}\to\Sigma^{+}\overline{\Sigma^{+}}l\bar{\nu}\) rates as well as the \(B^{-}\to\Sigma^{0}\overline{\Sigma^{0}}l\bar{\nu}\) rates are similar, but the \(B^{-}\to\Sigma^{0}\overline{\Lambda}l\bar{\nu}\) rate in Model 2 is larger than the one in Model 1 by one order of magnitude, the \(\overline{B}^{0}\to p\bar{n}l\bar{\nu}\) rate in Model 2 is larger than the one in Model 1 by a factor of 3, and the \(B^{-}\to n\bar{n}l\bar{\nu}\) rate in Model 2 is larger than the one in Model 1 by a factor of 19. Note that the amplitudes of \(B^{-}\to p\bar{p}l\bar{\nu}\), \(B^{-}\to\Sigma^{+}\overline{\Sigma^{+}}l\bar{\nu}\) and \(B^{-}\to\Sigma^{0}\overline{\Sigma^{0}}l\bar{\nu}\) are proportional to \(T_{1{\cal B\overline{B}}}+T_{2{\cal B\overline{B}}}\), while those of \(B^{-}\to\Sigma^{0}\overline{\Lambda}l\bar{\nu}\), \(B^{-}\to n\bar{n}l\bar{\nu}\) and \(\overline{B}^{0}\to p\bar{n}l\bar{\nu}\) are proportional to \(T_{1{\cal B\overline{B}}}-T_{2{\cal B\overline{B}}}\), \(T_{1{\cal B\overline{B}}}\) and \(T_{2{\cal B\overline{B}}}\), respectively.
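The role of the relative phase between \(T_{1{\cal B\overline{B}}}\) and \(T_{2{\cal B\overline{B}}}\) in redistributing these rates can be visualized with a small toy sketch; the amplitude values below are purely illustrative and are not the fitted Model 1 or Model 2 amplitudes.

```python
import numpy as np

# Illustrative sketch (not the fitted Model 1/2 amplitudes): how the relative
# rates of modes governed by T1+T2, (T1-T2)/2, T1 and T2 change when the two
# tree amplitudes interfere constructively or destructively.
def relative_rates(T1, T2):
    combos = {"T1+T2 (p pbar-like)": T1 + T2,
              "(T1-T2)/2 (Sigma0 Lambdabar-like)": (T1 - T2) / 2,
              "T1 (n nbar-like)": T1,
              "T2 (p nbar-like)": T2}
    return {k: abs(v) ** 2 for k, v in combos.items()}

constructive = relative_rates(T1=1.0, T2=0.8)                      # phase ~ 0
destructive = relative_rates(T1=1.3, T2=1.1 * np.exp(1j * np.pi * 0.9))
for label, rates in [("constructive", constructive), ("destructive", destructive)]:
    print(label, {k: round(v, 3) for k, v in rates.items()})
```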
These results imply that Model 1 (2) has constructive (destructive) interference of \(T_{1{\cal B\overline{B}}}\) and \(T_{2{\cal B\overline{B}}}\) in \(B^{-}\to p\bar{p}l\bar{\nu}\) decay, but destructive (constructive) interference of \(T_{1{\cal B\overline{B}}}\) and \(-T_{2{\cal B\overline{B}}}\) in \(B^{-}\to\Sigma^{0}\overline{\Lambda}l\bar{\nu}\) decay, and \(|T_{1,2{\cal B\overline{B}}}|\) in Model 2 are larger than those in Model 1. These two models are complementary. It is therefore useful to consider both of them. In Fig. 3 the differential rates \(dBr/dm_{p\bar{p}}\) of \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) from Model 1 and Model 2 are shown and are compared to data. The differential rates from Model 1 and 2 agree with data and are similar to each other. The expectations of the orders of magnitudes of \(\alpha\), \(\beta\) and so on to be of order one and the triangular inequalities, Eqs.(67), (68) and (69), on \(a\), \(b\) and so on can be checked by comparing Table 1 with the results in Model 1 and Model 2 as shown in Table 1. The findings are shown in Table 1. The values of the parameters \(\beta,\bar{\beta},\sigma,\tilde{\sigma},\kappa,\bar{\kappa},\xi,\tilde{\xi}\) are indeed of order one and are in the range of \(1.47\sim 3.19\), and their values in Model 1 and Model 2 are similar with differences at most \(13\%\), even though these two models have very different interference patterns. The ratios of \(\frac{\bar{\alpha}}{\alpha},\frac{\bar{\beta}}{\beta},\frac{\bar{\kappa}}{\kappa},\frac{\bar{\bar{\kappa}}}{\bar{\kappa}}\) are close to one and are in the range of \(0.86\sim 0.93\) and again their values in Model 1 and Model 2 are similar. The bounds on \(\alpha,\bar{\alpha},\eta,\tilde{\eta},\tilde{\tilde{\eta}},\tilde{\kappa}, \tilde{\bar{\kappa}}\) in Model 1 are more restrictive than those in Model 2, but they all allow these parameters to be of order one. We do not see any violation of the triangular inequalities, Eqs.(67), (68) and (69). The values of \(a\) and \(b\) in Model 1 and 2 confirm that \(T_{1\mathcal{B}\overline{\mathcal{B}}}\) and \(T_{2\mathcal{B}\overline{\mathcal{B}}}\) are constructive in Model 1, but destructive in Model 2, and \(|T_{1,2\mathcal{B}\overline{\mathcal{B}}}|\) in Model 2 are larger than those in Model 1. These two models are indeed different but they give similar results on these parameters. Furthermore, our expectations on these parameters are basically verified in these two models. From Table XI we find that the \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}l\bar{\nu}\) branching ratios are of the orders \(10^{-8}\sim 10^{-6}\) for non-annihilation modes, while the branching ratios of \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}\nu\bar{\nu}\) decays are of the orders of \(10^{-11}\sim 10^{-8}\) for non-penguin-box-annihilation modes. From Tables IX and XI, we see that the following modes have good detectability and relatively unsuppressed rates, they are \(B^{-}\to p\bar{p}l\bar{\nu}\), \(\overline{B}^{0}\to p\bar{n}l\bar{\nu}\), \(\overline{B}^{0}_{s}\to p\overline{\Lambda}l\bar{\nu}\), \(B^{-}\to\Lambda\bar{p}\nu\bar{\nu}\), \(\overline{B}^{0}\to\Lambda\bar{n}\nu\bar{\nu}\) and \(\overline{B}^{0}_{s}\to\Lambda\overline{\Lambda}\nu\bar{\nu}\) decays. It is reasonable that the \(B^{-}\to p\bar{p}l\bar{\nu}\) decay is the first detected mode as it has a large rate with very good detectability. 
In fact its rate is the largest one in Model 1, but is the third largest one in Model 2, where the \(\overline{B}^{0}\to p\bar{n}l\bar{\nu}\) and \(B^{-}\to n\bar{n}l\bar{\nu}\) decays have larger rates but poorer detectability. It will be useful to search for these modes to differentiate these two models and to understand the interference patterns of the \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}l\bar{\nu}\) decay amplitudes. From Table X, we obtain \[\frac{\sum_{\nu}Br(B^{-}\to\Lambda\bar{p}\nu\bar{\nu})}{Br(\overline{B}^{0}_{s}\to p\overline{\Lambda}l\bar{\nu})} = 4.29\frac{\bar{\alpha}}{\alpha}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3},\] \[\frac{\sum_{\nu}Br(\overline{B}^{0}\to\Lambda\bar{n}\nu\bar{\nu})}{Br(\overline{B}^{0}_{s}\to p\overline{\Lambda}l\bar{\nu})} = 3.94\frac{\bar{\alpha}}{\alpha}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3}. \tag{70}\] The ratio \(\bar{\alpha}/\alpha\) is expected to be close to one. In Model 1 and 2, we have \(\bar{\alpha}/\alpha=0.90\) and \(0.95\), respectively, as shown in Table XII, which are indeed close to one. Hence the ratios are not sensitive to the SU(3) breaking from threshold enhancement, as it mostly cancels out. Furthermore, the ratios do not rely on the assumption of neglecting the annihilation \(A\) and penguin-box-annihilation \(PBA\) contributions, as these modes are free from such contributions, see Table V. As the \(\overline{B}^{0}_{s}\to p\overline{\Lambda}l\bar{\nu}\) decay is a tree-level mode, while the \(B^{-}\to\Lambda\bar{p}\nu\bar{\nu}\) and \(\overline{B}^{0}\to\Lambda\bar{n}\nu\bar{\nu}\) decays are governed by penguin and box diagrams, the above ratios can serve as tests of the SM.

### \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\), \(\overline{B}_{q}\to{\cal B}\overline{\cal D}\nu\bar{\nu}\), \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}\nu\bar{\nu}\) decay rates

We now consider the rates of \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B}\overline{\cal D}\nu\bar{\nu}\) decays and the rates of \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}\nu\bar{\nu}\) decays. As shown in Table 6, in \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) decays, there is only one topological amplitude, namely \(T_{\cal B\overline{\cal D}}\), while in \(\overline{B}_{q}\to{\cal B}\overline{\cal D}\nu\bar{\nu}\) decays, there is also only one topological amplitude, namely \(PB_{\cal B\overline{\cal D}}\), but \(PB_{\cal B\overline{\cal D}}\) and \(T_{\cal B\overline{\cal D}}\) are related by \(\zeta\) as shown in Eq. (20). Similar features also hold in \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}\nu\bar{\nu}\) decays by using Table 7 and Eq. (20), but with \(T_{{\cal D}\overline{\cal B}}\) and \(PB_{{\cal D}\overline{\cal B}}\). The decay rates of \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu},{\cal B}\overline{\cal D}\nu\bar{\nu}\) decays and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu},{\cal D}\overline{\cal B}\nu\bar{\nu}\) decays are parametrized in terms of \(a^{\prime}\) and \(a^{\prime\prime}\), respectively, where the rates corresponding to \(A=T_{{\cal B}\overline{\cal D}}\) and \(A=T_{{\cal D}\overline{\cal B}}\) are denoted as \(a^{\prime}\) and \(a^{\prime\prime}\), respectively.
Note that the above parameters correspond to the rates in the SU(3) symmetric limit. As in \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}\nu\bar{\nu}\) decays, we expect to see threshold enhancement in the differential rates of these modes. Likewise the SU(3) breaking on \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) rates from the threshold enhancement can be estimated as in the \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) case, once the corresponding differential rates of some modes are known. However, at the moment no such information is available yet. We should make use of some model calculations to obtain informations of the differential rates of these modes. As we shall see the SU(3) breaking from the threshold enhancement can be estimated using Eq. (62) and similar procedure in the discussion around Eq. (63) but with \(\gamma=7\). In Tables 13 and 14 the decay rates of \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\nu,{\cal B}\overline{\cal D}\nu \bar{\nu}\) decays and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu},{\cal D}\overline{\cal B }\nu\bar{\nu}\) decays are shown. The parameters \(\beta^{\prime,\prime\prime}\), \(\kappa^{\prime,\prime\prime}\), \(\sigma^{\prime,\prime\prime}\) and so on are used to denote milder SU(3) breaking effects and are expected to be of order one. The relative sizes of \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}(\nu\bar{\nu})\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}(\nu\bar{\nu})\) decay rates can be readily read from Tables 13 and 14. At this moment we do not have enough data to verify Tables 13 and 14. As in the case of \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}\nu\bar{\nu}\) decays we will make use of Model 1 and Model 2 for illustration. Using inputs from Sec. II.2 and the formulas given Appendix B, the branching ratios of \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B}\overline{\cal D}\nu\bar{\nu}\) in Model 1 and 2 are obtained and are shown in Table 14, while the branching ratios of \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}\nu\bar{\nu}\) in Model 1 and 2 are obtained and are shown in Table 14. These results are new. From Tables 14 and 14, we see that the branching ratios of \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays are in the ranges of \(10^{-9}\sim 10^{-8}\) and \(10^{-9}\sim 10^{-7}\) in Model 1 and 2, respectively, while \(\sum_{\nu}Br(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\nu\nu)\) and \(\sum_{\nu}Br(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\nu\nu)\) are in the ranges of \(10^{-12}\sim 10^{-10}\) and \(10^{-11}\sim 10^{-10}\) in Model 1 and 2, respectively. The rates in Model 2 are greater than those in Model 1 by a factor of \(4\sim 7\). 
This mostly corresponds to the fact that \(|T_{\cal B}\overline{\cal D}|\) and \(|T_{\cal D}\overline{\cal B}|\) in Model 2 are greater than those in Model 1, as reflected through the sizes of \(a^{\prime}\) and \begin{table} \begin{tabular}{c c c} Parameters & Values (Model 1) & Values (Model 2) \\ \hline \(a^{\prime}\) & 1.15 & 7.24 \\ \hline \(\beta^{\prime}\) & 1.01 & 1.03 \\ \(\kappa^{\prime}\) & 0.72 & 0.78 \\ \(\sigma^{\prime}\) & 0.74 & 0.79 \\ \(\xi^{\prime}\) & 0.52 & 0.57 \\ \(\omega^{\prime}\) & 0.50 & 0.56 \\ \hline \hline Parameters & Values (Model 1) & Values (Model 2) \\ \hline \(a^{\prime\prime}\) & 1.35 & 6.19 \\ \hline \(\beta^{\prime\prime}\) & 1.57 & 1.58 \\ \(\kappa^{\prime\prime}\) & 1.86 & 2.24 \\ \(\sigma^{\prime\prime}\) & 1.49 & 1.78 \\ \(\xi^{\prime\prime}\) & 1.58 & 2.29 \\ \(\omega^{\prime\prime}\) & 1.53 & 2.59 \\ \end{tabular} \end{table} Table 14: Values of various parameters in Model 1 and Model 2. \(a^{\prime\prime}\) as shown in Table 17. This is not surprising as \(|T_{1\mathcal{B}\overline{\mathcal{B}}}|\) and \(|T_{2\mathcal{B}\overline{\mathcal{B}}}|\) in \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{B}}^{\prime}l\bar{\nu}\) decays in Model 2 are greater than those in Model 1, as reflected in the sizes of \(a\) and \(b\) as shown in Table 18. From Table 17, we see that the parameters \(\beta^{\prime,\prime\prime},\kappa^{\prime,\prime\prime},\sigma^{\prime, \prime\prime},\xi^{\prime,\prime\prime}\) and \(\omega^{\prime}\) denoting milder SU(3) breaking are indeed of order one and are similar in Model 1 and 2 in most cases. From Tables 19, 18 and 19, we find that \(\overline{B}^{0}\to p\overline{\Delta^{0}}l\bar{\nu}\) and \(\overline{B}^{0}\to\Sigma^{0}\overline{\Delta^{0}}\nu\bar{\nu}\) have relatively unsuppressed rates and good detectability. In particular, we have the follow rate ratio of the loop induced mode and tree induced modes, \[\frac{\sum_{\nu}Br(\overline{B}^{0}\to\Sigma^{0}\overline{\Delta^{0}}\nu\bar{\nu} )}{Br(\overline{B}^{0}\to p\overline{\Delta^{0}}l\bar{\nu})}\;=\;2.11\beta^{ \prime}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3}, \tag{71}\] where \(\beta^{\prime}\) is of order one. In fact as shown in Table 15, we have \(\beta^{\prime}=1.01\) and \(1.03\) in Model 1 and 2, respectively. The ratio in Eq. (71) can be a test of SM. From Tables 16, 17 and 18, we find that \(\overline{B}^{0}\to\Delta^{++}\bar{p}l\bar{\nu}\), \(\overline{B}^{0}\to\Sigma^{*+}\overline{\Lambda}l\bar{\nu}\), \(B^{-}\to\Delta^{+}\bar{p}l\bar{\nu}\), \(B^{-}\to\Delta^{0}\bar{n}l\bar{\nu}\), \(\overline{B}^{0}\to\Sigma^{*+}\bar{p}\nu\bar{\nu}\) and \(B^{-}\to\Sigma^{*-}\bar{n}\nu\bar{\nu}\) decays have good detectability and relatively unsuppressed rates. The rate ratios of these loop induced modes and tree induced modes can be sensible tests of SM. For example, we have the following rate ratio, \[\frac{\sum_{\nu}Br(\overline{B}^{0}\to\Sigma^{*+}\bar{p}l\bar{\nu})}{Br( \overline{B}^{0}\to\Delta^{++}\bar{p}l\bar{\nu})}\;=\;4.55\kappa^{\prime\prime }\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-4}, \tag{72}\] where \(\kappa^{\prime\prime}\) is of order one. In fact, as shown in Table 15, we have \(\kappa^{\prime\prime}=1.86\) and \(2.24\) in Model 1 and 2, respectively. 
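For orientation, the sketch below simply evaluates the ratios of Eqs. (71) and (72) for a few illustrative values of \(|V_{ub}|\), taking \(\beta^{\prime}\) and \(\kappa^{\prime\prime}\) at their Model 1 values quoted above; the \(|V_{ub}|\) values themselves are examples rather than a fit.

```python
# Quick evaluation of the SM-test ratios in Eqs. (71) and (72) for a few
# illustrative |V_ub| values. beta_p and kappa_pp are set to the Model 1
# values quoted in the text (1.01 and 1.86); the |V_ub| values are examples.
beta_p, kappa_pp = 1.01, 1.86

def ratio_eq71(Vub):
    return 2.11 * beta_p * (0.0036 / Vub) ** 2 * 1e-3

def ratio_eq72(Vub):
    return 4.55 * kappa_pp * (0.0036 / Vub) ** 2 * 1e-4

for Vub in (0.0032, 0.0036, 0.0042):
    print(f"|Vub|={Vub}: Eq.(71) ratio={ratio_eq71(Vub):.2e}, "
          f"Eq.(72) ratio={ratio_eq72(Vub):.2e}")
```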
The differential rates \(dBr/dm_{p\overline{\Delta^{0}}}\) of the \(\overline{B}^{0}\to p\overline{\Delta^{0}}l^{-}\bar{\nu}\) decay and \(dBr/dm_{\Delta^{++}\bar{p}}\) of the \(\overline{B}^{0}\to\Delta^{++}\bar{p}l^{-}\bar{\nu}\) decay from Model 1 and Model 2 are plotted in Fig. 4. They can be compared to the dashed lines plotted using Eq. (62) with \(\gamma=7\). They clearly exhibit threshold enhancement as expected.

### \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decay rates

As shown in Table 19, the \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decay amplitudes are governed by the tree \(T_{{\cal D}\overline{\cal D}}\) and annihilation \(A_{{\cal D}\overline{\cal D}}\) amplitudes, while the \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decay amplitudes are governed by the penguin-box \(PB\) and penguin-box-annihilation \(PBA\) amplitudes. The penguin-box and tree amplitudes are related by a proportionality constant \(\zeta\), while the penguin-box-annihilation and annihilation amplitudes are related by the same constant, see Eq. (20). The \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decay rates can be parametrized by five parameters, namely \(a^{\prime\prime\prime}\), \(b^{\prime\prime\prime}\), \(c^{\prime\prime\prime}\), \(d^{\prime\prime\prime}\) and \(e^{\prime\prime\prime}\), where the first four receive contributions from the tree and annihilation amplitudes, with the amplitudes \(T_{{\cal D}\overline{\cal D}}\), \(T_{{\cal D}\overline{\cal D}}+A_{{\cal D}\overline{\cal D}}/6\), \(T_{{\cal D}\overline{\cal D}}+A_{{\cal D}\overline{\cal D}}/4\) and \(T_{{\cal D}\overline{\cal D}}+A_{{\cal D}\overline{\cal D}}/2\), respectively, while the last one comes only from the annihilation amplitude, \(A_{{\cal D}\overline{\cal D}}\). The same set of parameters can be used in \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decay rates, as the topological amplitudes are proportional to those in \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decays by the common factor \(\zeta\). Note that the above parameters correspond to the rates in the SU(3) symmetric limit.
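A toy numerical sketch of this parametrization is given below: for an illustrative tree amplitude \(T_{{\cal D}\overline{\cal D}}\) and a much smaller annihilation amplitude \(A_{{\cal D}\overline{\cal D}}\) (values chosen arbitrarily), the rates associated with \(T\), \(T+A/6\), \(T+A/4\), \(T+A/2\) and \(A\) nearly coincide except for the last, anticipating the hierarchy derived below.

```python
import numpy as np

# Toy illustration of the parametrization just introduced: rates associated
# with the amplitude combinations T, T+A/6, T+A/4, T+A/2 and A. The complex
# values of T and A are illustrative only; A is taken much smaller than T,
# as argued for the annihilation amplitude in the text.
T = 1.0
A = 0.1 * np.exp(1j * 0.7)          # |A| << |T|, arbitrary phase

combos = {"a'''": T, "b'''": T + A / 6, "c'''": T + A / 4,
          "d'''": T + A / 2, "e'''": A}
rates = {k: abs(v) ** 2 for k, v in combos.items()}
print({k: round(v, 4) for k, v in rates.items()})
# a''' ~ b''' ~ c''' ~ d''' while e''' is tiny, anticipating the hierarchy
# obtained from the triangle inequalities below.
```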
\begin{table} \begin{tabular}{c c c c} Mode & \(Br(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu})(10^{-8})\) & Mode & \(Br(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu})(10^{-8})\) \\ \hline \(B^{-}\to\Delta^{++}\overline{\Delta^{++}}l\bar{\nu}\) & \(36b^{\prime\prime\prime}\) & \(B^{-}\to\Delta^{+}\overline{\Delta^{+}}l\bar{\nu}\) & \(16a^{\prime\prime\prime}\) \\ \(B^{-}\to\Delta^{0}\overline{\Delta^{0}}l\bar{\nu}\) & \(4d^{\prime\prime\prime}\) & \(B^{-}\to\Delta^{-}\overline{\Delta^{-}}l\bar{\nu}\) & \(e^{\prime\prime\prime}\) \\ \(B^{-}\to\Sigma^{*+}\overline{\Sigma^{*+}}l\bar{\nu}\) & \(16c^{\prime\prime\prime}\times(0.125\sigma^{\prime\prime\prime})\) & \(B^{-}\to\Sigma^{*0}\overline{\Sigma^{*0}}l\bar{\nu}\) & \(4d^{\prime\prime\prime}\times(0.124\sigma^{\prime\prime\prime})\) \\ \(B^{-}\to\Sigma^{*-}\overline{\Sigma^{*-}}l\bar{\nu}\) & \(e^{\prime\prime\prime}\times(0.118\sigma^{\prime\prime\prime})\) & \(B^{-}\to\Xi^{*0}\overline{\Xi^{*0}}l\bar{\nu}\) & \(4d^{\prime\prime\prime}\times(0.0198\xi^{\prime\prime\prime})\) \\ \(B^{-}\to\Xi^{*-}\overline{\Xi^{*-}}l\bar{\nu}\) & \(e^{\prime\prime\prime}\times(0.0191\xi^{\prime\prime\prime})\) & \(B^{-}\to\Omega^{-}\overline{\Omega^{-}}l\bar{\nu}\) & \(e^{\prime\prime\prime}\times(0.00407\upsilon^{\prime\prime\prime})\) \\ \hline \(\overline{B}^{0}\to\Delta^{++}\overline{\Delta^{+}}l\bar{\nu}\) & \(11.1a^{\prime\prime\prime}\) & \(\overline{B}^{0}\to\Delta^{+}\overline{\Delta^{0}}l\bar{\nu}\) & \(14.8a^{\prime\prime\prime}\) \\ \(\overline{B}^{0}\to\Delta^{0}\overline{\Delta^{-}}l\bar{\nu}\) & \(11.1a^{\prime\prime\prime}\) & \(\overline{B}^{0}\to\Sigma^{*+}\overline{\Sigma^{*0}}l\bar{\nu}\) & \(7.4a^{\prime\prime\prime}\times(0.124\sigma^{\prime\prime\prime})\) \\ \(\overline{B}^{0}\to\Sigma^{*0}\overline{\Sigma^{*-}}l\bar{\nu}\) & \(7.4a^{\prime\prime\prime}\times(0.121\sigma^{\prime\prime\prime})\) & \(\overline{B}^{0}\to\Xi^{*0}\overline{\Xi^{*-}}l\bar{\nu}\) & \(3.7a^{\prime\prime\prime}\times(0.0195\xi^{\prime\prime\prime})\) \\ \hline \(\overline{B}^{0}_{s}\to\Delta^{++}\overline{\Sigma^{*+}}l\bar{\nu}\) & \(11.2a^{\prime\prime\prime}\times(0.343\beta^{\prime\prime\prime})\) & \(\overline{B}^{0}_{s}\to\Delta^{+}\overline{\Sigma^{*0}}l\bar{\nu}\) & \(7.5a^{\prime\prime\prime}\times(0.341\beta^{\prime\prime\prime})\) \\ \(\overline{B}^{0}_{s}\to\Delta^{0}\overline{\Sigma^{*-}}l\bar{\nu}\) & \(3.7a^{\prime\prime\prime}\times(0.333\beta^{\prime\prime\prime})\) & \(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Xi^{*0}}l\bar{\nu}\) & \(14.9a^{\prime\prime\prime}\times(0.0486\kappa^{\prime\prime\prime})\) \\ \(\overline{B}^{0}_{s}\to\Sigma^{*0}\overline{\Xi^{*-}}l\bar{\nu}\) & \(7.5a^{\prime\prime\prime}\times(0.0474\kappa^{\prime\prime\prime})\) & \(\overline{B}^{0}_{s}\to\Xi^{*0}\overline{\Omega^{-}}l\bar{\nu}\) & \(11.2a^{\prime\prime\prime}\times(0.00883\omega^{\prime\prime\prime})\) \\ \hline \hline \end{tabular} \begin{tabular}{c c c c} Mode & \(\sum_{\nu}Br(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu})(10^{-10})\) & Mode & \(\sum_{\nu}Br(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu} )(10^{-10})\) \\ \hline \(B^{-}\to\Sigma^{*+}\overline{\Delta^{++}}\nu\bar{\nu}\) & \(4.80a^{\prime\prime\prime}\times(0.343\bar{\beta}^{\prime\prime\prime})\) & \(B^{-}\to\Sigma^{*0}\overline{\Delta^{+}}\nu\bar{\nu}\) & \(3.20a^{\prime\prime\prime}\times(0.341\bar{\beta}^{\prime\prime\prime})\) \\ \(B^{-}\to\Sigma^{*-}\overline{\Delta^{0}}\nu\bar{\nu}\) & 
\(1.60a^{\prime\prime\prime}\times(0.33\bar{\beta}^{\prime\prime\prime})\) & \(B^{-}\to\Xi^{*0}\overline{\Sigma^{*+}}\nu\bar{\nu}\) & \(6.40a^{\prime\prime\prime}\times(0.0486\bar{\kappa}^{\prime\prime\prime})\) \\ \(B^{-}\to\Xi^{*-}\overline{\Sigma^{*0}}\nu\bar{\nu}\) & \(3.20a^{\prime\prime\prime}\times(0.0474\bar{\kappa}^{\prime\prime\prime})\) & \(B^{-}\to\Omega^{-}\overline{\Xi^{*0}}\nu\bar{\nu}\) & \(4.80a^{\prime\prime\prime}\times(0.00883\bar{\omega}^{\prime\prime\prime})\) \\ \hline \(\overline{B}^{0}\to\Sigma^{*+}\overline{\Delta^{+}}\nu\bar{\nu}\) & \(1.48a^{\prime\prime\prime}\times(0.343\bar{\beta}^{\prime\prime\prime})\) & \(\overline{B}^{0}\to\Sigma^{*0}\overline{\Delta^{0}}\nu\bar{\nu}\) & \(2.97a^{\prime\prime\prime}\times(0.341\bar{\beta}^{\prime\prime\prime})\) \\ \(\overline{B}^{0}\to\Sigma^{*-}\overline{\Delta^{-}}\nu\bar{\nu}\) & \(4.45a^{\prime\prime\prime}\times(0.333\bar{\beta}^{\prime\prime\prime})\) & \(\overline{B}^{0}\to\Xi^{*0}\overline{\Sigma^{*0}}\nu\bar{\nu}\) & \(2.97a^{\prime\prime\prime}\times(0.0484\bar{\kappa}^{\prime\prime\prime})\) \\ \(\overline{B}^{0}\to\Xi^{*-}\overline{\Sigma^{*-}}\nu\bar{\nu}\) & \(5.93a^{\prime\prime\prime}\times(0.0464\bar{\kappa}^{\prime\prime\prime})\) & \(\overline{B}^{0}\to\Omega^{-}\overline{\Xi^{*-}}\nu\bar{\nu}\) & \(4.45a^{\prime\prime\prime}\times(0.00867\bar{\omega}^{\prime\prime\prime})\) \\ \hline \(\overline{B}^{0}_{s}\to\Delta^{++}\overline{\Delta^{++}}\nu\bar{\nu}\) & \(0.37e^{\prime\prime\prime}\) & \(\overline{B}^{0}_{s}\to\Delta^{+}\overline{\Delta^{+}}\nu\bar{\nu}\) & \(0.37e^{\prime\prime\prime}\) \\ \(\overline{B}^{0}_{s}\to\Delta^{0}\overline{\Delta^{0}}\nu\bar{\nu}\) & \(0.37e^{\prime\prime\prime}\) & \(\overline{B}^{0}_{s}\to\Delta^{-}\overline{\Delta^{-}}\nu\bar{\nu}\) & \(0.37e^{\prime\prime\prime}\) \\ \(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Sigma^{*+}}\nu\bar{\nu}\) & \(1.49d^{\prime\prime\prime}\times(0.125\sigma^{\prime\prime\prime})\) & \(\overline{B}^{0}_{s}\to\Sigma^{*0}\overline{\Sigma^{*0}}\nu\bar{\nu}\) & \(1.49d^{\prime\prime\prime}\times(0.124\sigma^{\prime\prime\prime})\) \\ \(\overline{B}^{0}_{s}\to\Sigma^{*-}\overline{\Sigma^{*-}}\nu\bar{\nu}\) & \(1.49d^{\prime\prime\prime}\times(0.118\sigma^{\prime\prime\prime})\) & \(\overline{B}^{0}_{s}\to\Xi^{*0}\overline{\Xi^{*0}}\nu\bar{\nu}\) & \(\cdots\) \\ \hline \hline \end{tabular} \end{table}

Using the triangle inequality Eq. (61), we obtain the following relations, \[(\sqrt{a^{\prime\prime\prime}}-\sqrt{e^{\prime\prime\prime}}/6)^{2}\lesssim b^{\prime\prime\prime}\lesssim(\sqrt{a^{\prime\prime\prime}}+\sqrt{e^{\prime\prime\prime}}/6)^{2},\] \[(\sqrt{a^{\prime\prime\prime}}-\sqrt{e^{\prime\prime\prime}}/4)^{2}\lesssim c^{\prime\prime\prime}\lesssim(\sqrt{a^{\prime\prime\prime}}+\sqrt{e^{\prime\prime\prime}}/4)^{2},\] \[(\sqrt{a^{\prime\prime\prime}}-\sqrt{e^{\prime\prime\prime}}/2)^{2}\lesssim d^{\prime\prime\prime}\lesssim(\sqrt{a^{\prime\prime\prime}}+\sqrt{e^{\prime\prime\prime}}/2)^{2}. \tag{73}\] As in the case of \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) decays, the contributions from annihilation amplitudes are expected to be much more suppressed than the others. Consequently, we should have \(e^{\prime\prime\prime}\ll a^{\prime\prime\prime}\), and the above inequalities lead to the following relations, \[a^{\prime\prime\prime}\simeq b^{\prime\prime\prime}\simeq c^{\prime\prime\prime}\simeq d^{\prime\prime\prime}\gg e^{\prime\prime\prime}. \tag{74}\] As in other \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}(\nu\bar{\nu})\) decays, threshold enhancements in the differential rates of \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decays are anticipated.
They will lead to large SU(3) breaking effects on \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decay rates. The SU(3) breaking on rates from the threshold enhancement can be estimated as in the \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) case, once the differential rate of a \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decay mode is measured. At present, no such information is available. We therefore make use of model calculations to obtain information on the differential rates of these modes for illustration. As we shall see, in the model calculations the SU(3) breaking can be estimated using Eq. (62) with \(\gamma=10\), employing a procedure similar to that in the discussion around Eq. (63), and the related parameters \(\beta^{\prime\prime\prime},\sigma^{\prime\prime\prime},\kappa^{\prime\prime\prime},\xi^{\prime\prime\prime},\upsilon^{\prime\prime\prime},\omega^{\prime\prime\prime}\), denoting milder SU(3) breaking, are expected to be of order 1. With these considerations the rates of \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) and \({\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decays are shown in Table 15. With parameters \(a^{\prime\prime\prime}\simeq b^{\prime\prime\prime}\simeq c^{\prime\prime\prime}\simeq d^{\prime\prime\prime}\gg e^{\prime\prime\prime}\) and SU(3) breaking parameters \(\beta^{\prime\prime\prime},\sigma^{\prime\prime\prime},\kappa^{\prime\prime\prime},\xi^{\prime\prime\prime},\upsilon^{\prime\prime\prime},\omega^{\prime\prime\prime}\) of order 1, the relative sizes of \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) and \({\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decay rates can be readily read from the table. From Tables 15 and 16 we note that the \(B^{-}\to\Delta^{++}\overline{\Delta^{++}}l\bar{\nu}\) decay mode has the largest rate and good detectability. It is also among the modes least suppressed by the SU(3) breaking effect from the threshold enhancement, even if \(\gamma=10\) in Eq. (62) is not borne out. For \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) modes, the \(B^{-}\to\Sigma^{*+}\overline{\Delta^{++}}\nu\bar{\nu}\) decay has a relatively unsuppressed rate and good detectability. The above assumption of neglecting annihilation contributions can be checked by searching for the \(B^{-}\to\Sigma^{*-}\overline{\Sigma^{*-}}l\bar{\nu}\) decay mode, which is a pure annihilation mode but with final states of good detectability. Using inputs from Sec. II.2 and the formulas given in Appendix B, the branching ratios of \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decays in Model 1 and 2 are obtained and are shown in Table 16. These results are new.
\begin{table} \begin{tabular}{c c c} Parameters & Values (Model 1) & Values (Model 2) \\ \hline \(a^{\prime\prime\prime}\) & 0.47 & 1.34 \\ \(e^{\prime\prime\prime}\) & 0 & 0 \\ \hline \(\beta^{\prime\prime\prime}\) & 1.43 & 1.45 \\ \(\kappa^{\prime\prime\prime}\) & 1.87 & 1.91 \\ \(\sigma^{\prime\prime\prime}\) & 1.40 & 1.43 \\ \(\xi^{\prime\prime\prime}\) & 1.62 & 1.65 \\ \(\upsilon^{\prime\prime\prime}\) & 1.93 & 2.03 \\ \(\omega^{\prime\prime\prime}\) & 2.01 & 2.06 \\ \(\bar{\beta}^{\prime\prime\prime}\) & 1.19 & 1.22 \\ \(\bar{\kappa}^{\prime\prime\prime}\) & 1.51 & 1.58 \\ \(\bar{\omega}^{\prime\prime\prime}\) & 1.58 & 1.66 \\ \hline \(\bar{\beta}^{\prime\prime\prime}/\beta^{\prime\prime\prime}\) & 0.83 & 0.85 \\ \(\bar{\kappa}^{\prime\prime\prime}/\kappa^{\prime\prime\prime}\) & 0.81 & 0.83 \\ \(\bar{\omega}^{\prime\prime\prime}/\omega^{\prime\prime\prime}\) & 0.79 & 0.81 \\ \end{tabular} \end{table} Table 9: Values of various parameters in Model 1 and Model 2. Other parameters can be obtained by using this table and Eq. (74).

From Table 16, we see that the branching ratios of \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decays of non-annihilation modes are in the ranges of \(10^{-9}\sim 10^{-7}\) in Model 1 and 2, while \(\sum_{\nu}Br(\overline{B}_{q}\to\mathcal{D}\overline{\mathcal{D}}^{\prime}\nu\bar{\nu})\) are in the ranges of \(10^{-12}\sim 10^{-10}\) and \(10^{-11}\sim 10^{-10}\) in Model 1 and 2, respectively. The rates in Model 2 are greater than those in Model 1 by a factor of 3. This corresponds to the fact that \(|T_{\mathcal{D}\overline{\mathcal{D}}}|\) in Model 2 is greater than the one in Model 1, as reflected through the sizes of \(a^{\prime\prime\prime}\) in these two models as shown in Table XX. This is not surprising: as we noted previously, the sizes of the topological amplitudes \(|T_{1\mathcal{B}\overline{\mathcal{B}}}|\), \(|T_{2\mathcal{B}\overline{\mathcal{B}}}|\), \(|T_{\mathcal{B}\overline{\mathcal{D}}}|\) and \(|T_{\mathcal{D}\overline{\mathcal{B}}}|\) in Model 2 are greater than those in Model 1. The differential rates \(dBr/dm_{\Delta^{++}\overline{\Delta^{++}}}\) of \(B^{-}\to\Delta^{++}\overline{\Delta^{++}}l^{-}\bar{\nu}\) decay from Model 1 and Model 2 are shown in Fig. 5. They can be compared to the one plotted using Eq. (62) with \(\gamma=10\). They clearly exhibit threshold enhancement as in other \(\overline{B}_{q}\to\mathbf{B}\overline{\mathbf{B}}^{\prime}l\bar{\nu}\) decays. From Table XX, we see that the parameters \(\beta^{\prime\prime\prime},\kappa^{\prime\prime\prime},\sigma^{\prime\prime\prime},\xi^{\prime\prime\prime},\upsilon^{\prime\prime\prime},\omega^{\prime\prime\prime},\bar{\beta}^{\prime\prime\prime},\bar{\kappa}^{\prime\prime\prime}\) and \(\bar{\omega}^{\prime\prime\prime}\) denoting milder SU(3) breaking are indeed of order one and are similar in Model 1 and 2. Furthermore, \(\bar{\beta}^{\prime\prime\prime}/\beta^{\prime\prime\prime}\), \(\bar{\kappa}^{\prime\prime\prime}/\kappa^{\prime\prime\prime}\) and \(\bar{\omega}^{\prime\prime\prime}/\omega^{\prime\prime\prime}\) are close to one as expected.
By taking into account the sensitivity of detection, see Table IX, and the decay rates, see Tables XVIII and XIX, we find that the following decay modes have relatively unsuppressed rates and good detectability: \(B^{-}\to\Delta^{++}\overline{\Delta^{++}}l\bar{\nu}\), \(B^{-}\to\Delta^{0}\overline{\Delta^{0}}l\bar{\nu}\), \(B^{-}\to\Sigma^{*+}\overline{\Sigma^{*+}}l\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Delta^{++}\overline{\Sigma^{*+}}l\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Delta^{0}\overline{\Sigma^{*-}}l\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Xi^{*0}}l\bar{\nu}\), \(B^{-}\to\Sigma^{*+}\overline{\Delta^{++}}\nu\bar{\nu}\), \(B^{-}\to\Sigma^{*-}\overline{\Delta^{0}}\nu\bar{\nu}\), \(B^{-}\to\Xi^{*0}\overline{\Sigma^{*+}}\nu\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Sigma^{*+}}\nu\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Sigma^{*-}\overline{\Sigma^{*-}}\nu\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Xi^{*0}\overline{\Xi^{*0}}\nu\bar{\nu}\) decays. Ratios of rates from loop induced modes and tree induced modes are sensible tests of the SM. From Table XVIII, we obtain \[\frac{\sum_{\nu}Br(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Sigma^{*+}}\nu\bar{\nu})}{Br(B^{-}\to\Delta^{0}\overline{\Delta^{0}}l\bar{\nu})} = 4.66\sigma^{\prime\prime\prime}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-4},\] \[\frac{\sum_{\nu}Br(\overline{B}^{0}_{s}\to\Sigma^{*-}\overline{\Sigma^{*-}}\nu\bar{\nu})}{Br(B^{-}\to\Delta^{0}\overline{\Delta^{0}}l\bar{\nu})} = 4.40\sigma^{\prime\prime\prime}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-4}, \tag{75}\] and \[\frac{\sum_{\nu}Br(B^{-}\to\Sigma^{*+}\overline{\Delta^{++}}\nu\bar{\nu})}{Br(\overline{B}^{0}_{s}\to\Delta^{++}\overline{\Sigma^{*+}}l\bar{\nu})} = 4.29\frac{\bar{\beta}^{\prime\prime\prime}}{\beta^{\prime\prime\prime}}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3},\] \[\frac{\sum_{\nu}Br(B^{-}\to\Sigma^{*-}\overline{\Delta^{0}}\nu\bar{\nu})}{Br(\overline{B}^{0}_{s}\to\Delta^{0}\overline{\Sigma^{*-}}l\bar{\nu})} = 4.29\frac{\bar{\beta}^{\prime\prime\prime}}{\beta^{\prime\prime\prime}}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3},\] \[\frac{\sum_{\nu}Br(B^{-}\to\Xi^{*0}\overline{\Sigma^{*+}}\nu\bar{\nu})}{Br(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Xi^{*0}}l\bar{\nu})} = 4.29\frac{\bar{\kappa}^{\prime\prime\prime}}{\kappa^{\prime\prime\prime}}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3}, \tag{76}\] where \(\sigma^{\prime\prime\prime}\) is expected to be of order one, while \(\bar{\beta}^{\prime\prime\prime}/\beta^{\prime\prime\prime}\) and \(\bar{\kappa}^{\prime\prime\prime}/\kappa^{\prime\prime\prime}\) are expected to be close to one. In fact, as shown in Table XX, we have \(\sigma^{\prime\prime\prime}=1.40(1.43)\), \(\bar{\beta}^{\prime\prime\prime}/\beta^{\prime\prime\prime}=0.83(0.85)\) and \(\bar{\kappa}^{\prime\prime\prime}/\kappa^{\prime\prime\prime}=0.81(0.83)\) in Model 1 (2), which indeed agree with the above expectations. Note that the ratios in Eqs. (75) and (76) do not involve the small \(e^{\prime\prime\prime}\) assumption, and the ratios in Eq. (76) are less sensitive to the SU(3) breaking from threshold enhancement. These ratios can be checked experimentally.
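As a minimal cross-check that we add here (it is not part of the original text), the first ratio in Eq. (75) can be read off directly from the parametrized rates tabulated above, assuming the tabulated coefficients are evaluated at the reference value \(|V_{ub}|=0.0036\) appearing in the ratio: \[\frac{\sum_{\nu}Br(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Sigma^{*+}}\nu\bar{\nu})}{Br(B^{-}\to\Delta^{0}\overline{\Delta^{0}}l\bar{\nu})}=\frac{1.49\,d^{\prime\prime\prime}\times(0.125\,\sigma^{\prime\prime\prime})\times 10^{-10}}{4\,d^{\prime\prime\prime}\times 10^{-8}}\simeq 4.66\,\sigma^{\prime\prime\prime}\times 10^{-4},\] with the parameter \(d^{\prime\prime\prime}\) cancelling exactly, which is why this ratio does not rely on the small-\(e^{\prime\prime\prime}\) assumption.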
## V Discussions and conclusion We study the decay amplitudes and rates of \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays with all low lying octet (\({\cal B}\)) and decuplet (\({\cal D}\)) baryons using a topological amplitude approach. The decay amplitudes are decomposed into combinations of topological amplitudes. In \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) decays we need three topological amplitudes, namely two tree amplitudes, \(T_{2{\cal B}\overline{\cal B}}\), \(T_{1{\cal B}\overline{\cal B}}\), and one annihilation amplitude, \(A_{{\cal B}\overline{\cal B}}\). In \(\overline{B}_{q}\to{\cal B}\overline{\cal D}l\bar{\nu}\) decays only one tree amplitude, \(T_{{\cal B}\overline{\cal D}}\), is needed. Likewise in \(\overline{B}_{q}\to{\cal D}\overline{\cal B}l\bar{\nu}\) decays, we only need one tree amplitude, \(T_{{\cal D}\overline{\cal B}}\). Lastly in \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decays, two topological amplitudes, namely a tree amplitude, \(T_{{\cal D}\overline{\cal D}}\), and an annihilation amplitude, \(A_{{\cal D}\overline{\cal D}}\), are needed. In loop induced decay modes, we have three topological amplitudes in \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}\nu\bar{\nu}\) decays, namely two penguin-box amplitudes, \(PB_{2{\cal B}\overline{\cal B}}\), \(PB_{1{\cal B}\overline{\cal B}}\), and one penguin-box-annihilation amplitude, \(PBA_{{\cal B}\overline{\cal B}}\); one topological amplitude in \(\overline{B}_{q}\to{\cal B}\overline{\cal D}\nu\bar{\nu}\) decays, namely a penguin-box amplitude, \(PB_{{\cal B}\overline{\cal D}}\); one topological amplitude in \(\overline{B}_{q}\to{\cal D}\overline{\cal B}\nu\bar{\nu}\) decays, namely a penguin-box amplitude, \(PB_{{\cal D}\overline{\cal B}}\); and two topological amplitudes in \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu}\) decays, namely a penguin-box amplitude, \(PB_{{\cal D}\overline{\cal D}}\), and a penguin-box-annihilation amplitude, \(PBA_{{\cal D}\overline{\cal D}}\). As the numbers of independent topological amplitudes are highly limited, there are plenty of relations among these \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) and \({\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decay amplitudes. Furthermore, the loop topological amplitudes and tree topological amplitudes have simple relations, as their ratios are determined by the CKM factors and loop functions. It is known that the \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) differential rate exhibits threshold enhancement, which is expected to hold in all other \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}(\nu\bar{\nu})\) decay modes. These \(B_{q}\) decays have large phase space, and one normally does not expect the SU(3) breaking in baryon masses to have large effects on the \(B_{q}\) decay rates. However, the threshold enhancement effectively squeezes the phase space to the threshold region and thus mimics the situation of a decay just above threshold. It amplifies the effects of SU(3) breaking in final state baryon masses; consequently, the decay rates may differ by orders of magnitude even if their amplitudes are of similar sizes. In this work, the \(B^{-}\to p\bar{p}\mu^{-}\bar{\nu}\) differential rate and model calculations with available theoretical inputs from ref.
[8; 9], which can reproduce the observed differential rate, are used to estimate the SU(3) breaking from threshold enhancement. We find that the differential rates \(dBr/dm_{\bf B\overline{B}^{\prime}}\) of \(\overline{B}_{q}\to{\bf B\overline{B}^{\prime}}l\bar{\nu}(\nu\bar{\nu})\) decays can be parametrized as \[\frac{dBr}{dm_{\bf B\overline{B}^{\prime}}}=\frac{N}{(m_{\bf B\overline{B}^{\prime}}^{2})^{\gamma}}(m_{\bf B\overline{B}^{\prime}}-m_{\bf B}-m_{\bf \overline{B}^{\prime}}), \tag{77}\] with \(\gamma\) and \(N\) some constants, and we obtain \(\gamma=9,7,7,10\) for \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}l\bar{\nu}(\nu\bar{\nu})\), \(\overline{B}_{q}\to{\cal B\overline{D}}l\bar{\nu}(\nu\bar{\nu})\), \(\overline{B}_{q}\to{\cal D\overline{B}}l\bar{\nu}(\nu\bar{\nu})\) and \(\overline{B}_{q}\to{\cal D\overline{D}^{\prime}}l\bar{\nu}(\nu\bar{\nu})\) decays, respectively. SU(3) breaking from threshold enhancement can be estimated using the above equation. The estimates of SU(3) breaking from threshold enhancement are supported by model calculations and can be improved once differential rates of other modes are measured. For example, in \(\overline{B}_{q}\to{\cal B\overline{D}}l\bar{\nu}\left({\cal D\overline{B}}l\bar{\nu}\right)\) decays, as there is only one topological amplitude, namely \(T_{\cal B\overline{D}}\) (\(T_{\cal D\overline{B}}\)), the rates of all other modes can be estimated without resorting to model calculations, once the total rate and the differential rate of a single mode are measured. In the model calculations, we find that the \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}l\bar{\nu}\) branching ratios are of the orders of \(10^{-8}\sim 10^{-6}\) for non-annihilation modes, while the branching ratios of \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}\nu\bar{\nu}\) decays are of the orders of \(10^{-11}\sim 10^{-8}\) for non-penguin-box-annihilation modes. The branching ratios of \(\overline{B}_{q}\to{\cal B\overline{D}}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D\overline{B}}l\bar{\nu}\) decays are in the ranges of \(10^{-9}\sim 10^{-7}\), while \(\sum_{\nu}Br(\overline{B}_{q}\to{\cal B\overline{D}}\nu\bar{\nu})\) and \(\sum_{\nu}Br(\overline{B}_{q}\to{\cal D\overline{B}}\nu\bar{\nu})\) are in the ranges of \(10^{-12}\sim 10^{-10}\). The branching ratios of \(\overline{B}_{q}\to{\cal D\overline{D}^{\prime}}l\bar{\nu}\) decays of non-annihilation modes are in the ranges of \(10^{-9}\sim 10^{-7}\), while \(\sum_{\nu}Br(\overline{B}_{q}\to{\cal D\overline{D}^{\prime}}\nu\bar{\nu})\) are in the ranges of \(10^{-12}\sim 10^{-10}\). Modes with relatively unsuppressed rates and good detectability are identified as follows. In \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B\overline{B}^{\prime}}\nu\bar{\nu}\) decays, we have \(B^{-}\to p\bar{p}l\bar{\nu}\), \(\overline{B}^{0}\to p\bar{n}l\bar{\nu}\), \(\overline{B}^{0}_{s}\to p\overline{\Lambda}l\bar{\nu}\), \(B^{-}\to\Lambda\bar{p}\nu\bar{\nu}\), \(\overline{B}^{0}\to\Lambda\bar{n}\nu\bar{\nu}\) and \(\overline{B}^{0}_{s}\to\Lambda\overline{\Lambda}\nu\bar{\nu}\) decays. In \(\overline{B}_{q}\to{\cal B\overline{D}}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal B\overline{D}}\nu\bar{\nu}\) decays, \(\overline{B}^{0}\to p\overline{\Delta^{0}}l\bar{\nu}\) and \(\overline{B}^{0}\to\Sigma^{0}\overline{\Delta^{0}}\nu\bar{\nu}\) have unsuppressed rates and good detectability.
While in \(\overline{B}_{q}\to{\cal D\overline{B}}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D\overline{B}}\nu\bar{\nu}\) decays, the \(\overline{B}^{0}\to\Delta^{++}\bar{p}l\bar{\nu}\), \(\overline{B}^{0}\to\Sigma^{*+}\overline{\Lambda}l\bar{\nu}\), \(B^{-}\to\Delta^{+}\bar{p}l\bar{\nu}\), \(B^{-}\to\Delta^{0}\bar{n}l\bar{\nu}\), \(\overline{B}^{0}\to\Sigma^{*+}\bar{p}\nu\bar{\nu}\) and \(B^{-}\to\Sigma^{*-}\bar{n}\nu\bar{\nu}\) decay modes are identified. Finally, in \(\overline{B}_{q}\to{\cal D\overline{D}^{\prime}}l\bar{\nu}\) and \(\overline{B}_{q}\to{\cal D\overline{D}^{\prime}}\nu\bar{\nu}\) decays, we find that the following decay modes have unsuppressed rates and good detectability: \(B^{-}\to\Delta^{++}\overline{\Delta^{++}}l\bar{\nu}\), \(B^{-}\to\Delta^{0}\overline{\Delta^{0}}l\bar{\nu}\), \(B^{-}\to\Sigma^{*+}\overline{\Sigma^{*+}}l\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Delta^{++}\overline{\Sigma^{*+}}l\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Delta^{0}\overline{\Sigma^{*-}}l\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Xi^{*0}}l\bar{\nu}\), \(B^{-}\to\Sigma^{*-}\overline{\Delta^{0}}\nu\bar{\nu}\), \(B^{-}\to\Xi^{*0}\overline{\Sigma^{*+}}\nu\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Sigma^{*+}}\nu\bar{\nu}\), \(\overline{B}^{0}_{s}\to\Sigma^{*-}\overline{\Sigma^{*-}}\nu\bar{\nu}\) and \(\overline{B}^{0}_{s}\to\Xi^{*0}\overline{\Xi^{*0}}\nu\bar{\nu}\) decays. These modes can be searched for experimentally in the near future. Ratios of rates of some loop induced \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}\nu\bar{\nu}\) decays and tree induced \(\overline{B}_{q}\to{\bf B}\overline{\bf B}^{\prime}l\bar{\nu}\) decays are predicted and can be checked experimentally. They can be tests of the SM.
In particular, we predict \[\frac{\sum_{\nu}Br(B^{-}\to\Lambda\bar{p}\nu\bar{\nu})}{Br(\overline{B}^{0}_{s}\to p\overline{\Lambda}l\bar{\nu})} = 4.29\frac{\bar{\alpha}}{\alpha}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3},\] \[\frac{\sum_{\nu}Br(\overline{B}^{0}\to\Lambda\bar{n}\nu\bar{\nu})}{Br(\overline{B}^{0}_{s}\to p\overline{\Lambda}l\bar{\nu})} = 3.94\frac{\bar{\alpha}}{\alpha}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3}, \tag{78}\] for \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}\nu\bar{\nu},{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) decays, \[\frac{\sum_{\nu}Br(\overline{B}^{0}\to\Sigma^{0}\overline{\Delta^{0}}\nu\bar{\nu})}{Br(\overline{B}^{0}\to p\overline{\Delta^{0}}l\bar{\nu})} = 2.11\beta^{\prime}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3}, \tag{79}\] for \(\overline{B}_{q}\to{\cal B}\overline{\cal D}\nu\bar{\nu},{\cal B}\overline{\cal D}l\bar{\nu}\) decays, \[\frac{\sum_{\nu}Br(\overline{B}^{0}\to\Sigma^{*+}\bar{p}\nu\bar{\nu})}{Br(\overline{B}^{0}\to\Delta^{++}\bar{p}l\bar{\nu})} = 4.55\kappa^{\prime\prime}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-4}, \tag{80}\] for \(\overline{B}_{q}\to{\cal D}\overline{\cal B}\nu\bar{\nu},{\cal D}\overline{\cal B}l\bar{\nu}\) decays, and \[\frac{\sum_{\nu}Br(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Sigma^{*+}}\nu\bar{\nu})}{Br(B^{-}\to\Delta^{0}\overline{\Delta^{0}}l\bar{\nu})} = 4.66\sigma^{\prime\prime\prime}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-4},\] \[\frac{\sum_{\nu}Br(\overline{B}^{0}_{s}\to\Sigma^{*-}\overline{\Sigma^{*-}}\nu\bar{\nu})}{Br(B^{-}\to\Delta^{0}\overline{\Delta^{0}}l\bar{\nu})} = 4.40\sigma^{\prime\prime\prime}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-4}, \tag{81}\] and \[\frac{\sum_{\nu}Br(B^{-}\to\Sigma^{*+}\overline{\Delta^{++}}\nu\bar{\nu})}{Br(\overline{B}^{0}_{s}\to\Delta^{++}\overline{\Sigma^{*+}}l\bar{\nu})} = 4.29\frac{\bar{\beta}^{\prime\prime\prime}}{\beta^{\prime\prime\prime}}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3},\] \[\frac{\sum_{\nu}Br(B^{-}\to\Sigma^{*-}\overline{\Delta^{0}}\nu\bar{\nu})}{Br(\overline{B}^{0}_{s}\to\Delta^{0}\overline{\Sigma^{*-}}l\bar{\nu})} = 4.29\frac{\bar{\beta}^{\prime\prime\prime}}{\beta^{\prime\prime\prime}}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3},\] \[\frac{\sum_{\nu}Br(B^{-}\to\Xi^{*0}\overline{\Sigma^{*+}}\nu\bar{\nu})}{Br(\overline{B}^{0}_{s}\to\Sigma^{*+}\overline{\Xi^{*0}}l\bar{\nu})} = 4.29\frac{\bar{\kappa}^{\prime\prime\prime}}{\kappa^{\prime\prime\prime}}\times\left(\frac{0.0036}{|V_{ub}|}\right)^{2}\times 10^{-3}, \tag{82}\] for \(\overline{B}_{q}\to{\cal D}\overline{\cal D}^{\prime}\nu\bar{\nu},{\cal D}\overline{\cal D}^{\prime}l\bar{\nu}\) decays. The parameters \(\beta^{\prime}\), \(\kappa^{\prime\prime}\) and \(\sigma^{\prime\prime\prime}\) are expected to be of order one, while the ratios \(\bar{\alpha}/\alpha\), \(\bar{\beta}^{\prime\prime\prime}/\beta^{\prime\prime\prime}\) and \(\bar{\kappa}^{\prime\prime\prime}/\kappa^{\prime\prime\prime}\) are expected to be close to one. These expectations are supported by model calculations. Note that the ratios in Eqs. (78) and (82) are insensitive to the SU(3) breaking from threshold enhancement, while those in Eqs. (79), (80) and (81) do depend on the estimates of SU(3) breaking from threshold enhancement, which, however, can be checked and improved when more modes are discovered.
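To make the threshold-enhancement estimate behind these statements concrete, here is a short numerical sketch that we add for illustration (it is not code from the paper): it integrates the parametrization of Eq. (77) for two different baryon-pair masses and takes the ratio, which is how a mass difference between final states translates into a rate suppression. The parent mass, the baryon masses and the choice \(\gamma=9\) are placeholder inputs for the \(\overline{B}_{q}\to{\cal B}\overline{\cal B}^{\prime}l\bar{\nu}\) case; the overall constant \(N\) cancels in the ratio.

```python
import numpy as np

def threshold_weight(m_parent, m_b1, m_b2, gamma, n_pts=200_000):
    """Integral of dBr/dm = N/(m^2)^gamma * (m - m_b1 - m_b2) over the allowed
    baryon-pair invariant mass range, with N set to 1 (only ratios are used)."""
    m = np.linspace(m_b1 + m_b2, m_parent, n_pts)
    return np.trapz((m - m_b1 - m_b2) / (m**2) ** gamma, m)

m_B, gamma = 5.279, 9                                  # B meson mass in GeV; exponent of Eq. (77)
w_ppbar = threshold_weight(m_B, 0.938, 0.938, gamma)   # p pbar final state
w_LamLam = threshold_weight(m_B, 1.116, 1.116, gamma)  # Lambda Lambdabar final state
print("phase-space suppression (Lambda Lambdabar)/(p pbar) =", w_LamLam / w_ppbar)
```

The steeply falling \(1/(m^{2})^{\gamma}\) weight concentrates the rate near threshold, so even a few-hundred-MeV increase in the final-state baryon masses suppresses this ratio substantially; this is the SU(3) breaking effect from threshold enhancement discussed above.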
The ratios which are insensitive to the modeling of SU(3) breaking from threshold enhancement can be tests of the SM. ###### Acknowledgements. The author would like to thank Yu-Kuo Hsiao for discussion. This work is supported in part by the National Science and Technology Council of R.O.C. under Grant No NSTC-111-2112-M-033-007. Appendix A \(\overline{B}_{q}\to\mathbf{B}\bar{\mathbf{B}}^{\prime}\) matrix elements in the asymptotic limit We discuss \(\overline{B}_{q}\to\mathbf{B}\bar{\mathbf{B}}^{\prime}\) transition matrix elements in the asymptotic limit in this appendix. We follow ref. [34] to obtain the asymptotic limit of these matrix elements. The wave function of a octet or decuplet baryon with helicity \(\lambda=-1/2\) can be expressed as \[|\mathbf{B}\,;\downarrow\rangle\sim\frac{1}{\sqrt{3}}(|\mathbf{B}\,;\downarrow \uparrow\downarrow\rangle+|\mathbf{B}\,;\downarrow\downarrow\uparrow\rangle+| \mathbf{B}\,;\uparrow\downarrow\downarrow\rangle), \tag{104}\] which are composed of 13-, 12- and 23-symmetric terms, respectively. For octet baryons, we have \[|p\,;\downarrow\uparrow\downarrow\rangle = \left[\frac{d(1)u(3)+u(1)d(3)}{\sqrt{6}}u(2)-\sqrt{\frac{2}{3}}u( 1)d(2)u(3)\right]|\downarrow\uparrow\downarrow\rangle,\] \[|n\,;\downarrow\uparrow\downarrow\rangle = (-|p\,;\downarrow\uparrow\downarrow\rangle\text{ with }\,u\leftrightarrow d),\] \[|\Sigma^{+}\,;\downarrow\uparrow\downarrow\rangle = (-|p\,;\downarrow\uparrow\downarrow\rangle\text{ with }d\to s),\] \[|\Sigma^{0}\,;\downarrow\uparrow\downarrow\rangle = \bigg{[}-\frac{u(1)d(3)+d(1)u(3)}{\sqrt{3}}\,s(2)+\frac{u(2)d(3)+d (2)u(3)}{2\sqrt{3}}\,s(1)\] \[+\frac{u(1)d(2)+d(1)u(2)}{2\sqrt{3}}\,s(3)\bigg{]}|\downarrow \uparrow\downarrow\rangle,\] \[|\Sigma^{-}\,;\downarrow\uparrow\downarrow\rangle = (|p\,;\downarrow\uparrow\downarrow\rangle\text{ with }u,d\to d,s),\] \[|\Lambda\,;\downarrow\uparrow\downarrow\rangle = \bigg{[}\frac{d(2)u(3)-u(2)d(3)}{2}\,s(1)+\frac{u(1)d(2)-d(1)u(2)}{ 2}\,s(3)\bigg{]}|\downarrow\uparrow\downarrow\rangle,\] \[|\Xi^{0}\,;\downarrow\uparrow\downarrow\rangle = (|p\,;\downarrow\uparrow\downarrow\rangle\text{ with }u,d\to s,u),\] \[|\Xi^{-}\,;\downarrow\uparrow\downarrow\rangle = (-|p\,;\downarrow\uparrow\downarrow\rangle\text{ with }u\to s), \tag{105}\] and for decuplet baryons, we have \[|\Delta^{++};\downarrow\uparrow\downarrow\rangle = u(1)u(2)u(3)|\downarrow\uparrow\downarrow\rangle,\qquad\qquad| \Delta^{-};\downarrow\uparrow\downarrow\rangle=d(1)d(2)d(3)|\downarrow \uparrow\downarrow\rangle,\] \[|\Delta^{+};\downarrow\uparrow\downarrow\rangle = \frac{1}{\sqrt{3}}[u(1)u(2)d(3)+u(1)d(2)u(3)+d(1)u(2)u(3)]| \downarrow\uparrow\downarrow\rangle,\] \[|\Delta^{0};\downarrow\uparrow\downarrow\rangle = (|\Delta^{+};\downarrow\uparrow\downarrow\rangle\text{ with }u\leftrightarrow d),\qquad|\Sigma^{ *+};\downarrow\uparrow\downarrow\rangle=(|\Delta^{+};\downarrow\uparrow \downarrow\rangle\text{ with }d\leftrightarrow s),\] \[|\Sigma^{*0};\downarrow\uparrow\downarrow\rangle = \frac{1}{\sqrt{6}}[u(1)d(2)s(3)+\text{permutation}]|\downarrow \uparrow\downarrow\rangle,\] \[|\Omega^{-};\downarrow\uparrow\downarrow\rangle = (|\Delta^{++};\downarrow\uparrow\downarrow\rangle\text{ with }u \to s), \tag{106}\] for the \(|{\bf B}\,;\downarrow\uparrow\downarrow\rangle\) parts. while the 12- and 23-symmetric parts can be easily obtained by suitable permutation. 
The transition matrix element can be expressed as \[\langle{\bf B}\bar{\bf B}^{\prime}|\bar{q}_{L}\gamma_{\mu}b_{L}| \overline{B}_{q^{\prime}}\rangle = i\,\bar{u}_{L}(p_{\bf B})\gamma_{\mu}v_{L}(p_{\overline{\bf B}^{ \prime}}){\cal G}_{L}+i\,\bar{u}_{R}(p_{\bf B})\gamma_{\mu}v_{R}(p_{\overline {\bf B}^{\prime}}){\cal G}_{R} \tag{104}\] \[+i\,\bar{u}_{L}(p_{\bf B}){\bf F}_{\mu}v_{R}(p_{\overline{\bf B}^ {\prime}}),\] where \({\bf F}_{\mu}\) can be expressed as \[{\bf F}_{\mu}=a_{F}\sigma_{\mu\nu}q^{\nu}+b_{F}q_{\mu}+c_{F}(p_{\bf B}+p_{ \overline{\bf B}^{\prime}})_{\mu}+d_{F}(p_{\bf B}-p_{\overline{\bf B}^{\prime }})_{\mu}, \tag{105}\] with \(q\equiv p_{B_{q}}-p_{\mathbf{B}}-p_{\overline{\mathbf{B}}^{\prime}}\) and form factors \(a_{F}\), \(b_{F}\), \(c_{F}\) and \(d_{F}\). These \(\mathcal{G}_{L}\), \(\mathcal{G}_{R}\) and \(\mathbf{F}_{\mu}\) depends on the decaying meson and the final state baryon pair. We use spacelike case for illustration. Using the approach similar to those in [10; 11; 12; 13; 22; 34] the above form factors \(\mathcal{G}_{L}\), \(\mathcal{G}_{R}\) and \(\mathbf{F}_{\mu}\) can be expressed in terms of three universal form factors, \(\mathcal{G}_{\parallel}\), \(\mathcal{G}_{\overline{\parallel}}\) and \(\mathcal{F}_{\mu}\) as following, \[\mathcal{G}_{L}=e_{\parallel}\,\mathcal{G}_{\parallel},\quad\mathcal{G}_{R}=e _{\overline{\parallel}}\,\mathcal{G}_{\overline{\parallel}},\quad\mathbf{F}_{ \mu}=e_{F}\,\mathcal{F}_{\mu}, \tag{104}\] where the coefficients \(e_{\parallel}\), \(e_{\overline{\parallel}}\) and \(e_{F}\) are given by \[e_{\parallel} = \biggl{(}\langle{\bf B};\downarrow\uparrow\downarrow|O[q^{\prime}_ {L}(1)\to q_{L}(1)]|{\bf B}^{\prime};\downarrow\uparrow\downarrow\rangle \tag{104}\] \[+\langle{\bf B};\downarrow\uparrow\downarrow|O[q^{\prime}_{L}(3) \to q_{L}(3)]|{\bf B}^{\prime};\downarrow\uparrow\downarrow\rangle\biggr{)},\] \[e_{\overline{\parallel}} = \langle{\bf B};\uparrow\downarrow\uparrow|O[q^{\prime}_{L}(2) \to q_{L}(2)]|{\bf B}^{\prime};\uparrow\downarrow\uparrow\rangle,\] \[e_{F} = \biggl{(}\langle{\bf B};\downarrow\downarrow\uparrow|O[q^{\prime}_ {R}(1)\to q_{L}(1)]|{\bf B}^{\prime};\uparrow\downarrow\uparrow\rangle\] (105) \[+\langle{\bf B};\uparrow\downarrow\downarrow|O[q^{\prime}_{R}(3) \to q_{L}(3)]|{\bf B}^{\prime};\uparrow\downarrow\uparrow\rangle\biggr{)}.\] Note that \(q^{\prime}\) is the anti-quark in \(\overline{B}_{q^{\prime}}\) meson and \(q\) is the quark in the \(\bar{q}_{L}\gamma_{\mu}b_{L}\) current. Applying \(Q[q^{\prime}_{L}(1)\to q_{L}(1)]\) to \(|{\bf B}^{\prime};\downarrow\uparrow\downarrow\rangle\) changes the parallel spin \(q^{\prime}(1)|\downarrow\rangle\) part of \(|{\bf B}^{\prime};\downarrow\uparrow\downarrow\rangle\) to a parallel spin \(q(1)|\downarrow\rangle\) part, where the flavor is changed from \(q^{\prime}\) to \(q\), and likewise for the operation of \(O[q^{\prime}_{L}(3)\to q_{L}(3)]\) on \(|{\bf B}^{\prime};\downarrow\uparrow\downarrow\rangle\). As the operation involves only the parallel spin component, the coefficient is called \(e_{\parallel}\) and the correspondent form factor is \({\cal G}_{\parallel}\). Likewise \(e_{\overline{\parallel}}\) involves only the anti-parallel spin component, while \(e_{F}\) involves operations that flip the spin of the quark in addition to changing the flavor from \(q^{\prime}\) to \(q\). Note that annihilation diagram is not included in the above analysis, as the flavor flow structure is different, see Fig. 
2, where, as far as the flavor structure is concerned, \(\overline{B}_{q^{\prime}}\) is annihilated by the current and the baryon pair is created from the vacuum. The coefficients \(e_{\parallel}\), \(e_{\overline{\parallel}}\) and \(e_{F}\) for all relevant transitions considered in this work are obtained accordingly and are shown in Tables 14, 15, 16, 17. By comparing the Tables 14, 15, 16, 18 and Tables 15, 16, 18, 19, 20, we obtain the following correspondences between topological amplitudes and \((e_{\parallel},e_{\overline{\parallel}},e_{F})\), \[T_{1\mathcal{B}\overline{\mathcal{B}}} : (e_{\parallel}^{(1)},e_{\overline{\parallel}}^{(1)},e_{F}^{(1)})=(1,2,\frac{1}{2}),\] \[T_{2\mathcal{B}\overline{\mathcal{B}}} : (e_{\parallel}^{(2)},e_{\overline{\parallel}}^{(2)},e_{F}^{(2)})=(4,-1,-\frac{5}{2}),\] \[T_{\mathcal{B}\overline{\mathcal{D}}} : (e_{\parallel}^{\prime},e_{\overline{\parallel}}^{\prime},e_{F}^{\prime})=(1,-1,\frac{1}{2}),\] \[T_{\mathcal{D}\overline{\mathcal{B}}} : (e_{\parallel}^{\prime\prime},e_{\overline{\parallel}}^{\prime\prime},e_{F}^{\prime\prime})=(1,-1,\frac{1}{2}),\] \[T_{\mathcal{D}\overline{\mathcal{D}}} : (e_{\parallel}^{\prime\prime\prime},e_{\overline{\parallel}}^{\prime\prime\prime},e_{F}^{\prime\prime\prime})=(1,\frac{1}{2},\frac{1}{2}), \tag{190}\] and similar relations for \(P_{i\mathcal{B}\overline{\mathcal{B}},\mathcal{B}\overline{\mathcal{D}},\mathcal{D}\overline{\mathcal{D}}}\). In general, the topological amplitudes, \(T_{i\mathcal{B}\overline{\mathcal{B}}}\), \(T_{\mathcal{B}\overline{\mathcal{D}}}\), \(T_{\mathcal{D}\overline{\mathcal{B}}}\) and \(T_{\mathcal{D}\overline{\mathcal{D}}}\), are given in Eqs. (21), (22), (23) and (24). It is useful to show that \(T_{\mathcal{B}\overline{\mathcal{D}}}\), \(T_{\mathcal{D}\overline{\mathcal{B}}}\) and \(T_{\mathcal{D}\overline{\mathcal{D}}}\) have the structure of \(T_{i\mathcal{B}\overline{\mathcal{B}}}\) in the asymptotic limit. Note that the Rarita-Schwinger vector spinor \(u_{\mu}\) can be expressed in terms of Dirac spinors and polarization vectors as follows [38] \[u_{\mu}\bigg{(}\vec{p},\pm\frac{3}{2}\bigg{)} = \epsilon_{\mu}(\vec{p},\pm 1)u\bigg{(}\vec{p},\pm\frac{1}{2}\bigg{)},\] \[u_{\mu}\bigg{(}\vec{p},\pm\frac{1}{2}\bigg{)} = \frac{1}{\sqrt{3}}\epsilon_{\mu}(\vec{p},\pm 1)u\bigg{(}\vec{p},\mp\frac{1}{2}\bigg{)}+\sqrt{\frac{2}{3}}\,\epsilon_{\mu}(\vec{p},0)u\bigg{(}\vec{p},\pm\frac{1}{2}\bigg{)}, \tag{191}\] where \(\epsilon_{\mu}(\vec{p},\lambda)\) is the polarization vector, \[\epsilon_{\mu}(\vec{p},0)=\bigg{(}\frac{|\vec{p}|}{m},\frac{E}{m}\hat{n}\bigg{)}\,,\quad\epsilon_{\mu}(\vec{p},\pm 1)=\big{(}0,\vec{\epsilon}(\vec{0},\pm 1)\big{)}, \tag{192}\] with \(\hat{n}\equiv\vec{p}/|\vec{p}|\) and \(\vec{\epsilon}(\vec{0},\pm 1)\cdot\hat{n}=0\), and, for example, in the case of \(\vec{p}=(0,0,p)\), we have \(\hat{n}=\hat{z}\) and \(\vec{\epsilon}(\vec{0},\pm 1)=\mp(1,\pm i,0)/\sqrt{2}\). Spinors \(v^{\mu}(\vec{p},\lambda)\) have similar relations.
When \(|\vec{p}|\gg m\), \(\epsilon_{\mu}(\vec{p},0)\) dominates over \(\epsilon_{\mu}(\vec{p},\pm 1)\) and, consequently, \(u^{\mu}(\vec{p},\pm 1/2)\) and \(v^{\mu}(\vec{p},\pm 1/2)\) dominate over \(u^{\mu}(\vec{p},\pm 3/2)\) and \(v^{\mu}(\vec{p},\pm 3/2)\), respectively, and they can be approximated as \[u_{\mu}\bigg{(}\vec{p},\pm\frac{1}{2}\bigg{)}\simeq\sqrt{\frac{2}{3}}\,\frac{p _{\mu}}{m}u\bigg{(}\vec{p},\pm\frac{1}{2}\bigg{)},\quad v_{\mu}\bigg{(}\vec{p},\pm\frac{1}{2}\bigg{)}\simeq\sqrt{\frac{2}{3}}\,\frac{p_{\mu}}{m}v\bigg{(} \vec{p},\pm\frac{1}{2}\bigg{)}. \tag{193}\] Using the above relations and Eqs. (22), (23) and (24), in the large momentum limit, we should have \[T_{\cal B\overline{D}} \simeq i\frac{G_{F}}{\sqrt{2}}V_{ub}\bar{l}_{L}\gamma_{\mu}\nu_{L}\ \delta_{|\lambda_{\overline{D}}|,1/2}\] \[\times\bar{u}(p_{\cal D},\lambda_{\cal B})\sqrt{\frac{2}{3}}\, \frac{1}{m_{\overline{D}}}\Big{\{}\Big{[}\big{(}g^{\prime}_{1}p_{\cal B}\cdot p _{\overline{D}}+g^{\prime}_{6}q\cdot p_{\overline{D}}\big{)}\gamma_{\mu}+i(g^ {\prime}_{2}p_{\cal B}\cdot p_{\overline{D}}+g^{\prime}_{7}q\cdot p_{\overline {D}})\sigma_{\mu\rho}q^{\rho}\] \[\qquad+(g^{\prime}_{3}p_{\cal B}\cdot p_{\overline{D}}+g^{\prime} _{8}q\cdot p_{\overline{D}})q_{\mu}+(g^{\prime}_{4}p_{\cal B}\cdot p_{\overline {D}}+g^{\prime}_{9}q\cdot p_{\overline{D}})p_{\cal B}\mu+g^{\prime}_{5}p_{ \overline{D}\mu}\Big{]}\gamma_{5}\] \[\qquad-\Big{[}\big{(}f^{\prime}_{1}p_{\cal B}\cdot p_{\overline{ D}}+f^{\prime}_{6}q\cdot p_{\overline{D}}\big{)}\gamma_{\mu}+i(f^{\prime}_{2}p_{ \cal B}\cdot p_{\overline{D}}+f^{\prime}_{7}q\cdot p_{\overline{D}})\sigma_{ \mu\rho}q^{\rho}\] \[\qquad+(f^{\prime}_{3}p_{\cal B}\cdot p_{\overline{D}}+f^{\prime} _{8}q\cdot p_{\overline{D}})q_{\mu}+(f^{\prime}_{4}p_{\cal B}\cdot p_{\overline {D}}+f^{\prime}_{9}q\cdot p_{\overline{D}})p_{\cal B}\mu+f^{\prime}_{5}p_{ \overline{D}\mu}\Big{]}\Big{\}}v(p_{\overline{D}},\lambda_{\overline{D}}),\] \[T_{\cal D\overline{B}} \simeq i\frac{G_{F}}{\sqrt{2}}V_{ub}\bar{l}_{L}\gamma_{\mu}\nu_{L}\ \delta_{|\lambda_{\mathcal{D}}|,1/2}\] \[\times\bar{u}(p_{\cal D},\lambda_{\cal D})\sqrt{\frac{2}{3}}\, \frac{1}{m_{\cal D}}\Big{\{}\Big{[}(\bar{g}^{\prime\prime}_{1}p_{\overline{ \cal B}}\cdot p_{\cal D}+\bar{g}^{\prime\prime}_{6}q\cdot p_{\cal D})\gamma_{ \mu}+i(\bar{g}^{\prime\prime}_{2}p_{\overline{\cal B}}\cdot p_{\cal D}+\bar{g} ^{\prime\prime}_{7}q\cdot p_{\cal D})\sigma_{\mu\rho}q^{\rho}\] \[\qquad+(g^{\prime\prime}_{3}p_{\overline{\cal B}}\cdot p_{\cal D }+g^{\prime\prime}_{8}q\cdot p_{\cal D})q_{\mu}+g^{\prime\prime}_{5}p_{{\cal D }\mu}+(g^{\prime\prime}_{4}p_{\overline{\cal B}}\cdot p_{\cal D}+g^{\prime \prime}_{9}q\cdot p_{\cal D})p_{\overline{\cal B}\mu}\Big{]}\gamma_{5}\] \[\qquad-\Big{[}(f^{\prime\prime}_{1}p_{\overline{\cal B}}\cdot p_ {\cal D}+f^{\prime\prime}_{6}q\cdot p_{\cal D})\gamma_{\mu}+i(f^{\prime\prime }_{2}p_{\overline{\cal B}}\cdot p_{\cal D}+f^{\prime\prime}_{7}q\cdot p_{\cal D })\sigma_{\mu\rho}q^{\rho}\] \[\qquad+(f^{\prime\prime}_{3}p_{\overline{\cal B}}\cdot p_{\cal D }+f^{\prime\prime}_{8}q\cdot p_{\cal D})q_{\mu}+f^{\prime\prime}_{5}p_{{\cal D }\mu}+(f^{\prime\prime}_{4}p_{\overline{\cal B}}\cdot p_{\cal D}+f^{\prime \prime}_{9}q\cdot p_{\cal D})p_{\overline{\cal B}\mu}\Big{]}\Big{\}}v(p_{ \overline{\cal B}},\lambda_{\overline{\cal B}}),\] and \[T_{\cal D\overline{D}} \simeq i\frac{G_{F}}{\sqrt{2}}V_{ub}\bar{l}_{L}\gamma_{\mu}\nu_{L} \tag{13}\] \[\times\bar{u}_{\nu}(p_{\cal D},\lambda_{\cal D})\frac{2}{3}\frac{ p_{\cal D}\cdot p_{\overline{\cal 
B}}}{m_{\cal D}m_{\overline{\cal B}}}\Big{\{} \Big{[}g^{\prime\prime\prime}_{1}\gamma_{\mu}+ig^{\prime\prime\prime}_{2} \sigma_{\mu\rho}q^{\rho}+g^{\prime\prime\prime}_{3}q_{\mu}+g^{\prime\prime \prime}_{4}(p_{\cal D}+p_{\overline{\cal D}^{\prime}})_{\mu}\] \[+g^{\prime\prime\prime}_{5}(p_{\cal D}-p_{\overline{\cal D}^{ \prime}})_{\mu}]\gamma_{5}-[f^{\prime\prime\prime}_{1}\gamma_{\mu}+if^{\prime \prime\prime}_{2}\sigma_{\mu\nu}q^{\nu}+f^{\prime\prime\prime}_{3}q_{\mu}+f^{ \prime\prime\prime}_{4}(p_{\cal D}+p_{\overline{\cal D}^{\prime}})_{\mu}\] \[+f^{\prime\prime\prime}_{5}(p_{\cal D}-p_{\overline{\cal D}^{ \prime}})_{\mu}]\Big{\}}\Big{\}}v^{\nu}(p_{\overline{\cal D}},\lambda_{ \overline{\cal D}}).\] Comparing the above equations and Eq. (21), we see that \(T_{\cal B\overline{D}}\), \(T_{\cal D\overline{B}}\) and \(T_{\cal D\overline{D}}\) indeed have the structure of \(T_{i\,\cal B\overline{B}}\) in the asymptotic limit. Their asymptotic form can be obtained by using Eqs. (10), (11) and the above equations. Appendix B Formulas of decay rates for \(\overline{B}_{q}\to{\bf B\overline{B}^{\prime}}l\bar{\nu}\) and \(\overline{B}_{q}\to{\bf B\overline{B}^{\prime}}\nu\bar{\nu}\) decays The \(\overline{B}_{q}\to{\bf B\overline{B}^{\prime}}l\bar{\nu}\) and \(\overline{B}_{q}\to{\bf B\overline{B}^{\prime}}\nu\bar{\nu}\) decays involve 4-body decays. The decay rate of a 4-body decay is given by [7; 30; 39] \[d\Gamma=\frac{|M|^{2}}{4(4\pi)^{6}m_{B_{q}}^{3}}\beta Xds\,dt\,d\cos\theta_{\bf B}\,d\cos\theta_{\bf L}\,d\phi, \tag{14}\] where \(s\) is the invariant mass squared of the lepton pair, \(t\) is the invariant mass squared of the baryon pair, \(\theta_{\bf B(L)}\) is the angle of the baryon \({\bf B}\) (the lepton \(l\) [or \(\nu\)]) in the baryon pair (lepton pair) rest frame with respect to the opposite direction of the lepton pair (baryon pair) total 3-momentum direction, \(\phi\) is the angle between the baryon and the lepton planes, and \[X=\left(\frac{(m_{B_{q}}^{2}-s-t)^{2}}{4}-st\right)^{1/2},\quad \beta=\frac{1}{t}\left[t^{2}-2t(m_{\bf B}^{2}+m_{\overline{\bf B}}^{2})+(m_{ \bf B}^{2}-m_{\overline{\bf B}}^{2})^{2}\right]^{1/2}. \tag{100}\] The ranges of \(s\), \(t\), \(\theta_{\bf B}\), \(\theta_{\bf L}\) and \(\phi\) are \[0\leq s\leq(m_{B_{q}}-t^{1/2})^{2},\quad(m_{\bf B}+m_{\overline{ \bf B}})^{2}\leq t\leq m_{B_{q}}^{2},\quad 0\leq\theta_{\bf B,L}\leq\pi, \quad 0\leq\phi\leq 2\pi, \tag{101}\] where the masses of leptons are neglected. The amplitude squared \(|M|^{2}\) can be obtained by using \(T_{i\,\mathcal{B}\overline{\mathcal{B}}}\), \(T_{\mathcal{B}\overline{\mathcal{D}}}\), \(T_{\mathcal{D}\overline{\mathcal{B}}}\) and \(T_{\mathcal{D}\overline{\mathcal{D}}}\) as shown in Eqs. (21), (22), (23) and (24) with the help of FeynCalc [40; 41; 42]. The scalar products of momenta and the contracted Levi-Civita symbol need to be expressed in terms of \(s\), \(t\), \(\theta_{\bf B}\), \(\theta_{\bf L}\) and \(\phi\) before the integration \(\int d\Gamma\) can be carried out. The expressions have been worked out in ref. [39]. 
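As an illustration of how the 4-body rate formula above can be evaluated numerically, the following sketch (our own addition, not code from the paper) performs a flat Monte Carlo integration over \(s\), \(t\), \(\cos\theta_{\bf B}\), \(\cos\theta_{\bf L}\) and \(\phi\) with the limits quoted above. The amplitude squared is set to the placeholder \(|M|^{2}=1\); in an actual calculation it is built from the topological amplitudes and also depends on the angles, in which case the angles must be sampled as well.

```python
import numpy as np

def four_body_rate(m_Bq, m_B, m_Bbar, M2=lambda s, t: 1.0, n=2_000_000, seed=1):
    """Flat Monte Carlo estimate of
    Gamma = int |M|^2 / (4 (4 pi)^6 m_Bq^3) * beta * X  ds dt dcos(th_B) dcos(th_L) dphi,
    neglecting lepton masses and assuming |M|^2 depends on s and t only."""
    rng = np.random.default_rng(seed)
    t_lo, t_hi = (m_B + m_Bbar) ** 2, m_Bq ** 2
    t = rng.uniform(t_lo, t_hi, n)                 # baryon-pair invariant mass squared
    s_hi = (m_Bq - np.sqrt(t)) ** 2                # t-dependent upper end of the s range
    s = rng.uniform(0.0, 1.0, n) * s_hi            # lepton-pair invariant mass squared
    X = np.sqrt(np.maximum((m_Bq**2 - s - t) ** 2 / 4.0 - s * t, 0.0))
    beta = np.sqrt(np.maximum(t**2 - 2.0 * t * (m_B**2 + m_Bbar**2)
                              + (m_B**2 - m_Bbar**2) ** 2, 0.0)) / t
    integrand = M2(s, t) * beta * X / (4.0 * (4.0 * np.pi) ** 6 * m_Bq**3)
    angular = 2.0 * 2.0 * 2.0 * np.pi              # int dcos(th_B) dcos(th_L) dphi
    return angular * (t_hi - t_lo) * np.mean(s_hi * integrand)

# Pure phase-space volume (|M|^2 = 1) for a B -> p pbar l nu type final state (GeV units)
print(four_body_rate(5.279, 0.938, 0.938))
```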
Defining \[P\equiv p_{\bf B}+p_{\overline{\bf B}},\quad Q\equiv p_{\bf B}-p_{\overline{\bf B}},\quad L\equiv p_{l(\nu)}+p_{\bar{\nu}},\quad N\equiv p_{l(\nu)}-p_{\bar{\nu}}, \tag{102}\] one has [39] \[P^{2} = t,\quad P\cdot Q=m_{\bf B}^{2}-m_{\overline{\bf B}}^{2},\quad Q^{2}=2(m_{\bf B}^{2}+m_{\overline{\bf B}}^{2})-t,\] \[L^{2} = s,\quad L\cdot N=0,\quad N^{2}=-s, \tag{103}\] \[L\cdot P = \frac{1}{2}(m_{B_{q}}^{2}-t-s),\quad L\cdot Q=\beta X\cos\theta_{\bf B}+\frac{m_{\bf B}^{2}-m_{\overline{\bf B}}^{2}}{t}L\cdot P,\quad N\cdot P=X\cos\theta_{\bf L},\] \[N\cdot Q = \frac{m_{\bf B}^{2}-m_{\overline{\bf B}}^{2}}{t}X\cos\theta_{\bf L}+\beta(L\cdot P)\cos\theta_{\bf B}\cos\theta_{\bf L}-\sqrt{st}\beta\sin\theta_{\bf B}\sin\theta_{\bf L}\cos\phi, \tag{104}\] \[p_{B}\cdot P = \frac{1}{2}(m_{B_{q}}^{2}-s+t),\quad p_{B}\cdot Q=\frac{(m_{\bf B}^{2}-m_{\overline{\bf B}}^{2})(m_{B_{q}}^{2}-s+t)}{2t}+\beta X\cos\theta_{\bf B},\] \[p_{B}\cdot L = \frac{1}{2}(m_{B_{q}}^{2}+s-t),\quad p_{B}\cdot N=X\cos\theta_{\bf L}, \tag{105}\] and \[\epsilon_{\mu\nu\rho\sigma}N^{\mu}P^{\nu}p_{B}^{\rho}Q^{\sigma} = \sqrt{st}\beta X\sin\theta_{\bf B}\sin\theta_{\bf L}\sin\phi, \tag{106}\] with \(\epsilon_{0123}=-1\). In \(\overline{B}_{q}\to\mathcal{B}\overline{\mathcal{D}}l\bar{\nu}(\nu\bar{\nu})\), \(\overline{B}_{q}\to\mathcal{D}\overline{\mathcal{B}}l\bar{\nu}(\nu\bar{\nu})\) and \(\overline{B}_{q}\to\mathcal{D}\overline{\mathcal{D}}^{\prime}l\bar{\nu}(\nu\bar{\nu})\) decays, the calculation of \(|M|^{2}\) involves polarization sums of Rarita-Schwinger vector spinors. The following formulas for polarization sums [see, for example, Eq. (4.31) of ref. [38]] are needed, \[\sum_{\lambda=-3/2}^{3/2}u_{\mu}(p,\lambda)\bar{u}_{\nu}(p,\lambda) = -(\not{p}+m)\bigg{(}G_{\mu\nu}-\frac{1}{3}G_{\mu\sigma}G_{\nu\lambda}\gamma^{\sigma}\gamma^{\lambda}\bigg{)},\] \[\sum_{\lambda=-3/2}^{3/2}v_{\mu}(p,\lambda)\bar{v}_{\nu}(p,\lambda) = -(\not{p}-m)\bigg{(}G_{\mu\nu}-\frac{1}{3}G_{\mu\sigma}G_{\nu\lambda}\gamma^{\sigma}\gamma^{\lambda}\bigg{)}, \tag{101}\] where \(G_{\mu\nu}\) is defined as \[G_{\mu\nu}\equiv g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{m^{2}}. \tag{102}\] Note that in the above formulas the signs of \(m\) differ from those in ref. [38]. It is useful to check that in the large momentum limit, we have \[\sum_{\lambda=-3/2}^{3/2}u_{\mu}(p,\lambda)\bar{u}_{\nu}(p,\lambda) \simeq (\not{p}+m)\frac{2}{3m^{2}}p_{\mu}p_{\nu}=\frac{2}{3m^{2}}p_{\mu}p_{\nu}\sum_{\lambda=-1/2}^{1/2}u(p,\lambda)\bar{u}(p,\lambda),\] \[\sum_{\lambda=-3/2}^{3/2}v_{\mu}(p,\lambda)\bar{v}_{\nu}(p,\lambda) \simeq (\not{p}-m)\frac{2}{3m^{2}}p_{\mu}p_{\nu}=\frac{2}{3m^{2}}p_{\mu}p_{\nu}\sum_{\lambda=-1/2}^{1/2}v(p,\lambda)\bar{v}(p,\lambda), \tag{103}\] which agree with Eq. (100). Note that in our calculations involving Rarita-Schwinger vector spinors, only the exact polarization formulas in Eq. (101) are used. With these the \(\overline{B}_{q}\to\mathbf{B}\overline{\mathbf{B}}^{\prime}l\bar{\nu}\) and \(\overline{B}_{q}\to\mathbf{B}\overline{\mathbf{B}}^{\prime}\nu\bar{\nu}\) decay rates can be readily obtained once the topological amplitudes are given.
2310.13121
Understanding Addition in Transformers
Understanding the inner workings of machine learning models like Transformers is vital for their safe and ethical use. This paper provides a comprehensive analysis of a one-layer Transformer model trained to perform n-digit integer addition. Our findings suggest that the model dissects the task into parallel streams dedicated to individual digits, employing varied algorithms tailored to different positions within the digits. Furthermore, we identify a rare scenario characterized by high loss, which we explain. By thoroughly elucidating the model's algorithm, we provide new insights into its functioning. These findings are validated through rigorous testing and mathematical modeling, thereby contributing to the broader fields of model understanding and interpretability. Our approach opens the door for analyzing more complex tasks and multi-layer Transformer models.
Philip Quirke, Fazl Barez
2023-10-19T19:34:42Z
http://arxiv.org/abs/2310.13121v9
# Understanding Addition in Transformers ###### Abstract Understanding the inner workings of machine learning models like Transformers is vital for their safe and ethical use. This paper presents an in-depth analysis of a one-layer Transformer model trained for n-digit integer addition. We reveal that the model divides the task into parallel, digit-specific streams and employs distinct algorithms for different digit positions. Our study also finds that the model starts calculations late but executes them rapidly. A rare use case with high loss is identified and explained. Overall the model's algorithm is explained in detail. These findings are validated through rigorous testing and mathematical modeling, contributing to the broader works in Mechanistic Interpretability, AI safety, and alignment. Our approach opens the door for analyzing more complex tasks and multi-layer Transformer models. ## 1 Introduction Understanding the underlying mechanisms of machine learning models is essential for ensuring their safety and reliability (Barez et al., 2023; Olah et al., 2020). Specifically, the sub-field of mechanistic interpretability within machine learning interpretability aims to dissect the behavior of individual neurons and their interconnections in neural networks (Rauker et al., 2022). This pursuit is part of a larger endeavor to make the decision-making processes of complex machine learning models transparent and understandable. Although models like Transformers have shown remarkable performance on a myriad of tasks, their complexity makes them challenging to interpret. Their multi-layered architecture and numerous parameters make it difficult to comprehend how they derive specific outputs (Vig, 2019). Further, while simple arithmetic tasks like integer addition may be trivial for humans, understanding how a machine learning model like a Transformer performs such an operation is far from straightforward. In this work, we offer an in-depth analysis of a one-layer Transformer model performing n-digit integer addition. We show that the model separates the addition task into independent digit-specific streams of work, which are computed in parallel. Different algorithms are employed for predicting the first, middle, and last digits of the answer. The model's behavior is influenced by the compact nature of the task and the specific format in which the question is presented. Despite having the opportunity to begin calculations early, the model actually starts later. The calculations are performed in a time-dense manner, enabling the model to add two 5-digit numbers to produce a 6-digit answer in just 6 steps (See Fig. 1). A rare use case with high loss was predicted by analysis and proved to exist via experimentation. Our findings shed light on understanding and interpreting transformers. These insights may also have implications for AI safety and alignment. As evidenced in appendices 11 and 12, our results demonstrate the model's unique approach applies to integer addition across various digit lengths. The theoretical framework we develop provides a mathematical justification for the model's behavior, substantiating our empirical observations and offering a foundation for future work. Our main **contributions** are: * Reformulation of the traditional mathematical rules of addition into a framework more applicable to Transformers. 
* Detailed explanation of the model's (low loss) implementation of the addition algorithm, including the problem and model constraints that informed the algorithm design. * Identification of a rare use case where the model is not safe to use (has high loss), and explanation of the root cause. * Demonstration of a successful approach to elucidating a model algorithm via rigorous analysis from first principles, detailed investigation of model training and prediction behaviours, with targeted experimentation, leading to deep understanding of the model. Below, we provide an overview of related work (§3), discuss our methodology (§4), describe our mathematical framework (§5), our analysis of model training (§6) and model predictions (§7). We conclude with a summary of our findings and directions for future research (§8). ## 2 Background We focus on a single-layer transformer model with a vocabulary of size \(V\) containing the symbols “0” to “9”, “+” and “=”. The model converts the human readable input (e.g. “12345+67890=”) into an input sequence \((\mathtt{x}_{1},\ldots,\mathtt{x}_{p})\) where each \(\mathtt{x}_{i}\in\{1,\ldots,V\}\). Tokens are mapped to \(\mathtt{d}_{\mathtt{e}}\) dimensional embeddings by selecting the \(\mathtt{x}_{i}\)-th column of \(E\in\mathbb{R}^{d_{\mathtt{e}}\times V}\). The model processes the input tokens using a mechanism called “self-attention”. Each input token is passed through a self-attention mechanism that calculates weighted relationships between all input tokens - capturing the importance of each token relative to others. The model then aggregates these weighted representations to produce contextually enriched representations for each token. These enriched representations are subsequently fed through feedforward neural networks (i.e. an MLP) to refine their information. Finally, the output tokens are generated based on the refined representations, and converted back to human readable format using the vocabulary (e.g. “12345+67890=80235”). Figure 1: Illustration of the transformer model’s attention pattern when adding two 5-digit integers. The model attends to digit pairs sequentially from left to right, resulting in a “double staircase” pattern across rows. **A:** The 5 digit question is revealed token by token. The “10s of thousands” digit is revealed first. **B:** From the “=” token, the model attention heads focus on successive pairs of digits, giving a “double staircase” attention pattern. **C:** The 3 heads are time-offset from each other by 1 token such that, in each row, data from 3 tokens is available. **D:** To calculate A3, the 3 heads do independent simple calculations on D3, D2 and D1. The results are combined by the MLP layer using trigrams. A3 is calculated one token before it is needed. This approach applies to all answer digits, with the first and last digits using slight variations of the approach. ## 3 Related Work Interpreting and reverse engineering neural networks and transformers to find meaningful circuits has been an area of active research. Olah et al. (2020) argued that by studying the connections between neurons and their weights, we can find meaningful algorithms (aka Circuits) in a “vision” neural network. Elhage et al. (2021) extended this approach to transformers, conceptualizing their operation in a mathematical framework that allows significant understanding of how transformers operate internally. Tools (such as explained in Foote et al. (2023), Conny et al.
(2023)) use this framework to semi-automate some aspects of reverse engineering. Nanda et al. (2023) reverse-engineered modular addition (e.g. 5 + 7 mod 10 = 2) showing the model used discrete Fourier transforms and trigonometric identities to convert modular addition to rotation about a circle. Further, Nanda and Lieberum (2022) have argued models comprise multiple circuits. It gave examples, including the distinct training loss curve per answer digit in _5-digit_ integer addition, but did not identify the underlying circuits. This work investigates and explains the circuits in **n-digit** integer addition. From a circuit analysis lens, studies like Bau et al. (2017) extract graphical circuit representations and analyze component interactions. To enable analysis and interpretability, techniques in works like Petersen et al. (2021) symbolically reverse-engineer networks by recovering computational expressions. Research including Seth (2005) advocate analyzing networks causally, introducing structural causal models to infer mechanisms. Examinations of sequence models like Petersen et al. (2021) have analyzed the emergence and interaction of modular components during training. Evolutionary perspectives such as Miikkulainen (2021) elucidate how selection pressures shape hierarchical representations. Information bottleneck analyses including Kawaguchi et al. (2023) relate bottlenecks to abstractions and modularization arising from forced compression. Surveys like Carbonneau et al. (2022) overview techniques to disentangle explanatory factors into separate latent dimensions. Novel objectives proposed in works like Conny et al. (2023) improve interpretability by encouraging modularity and disentanglement. ## 4 Methodology The integer addition problem space is very dense. For 5 digit addition, there are 10 billion distinct questions (e.g. "54321+77779="). The model must predict all 6 answer digits correctly to get the one right answer out of 200,000 possibilities. Changing a single digit in the question changes 1 to 6 digits in the answer. The full question is only revealed one token (the "=") before the model must predict the first answer digit. Figure 2: Training loss curves per digit position for a 5-digit integer addition task, showing the model trains each digit semi-independently. Our model was trained on 1.8 million out of 10 billion questions. After training, the model predicts answers to questions with low loss, showing the model does not rely on memorisation of training data. Fig. 2 shows the model trains each digit semi-independently suggesting the model performs integer addition by breaking down the task into parallel digit-specific streams of computation. The traditional human addition process first sums the units before moving on to higher value digits. This is the simplest process but relies on being able to choose the order to process the digits in. This autoregressive transformer model processes text from left to right. So the model sees the higher value digits (e.g. thousands) of the question before the lower value digits (e.g. units). It doesn't use the traditional process. A key component of addition is the need to sum each digit in the first number with the corresponding digit in the second number. Transformer models contain "attention heads" and they are the only computational sub-component of a model that can move information _between_ positions (aka digits or tokens). 
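For reference, the sketch below shows the question format and token layout assumed throughout; the character-level vocabulary matches the description above, but the particular token-id assignment is our illustrative choice rather than something read off the trained model.

```python
# Character-level vocabulary of size 12: the ten digits plus "+" and "=".
VOCAB = {ch: i for i, ch in enumerate("0123456789+=")}  # id assignment is illustrative

def tokenise(text: str) -> list[int]:
    return [VOCAB[ch] for ch in text]

question = "54321+77779="   # revealed one token per row; "=" sits at token index 11
answer = "132100"           # the 6 answer digits follow, giving 18 tokens in total
tokens = tokenise(question + answer)
assert len(tokens) == 18
```

With this layout, row 11 (the "=" token) is the first position at which the whole question is visible, which is where the attention analysis below begins.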
Visualising which token(s) each attention head focussed on in each row of the calculation provided insights. While our model works with 2, 3 or 4 attention heads, 3 attention heads gives the most easily interpreted attention patterns. Fig. 3 shows the attention pattern for a single 5 digit addition calculation using 3 attention heads. Appendix 12 shows the same pattern for 10 and 15 digit addition, as well as the pattern with 2 or 4 attention heads. While it is clear that the model uses the attention heads to calculate answer digits from highest value to lowest value, it is not clear what calculation each attention head is doing, or how the attention heads are composed together to perform addition. ## 5 Mathematical Framework To help investigate, we created a mathematical framework describing what **any** algorithm must do if it is to perform addition correctly. Our intuition is that the model a) incrementally discovers a necessary and sufficient set of addition sub-tasks (minimising complexity), b) discovers these sub-tasks semi-independently (maximising parallelism), and c) treats each digit semi-independently (more parallelism). Our framework reflects this. To explain the framework, let \(x\) and \(y\) be two \(n\)-digit integers that need to be added, represented as vectors where \(x=(x_{0},x_{1},\ldots,x_{n-1})\) and \(y=(y_{0},y_{1},\ldots,y_{n-1})\). Figure 3: The attention pattern, for a model with 3 attention heads, performing a single 5 digit addition. The pattern is 18 by 18 squares (as 54321+77779=132100 is 18 tokens). Time proceeds vertically downwards, with one additional token being revealed horizontally at each row, giving the overall triangle shape. After the question is fully revealed (at row 11), each head starts attending to pairs of question digits from left to right (i.e. high-value digits before lower-value digits) giving the “double staircase” shape. The three heads attend to a given digit pair in three different rows, giving a time ordering of heads. We assert that the framework utilizes three base functions that operate on individual digit pairs. The first is _Base Add_ (aka _BA_ ), which calculates the sum of two digits \(x_{i}\) and \(y_{i}\) modulo 10, ignoring any carry over from previous columns. The second is _Make Carry 1_ (aka _MC1_ ), which evaluates if adding digits \(x_{i}\) and \(y_{i}\) results in a carry over of 1 to the next column. The third is _Make Sum 9_ (aka _MS9_ ), which evaluates if \(x_{i}+y_{i}=9\) exactly. In addition, the framework uses two compound functions that chain operations across digits. The first is _Use Carry 1_ (aka _UC1_ ), which takes the previous column's carry output and adds it to the sum of the current digit pair. The second is _Use Sum 9_ (aka _US9_ ), which propagates (aka cascades) a carry over of 1 to the next column if the current column sums to 9 and the previous column generated a carry over. _US9_ is the most complex task as it spans three digits. For some rare questions (e.g. 00555 + 00445 = 01000) _US9_ applies to up to four sequential digits, causing a chain effect, with the _MC1_ cascading through multiple digits. This cascade requires a time ordering of the _US9_ calculations from lower to higher digits. These tasks occur in the training data with different, predictable frequencies (e.g. _BA_ is common, _US9_ is rarer). Compound tasks are reliant on the base tasks and so are discovered later in training.
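To keep these definitions concrete, the sketch below implements the five tasks directly on digit pairs. This is our own reference code for checking examples such as the cascading case above; it is not the model's computation.

```python
# Digits are indexed with 0 as the units column, matching x = (x_0, ..., x_{n-1}).
def base_add(xi, yi):            # BA: digit sum modulo 10, ignoring any incoming carry
    return (xi + yi) % 10

def make_carry_1(xi, yi):        # MC1: does this column generate a carry of 1?
    return int(xi + yi >= 10)

def make_sum_9(xi, yi):          # MS9: does this column sum to exactly 9?
    return int(xi + yi == 9)

def use_carry_1(xi, yi, carry):  # UC1: add the carry received from the column below
    return (xi + yi + carry) % 10

def reference_add(x, y):
    """Reference n-digit addition assembled from the tasks (used only to check examples).
    A column passes a carry upwards if it generates one (MC1), or if it sums to 9 (MS9)
    and receives one from below (the US9 case).  BA is UC1 with no incoming carry."""
    answer, carry = [], 0
    for xi, yi in zip(x, y):
        answer.append(use_carry_1(xi, yi, carry))
        carry = int(make_carry_1(xi, yi) or (make_sum_9(xi, yi) and carry))
    return answer + [carry]      # the final entry is the leading answer digit
```

For example, in 00555 + 00445 = 01000 the carry generated in the units column cascades through the two columns above it that sum to 9, which is exactly the rare behaviour described above.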
The discovery of each task reduces the model loss by a different, predictable amount (e.g. _BA_ by 50%, _US9_ by 5%). Combining these facts gives an expected order of task discovery during training. We use this mathematical framework solely for analysis to gain insights. The model training and all loss calculations are completely independent of this mathematical framework. ## 6 Training Analysis Fig. 2 shows the model trains each digit semi-independently. Armed with the mathematical framework, we investigated each digit separately. The Digit 0 calculation is the least interesting as it only uses _BA_ (not _UC1_ or _US9_ ). Once discovered, Digit 0 always quickly refines to have the lowest loss and least noise (as expected). (Graphs in Appendix 11.) For the other digits, we categorised the training data into 3 non-overlapping subsets aligned to the _BA_, _UC1_ and _US9_ tasks, and graphed various combinations, finding interesting results. Figure 4: The mathematical framework (our method) predicts that during training, tasks are learnt for each digit independently, progressively increasing per digit accuracy (i.e. decreasing loss) shown as percentages. Mathematical rules cause dependencies between digits, giving a predicted ordering for perfect (i.e. zero loss) addition. The chain of blue squares relates to questions like 99999 + 00001 = 100000 where the _MC1_ in digit 0 causes _US9_ cascades through multiple other digits. The _US9_ graphs are much noisier than other graphs (Fig. 5). We found that the model has low loss on simple _US9_ cases (e.g. \(45+55=100\)) but has high loss on _US9_ cascades (e.g. \(445+555=1000\)) where the _MC1_ must be propagated "right to left" through 2, 3 or 4 columns. The model can't perform these rare use cases safely, as it has a "left to right" algorithm. Graphing the _BA_ and _UC1_ use cases side by side for any one of the Digits 1, 2 and 3 shows an interesting pattern (Fig. 6). In Phase 1, both tasks have the same (high) loss. In Phase 2, both curves drop quickly but the _BA_ curve drops faster than the _UC1_ curve. This "time lag" matches our expectation that the _BA_ task must be accurate before the _UC1_ task can be accurate. In Phase 3, both tasks' loss curves decrease slowly over time. Both the _BA_ and _UC1_ tasks need to move data between tokens, and so will be implemented in attention head(s). Fig. 6 shows they are trained semi-independently. We chose the number of attention heads for our model to give the clearest separation of tasks in the attention pattern. We find (later) that our model has separate attention heads for the _BA_ and _UC1_ tasks. Digit 4, the highest question digit, has a significantly different loss curve (shown in Fig. 7) from those of Digits 1, 2 and 3. This is partially explained by Digit 4 only having simple _US9_ cases (i.e. no _US9_ cascades). This does not explain the _BA_ or _UC1_ differences. This difference persists with different seed values, and with 10 or 15 digit addition. We explain this difference later. Figure 5: High variability in the per digit training loss for _US9_ cases caused by the model’s inability to reliably do cascading _US9_ cases such as 445+555=1000. Figure 6: Training loss for digit 3 showing that, in Phase 2, the refining of _Use Carry 1_ lags behind _Base Add_. _Base Add_ and _Use Carry 1_ are refined separately and have separate calculation algorithms. The 3 phases seem to correspond to “memorisation”, “algorithm discovery” and “clean-up”.
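For reference, the sketch below shows one plausible way to construct the non-overlapping _BA_ / _UC1_ / _US9_ subsets used for the per-task loss curves above; the exact criterion behind the figures is not spelled out in the text, so the rule below is our assumption.

```python
def task_for_answer_digit(x, y, i):
    """Label answer digit i (i >= 1, digits indexed from the units column) of x + y by
    the hardest task it exercises: US9 if the column directly below sums to exactly 9,
    UC1 if that column generates a carry, and BA otherwise.  (Assumed split criterion.)"""
    below = x[i - 1] + y[i - 1]
    if below >= 10:
        return "UC1"
    if below == 9:
        return "US9"
    return "BA"
```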
## 7 Prediction Analysis During model prediction, we overrode (mean ablated) the model memory (residual stream) at each row, and confirmed that the addition algorithm does **not** use any data generated in rows 0 to 10 inclusive. In these rows the model has **not** yet seen the full question and every digit in the question is independent of every other digit, making accurate answer prediction infeasible. The model also does not use the last (17th) row. Therefore, the addition is started and completed in 6 rows (11 to 16). Further (ablation) experiments confirmed that the A0 to A4 answers are calculated one row **before** being revealed. (Details in Appendix 17.) The model has slightly different algorithms for the first digit pairs, the middle digit pairs and the last digit pairs. Fig. 1 has a simplified version of how the model calculates the middle digit pair A3. Fig. 8 has more details. For 5 digit addition, there are 2 middle digit pairs (A3 and A2) whereas for 15 digit addition there are 12 middle digit pairs. The A3 addition algorithm has three clauses related to digits 3, 2 and 1. Ablating each head in turn shows that the 3rd head has most impact on loss, the 2nd head has less impact, and the 1st head has little impact. This aligns with the intuition that the sum "D3 + D3'" matters most, the _MC1_ from the previous digit (D2 + D2') matters less, and the rare _MC1_ from the previous previous digit (D1 + D1') matters least. Figure 8: **A:** For A3, the addition algorithm must combine information from digits 3, 2 and 1. **B:** The 1st head calculates _MC1_ on digit 1. **C:** The 2nd head calculates _MC1_ and _MS9_ (at most one of which can be true) on digit 2. **D:** The 3rd head calculates _Base Add_ on digit 3. **E:** The MLP layer uses trigrams to combine the information from the 3 heads to give the final answer A3, one row before it is output. Appendix 16 shows this algorithm as pseudocode. Figure 7: Training loss for digit 4 starts and stays lower for all tasks than it does for digits 1, 2 and 3. Digit 4 has a different calculation algorithm from digits 1, 2 and 3. The last two digits, A1 and A0, use a simplified version of the A3 algorithm, with some of the three clauses not needed. The A3 algorithm could also successfully be applied to A4. But the Digit 4 training curve is better (faster) than that of the middle digits. The attention patterns show that for A4, the model is using all the heads in row 11 (the "=" token) when the A3 algorithm doesn't require this. Uniquely, A4 utilises more "compute" than is available to A3, A2, A1 or A0. We assume the model uses this advantage to implement a faster-training and lower-loss algorithm for A5 and A4. We haven't worked out the details of this. Mean ablating the 1st or 2nd head slightly increased the average loss for _BA_ questions from 0.05 to 0.08, whereas ablating the 3rd head substantially increased the loss to 3.7, confirming that the 3rd head is doing the _BA_ task. (Details in Appendix 17.) The MLP can be thought of as a "key-value pair" memory (Meng et al., 2022; Geva et al., 2021) that can hold many bigrams and trigrams. We claim our MLP pulls together the two-state 1st head result, the tri-state 2nd head result and the ten-state 3rd head result, treating them as a trigram with 60 (2 x 3 x 10) possible keys. For each digit, the MLP has memorised the mapping of these 60 keys to the correct answer digits (0 to 9). We haven't proven this experimentally.
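To make the claimed lookup concrete, the following sketch spells out the 60-key trigram semantics for a middle digit such as A3. It restates the algorithm of Fig. 8 in code and is our illustration of the hypothesis, not weights read out of the MLP.

```python
def middle_digit_from_heads(d3, d3p, d2, d2p, d1, d1p):
    """Compute answer digit A3 from the three head outputs, as claimed in Fig. 8."""
    head1 = int(d1 + d1p >= 10)          # 2 states: MC1 on digit 1
    if d2 + d2p >= 10:
        head2 = "MC1"                    # 3 states: MC1 / MS9 / neither on digit 2
    elif d2 + d2p == 9:
        head2 = "MS9"
    else:
        head2 = "neither"
    head3 = (d3 + d3p) % 10              # 10 states: Base Add on digit 3
    # The MLP is claimed to act as a lookup over the 2 x 3 x 10 = 60 possible keys;
    # the value stored for each key is the digit computed below.
    carry_into_digit_3 = int(head2 == "MC1" or (head2 == "MS9" and head1 == 1))
    return (head3 + carry_into_digit_3) % 10
```

Because the key only reaches down to digit 1, a carry that originates below a run of columns summing to 9 is invisible to it, which is exactly the cascading _US9_ failure described earlier.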
Our MLP is sufficiently large to store this many mappings with zero interference between mappings (Elhage et al., 2022). Although it would be feasible to do so, the model does **not** calculate the _MC1_ task in rows 7 to 11. Instead it completes each digit calculation in 1 row, possibly because there are training optimisation benefits in generating a "compact" algorithm. This algorithm explains all the observed prediction behaviour, including the fact that the model can calculate a simple _US9_ case but not a cascading _US9_ case. We assume that, given the dense nature of the question and answer, and the small model size, the model does not have sufficient time and compute resources to implement both _UC1_ and _US9_ accurately, and so prioritises implementing the more common (_UC1_ ) case, and only partially implements the more complex and rare (_US9_ ) case. ## 8 Conclusions and Future Work This work demonstrates a successful approach to reverse engineering and elucidating the emergent algorithm within a transformer model trained on integer addition. By combining mathematical analysis, empirical investigation of training and prediction, and targeted experimentation, we are able to explain how the model divides the task into parallel digit-specific streams, employs distinct subroutines for different digit positions, postpones calculations until the last possible moment yet executes them rapidly, and struggles with a specific rare case. Our theoretical framework of necessary addition subtasks provides a foundation for the model's behavior. The digit-wise training loss curves reveal independent refinement consistent with separate digit-specific circuits. Attention patterns illustrate staging and time-ordering of operations. Controlled ablation experiments validate our hypothesis about algorithmic elements' roles. Together these methods enable a detailed accounting of the model's addition procedure. This methodology for mechanistic interpretability, when applied to broader tasks and larger models, can offer insights into not just what computations occur inside complex neural networks, but how and why those computations arise. Such elucidation will be increasingly important for ensuring the safety, reliability and transparency of AI systems. Our study paves the way for numerous potential research avenues. Recognizing the challenges in rare cases can inspire methods to enhance the robustness of addition models. The established framework might be adapted to elucidate models for integer subtraction or multiplication. By integrating proven and effective addition modules into a larger, untrained network geared towards multiplication, the training process could be expedited. Further, decoding the multiplication algorithm becomes more straightforward when the addition-related tasks are already recognized and deemed reliable. Utilizing this modular approach can simplify the understanding of intricate algorithms, propelling advancements in the field of mechanistic interpretability. In summary, this research underscores that diving deep into the workings of contemporary machine learning can highlight valuable strengths, pinpoint areas for improvement, and present avenues for accelerated progress. ## 9 Reproducibility Statement To ensure our work is reproducible, we provide the full source code in the supplementary materials, as well as all necessary data, parameters, and instructions to reproduce our experimental results.
2302.13330
Power of $k$ Choices in the Semi-Random Graph Process
The semi-random graph process is a single player game in which the player is initially presented an empty graph on $n$ vertices. In each round, a vertex $u$ is presented to the player independently and uniformly at random. The player then adaptively selects a vertex $v$, and adds the edge $uv$ to the graph. For a fixed monotone graph property, the objective of the player is to force the graph to satisfy this property with high probability in as few rounds as possible. In this paper, we introduce a natural generalization of this game in which $k$ random vertices $u_1, \ldots, u_k$ are presented to the player in each round. She needs to select one of the presented vertices and connect to any vertex she wants. We focus on the following three monotone properties: minimum degree at least $\ell$, the existence of a perfect matching, and the existence of a Hamiltonian cycle.
Paweł Prałat, Harjas Singh
2023-02-26T15:02:26Z
http://arxiv.org/abs/2302.13330v2
# Power of \(k\) choices in the semi-random graph process ###### Abstract. The semi-random graph process is a single player game in which the player is initially presented an empty graph on \(n\) vertices. In each round, a vertex \(u\) is presented to the player independently and uniformly at random. The player then adaptively selects a vertex \(v\), and adds the edge \(uv\) to the graph. For a fixed monotone graph property, the objective of the player is to force the graph to satisfy this property with high probability in as few rounds as possible. In this paper, we introduce a natural generalization of this game in which \(k\) random vertices \(u_{1},\ldots,u_{k}\) are presented to the player in each round. She needs to select one of the presented vertices and connect to any vertex she wants. We focus on the following three monotone properties: minimum degree at least \(\ell\), the existence of a perfect matching, and the existence of a Hamiltonian cycle. ## 1. Introduction and Main Results ### Definitions In this paper, we consider a natural generalization of the **semi-random graph process** suggested by Peleg Michaeli, introduced formally in [4], and studied recently in [3, 12, 14, 2, 7, 19, 13, 9, 18] that can be viewed as a "one player game". The original process starts from \(G_{0}\), the empty graph on the vertex set \([n]:=\{1,\ldots,n\}\) where \(n\geq 1\). In each **step**\(t\), a vertex \(u_{t}\) is chosen uniformly at random from \([n]\). Then, the player (who is aware of the graph \(G_{t}\) and the vertex \(u_{t}\)) must select a vertex \(v_{t}\) and add the edge \(u_{t}v_{t}\) to \(G_{t}\) to form \(G_{t+1}\). The goal of the player is to build a (multi)graph satisfying a given property \(\mathcal{P}\) as quickly as possible. It is convenient to refer to \(u_{t}\) as a **square**, and \(v_{t}\) as a **circle** so every edge in \(G_{t}\) joins a square with a circle. We say that \(v_{t}\) is paired to \(u_{t}\) in step \(t\). Moreover, we say that a vertex \(x\in[n]\) is **covered** by the square \(u_{t}\) arriving at round \(t\), provided \(u_{t}=x\). The analogous definition extends to the circle \(v_{t}\). Equivalently, we may view \(G_{t}\) as a directed graph where each arc directs from \(u_{t}\) to \(v_{t}\), and thus we may use \((u_{t},v_{t})\) to denote the edge added in step \(t\). For this paper, it is easier to consider squares and circles for counting arguments. We generalize the process as follows. Let \(k\in\mathbb{N}\). In each step \(t\), \(k\) vertices \(u_{t}^{1},\ldots,u_{t}^{k}\) are chosen independently and uniformly at random from \([n]\). For simplicity, we allow repetitions but, of course there will not be too many of them. Then, the player must select one of them (that is, select \(i_{t}\in[k]\) and fix \(u_{t}=u_{t}^{i_{t}}\)), select a vertex \(v_{t}\), and add the edge \(u_{t}v_{t}\) to \(G_{t}\) to form \(G_{t+1}\). The objective is the same as before, namely, to achieve a property \(\mathcal{P}\) as quickly as possible. We will refer to this game as the \(k\)**-semi-random graph process** or simply the \(k\)**-process**. Clearly, it is a generalization, since for \(k=1\) we recover the original game. Moreover, if \(k_{1}>k_{2}\geq 1\), then the \(k_{1}\)-process is as easy for the player as the \(k_{2}\)-process since additional \(k_{1}-k_{2}\) squares may be simply ignored when making choices. However, it is often the case that more choices provide substantially more power to the player. 
For more details, we refer the reader to a general survey [20] and the first paper introducing this powerful and fundamental idea [1]. A **strategy**\(\mathcal{S}\) to play against the \(k\)-process is defined by specifying for each \(n\geq 1\), a sequence of functions \((f_{t})_{t=1}^{\infty}\), where for each \(t\in\mathbb{N}\), \(f_{t}(u_{1},v_{1},\ldots,u_{t-1},v_{t-1},u_{t}^{1},\ldots,u_{t}^{k})\) is a distribution over \([k]\times[n]\) which depends on the vertices \(u_{t}^{1},\ldots,u_{t}^{k}\), and the history of the process up until step \(t-1\). Then, \(i_{t}\in[k]\) and \(v_{t}\) is chosen according to this distribution. Observe that this means that the player needs to select her strategy (possibly randomized) in advance, before the game actually starts. If \(f_{t}\) is an atomic distribution, then the pair \((i_{t},v_{t})\) is determined by \(u_{1},v_{1},\ldots,u_{t-1},v_{t-1},u_{t}^{1},\ldots,u_{t}^{k}\). We then denote \((G_{i}^{\mathcal{S}}(n))_{i=0}^{t}\) as the sequence of random (multi)graphs obtained by following the strategy \(\mathcal{S}\) for \(t\) rounds; where we shorten \(G_{t}^{\mathcal{S}}(n)\) to \(G_{t}\) or \(G_{t}(n)\) when clear. Suppose \(\mathcal{P}\) is a monotonely increasing property. Given a strategy \(\mathcal{S}\) to play against the \(k\)-process and a constant \(0<q<1\), let \(\tau_{\mathcal{P}}(\mathcal{S},q,n,k)\) be the minimum \(t\geq 0\) for which \(\mathbb{P}[G_{t}\in\mathcal{P}]\geq q\), where \(\tau_{\mathcal{P}}(\mathcal{S},q,n,k):=\infty\) if no such \(t\) exists. Define \[\tau_{\mathcal{P}}(q,n,k)=\inf_{\mathcal{S}}\tau_{\mathcal{P}}(\mathcal{S},q, n,k),\] where the infimum is over all strategies on \([k]\times[n]\). Observe that for each \(n\geq 1\), if \(0\leq q_{1}\leq q_{2}\leq 1\), then \(\tau_{\mathcal{P}}(q_{1},n,k)\leq\tau_{\mathcal{P}}(q_{2},n,k)\) as \(\mathcal{P}\) is increasing. Thus, the function \(q\to\limsup_{n\to\infty}\tau_{\mathcal{P}}(q,n,k)\) is non-decreasing, and so the limit \[\tau_{\mathcal{P}}(k):=\lim_{q\to 1^{-}}\limsup_{n\to\infty}\frac{\tau_{ \mathcal{P}}(q,n,k)}{n},\] is guaranteed to exist. The goal is typically to compute upper and lower bounds on \(\tau_{\mathcal{P}}(k)\) for various properties \(\mathcal{P}\). Note that we normalized \(\tau_{\mathcal{P}}(q,n,k)\) by \(n\) above since the properties investigated in this paper need a linear number of rounds to be achieved. Other properties might require different scaling. For example, creating a fixed graph \(H\) requires \(o(n)\) rounds a.a.s. [4, 2]. ### Main Results In this paper, we investigate the following three monotone properties: minimum degree at least \(\ell\) (Section 3), the existence of a perfect matching (Section 4), and the existence of a Hamiltonian cycle (Section 5). The computations presented in the paper (see Tables 1, 2, and 3) were performed by using Maple [5]. The worksheets can be found on-line1. Footnote 1: [https://math.torontomu.ca/~pralat/](https://math.torontomu.ca/~pralat/) ## 2. Preliminaries ### Notation The results presented in this paper are asymptotic by nature. We say that some property \(P\) holds _asymptotically almost surely_ (or a.a.s.) if the probability that the \(k\)-process has this property (after possibly applying some given strategy) tends to \(1\) as \(n\) goes to infinity. 
Given two functions \(f=f(n)\) and \(g=g(n)\), we will write \(f(n)=\mathcal{O}(g(n))\) if there exists an absolute constant \(c\in\mathcal{R}_{+}\) such that \(|f(n)|\leq c|g(n)|\) for all \(n\), \(f(n)=\Omega(g(n))\) if \(g(n)=\mathcal{O}(f(n))\), \(f(n)=\Theta(g(n))\) if \(f(n)=\mathcal{O}(g(n))\) and \(f(n)=\Omega(g(n))\), and we write \(f(n)=o(g(n))\) or \(f(n)\ll g(n)\) if \(\lim_{n\to\infty}f(n)/g(n)=0\). In addition, we write \(f(n)\gg g(n)\) if \(g(n)=o(f(n))\) and we write \(f(n)\sim g(n)\) if \(f(n)=(1+o(1))g(n)\), that is, \(\lim_{n\to\infty}f(n)/g(n)=1\). We will use \(\log n\) to denote a natural logarithm of \(n\). As mentioned earlier, for a given \(n\in\mathbb{N}:=\{1,2,\ldots\}\), we will use \([n]\) to denote the set consisting of the first \(n\) natural numbers, that is, \([n]:=\{1,2,\ldots,n\}\). Finally, as typical in the field of random graphs, for expressions that clearly have to be an integer, we round up or down but do not specify which: the choice of which does not affect the argument. ### Concentration Tools Let us first state a few specific instances of Chernoff's bound that we will find useful. Let \(X\in\operatorname{Bin}(n,p)\) be a random variable distributed according to a Binomial distribution with parameters \(n\) and \(p\). Then, a consequence of _Chernoff's bound_ (see e.g. [15, Theorem 2.1]) is that for any \(t\geq 0\) we have \[\mathbb{P}(X\geq\mathbb{E}X+t) \leq \exp\left(-\frac{t^{2}}{2(\mathbb{E}X+t/3)}\right) \tag{1}\] \[\mathbb{P}(X\leq\mathbb{E}X-t) \leq \exp\left(-\frac{t^{2}}{2\mathbb{E}X}\right). \tag{2}\] Moreover, let us mention that the bound holds in a more general setting as well, that is, for \(X=\sum_{i=1}^{n}X_{i}\) where \((X_{i})_{1\leq i\leq n}\) are independent variables and for every \(i\in[n]\) we have \(X_{i}\in\text{Bernoulli}(p_{i})\) with (possibly) different \(p_{i}\)-s (again, see e.g. [15] for more details). Finally, it is well-known that the Chernoff bound also applies to negatively correlated Bernoulli random variables [8]. ### The Differential Equation Method In this section, we provide a self-contained _non-asymptotic_ statement of the differential equation method which we will use for each property we investigate. The statement combines [21, Theorem 2], and its extension [21, Lemma 9], in a form convenient for our purposes, where we modify the notation of [21] slightly. In particular, we rewrite [21, Lemma 9] in a less general form in terms of a stopping time \(T\). We need only check the 'Boundedness Hypothesis' (see below) for \(0\leq t\leq T\), which is exactly the setting in our proofs. Suppose we are given integers \(a,n\geq 1\), a bounded domain \(\mathcal{D}\subseteq\mathbb{R}^{a+1}\), and functions \((F_{k})_{1\leq k\leq a}\) where each \(F_{k}:\mathcal{D}\to\mathbb{R}\) is \(L\)-Lipschitz-continuous on \(\mathcal{D}\) for \(L\geq 0\). Moreover, suppose that \(R\in[1,\infty)\) and \(S\in(0,\infty)\) are _any_ constants which satisfy \(\max_{1\leq k\leq a}|F_{k}(x)|\leq R\) for all \(x=(s,y_{1},\ldots,y_{a})\in\mathcal{D}\) and \(0\leq s\leq S\). **Theorem 2.1** (Differential Equation Method, [21]).: _Suppose we are given \(\sigma\)-fields \(\mathcal{F}_{0}\subseteq\mathcal{F}_{1}\subseteq\cdots\), and for each \(t\geq 0\), random variables \(((Y_{k}(t))_{1\leq k\leq a}\) which are \(\mathcal{F}_{t}\)-measurable. 
Define \(T_{\mathcal{D}}\) to be the minimum \(t\geq 0\) such that_ \[(t/n,Y_{1}(t)/n,\ldots,Y_{a}(t)/n)\notin\mathcal{D}.\] _Let \(T\geq 0\) be an (arbitrary) stopping time2 adapted to \((\mathcal{F}_{t})_{t\geq 0}\), and assume that the following conditions hold for \(\delta,\beta,\gamma\geq 0\) and \(\lambda\geq\delta\min\{S,L^{-1}\}+R/n\):_ Footnote 2: The stopping time \(T\geq 0\) is **adapted** to \((\mathcal{F}_{t})_{t\geq 0}\), provided the event \(\{T=t\}\) is \(\mathcal{F}_{t}\)-measurable for each \(t\geq 0\). * _The 'Initial Condition': For some_ \((0,\hat{y}_{1},\ldots,\hat{y}_{a})\in\mathcal{D}\)_,_ \[\max_{1\leq k\leq a}|Y_{k}(0)-\hat{y}_{k}n|\leq\lambda n.\] * _The 'Trend Hypothesis': For each_ \(t\leq\min\{T,T_{\mathcal{D}}-1\}\)_,_ \[|\mathbb{E}[Y_{k}(t+1)-Y_{k}(t)\mid\mathcal{F}_{t}]-F_{k}(t/n,Y_{1}(t)/n, \ldots,Y_{a}(t)/n)|\leq\delta.\] * _The 'Boundedness Hypothesis': With probability_ \(1-\gamma\)_,_ \[|Y_{k}(t+1)-Y_{k}(t)|\leq\beta,\] _for each_ \(t\leq\min\{T,T_{\mathcal{D}}-1\}\)_._ _Then, with probability at least \(1-2a\exp\left(\frac{-n\lambda^{2}}{8S\beta^{2}}\right)-\gamma\), we have that_ \[\max_{0\leq t\leq\min\{T,\sigma n\}}\max_{1\leq k\leq a}|Y_{k}(t)-y_{k}(t/n)n|<3\lambda\exp(LS)n, \tag{3}\] _where \((y_{k}(s))_{1\leq k\leq a}\) is the unique solution to the system of differential equations_ \[y_{k}^{\prime}(s)=F_{k}(s,y_{1}(s),\ldots,y_{a}(s))\quad\text{with $y_{k}(0)=\hat{y}_{k}$ for $1\leq k\leq a$,} \tag{4}\] _and \(\sigma=\sigma(\hat{y}_{1},\ldots,\hat{y}_{a})\in[0,S]\) is any choice of \(\sigma\geq 0\) with the property that \((s,y_{1}(s),\ldots,y_{a}(s))\) has \(\ell^{\infty}\)-distance at least \(3\lambda\exp(LS)\) from the boundary of \(\mathcal{D}\) for all \(s\in[0,\sigma)\)._ **Remark 2.2**.: Standard results for differential equations guarantee that (4) has a unique solution \((y_{k}(s))_{1\leq k\leq a}\) which extends arbitrarily close to the boundary of \(\mathcal{D}\). ## 3. Minimum Degree at Least \(\ell\) Let us fix a natural number \(\ell\). Our goal is to investigate how long it takes for the \(k\)-process to create a graph with minimum degree at least \(\ell\). This problem was considered in [4] for the original semi-random process (\(k=1\)). In this paper, we investigate it for the \(k\)-process for any value of \(k\). Let \(\mathcal{P}_{\ell}\) be the property that a graph has minimum degree at least \(\ell\). In order to establish the value of \(\tau_{\mathcal{P}_{\ell}}(k)\), we need to do two things: investigate which strategy is optimal (we do it in Subsection 3.1) and then analyze the optimal strategy (we do it in Subsection 3.2). One consequence of our results is Table 1 which consists of numerical values of \(\tau_{\mathcal{P}_{\ell}}(k)\) for a grid of parameters \((k,\ell)\) with \(1\leq k,\ell\leq 5\). It follows immediately from the definition of \(\tau_{\mathcal{P}_{\ell}}(k)\) that it is a non-decreasing function with respect to \(\ell\) but a non-increasing one with respect to \(k\). Finally, we note that for large values of \(k\) (and any fixed value of \(\ell\)), typically some square lands on a vertex of minimum degree and so the degree distribution is well balanced during the whole process. As a result, the total number of rounds is close to the trivial lower bound of \(\ell n/2\). In other words, \(\tau_{\mathcal{P}_{\ell}}(k)=\ell/2+o_{k}(1)\).
Similarly, for large values of \(\ell\) (and any fixed value of \(k\)), because of the law of large numbers, each vertex receives more or less the same number of squares. As before, the degree distribution is well balanced and, as a consequence, \(\tau_{\mathcal{P}_{\ell}}(k)=\ell/2+o_{\ell}(1)\). We investigate both of these properties in Subsection 3.3. ### Optimal Strategy In this subsection, we show that the following greedy strategy is an optimal strategy. In this strategy, in each round \(t\) of the process the player selects a square that lands on a vertex with the smallest degree, that is, she selects \(i_{t}\) such that \(\deg_{G_{t}}(u_{t}^{i})\geq\deg_{G_{t}}(u_{t}^{i_{t}})\) for any \(i\in[k]\); if there is more than one such square to choose from, the decision which one to select is made arbitrarily. Then, the player puts a circle on a vertex with minimum degree; again, if there is more than one such vertex to choose from, the decision is made arbitrarily. Let us denote this strategy as \(\mathcal{S}_{0}\). Let us fix \(k\) and \(\ell\). For a given strategy \(\mathcal{S}\), let \(H(\mathcal{S})\) be the hitting time for the property \(\mathcal{P}_{\ell}\), that is, \(H(\mathcal{S})\) is the random variable equal to the number of rounds required for \(\mathcal{S}\) to achieve the property \(\mathcal{P}_{\ell}\). We say that a strategy \(\mathcal{S}\) **dominates** a strategy \(\mathcal{S}^{\prime}\) if the random variable \(H(\mathcal{S})\) is dominated by the random variable \(H(\mathcal{S}^{\prime})\), that is, \(\mathbb{P}(H(\mathcal{S})\leq t)\geq\mathbb{P}(H(\mathcal{S}^{\prime})\leq t)\) for any \(t\). The next lemma is straightforward. Its proof is an adaptation of the proof for the original process [4]. **Lemma 3.1**.: _Let \(k,\ell\in\mathbb{N}\), and consider the property \(\mathcal{P}_{\ell}\). The strategy \(\mathcal{S}_{0}\) dominates any other strategy \(\mathcal{S}\) against the \(k\)-process._ \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \(k=1\) & \(k=2\) & \(k=3\) & \(k=4\) & \(k=5\) \\ \hline \(\ell=1\) & 0.69315 & 0.62323 & 0.59072 & 0.57183 & 0.55947 \\ \(\ell=2\) & 1.21974 & 1.12498 & 1.09081 & 1.07184 & 1.05947 \\ \(\ell=3\) & 1.73164 & 1.62508 & 1.59081 & 1.57184 & 1.55947 \\ \(\ell=4\) & 2.23812 & 2.12508 & 2.09081 & 2.07184 & 2.05947 \\ \(\ell=5\) & 2.74200 & 2.62508 & 2.59081 & 2.57184 & 2.55947 \\ \hline \end{tabular} \end{table} Table 1. Minimum Degree at Least \(\ell\)—numerical values of \(\tau_{\mathcal{P}_{\ell}}(k)\) for a grid of parameters \((k,\ell)\) with \(1\leq k,\ell\leq 5\). Proof.: We say that a strategy \(\mathcal{S}\) is \((i,j)\)**-minimizing** if in each of the first \(i\) rounds, the player chooses a square of minimum degree and in each of the first \(j\) rounds, she puts a circle on a vertex of minimum degree. Strategy \(\mathcal{S}\) is said to be **minimizing** if it is \((i,j)\)-minimizing for every \(i\) and \(j\). In order to prove the lemma, since domination is a transitive relation, it is enough to show that any \((i,j)\)-minimizing strategy is dominated by some \((i+1,j)\)-minimizing strategy as well as some \((i,j+1)\)-minimizing strategy. Since \(\mathcal{S}_{0}\) is minimizing and any two minimizing strategies dominate one another, we conclude that \(\mathcal{S}_{0}\) dominates any other strategy \(\mathcal{S}\). Consider any \((i,j)\)-minimizing strategy \(\mathcal{S}\).
We will modify it slightly and create two new strategies, \(\mathcal{S}^{\prime}\) and \(\mathcal{S}^{\prime\prime}\), that are \((i+1,j)\)-minimizing and, respectively, \((i,j+1)\)-minimizing. We can imagine a player using strategy \(\mathcal{S}\) on the graph \(G_{t}\) and another player using strategy \(\mathcal{S}^{\prime}\) on an auxiliary graph \(G_{t}^{\prime}\). The two games are coupled such that squares appear in both games at the same locations. During the first \(i\) rounds, the strategy \(\mathcal{S}^{\prime}\) is the same as the strategy \(\mathcal{S}\). Suppose that at round \(i+1\), with probability \(p>0\), \(\mathcal{S}\) chooses a square that lands on a vertex \(v\) but the decision is made in a non-greedy fashion. We condition on this event, and slightly modify \(\mathcal{S}\) to get \(\mathcal{S}^{\prime}\) as follows. At round \(i+1\), strategy \(\mathcal{S}^{\prime}\) chooses a square that lands on a vertex \(u\) that has minimum degree; in particular, \(\deg_{G_{t}}(u)<\deg_{G_{t}}(v)\). From that point on, the two graphs, \(G_{t}\) and \(G_{t}^{\prime}\), are going to differ. For the rest of the game, as long as \(\deg_{G_{t}}(u)\leq\deg_{G_{t}}(v)-2\), \(\mathcal{S}^{\prime}\) continues "stealing" strategy \(\mathcal{S}\), that is, both the choices for squares and for circles are exactly the same in both games. If, at any point of the game, \(\deg_{G_{t}}(u)=\deg_{G_{t}}(v)-1\), then \(u\) and \(v\) are "relabeled" in \(G_{t}^{\prime}\) (that is, \(v\) becomes \(u\) and \(u\) becomes \(v\)). After that, we continue coupling the two games but now each time a square lands on \(u\) from \(G_{t}\), we assume that it also lands on \(u\) (that used to be initially labelled as \(v\)) in \(G_{t}^{\prime}\). The same property holds for \(v\). The strategy \(\mathcal{S}^{\prime}\) continues "stealing" strategy \(\mathcal{S}\). A simple but important property is that from time \(i+1\) on, but before possible relabelling, \(\deg_{G_{t}^{\prime}}(u)=\deg_{G_{t}}(u)+1\) and \(\deg_{G_{t}^{\prime}}(v)=\deg_{G_{t}}(v)-1\). Since \(\deg_{G_{t}}(u)\leq\deg_{G_{t}}(v)-2\), \(\deg_{G_{t}^{\prime}}(u)\leq\deg_{G_{t}^{\prime}}(v)\). As a consequence, \[\min\{\deg_{G_{t}}(u),\deg_{G_{t}}(v)\}=\deg_{G_{t}}(u)=\deg_{G_{t}^{\prime}}( u)-1=\min\{\deg_{G_{t}^{\prime}}(u),\deg_{G_{t}^{\prime}}(v)\}-1.\] For any other vertex \(w\not\in\{u,v\}\), \(\deg_{G_{t}}(w)=\deg_{G_{t}^{\prime}}(w)\). Hence, provided that no relabelling took place, \(\min\{\deg_{G_{t}}(u),\deg_{G_{t}}(v)\}\geq\ell\) implies that \(\min\{\deg_{G_{t}^{\prime}}(u),\deg_{G_{t}^{\prime}}(v)\}\geq\ell+1\) and so the desired property \(\mathcal{P}_{\ell}\) cannot be achieved by the strategy \(\mathcal{S}\) before it is achieved by the strategy \(\mathcal{S}^{\prime}\). Finally, note that when \(\deg_{G_{t}}(u)=\deg_{G_{t}}(v)-1\), we have that \(\deg_{G_{t}}(u)=\deg_{G_{t}^{\prime}}(v)\) and \(\deg_{G_{t}}(v)=\deg_{G_{t}^{\prime}}(u)\). Hence, after relabelling, the degree distribution in \(G_{t}\) is exactly the same as the degree distribution in \(G_{t}^{\prime}\) (despite the fact that graphs are possibly different). This property will be preserved to the end of the process and so both strategies will achieve the desired property \(\mathcal{P}_{\ell}\) at the same time. The same argument can be repeated to create an \((i,j+1)\)-minimizing strategy \(\mathcal{S}^{\prime\prime}\). This finishes the proof of the lemma. 
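Before turning to the analysis, a short Monte Carlo sketch of the \(k\)-process under the greedy strategy \(\mathcal{S}_{0}\) can serve as a sanity check against Table 1. This is purely illustrative: the constants reported in the paper come from the differential equations method (with the numerical work done in Maple), not from simulation.

```python
import random

def rounds_to_min_degree(n, k, ell, rng=random):
    """Play the k-process with the greedy strategy S_0: select a presented square of
    minimum degree and place the circle on a vertex of minimum degree.  Returns the
    number of rounds until the (multi)graph has minimum degree at least ell."""
    deg = [0] * n
    rounds = 0
    while min(deg) < ell:
        rounds += 1
        squares = [rng.randrange(n) for _ in range(k)]   # the k presented vertices
        u = min(squares, key=deg.__getitem__)            # greedy choice among the squares
        v = deg.index(min(deg))                          # circle on a minimum-degree vertex
        deg[u] += 1
        deg[v] += 1
    return rounds

# For instance, rounds_to_min_degree(2000, 3, 2) / 2000 is typically close to the
# value 1.09081 reported in Table 1.
```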
### Analysis of the Optimal Strategy In this subsection, we analyze the greedy strategy that is introduced and proved to be optimal in the previous subsection. This establishes \(\tau_{\mathcal{P}_{\ell}}(k)\) for any value of \(\ell\) and \(k\). **Theorem 3.2**.: _Let \(k,\ell\in\mathbb{N}\). Then, \(\tau_{\mathcal{P}_{\ell}}(k)=c_{\ell,k}\), where \(c_{\ell,k}\geq\ell/2\) is a constant that is derived from a system of differential equations. The numerical values for \(1\leq k,\ell\leq 5\) are presented in Table 1._ Proof.: In the greedy strategy, we distinguish phases by labelling them with integers \(q\in\{0,1,\ldots,\ell-1\}\). During the \(q\)th phase, the minimum degree in \(G_{t}\) is equal to \(q\). In order to analyze the evolution of the \(k\)-process, we will track the following sequence of \(\ell\) variables: for \(0\leq i\leq\ell-1\), let \(Y_{i}=Y_{i}(t)\) denote the number of vertices in \(G_{t}\) of degree \(i\). Phase 0 starts at the beginning of the \(k\)-process. Since \(G_{0}\) is empty, \(Y_{0}(0)=n\) and \(Y_{i}(0)=0\) for \(1\leq i\leq\ell-1\). There are initially many isolated vertices but they quickly disappear. Phase 0 ends at time \(t\) which is the smallest value of \(t\) for which \(Y_{0}(t)=0\). The DEs method will be used to show that a.a.s. Phase 0 ends at time \(t_{0}\sim x_{0}n\), where \(x_{0}\) is an explicit constant which will be obtained by investigating the associated system of DEs. Moreover, the number of vertices of degree \(i\) (\(1\leq i\leq\ell-1\)) at the end of this phase is well concentrated around some values that are also determined based on the solution to the same system of DEs: a.a.s. \(Y_{i}(t_{0})\sim y_{i}(x_{0})n\). With that knowledge, we move on to Phase 1 in which we prioritize vertices of degree 1. Consider any Phase \(q\), where \(q\in\{0,1,\ldots,\ell-1\}\). This phase starts at time \(t_{q-1}\), exactly when the previous phase ends (or at time \(t_{-1}:=0\) if \(q=0\)). At that point, the minimum degree of \(G_{t_{q-1}}\) is \(q\), so \(Y_{i}(t)=0\) for any \(t\geq t_{q-1}\) and \(i<q\). Hence, we only need to track the behaviour of the remaining \(\ell-q\) variables. Let \(\mathcal{A}_{j}(t)\) be the event that at time \(t\) the player selects a square that lands on a vertex with degree \(j\), that is, \(\mathcal{A}_{j}(t)=\{\deg_{G_{t}}(u_{t}^{i})=j\}\). The probability that \(\mathcal{A}_{j}(t)\) holds can be computed based on the sequence of \(Y_{i}(t)\)'s. To that end, it is convenient to introduce the following auxiliary event. Let \(\mathcal{B}_{j}(t)\) be the event that at time \(t\) all squares land on vertices of degree at least \(j\), that is, \(\mathcal{B}_{j}(t)=\{\deg_{G_{t}}(u_{t}^{i})\geq j,\text{ for all }i\in[k]\}\). Clearly, \(\mathcal{B}_{j+1}(t)\subseteq\mathcal{B}_{j}(t)\) and \(\mathcal{A}_{j}(t)=\mathcal{B}_{j}(t)\setminus\mathcal{B}_{j+1}(t)\). As a result, \[\mathbb{P}(\mathcal{A}_{j}(t)) =\mathbb{P}(\mathcal{B}_{j}(t))-\mathbb{P}(\mathcal{B}_{j+1}(t))\] \[=\left(1-\sum_{a=q}^{j-1}\frac{Y_{a}(t)}{n}\right)^{k}-\left(1- \sum_{a=q}^{j}\frac{Y_{a}(t)}{n}\right)^{k}.\] Let us denote \(H(t)=(Y_{q}(t),Y_{q+1}(t),\ldots,Y_{\ell-1}(t))\). Let \(\delta_{A}\) be the Kronecker delta for the event \(A\), that is, \(\delta_{A}=1\) if \(A\) holds and \(\delta_{A}=0\) otherwise. Then, for any \(i\) such that \(q\leq i\leq\ell-1\), \[\mathbb{E}(Y_{i}(t+1)-Y_{i}(t)\ |\ H(t))=-\delta_{i=q}+\delta_{i=q+1}-\mathbb{P}( \mathcal{A}_{i}(t))+\delta_{i\geq q+1}\mathbb{P}(\mathcal{A}_{i-1}(t)). 
\tag{5}\] Indeed, since the circle is put on a vertex of degree \(q\), we always lose one vertex of degree \(q\) (term \(-\delta_{i=q}\)) that becomes of degree \(q+1\) (term \(\delta_{i=q+1}\)). We might lose a vertex of degree \(i\) when the selected square lands on a vertex of degree \(i\) (term \(\mathbb{P}(\mathcal{A}_{i}(t))\). We might also gain one of them when the selected square lands on a vertex of degree \(i-1\) (term, \(\mathbb{P}(\mathcal{A}_{i-1}(t))\); note that this is impossible if \(i=q\) (term \(\delta_{i\geq q+1}\)). This suggests the following system of DEs: for any \(i\) such that \(q\leq i\leq\ell-1\), \[y_{i}^{\prime}(x) =-\delta_{i=q}+\delta_{i=q+1}\] \[-\left(\left(1-\sum_{a=q}^{i-1}y_{a}(x)\right)^{k}-\left(1-\sum_ {a=q}^{i}y_{a}(x)\right)^{k}\right)\] \[+\delta_{i\geq q+1}\left(\left(1-\sum_{a=q}^{i-2}y_{a}(x)\right)^ {k}-\left(1-\sum_{a=q}^{i-1}y_{a}(x)\right)^{k}\right). \tag{6}\] Let us now check that the assumptions of the DEs method are satisfied and then discuss the conclusions. Let \(\varepsilon>0\) be an arbitrarily small constant and \(\omega=\omega(n)\) be any function that tends to infinity as \(n\to\infty\). We will ensure that for some universal constant \(C>0\), at the beginning of Phase \(q\), the initial condition is satisfied with \(\lambda=C^{q}\omega/\sqrt{n}=o(1)\). (At the beginning of Phase 0, there is no error in the initial condition so this property is trivially satisfied.) In particular, we assume that the phase starts at time \(t_{q-1}\sim x_{q-1}n\) for some constant \(x_{q-1}\in[0,\infty)\), and for any \(q\leq i\leq\ell-1\), \(Y_{i}(t_{q-1})\sim y_{i}(x_{q-1})n\) for some constants \(y_{i}(x_{q-1})\in(0,1]\). The right hand side of (6) is continuous, bounded, and Lipschitz in the connected open set \[\mathcal{D}=\{(x,y_{q},\ldots,y_{\ell-1}):-\varepsilon<x<\ell+\varepsilon,- \varepsilon<y_{i}<1+\varepsilon\},\] which contains the point \((x_{q-1},y_{q}(x_{q-1}),\ldots,y_{\ell-1}(x_{q-1}))\). Note that there is no error in the 'Trend Hypothesis', that is, \(\delta=0\) (see (5)). Finally, note that the 'Boundedness Hypothesis' holds deterministically \((\gamma=0)\) with \(\beta=2\). We conclude, based on Theorem 2.1, that a.a.s. during the entire Phase \(q\), \[\max_{q\leq i\leq\ell-1}|Y_{i}(t)-y_{i}(t/n)n|<\lambda Cn=o(n),\] provided that \(C\) is a large enough constant. In particular, Phase \(q\) ends at time \(t_{q}\sim x_{q}n\), where \(x_{q}>x_{q-1}\) is the solution of the equation \(y_{q}(x)=0\). Using the final values \(y_{i}(x_{q})\) in Phase \(q\) as initial values for Phase \(q+1\) we can repeat the argument inductively moving from phase to phase. The desired property is achieved at the end of Phase \(\ell-1\) when a graph of minimum degree equal to \(\ell\) is reached. ### Large value of \(k\) or \(\ell\) A natural question that arises is about the asymptotic behaviour of \(\tau_{\mathcal{P}_{\ell}}(k)\) as either \(\ell\) grows large or \(k\) grows large. First, let us show that \(\tau_{\mathcal{P}_{\ell}}(k)\to\ell/2\) as \(\ell\to\infty\). **Theorem 3.3**.: _Fix \(k\in\mathbb{N}\). Then,_ \[\frac{\ell}{2}\leq\tau_{\mathcal{P}_{\ell}}(k)\leq\frac{\ell}{2}\left(1+ \mathcal{O}(\sqrt{\log\ell/\ell})\right).\] Proof.: Noting that \(\tau_{\mathcal{P}_{\ell}}(k)\) is a non-increasing function in \(k\) and, trivially, for any value of \(k\) we have \(\tau_{\mathcal{P}_{\ell}}(k)\geq\ell/2\), we only investigate the case \(k=1\). 
One can try to analyze an optimal, greedy strategy but we aim for an easy argument without trying to optimize the error term, as long as it goes to zero as \(\ell\to\infty\). Our algorithm consists of two phases. In Phase 1, which lasts \(\ell n/2\) rounds, we place circles sequentially, that is, in round \(i\), a circle is placed on vertex \(i-1\pmod{n}+1\). As a result, at the end of Phase 1, each vertex has exactly \(\ell/2\) circles. Let \(X_{v}\) denote the number of squares on vertex \(v\) at the end of Phase 1. Then \(X_{v}\in\operatorname{Bin}(\ell n/2,1/n)\), with \(\mathbb{E}(X_{v})=\ell/2\). Let \(t:=2\sqrt{\ell\log\ell}\). Then, by the Chernoff bound (2) \[\mathbb{P}(X_{v}\leq\ell/2-t)\leq\exp\left(-\frac{t^{2}}{\ell}\right)=\exp(-4 \log\ell)=1/\ell^{4}.\] Hence, we expect at most \(n/\ell^{4}\) vertices with at most \(\ell/2-t\) squares. More importantly, the events "\(X_{v}\leq\ell/2-t\)" associated with various vertices are negatively correlated. It follows immediately from the Chernoff bound (1) (see also the comment right after it) that a.a.s. there are at most \(2n/\ell^{4}\) vertices with at most \(\ell/2-t\) squares. (Alternatively, one could estimate the variance and use Chebyshev's inequality.) Let a vertex \(v\) be considered deficient if \(\deg(v)<\ell\). Furthermore, define the deficit of a deficient vertex \(v\) to be equal to \(\ell-\deg(v)\). Then, at the end of Phase 1, a.a.s. at most \(2n/\ell^{4}\) vertices have a deficit of at most \(\ell/2\) (a trivial, deterministic upper bound), with the remaining vertices having deficit at most \(2\sqrt{\ell\log\ell}\). In Phase 2, we place circles on the deficient vertices to bring the deficit down to \(0\). This takes at most \(n/\ell^{3}+2n\sqrt{\ell\log\ell}\) rounds. Thus, the total number of rounds is at most \[n\ell/2+n/\ell^{3}+2n\sqrt{\ell\log\ell}=\frac{n\ell}{2}\left(1+\mathcal{O}( \sqrt{\log\ell/\ell})\right).\] It follows that \(\tau_{\mathcal{P}_{\ell}}(k)=\ell/2+o_{\ell}(1)\) as \(\ell\to\infty\), as required. Next, let us show that \(\tau_{\mathcal{P}_{\ell}}(k)\to\ell/2\) as \(k\to\infty\). **Theorem 3.4**.: _Fix \(\ell\in\mathbb{N}\). Then,_ \[\frac{\ell}{2}\leq\tau_{\mathcal{P}_{\ell}}(k)\leq\frac{\ell}{2}\Big{(}1+ \mathcal{O}(\log k/k)\Big{)}.\] Proof.: We will investigate a greedy algorithm by considering \(\ell\) phases. As before, we do not try to optimize the error term and aim for an easy argument. During Phase \(i\), the minimum degree is equal to \(i-1\). The algorithm stops at the end of Phase \(\ell\). We will show that a.a.s. each phase takes at most \(n/2+n\log k/k\) rounds, so the total number of rounds is at most \((n\ell/2)(1+2\log k/k)\). Since, trivially, \(\tau_{\mathcal{P}_{\ell}}(k)\geq\ell/2\), we will get that \(\tau_{\mathcal{P}_{\ell}}(k)=(\ell/2)(1+\mathcal{O}(\log k/k))=\ell/2+o_{k}(1)\) as \(k\to\infty\). Suppose that Phase \(i\) starts at time \(t_{i}\). Let \(X_{t}\) be the number of vertices of degree \(i-1\) at the beginning of round \(t\). Clearly, \(X_{t_{i}}\leq n\). It is convenient to consider two sub-phases. The first sub-phase continues as long as \(X_{t}\geq n\log k/k\). Note that at any step \(t\) of this sub-phase, the probability that no square lands on a vertex of degree \(i-1\) is equal to \[(1-X_{t}/n)^{k}\leq\exp(-kX_{t}/n)\leq\exp(-\log k)=1/k.\] It means that \(X_{t}\) goes down by \(1\) with probability at most \(1/k\) and goes down by \(2\), otherwise. 
In other words, at time \(t\geq t_{i}\) during this sub-phase, the number of vertices of degree \(i-1\) can be stochastically upper bounded as follows: \[X_{t}\leq X_{t_{i}}-2(t-t_{i})+\operatorname{Bin}(t,1/k)\leq n-2(t-t_{i})+\operatorname{Bin}(t,1/k).\] (The term \(\operatorname{Bin}(t,1/k)\) corresponds to the number of rounds during which the algorithm "slows down" because no square lands on a vertex of degree \(i-1\).) Hence, the probability that the first sub-phase does not finish within \(n/2\) rounds is at most \[\mathbb{P}(\operatorname{Bin}(n/2,1/k)\geq n\log k/k) \leq \mathbb{P}(\operatorname{Bin}(n/2,1/k)\geq 2\mathbb{E}(\operatorname{Bin}(n/2,1/k)))\] \[\leq \exp(-\Theta(n))=o(1),\] assuming that \(k\geq 3>e\) (which we may, since we aim for a result that holds for \(k\) large enough). The second sub-phase takes at most \(n\log k/k\) steps (deterministically), so a.a.s. the entire phase ends in at most \(n/2+n\log k/k\) steps, and the desired property holds. ## 4. Perfect Matchings In this section, we investigate another classical monotone property that was already studied in the context of semi-random processes, namely, the property of having a perfect matching, which we denote by PM. In the very first paper [4], it was shown that the semi-random process is general enough to approximate (using suitable strategies) several well-studied random graph models, including an extensively studied \(\ell\)-out process (see, for example, Chapter 18 in [16]). In the \(\ell\)-out process, each vertex independently connects to \(\ell\) randomly selected vertices, which results in a random graph on \(n\) vertices and \(\ell n\) edges. Since the \(2\)-out process has a perfect matching a.a.s. [11], we immediately get that \(\tau_{\mathtt{PM}}(k)\leq\tau_{\mathtt{PM}}(1)\leq 2\). By coupling the semi-random process with another random graph that is known to have a perfect matching a.a.s. [17], the bound can be improved to \(1+2/e<1.73576\). This bound was recently improved by investigating a fully adaptive algorithm [14]. The currently best upper bound is \(\tau_{\mathtt{PM}}(1)<1.20524\) but there is an easy algorithm that yields the following bound: \(\tau_{\mathtt{PM}}(1)<1.27695\). In this paper, we adjust the easy algorithm to deal with the \(k\)-process and present the corresponding upper bounds (see Subsection 4.1). One could adjust the more sophisticated algorithm as well. We do not do so, as the improvement is less significant for larger values of \(k\) while the argument is substantially more involved. Let us now move to the lower bounds. In the initial paper introducing the semi-random process [4], it was already observed that \(\tau_{\mathtt{PM}}(1)\geq\tau_{\mathcal{P}_{1}}(1)=\ln(2)>0.69314\). This lower bound was improved as well, and now we know that \(\tau_{\mathtt{PM}}(1)>0.93261\) [14]. Since adapting the argument from [14] to \(k\geq 2\) would be much more involved and the improvement would be less significant, we only use the trivial bound and the results from the previous section: \(\tau_{\mathtt{PM}}(k)\geq\tau_{\mathcal{P}_{1}}(k)\). Indeed, the gap between the upper and lower bounds gets smaller as \(k\) gets large. In fact, \(\tau_{\texttt{PM}}(k)\to 1/2\) as \(k\to\infty\) (see Subsection 4.2). ### Upper bound for \(\tau_{\texttt{PM}}(k)\) In this subsection, we analyze the following simple but fully adaptive strategy. In each step \(t\) of the algorithm, we will track a partial matching \(M_{t}\) that is already built and the set \(U_{t}\) of unsaturated vertices.
Initially, \(M_{0}=\emptyset\) and \(U_{0}=[n]\). We will use \(V[M_{t}]\) to denote the set of vertices associated with the edges in \(M_{t}\). Some vertices in \(V[M_{t}]\) will be coloured red or green, and some edges (outside of \(M_{t}\)) will be coloured green. Suppose that at time \(t\), \(k\) squares land on vertices \(u_{t}^{1},\ldots,u_{t}^{k}\). We consider a few cases. 1. At least one square lands on a vertex from \(U_{t-1}\), that is, \(\{u_{t}^{1},\ldots,u_{t}^{k}\}\cap U_{t-1}\neq\emptyset\). We arbitrarily select \(u_{t}\in\{u_{t}^{1},\ldots,u_{t}^{k}\}\cap U_{t-1}\), and let \(v_{t}\) be a uniformly random vertex in \(U_{t-1}\). We extend the partial matching by adding an edge we just created to \(M_{t-1}\), that is, \(M_{t}=M_{t-1}\cup\{u_{t}v_{t}\}\) and \(U_{t}=U_{t-1}\setminus\{u_{t},v_{t}\}\). For every green vertex \(x\in V[M_{t-1}]\), if it is adjacent to either \(u_{t}\) or \(v_{t}\) by a green edge, then we uncolour this green edge, uncolour \(x\) (from green), and uncolour the mate of \(x\) in \(M_{t-1}\) (from red). 2. No square lands on a vertex from \(U_{t-1}\) but at least one square lands on a red vertex in \(V[M_{t-1}]\). We arbitrarily select one of such red vertices to be \(u_{t}\), and let \(v_{t}\) be a uniformly random vertex in \(U_{t-1}\). Let \(x\in V[M_{t-1}]\) be the mate of \(u_{t}\) in \(M_{t-1}\). Let \(y\) be the (unique) vertex in \(U_{t-1}\) which is adjacent to \(x\) by a green edge. Let \(M_{t}\) be the matching obtained by augmenting along the path \(yxu_{t}v_{t}\), that is, \(M_{t}=(M_{t-1}\setminus\{xu_{t}\})\cup\{yx,u_{t}v_{t}\}\). Let \(U_{t}=U_{t-1}\setminus\{y,v_{t}\}\). Finally, update the green vertices and edges and the red vertices accordingly as in Case (a). 3. No square lands on a vertex from \(U_{t-1}\) nor on a red vertex in \(V[M_{t-1}]\) but at least one square lands on an uncoloured vertex in \(V[M_{t-1}]\). We arbitrarily select one of such uncoloured vertices to be \(u_{t}\), and let \(v_{t}\) be a uniformly random vertex in \(U_{t-1}\). Colour the edge \(u_{t}v_{t}\) and the vertex \(u_{t}\) green and colour the mate of \(u_{t}\) in \(M_{t-1}\) red. The matching is not affected, that is, \(M_{t}=M_{t-1}\) and \(U_{t}=U_{t-1}\). 4. All squares land on green vertices. Let \(v_{t}\) be an arbitrary vertex in \([n]\). The edge \(u_{t}v_{t}\) will not be used in the process of constructing a perfect matching. Let \(M_{t}=M_{t-1}\) and \(U_{t}=U_{t-1}\). As it was done in [14], we terminate the algorithm prematurely (in order to avoid technical issues with the DEs method that will be used) at the step when \(|U_{t}|\) becomes at most \(\varepsilon n\) where \(\varepsilon=10^{-14}\). To saturate the remaining unsaturated vertices, the clean-up algorithm can be used. This algorithm is not as efficient as the one described above but it may be easily analyzed. It was proved in [14] that a.a.s. this algorithm takes at most \(100\sqrt{\varepsilon}n=10^{-5}n\) steps, which is numerically insignificant. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & lower bound & upper bound & & lower bound & upper bound \\ \hline \(k=1\) & 0.69315 (0.93261) & 1.27696 (1.20524) & \(k=6\) & 0.55075 & 0.66425 \\ \(k=2\) & 0.62323 & 0.92990 & \(k=7\) & 0.54426 & 0.64243 \\ \(k=3\) & 0.59072 & 0.80505 & \(k=8\) & 0.53924 & 0.62573 \\ \(k=4\) & 0.57183 & 0.73708 & \(k=9\) & 0.53525 & 0.61255 \\ \(k=5\) & 0.55947 & 0.69402 & \(k=10\) & 0.53199 & 0.60187 \\ \hline \end{tabular} \end{table} Table 2. 
Perfect Matchings—numerical upper and lower bounds of \(\tau_{\texttt{PM}}(k)\) for \(1\leq k\leq 10\). Stronger bounds for \(k=1\) follow from [14]. **Theorem 4.1**.: _Let \(k\in\mathbb{N}\). Then, \(\tau_{\mathtt{PM}}(k)\leq u_{k}+10^{-5}\), where \(u_{k}\geq 1/2\) is a constant that is derived from a system of differential equations. The numerical bounds for \(1\leq k\leq 10\) are presented in Table 2._ Proof.: To analyze the above algorithm, we introduce the following random variables. Let \(X(t)\) be the number of saturated vertices, that is, \(X(t)=|V[M_{t}]|=2|M_{t}|\). Let \(R(t)\) be the number of red vertices in \(V[M_{t}]\). The algorithm is designed in such a way that \(R(t)\) is also equal to the number of green vertices, and thus equal to the number of green edges. We will use the DEs method to analyze the behaviour of the sequence \(H_{t}:=(X(t),R(t))\) but we will not encompass the full history of the process. For convenience, we will condition on less information and do not reveal the placement of circles associated with green edges; their placements amongst the unsaturated vertices remain distributed uniformly at random. Let us start with analyzing \(X(t)\). Let \(\mathcal{A}^{i}_{t+1}\) be the event that Case (i) occurred at step \(t+1\). Note that at step \(t\), the set of vertices is partitioned into unsaturated vertices, red vertices in \(V[M_{t}]\), uncoloured vertices in \(V[M_{t}]\), and green vertices in \(V[M_{t}]\). The algorithm makes a greedy selection of squares from these classes. There are \(X(t)\) vertices that are _not_ unsaturated, \(X(t)-R(t)\) of them are _not_ red, and \(R(t)\) of the remaining ones are _not_ uncoloured (that is, are green). It follows then that \[\mathbb{P}(\mathcal{A}^{a}_{t+1}) = 1-\left(\frac{X(t)}{n}\right)^{k}\] \[\mathbb{P}(\mathcal{A}^{b}_{t+1}) = \left(\frac{X(t)}{n}\right)^{k}-\left(\frac{X(t)-R(t)}{n}\right)^{k}\] \[\mathbb{P}(\mathcal{A}^{c}_{t+1}) = \left(\frac{X(t)-R(t)}{n}\right)^{k}-\left(\frac{R(t)}{n}\right)^{k}\] \[\mathbb{P}(\mathcal{A}^{d}_{t+1}) = \left(\frac{R(t)}{n}\right)^{k}.\] Since \(X(t)\) increases by \(2\) when \(\mathcal{A}^{a}_{t+1}\) or \(\mathcal{A}^{b}_{t+1}\) occurs, and does not change otherwise, we get that \[\mathbb{E}[X(t+1)-X(t)\ |\ H_{t}]=2\cdot\left(1-\left(\frac{X(t)-R(t)}{n}\right)^{k}\right)+\mathcal{O}(1/n). \tag{7}\] The term \(\mathcal{O}(1/n)\) corresponds to the probability that \(v_{t+1}\) is the same as \(u_{t+1}\) (in Case (a)) or the same as \(y\) (in Case (b)). The analysis of \(R(t)\) is slightly more complicated. If \(\mathcal{A}^{a}_{t+1}\) occurs, then two vertices in \(U_{t}\) become saturated after the augmentation. Since the endpoints of the set of green edges (those with circles) are uniformly distributed in \(U_{t}\), the expected number of green edges incident with at least one of the two vertices is equal to \(2R(t)/(n-X(t))\). The other endpoints of these green edges become uncoloured from green after the augmentation which, in turn, forces their mates to become uncoloured from red. If \(\mathcal{A}^{b}_{t+1}\) occurs, then the situation is similar, except that \(u_{t+1}\) is first uncoloured from red and its mate is uncoloured from green. If \(\mathcal{A}^{c}_{t+1}\) occurs, then a new green vertex is created which, in turn, makes its mate red. Finally, if \(\mathcal{A}^{d}_{t+1}\) occurs, then there is no change to \(R(t)\).
It follows that \[\mathbb{E}[R(t+1)-R(t)\ |\ H_{t}] = \mathbb{P}(\mathcal{A}^{a}_{t+1})\cdot\left(-\frac{2R(t)}{n-X(t) }\right)+\mathbb{P}(\mathcal{A}^{b}_{t+1})\cdot\left(-1-\frac{2(R(t)-1)}{n-X( t)}\right)\] \[+\mathbb{P}(\mathcal{A}^{c}_{t+1})+\mathcal{O}(1/n)\] \[= -\left(\mathbb{P}(\mathcal{A}^{a}_{t+1})+\mathbb{P}(\mathcal{A}^ {b}_{t+1})\right)\cdot\frac{2R(t)}{n-X(t)}-\mathbb{P}(\mathcal{A}^{b}_{t+1})+ \mathbb{P}(\mathcal{A}^{c}_{t+1})+\mathcal{O}(1/n)\] \[= -\left(1-\left(\frac{X(t)-R(t)}{n}\right)^{k}\right)\cdot\frac{2R(t)} {n-X(t)}\] \[-\left(\frac{X(t)}{n}\right)^{k}+2\left(\frac{X(t)-R(t)}{n}\right) ^{k}-\left(\frac{R(t)}{n}\right)^{k}+\mathcal{O}(1/n). \tag{8}\] By writing \(x(s)=X(sn)/n\) and \(r(s)=R(sn)/n\), we have that \[x^{\prime} = 2(1-(x-r)^{k}),\] \[r^{\prime} = \frac{-2(1-(x-r)^{k})r}{1-x}-x^{k}+2(x-r)^{k}-r^{k}, \tag{9}\] with the initial conditions \(x(0)=0\) and \(r(0)=0\). Let us now check that the assumptions of the DEs method are satisfied. Let \(\varepsilon>0\) be an arbitrarily small constant. Note that the right hand sides of (9) are continuous, bounded, and Lipschitz in the connected open set \[\mathcal{D}=\{(s,x,r):-\varepsilon<s<2,-\varepsilon<x<1-\varepsilon/3,- \varepsilon<r<1+\varepsilon\},\] which contains the point \((0,x(0),r(0))=(0,0,0)\). There is no error in the 'Initial Condition' so it holds with any \(\lambda=\Omega(\delta)\). The 'Trend Hypothesis' holds with \(\delta=\mathcal{O}(1/n)\) (see (7) and (8)) so any \(\lambda=\Omega(1/n)\) works. Trivially, \(|X(t+1)-X(t)|\leq 2\) for every \(t\leq T_{\mathcal{D}}\). To estimate \(|R(t+1)-R(t)|\), first note that for any unsaturated vertex, the expected number of green vertices that are adjacent to it is equal to \(R(t)/|U_{t}|=R(t)/(n-X(t))\leq 1/(\varepsilon/3)=\mathcal{O}(1)\). It follows from Chernoff's bound that with probability \(\mathcal{O}(n^{-2})\), for any \(1\leq t\leq T_{\mathcal{D}}\leq 2n\) we have \(|R(t+1)-R(t)|\geq(\log n)^{2}\). Hence, the 'Boundedness Hypothesis' holds with \(\gamma=\mathcal{O}(n^{-2})\) and \(\beta=(\log n)^{2}\). It follows from Theorem 2.1, applied with \(\lambda=n^{-1/4}\), \(\gamma=\mathcal{O}(n^{-2})\) and \(\beta=(\log n)^{2}\), that the differential equations (9) with the given initial conditions have a unique solution that can be extended arbitrarily close to the boundary of \(\mathcal{D}\) and, more importantly, a.a.s. for every \(t\) such that \(t/n<\sigma\), where \(\sigma\) is the supremum of \(s\) where \(x(s)\leq 1-\varepsilon/2\) and \(s<2\), \[\max\Big{\{}|X(t)-x(t/n)n|,|R(t)-r(t/n)n|\Big{\}}=\mathcal{O}(\lambda n)=o(n).\] Numerical calculations show that \(x(s)\) reaches \(1-\varepsilon/2\) before \(s\) reaches \(2\). This gives us a bound (that holds a.a.s.) for the number of steps for the process to reach at most \(\varepsilon n\) unsaturated vertices and the clean-up algorithm can deal with the rest. ### Large value of \(k\) Let us show that \(\tau_{\mathtt{PM}}(k)\to 1/2\) as \(k\to\infty\). **Theorem 4.2**.: _The following bounds hold:_ \[\frac{1}{2}\leq\tau_{\mathtt{PM}}(k)\leq\frac{1}{2}\Big{(}1+\mathcal{O}( \sqrt{\log k/k})\Big{)}.\] Proof.: The lower bound is trivial. Deterministically, a perfect matching cannot be created in less than \(n/2\) rounds. Let us now move to the upper bound. During the first phase that lasts for \(n/2\) steps, we will consider the following greedy algorithm. 
If at least one square lands on an unsaturated vertex, then a partial matching is extended; otherwise, an edge that is created at this step is simply ignored and will not be used in the process of constructing a perfect matching. Let \(X(t)\) be the number of unsaturated vertices at time \(t\). We will show that a.a.s. after \(n/2\) steps, all but at most \(\log k/k\) fraction of vertices are saturated, that is, a.a.s. \(X(n/2)\leq n\log k/k\). For a contradiction, suppose that \(X(n/2)>n\log k/k\). It implies that at any step \(t\), \(1\leq t\leq n/2\), \(X(t)\geq X(n/2)>n\log k/k\) and so a partial matching cannot be extended at time \(t\) with probability \[\left(1-\frac{X(t)}{n}\right)^{k}\leq\exp\left(-\frac{kX(t)}{n}\right)<\exp(- \log k)=1/k.\] Hence, we expect at most \(n/(2k)\) steps failing to extend the matching and so a.a.s. at most \(n/(2k)+o(n)\) steps do that by Chernoff's bound. We get that a.a.s. \[X(n/2)\leq 2\cdot\Big{(}\frac{n}{2k}+o(n)\Big{)}=n/k+o(n)\leq n\log k/k\] for \(k\geq 3\). (The upper bound trivially holds for \(k=1\) and \(k=2\) by adjusting the constant in \(\mathcal{O}()\) notation, if needed.) The desired contradiction implies that a.a.s. at the end of the first phase, there are at most \(n\log k/k\) unsaturated vertices. During the second phase, the clean-up algorithm analyzed in [14] can be used to finish the job and to saturate the remaining \(\varepsilon=\varepsilon(k)=\log k/k\) fraction of vertices. It was proved in [14] that a.a.s. this algorithm takes at most \(100\sqrt{\varepsilon}n=\mathcal{O}(n\sqrt{\log k/k})\) steps, which finishes the proof of the theorem. ## 5. Hamiltonian Cycles In this section, we concentrate on another classical property, namely, the property of having a Hamiltonian cycle, which we denote by HAM. It is known that a.a.s. the 3-out process we discussed in the previous section is Hamiltonian [6]. As already mentioned earlier, the semi-random process can be coupled with the \(\ell\)-out process [4] (for any \(\ell\in\mathbb{N}\)) and so we get that \(\tau_{\texttt{HAM}}\leq 3\). A new upper bound was obtained in [12] in terms of an optimal solution to an optimization problem whose value is believed to be \(2.61135\) by numerical support. The upper bound on \(\tau_{\texttt{HAM}}\) of 3 obtained by simulating the 3-out process is _non-adaptive_. That is, the strategy does _not_ depend on the history of the semi-random process. The above mentioned improvement proposed in [12] uses an adaptive strategy but in a weak sense. The strategy consists of 4 phases, each lasting a linear number of rounds, and the strategy is adjusted _only_ at the end of each phase (for example, the player might identify vertices of low degree, and then focus on connecting circles to them during the next phase). In [13], a fully adaptive strategy was proposed that pays attention to the graph \(G_{t}\) and the position of \(u_{t}\) for every single step \(t\). As expected, such a strategy creates a Hamiltonian cycle substantially faster than the weakly adaptive or non-adaptive strategies, and it allows to improve the upper bound from \(2.61135\) to \(2.01678\). One more trick was observed recently which further improves an upper bound to \(1.84887\)[9]. After combining all the ideas together, the currently best upper bound is equal to \(1.81701\)[10]. In this paper, we adjust a slightly easier version of the algorithm from [9] to deal with the \(k\)-process and present the corresponding upper bounds (see Subsection 5.1). 
Let us now move to the lower bounds. As observed in the initial paper introducing the semi-random process [4], if \(G_{t}\) has a Hamiltonian cycle, then \(G_{t}\) has minimum degree at least 2. Thus, \(\tau_{\texttt{HAM}}\geq\tau_{\mathcal{P}_{2}}=\ln 2+\ln(1+\ln 2)\geq 1.21973\), where \(\mathcal{P}_{2}\) corresponds to the property of having the minimum degree at least 2--see Section 3. In [12], the lower bound mentioned above was shown to not be tight. The lower bound was increased by \(\varepsilon=10^{-8}\) and so numerically negligible. A better bound was obtained in [13] (see also [10]) and now we know that \(\tau_{\texttt{HAM}}\geq 1.26575\). Adjusting the lower bound from [13] seems challenging and technical so we only report trivial lower bounds using the results from Section 3: \(\tau_{\texttt{HAM}}(k)\geq\tau_{\mathcal{P}_{2}}(k)\). The gap between the upper and lower bounds gets small as \(k\) gets large. In fact, \(\tau_{\texttt{HAM}}(k)\to 1\) as \(k\to\infty\) (see Subsection 5.2). ### Upper bound for \(\tau_{\texttt{HAM}}(k)\) In this subsection, we adjust a slightly easier version of the algorithm from [9] to deal with the \(k\)-process for any \(k\in\mathbb{N}\). For \(k=1\), it yields a bound of \(1.87230\) which is slightly worse than the one reported in [9] (\(1.84887\)) and in [10] (\(1.81696\)) but is easier to analyze. For \(k\geq 2\) the difference would be even smaller, but with substantially larger effort one may do it. The algorithm builds a path that eventually becomes a Hamiltonian path and then it is turned into a Hamiltonian cycle. Let \(X(t)\) be the number of vertices that belong to the path \(P_{t}\) that is present at time \(t\). Some of the vertices outside of \(P_{t}\) will be matched with each other and will form a **matching** (a family of independent edges). Let \(Y(t)\) be the number of vertices outside of \(P_{t}\) that are **matched**. The remaining vertices (not on the path \(P_{t}\) nor matched) are **unsaturated**. Let \(U(t)\) be the number of unsaturated vertices. It is convenient to colour some of our vertices and edges red. Vertices on the path \(P_{t}\) are **red** if they are adjacent to precisely one red edge of \(G_{t}\) (this edge will not belong to \(P_{t}\)). Let \(R(t)\) be the number of red vertices. It will be useful to maintain the property that no two red vertices are at the path distance less than 3 from each other. Assume then that this property is satisfied at time \(t\). Clearly, there are \(2R(t)+\mathcal{O}(1)\) vertices that are at distance 1 from the set of red vertices; we colour such vertices **green**. (Note that it is possible that one or both of the endpoints of the path are red and such vertices have only one neighbour on the path. This explains additional \(\mathcal{O}(1)\) term.) Moreover, there are at most \(2R(t)\) vertices that are at distance 2 from the set of red vertices; we call them **useless**. To simplify the analysis, we arbitrarily select more vertices on the path that are not red nor green and call them useless so that there are precisely \(2R(t)\) useless vertices. Vertices on \(P_{t}\) that are not coloured nor useless are called **permissible**. Note that in each step of the process the set of vertices is partitioned into 6 sets: red vertices, green vertices, useless vertices, permissible vertices, matched vertices, and unsaturated vertices. 
We will track the length of the path \(P_{t}\) (random variable \(X(t)\)), the number of red vertices (random variable \(R(t)\)) and the number of matched vertices (random variable \(Y(t)\)). By design, the number of green and useless vertices are equal to \(2R(t)+\mathcal{O}(1)\) and, respectively, \(2R(t)\) so there is no need to track them. Similarly, the number of permissible vertices is equal to \(X(t)-5R(t)+\mathcal{O}(1)\). Finally, the number of unsaturated vertices is equal to \(n-X(t)-Y(t)\). Suppose that at time \(t\), \(k\) squares land on vertices \(u_{t}^{1},\ldots,u_{t}^{k}\). We consider a few cases. The algorithm performs the first case that holds. (a) At least one square lands on an unsaturated vertex. We arbitrarily select one of them to be \(u_{t}\) and let \(v_{t}\) be a uniformly random unsaturated vertex. We extend the partial matching by adding an edge \(u_{t}v_{t}\) we just created to it. (b) At least one square lands on a matched vertex. We arbitrarily select one of the matched vertices to be \(u_{t}\) and let \(v_{t}\) be one of the two endpoints of the path. We greedily extend \(P_{t-1}\) by adding \(v_{t}u_{t}\) and the edge containing \(u_{t}\) from the matching to the path. If some red vertex \(x\) is adjacent to either of the two absorbed vertices by a red edge, then we uncolour this red edge and uncolour \(x\). This, in turn, uncolours green neighbours of \(x\). (c) At least one square lands on a green vertex. We arbitrarily select one of these green vertices to be \(u_{t}\) and let \(y\) be the unique red neighbour of \(u_{t}\). We augment \(P_{t-1}\) via the unique red edge \(yz\). If \(z\) is unsaturated (sub-case (c')), then we let \(v_{t}=z\), add edges \(u_{t}v_{t}=u_{t}z\), \(v_{t}y=zy\), and remove edge \(u_{t}y\) from \(P_{t-1}\) to form \(P_{t}\). On the other hand, if \(z\) is matched to vertex \(q\) (sub-case (c")), then we let \(v_{t}=q\), add edges \(u_{t}v_{t}=u_{t}q\), \(qz=v_{t}z\), \(zy\), and remove edge \(u_{t}y\). If some red vertex \(x\) is adjacent to the absorbed vertex \(z\) (or the absorbed vertices \(z\) and \(q\) in the second sub-case) by a red edge, then we uncolour this red edge and uncolour \(x\). As before, this uncolours green neighbours of \(x\). (d) At least one square lands on a permissible vertex. We arbitrarily select one of these vertices to be \(u_{t}\). Then, we choose \(v_{t}\) uniformly at random amongst matched and unsaturated vertices, and colour \(u_{t}v_{t}\) red. This case creates one red vertex, namely, vertex \(u_{t}\), and two green vertices (or one if \(u_{t}\) is one of the endpoints of the path). (e) All squares land on useless or red vertices. In this case, we choose \(v_{t}\) arbitrarily and interpret the algorithm as passing on this round, meaning the edge \(u_{t}v_{t}\) will not be used to construct a Hamiltonian cycle. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & lower bound & upper bound & & lower bound & upper bound \\ \hline \(k=1\) & 1.21974 (1.26575) & 1.87230 (1.81701) & \(k=6\) & 1.05075 & 1.13325 \\ \(k=2\) & 1.12498 & 1.39618 & \(k=7\) & 1.04426 & 1.11534 \\ \(k=3\) & 1.09081 & 1.26077 & \(k=8\) & 1.03924 & 1.10180 \\ \(k=4\) & 1.07184 & 1.19615 & \(k=9\) & 1.03525 & 1.09115 \\ \(k=5\) & 1.05947 & 1.15827 & \(k=10\) & 1.03199 & 1.08254 \\ \hline \end{tabular} \end{table} Table 3. Hamilton Cycles—numerical upper and lower bounds of \(\tau_{\tt HAM}(k)\) for \(1\leq k\leq 10\). Stronger upper and lower bounds for \(k=1\) follow from [10] and [13] respectively. 
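To make the priority order of the five cases concrete, here is a minimal Python sketch (not part of the original analysis; the function name, the representation of the vertex classes by their sizes only, and the merging of useless and red vertices into one lowest-priority class are our choices) that determines which case a single round falls into.

```python
import random

def which_case(k, n_unsat, n_matched, n_green, n_permissible, n_other):
    """Return which of the cases (a)-(e) of the Hamiltonian-cycle strategy applies
    when k squares land uniformly at random; the six vertex classes are represented
    only by their sizes, with n_other counting useless and red vertices together."""
    n = n_unsat + n_matched + n_green + n_permissible + n_other
    # class boundaries listed in the strategy's priority order (a) < (b) < (c) < (d) < (e)
    boundaries = [n_unsat,
                  n_unsat + n_matched,
                  n_unsat + n_matched + n_green,
                  n_unsat + n_matched + n_green + n_permissible,
                  n]
    labels = ['a', 'b', 'c', 'd', 'e']
    best = 'e'
    for _ in range(k):
        v = random.randrange(n)                 # a square lands on vertex v
        for b, lab in zip(boundaries, labels):
            if v < b:                           # class of the landed vertex
                best = min(best, lab)           # keep the highest-priority case seen
                break
    return best
```

Sampling `which_case` repeatedly with the class sizes implied by \((X(t),Y(t),R(t))\) reproduces, up to the \(\mathcal{O}(1)\) corrections, the case probabilities \(\mathbb{P}(\mathcal{A}_{t+1}^{a}),\ldots,\mathbb{P}(\mathcal{A}_{t+1}^{e})\) derived below.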
The analysis of the above algorithm is the main ingredient of the proof of the next result. Let us note that one may consider different orders of the above five cases yielding 120 greedy algorithms. We selected the order that gives the strongest upper bound. **Theorem 5.1**.: _Let \(k\in\mathbb{N}\). Then, \(\tau_{\texttt{HAM}}(k)\leq u_{k}\), where \(u_{k}\in[1,3)\) is a constant that is derived from a system of differential equations. The numerical bounds for \(1\leq k\leq 10\) are presented in Table 3._ Proof.: Let \(\mathcal{A}_{t+1}^{i}\) be the event that Case (i) occurred at step \(t+1\). It follows that \[\mathbb{P}(\mathcal{A}_{t+1}^{a}) = 1-\left(\frac{X(t)+Y(t)}{n}\right)^{k}\] \[\mathbb{P}(\mathcal{A}_{t+1}^{b}) = \left(\frac{X(t)+Y(t)}{n}\right)^{k}-\left(\frac{X(t)}{n}\right)^{k}\] \[\mathbb{P}(\mathcal{A}_{t+1}^{c}) = \left(\frac{X(t)}{n}\right)^{k}-\left(\frac{X(t)-2R(t)+\mathcal{O}(1)}{n}\right)^{k}\] \[\mathbb{P}(\mathcal{A}_{t+1}^{d}) = \left(\frac{X(t)-2R(t)+\mathcal{O}(1)}{n}\right)^{k}-\left(\frac{3R(t)}{n}\right)^{k}\] \[\mathbb{P}(\mathcal{A}_{t+1}^{e}) = \left(\frac{3R(t)}{n}\right)^{k}.\] We first need to estimate the expected change in the three random variables we track. Let us denote \(H_{t}=(X(i),R(i),Y(i))_{0\leq i\leq t}\). Note that \(H_{t}\) does _not_ encompass the entire history of the random process after \(t\) rounds (that is, \(G_{0},\ldots,G_{t}\)). This deferred information exposure permits a tractable analysis of the random positioning of \(v_{t}\) when \(u_{t}\) is red. In particular, as we only expose \(R(t)\) instead of the exact locations of the red edges, their endpoints are random vertices in \([n]\setminus V(P_{t})\). Similarly, as we only expose \(Y(t)\) instead of the exact locations of the edges that form a matching, these edges have the same distribution (conditional on \(H_{t}\)) as first exposing the set of vertices in \([n]\setminus V(P_{t})\), then uniformly selecting a subset of vertices in \([n]\setminus V(P_{t})\) of cardinality \(Y(t)\), and then finally taking a uniformly random perfect matching over the \(Y(t)\) vertices (that is, pair the \(Y(t)\) vertices into \(Y(t)/2\) disjoint edges of the matching). We observe the following expected difference equations. Let us start from \(X(t)\), which is the easiest to deal with. \(X(t)\) changes only when case (b) or case (c) occurs; it increases by 2 in case (b) and increases by 1 or 2 in case (c). Conditioning on case (c) occurring, since the endpoint of the red edge we augment via is a random vertex in \([n]\setminus V(P_{t})\), sub-case (c') occurs with probability \((n-X(t)-Y(t))/(n-X(t))\) and sub-case (c") occurs with probability \(Y(t)/(n-X(t))\)--we expect to absorb \(1+Y(t)/(n-X(t))\) vertices. We get that \[\mathbb{E}[X(t+1)-X(t)\mid H_{t}]=2\cdot\mathbb{P}(\mathcal{A}_{t+1}^{b})+\left(1+\frac{Y(t)}{n-X(t)}\right)\cdot\mathbb{P}(\mathcal{A}_{t+1}^{c}). \tag{10}\] Investigating the behaviour of \(Y(t)\) is also relatively easy to do. \(Y(t)\) increases by \(2\) in case (a) and decreases by \(2\) in case (b). In case (c), it may decrease by \(2\) but only when the endpoint of a red edge is matched (sub-case (c")). We get that \[\mathbb{E}[Y(t+1)-Y(t)\mid H_{t}]=2\cdot\mathbb{P}(\mathcal{A}_{t+1}^{a})-2\cdot\mathbb{P}(\mathcal{A}_{t+1}^{b})-2\cdot\mathbb{P}(\mathcal{A}_{t+1}^{c})\cdot\frac{Y(t)}{n-X(t)}. \tag{11}\] The most challenging part is to understand the behaviour of \(R(t)\). The contribution to the expected change comes from two sources. 
In case (c) we augment via a red edge so one red vertex gets uncoloured and in case (d) we create one red vertex. The second source is associated with the fact that when one or two vertices get absorbed into the path all red edges incident to them get uncoloured. The expected number of vertices that get absorbed is already computed in (10). Each vertex that gets absorbed uncolours \((R(t)+\mathcal{O}(1))/(n-X(t))\) red vertices. We get that \[\mathbb{E}[R(t+1)-R(t)\mid H_{t}] = -\mathbb{P}(\mathcal{A}_{t+1}^{c})+\mathbb{P}(\mathcal{A}_{t+1}^ {d}) \tag{12}\] \[-\left(2\cdot\mathbb{P}(\mathcal{A}_{t+1}^{b})+\left(1+\frac{Y(t )}{n-X(t)}\right)\cdot\mathbb{P}(\mathcal{A}_{t+1}^{c})\right)\cdot\frac{R(t)+ \mathcal{O}(1)}{n-X(t)}\] \[= -\frac{2R(t)}{n-X(t)}\cdot\mathbb{P}(\mathcal{A}_{t+1}^{b})\] \[-\left(\frac{(n-X(t)+Y(t))\cdot(R(t)+\mathcal{O}(1))}{(n-X(t))^{ 2}}+1\right)\cdot\mathbb{P}(\mathcal{A}_{t+1}^{c})+\mathbb{P}(\mathcal{A}_{t+ 1}^{d}).\] After rescaling, (10), (11), and (12), we get the following set of DEs: \[x^{\prime} = 2\Big{(}(x+y)^{k}-x^{k}\Big{)}+\left(1+\frac{y}{1-x}\right) \Big{(}x^{k}-(x-2r)^{k}\Big{)}\] \[y^{\prime} = 2\Big{(}1-(x+y)^{k}\Big{)}-2\Big{(}(x+y)^{k}-x^{k}\Big{)}-2 \Big{(}x^{k}-(x-2r)^{k}\Big{)}\frac{y}{1-x}\] \[r^{\prime} = -\frac{2r}{1-x}\Big{(}(x+y)^{k}-x^{k}\Big{)}-\left(\frac{(1-x+y)r }{(1-x)^{2}}+1\right)\Big{(}x^{k}-(x-2r)^{k}\Big{)} \tag{13}\] \[+\Big{(}(x-2r)^{k}-(3r)^{k}\Big{)},\] with the initial conditions \(x(0)=0\), \(y(0)=0\), and \(r(0)=0\). As usual, we need to check that the assumptions of the DEs method are satisfied. Let \(\varepsilon>0\) be an arbitrarily small constant. Initially, \(X(0)=Y(0)=R(0)=0\) so the 'Initial Condition' trivially holds. The right hand sides of all equations in (13) is continuous, bounded, and Lipschitz in the connected open set \[\mathcal{D}_{\varepsilon}=\{(s,x,y,r):-1<s<3,-1<x<1-\varepsilon,-1<y,r<2\},\] which contains the point \((0,0,0,0)\). (Note that we need to restrict the interval for \(x\) due to a singularity point \(x=1\).) Define \[T_{\mathcal{D}_{\varepsilon}}=\min\{t\geq 0:\ (t/n,X(t)/n,Y(t)/n,R(t)/n)\notin \mathcal{D}_{\varepsilon}\}.\] The 'Trend Hypothesis' holds with \(\delta=\mathcal{O}(1/n)\). The 'Boundedness Hypothesis' requires more investigation. Random variables \(X(t)\) and \(Y(t)\) can change by at most \(2\) in each round. To estimate the maximum change for the random variable \(R(t)\), we need to upper bound the number of red edges adjacent to any unsaturated or matched vertex \(v\). Observe that at any step \(t\leq 3n\), since we have assumed that there are at least \(\varepsilon n\) unsaturated or matched vertices, the number of red edges adjacent to \(v\) is stochastically upper bounded by \(\operatorname{Bin}(3n,1/(\varepsilon n))\) with expectation \(3/\varepsilon\). It follows immediately from Chernoff's bound (1) that with probability \(1-\mathcal{O}(n^{-3})\), the number of red vertices adjacent to \(v\) is at most \(\beta=\mathcal{O}(\log n)\). Hence, the 'Boundedness Hypothesis' holds with probability at least \(1-\gamma\) with \(\gamma=\mathcal{O}(n^{-1})\) by taking the union bound over all \(3n^{2}\) vertices and steps. We conclude, based on Theorem 2.1, that for every \(\tau>0\), a.a.s. 
for any \(0\leq t\leq(\sigma(\varepsilon)-\tau)n\), \[\max\Big{\{}|X(t)-x(t/n)n|,|Y(t)-y(t/n)n|,|R(t)-r(t/n)n|\Big{\}}=\mathcal{O}(\lambda n)=o(n),\] where \(x,y,r\) are the unique solutions of the above DEs satisfying the desired initial conditions, and \(\sigma(\varepsilon)\) is the supremum of \(s\) to which the solution can be extended before reaching the boundary of \(\mathcal{D}_{\varepsilon}\). As \(\mathcal{D}_{\varepsilon}\subseteq\mathcal{D}_{\varepsilon^{\prime}}\) for every \(\varepsilon>\varepsilon^{\prime}>0\), \(\sigma(\varepsilon)\) is monotonically nondecreasing as \(\varepsilon\to 0\). Thus, \[u_{k}:=\lim_{\varepsilon\to 0+}\sigma(\varepsilon)\] exists. It is obvious that \(|Y(t)/n|\) and \(|R(t)/n|\) are both bounded by \(1\) for all \(t\) and thus, when \(t/n\) approaches \(u_{k}\), either \(X(t)/n\) approaches \(1\) or \(t/n\) approaches \(3\). It follows that a.a.s. either \(X(t)>(1-\varepsilon)n\) for all \(t\geq(u_{k}-\delta)n\) or \(u_{k}=3\). The above DEs do not have an analytical solution but numerical solutions show that \(u_{k}\leq u_{1}<1.87230\). Hence, by the end of the execution of the algorithm, there are \(\varepsilon n\) unsaturated or matched vertices remaining for some \(\varepsilon=o(1)\). The clean-up algorithm analyzed in [13] (see also [10]) absorbs the remaining \(\varepsilon n=o(n)\) vertices into the path to form a Hamiltonian path, after which a Hamiltonian cycle is constructed. The whole procedure takes \(\mathcal{O}(\sqrt{\varepsilon}n+n^{3/4}\log^{2}n)=o(n)\) further steps, which finishes the proof of the theorem. ### Large value of \(k\) In this subsection, we show that \(\tau_{\mathtt{HAM}}(k)\to 1\) as \(k\to\infty\). **Theorem 5.2**.: _The following bounds hold:_ \[1\leq\tau_{\mathtt{HAM}}(k)\leq 1+\mathcal{O}(\sqrt{\log k/k}).\] Proof.: The proof is almost the same as the proof of Theorem 4.2. The lower bound is trivial: one cannot create a Hamilton cycle in less than \(n\) rounds. As before, during the first phase that lasts for \(n\) steps, a greedy algorithm is used that extends a path whenever at least one square lands on a vertex that is not on the path. At the end of this phase, a.a.s. a path of length at least \(n-n\log k/k\) is created. During the second phase, another clean-up algorithm can be used (analyzed in [13]) to finish the job and to absorb the remaining \(\varepsilon=\varepsilon(k)=\log k/k\) fraction of vertices. It was proved in [13] that a.a.s. this algorithm takes \(\mathcal{O}(\sqrt{\varepsilon}n)=\mathcal{O}(n\sqrt{\log k/k})\) steps, which finishes the proof of the theorem.
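The constants \(u_{k}\) in Theorems 4.1 and 5.1 are obtained by integrating the systems (9) and (13) until \(x\) gets close to \(1\). As an illustration only (the published bounds were computed with more careful numerics), here is a minimal forward-Euler sketch in Python for the Hamiltonian-cycle system (13); the function name, the stopping threshold and the step size are our choices, not part of the paper.

```python
def ham_upper_bound(k, stop=1e-3, h=1e-6):
    """Integrate the system (13) by naive forward Euler, starting from
    x(0) = y(0) = r(0) = 0, and return the time s at which x(s) first
    exceeds 1 - stop.  This approximates the constant u_k of Theorem 5.1."""
    x = y = r = s = 0.0
    while x < 1.0 - stop:
        a = (x + y) ** k - x ** k            # limiting probability of case (b)
        b = x ** k - (x - 2 * r) ** k        # limiting probability of case (c)
        c = (x - 2 * r) ** k - (3 * r) ** k  # limiting probability of case (d)
        dx = 2 * a + (1 + y / (1 - x)) * b
        dy = 2 * (1 - (x + y) ** k) - 2 * a - 2 * b * y / (1 - x)
        dr = -2 * r * a / (1 - x) - ((1 - x + y) * r / (1 - x) ** 2 + 1) * b + c
        x, y, r, s = x + h * dx, y + h * dy, r + h * dr, s + h
    return s

# e.g. ham_upper_bound(2) should land near the upper bound 1.39618 reported in
# Table 3; the exact value depends on the stopping threshold and step size.
```

The perfect-matching system (9) can be integrated in exactly the same way to approximate the upper bounds of Table 2.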
2308.06000
Superconducting TSV contact for cryoelectronic devices
This work focuses on the fabrication of niobium through-silicon via (TSV) superconducting interconnects. The effect of a supercycle of sequential oxidation and chemical etching on the through-etch wall quality was investigated. It was experimentally shown that the use of the supercycle in the fabrication process leads to a significant improvement of the TSV wall quality and to the removal of scallop-type defects. After 12 repetitions of the supercycle, a dissipative bonding of superconducting strips on the front and back side of the sample is observed. The critical current density of such a coupling is 5x10^4 A/cm^2. The critical ratio of substrate thickness to hole diameter at which electrical coupling is formed is 3:1.
Ivan Filippov, Alexandr Anikanov, Aleksandr Rykov, Alexander Mumlyakov, Maksim Shibalov, Igor Trofimov, Nikolay Porokhov, Yuriy Anufriev, Michael Tarkhov
2023-08-11T08:07:11Z
http://arxiv.org/abs/2308.06000v1
# Superconducting TSV contact for cryoelectronic devices ###### Abstract This work focuses on the fabrication of niobium through-silicon via (TSV) superconducting interconnects. The effect of a supercycle of sequential oxidation and chemical etching on the through-etch wall quality was investigated. It was experimentally shown that the use of the supercycle in the fabrication process leads to a significant improvement of the TSV wall quality and to the removal of scallop-type defects. After 12 repetitions of the supercycle, a dissipative bonding of superconducting strips on the front and back side of the sample is observed. The critical current density of such a coupling is 5x10\({}^{4}\) A/cm\({}^{2}\). The critical ratio of substrate thickness to hole diameter at which electrical coupling is formed is 3:1. _Keywords: superconducting interconnects, sputter deposition, deep reactive ion etching, Bosch process, niobium, cryo-electronics, superconductors_ ## 1 Introduction Interest in superconducting quantum computing has increased dramatically in recent years, mostly due to the potential of increasing the processing power of such devices in connection with classical processors [1]. Qubit processors have demonstrated the basis of quantum error correction protocols, elementary quantum algorithms and simulations [2]. Universal gate operations are performed with 99.9% accuracy for single qubits and 99.5% accuracy for two-qubit gates. The use of optimised parametric amplification allows non-destructive qubit measurements with more than 99% confidence [3]. The coherence time of qubits is constantly increasing and has reached 150 \(\upmu\)s [4]. At the same time, the high-speed classical control electronics required for real-time feedback also continue to evolve rapidly. Designing and fabricating large scale superconducting circuits with addressable, low noise cross-talk switching for all circuit elements is a complex task. Superconducting qubits are sensitive to fabrication defects, which limits the yield and reproducibility of the final device. Both aspects require optimised device and process design. Modern superconducting applications and devices require an increase in the integration density to combine them with silicon CMOS technology, which is mostly used as the controlling part. To achieve this coupling, through-silicon-via integrated packaging techniques are being developed, which allow multiple layers of crystals to be logically interconnected. A similar technology has long been used in CMOS devices [5]. There are different technological approaches for forming layer-to-layer interconnects. One approach suggests fabricating 300 \(\upmu\)m deep channels using aluminium with an aspect ratio of 6:1, to interconnect a CMOS crystal with a qubit layer [6]. Materials like Al, TiN, Nb and Nb alloys (NbN, NbTi and NbTiN) are investigated to fabricate superconducting TSVs with different channel shapes. The best result was achieved using Al/TiN layers, with a single, sharp and hysteresis-free transition measured at 1.27 K [7]. Another approach, the use of hybrid flip-chip
2307.06685
How many digits are needed?
Let $X_1,X_2,...$ be the digits in the base-$q$ expansion of a random variable $X$ defined on $[0,1)$ where $q\ge2$ is an integer. For $n=1,2,...$, we study the probability distribution $P_n$ of the (scaled) remainder $T^n(X)=\sum_{k=n+1}^\infty X_k q^{n-k}$: If $X$ has an absolutely continuous CDF then $P_n$ converges in the total variation metric to the Lebesgue measure $\mu$ on the unit interval. Under weak smoothness conditions we establish first a coupling between $X$ and a non-negative integer valued random variable $N$ so that $T^N(X)$ follows $\mu$ and is independent of $(X_1,...,X_N)$, and second exponentially fast convergence of $P_n$ and its PDF $f_n$. We discuss how many digits are needed and show examples of our results. The convergence results are extended to the case of a multivariate random variable defined on a unit cube.
Ira W. Herbst, Jesper Møller, Anne Marie Svane
2023-07-13T11:14:05Z
http://arxiv.org/abs/2307.06685v5
# How many digits are needed? ###### Abstract Let \(X_{1},X_{2},...\) be the digits in the base-\(q\) expansion of a random variable \(X\) defined on \([0,1)\) where \(q\geq 2\) is an integer. For \(n=1,2,...\), we study the probability distribution \(P_{n}\) of the (scaled) remainder \(T^{n}(X)=\sum_{k=n+1}^{\infty}X_{k}q^{n-k}\): If \(X\) has an absolutely continuous CDF then \(P_{n}\) converges in the total variation metric to Lebesgue measure \(\mu\) on the unit interval. Under weak smoothness conditions we establish first a coupling between \(X\) and a non-negative integer valued random variable \(N\) so that \(T^{N}(X)\) follows \(\mu\) and is independent of \((X_{1},...,X_{N})\), and second exponentially fast convergence of \(P_{n}\) and its PDF \(f_{n}\). We discuss how many digits are needed and show examples of our results. The convergence results are extended to the case of a multivariate random variable defined on a unit cube. _Keywords:_ asymptotic distribution; coupling; exponential convergence rate; extended Newcomb-Benford law; multivariate digit expansion; remainder of a digit expansion; total variation distance; uniform distribution _2020 Mathematics Subject Classification:_ 60F25; 62E17; 37A50 ## 1 Introduction Let \(X\) be a random variable so that \(0\leq X<1\), and for \(x\in\mathbb{R}\), let \(F(x)=\mathrm{P}(X\leq x)\) be the cumulative distribution function (CDF) of \(X\). For a given integer \(q\geq 2\), we consider the base-\(q\) transformation \(T:[0,1)\mapsto[0,1)\) given by \[T(x)=xq-\lfloor xq\rfloor \tag{1}\] where \(\lfloor\cdot\rfloor\) is the floor function (so \(\lfloor xq\rfloor\) is the integer part of \(xq\)). For \(n=1,2,...\), let \(T^{n}=T\circ\cdots\circ T\) denote the composition of \(T\) with itself \(n\) times and define \[X_{n}=\lfloor T^{n-1}(X)q\rfloor \tag{2}\] where \(T^{0}(X)=X\). Then \[X=\sum_{n=1}^{\infty}X_{n}q^{-n} \tag{3}\] is the base-\(q\) expansion of \(X\) with digits \(X_{1},X_{2},...\). Note that \(X\) is in a one-to-one correspondence to the first \(n\) digits \((X_{1},...,X_{n})\) together with \(T^{n}(X)=\sum_{k=n+1}^{\infty}X_{k}q^{n-k}\), which is the remainder multiplied by \(q^{n}\). Let \(\mu\) denote Lebesgue measure on \([0,1)\), \(P_{n}\) the probability distribution of \(T^{n}(X)\) and \(F_{n}\) its CDF, so \(X\) follows \(P_{0}\) and has CDF \(F_{0}=F\). The following facts are well-known (see [4]): * \(P_{0}=P_{1}\) (i.e., invariance in distribution under \(T\)) is equivalent to stationarity of the process \(X_{1},X_{2},...\). * \(P_{0}=P_{1}\) and \(F\) is absolutely continuous if and only if \(P_{0}=\mu\). * \(P_{0}=\mu\) if and only if \(X_{1},X_{2},...\) are independent and uniformly distributed on \(\{0,1,...,q-1\}\). Items (a)-(c) together with the fact that \(T\) is ergodic with respect to \(\mu\) are used in metric number theory (see [8] and the references therein) to establish properties such as 'for Lebesgue almost all numbers between \(0\) and \(1\), the relative frequency of any finite combination of digits of a given length \(n\) and which occurs among the first \(m>n\) digits converges to \(q^{-n}\) as \(m\to\infty\)' (which is basically the definition of a normal number in base-\(q\), cf. [3]). To the best of our knowledge, less (or perhaps no) attention has been paid to the asymptotic behaviour of the (scaled) remainder \(T^{n}(X)\) as \(n\to\infty\). This paper fills this gap. By ergodicity (see e.g. 
[5]), since the event \(A=\{x\in[0,1)\,|\,\lim_{n\to\infty}F_{n}(x)=x\}\) is \(T\)-invariant, either \(\mu(A)=0\) or \(\mu(A)=1\), but we shall prove even more when \(F\) is absolutely continuous: We start in Section 2 to consider a special case of \(f\) where \(T^{n}(X)\) follows exactly \(\mu\) when \(n\) is sufficiently large. Then in Section 3, under a weak assumption on \(f\), we specify an interesting coupling construction involving a non-negative integer-valued random variable \(N\) so that \(T^{N}(X)\) follows exactly \(\mu\) and is independent of \((X_{1},...,X_{N})\). Moreover, in Section 4, we show that \(\lim_{n\to\infty}d_{\rm TV}(P_{n},\mu)=0\) where \(d_{\rm TV}\) is the total variation metric (as given later in (12)). Because of these results in Sections 2-4, in an experiment, if a realization of \(X\) is observed and the first \(n\) digits are kept, and if (so far) the only model assumption is absolute continuity of \(F\), then the remainder rescaled by \(q^{n}\) is at least approximately uniformly distributed when \(n\) is large. Thus it may be wise to increase \(n\) if it is found experimentally that the remainder is not distributed approximately uniformly. Furthermore, in Section 4 we study the convergence rate of \(d_{\rm TV}(P_{n},\mu)\) and other related properties. In Section 5, we illustrate our results from Sections 3 and 4 in connection to various specific choices of \(F\), including the case where \(F\) follows the extended Newcomb-Benford law (Example 1). Finally, in Section 6, we generalize our convergence results to the situation where \(X\) is extended to a multivariate random variable with values in the \(k\)-dimensional unit cube \([0,1)^{k}\) and each of the \(k\) coordinates of \(X\) is transformed by \(T\). We plan in a future paper to study the asymptotic behaviour of the remainder in other expansions, including a certain base-\(\beta\) expansion of a random variable, namely when \(q\) is replaced by \(\beta=(1+\sqrt{5})/2\) (the golden ratio) in all places above. ## 2 Preliminaries Let again the situation be as in (1)-(3). The following lemma is true in general (i.e., without assuming \(F\) is absolutely continuous). As in [4], we define a base-\(q\) fraction in \([0,1)\) to be a number of the form \(\sum_{k=1}^{n}j_{k}q^{-k}\) with \((j_{1},...,j_{n})\in\{0,1,...,q-1\}^{n}\) and \(n\in\mathbb{N}\). **Lemma 2.1**.: _If \(F\) has no jump at any base-\(q\) fraction in \([0,1)\) then for every \(x\in[0,1]\),_ \[F_{n}(x)=\sum_{j=0}^{q^{n}-1}F(q^{-n}(j+x))-F(q^{-n}j). \tag{4}\] Proof.: Clearly, (4) holds for \(x=1\), so let \(0\leq x<1\). For \(j_{1},...,j_{n}\in\{0,1,...,q-1\}\) and \(j=\sum_{i=1}^{n}j_{i}q^{n-i}\), the event that \(X_{1}=j_{1},...,X_{n}=j_{n}\), and \(T^{n}(X)\leq x\) is the same as the event that \(q^{-n}j\leq X<q^{-n}(j+1)\) and \(X\leq q^{-n}(j+x)\). Hence, since \(0\leq x<1\), \[F_{n}(x)=\sum_{j=0}^{q^{n}-1}\mathrm{P}(q^{-n}j\leq X\leq q^{-n}(j+x))\] whereby (4) follows since \(F(x)\) has no jumps at the base-\(q\) fractions. The property that \(F\) has no jump at any base-\(q\) fraction is of course satisfied when \(F\) is continuous. For the remainder of this section and the following Sections 3-5 we assume that \(X\) has a probability density function (PDF) \(f\) concentrated on \((0,1)\), meaning that \(F\) is absolutely continuous with \(F(x)=\int_{-\infty}^{x}f(t)\,\mathrm{d}t\) for all \(x\in\mathbb{R}\). 
Then, by (4), \(F_{n}\) is absolutely continuous with PDF \[f_{n}(x)=q^{-n}\sum_{j=0}^{q^{n}-1}f(q^{-n}(j+x)) \tag{5}\] for \(0<x<1\). In the following special case of \(f\), convergence of \(P_{n}\) is obtained within a finite number of steps. **Proposition 2.2**.: _Let \(m\geq 1\) be an integer. Then \(P_{m}=\mu\) (and hence \(P_{n}=\mu\) for \(n=m,m+1,...\)) if and only if for all \(k\in\{0,1,...,q^{m}-1\}\) and Lebesgue almost every \(u\in[0,1)\),_ \[f((k+u)q^{-m})=q^{m}\mathrm{P}\left(\sum_{i=1}^{m}X_{i}q^{m-i}=k\,\middle|\,T^{m}(X)=u\right). \tag{6}\] _In particular, if \(f\) is constant Lebesgue almost everywhere on each of the intervals \([jq^{-m},(j+1)q^{-m})\), \(j=0,1,...,q^{m}-1\), then for \(n=m,m+1,...\), \(P_{n}=\mu\) and \((X_{1},...,X_{n})\) is independent of \(T^{n}(X)\)._ Proof.: If \(P_{m}=\mu\) then by invariance of \(\mu\) under \(T\), \(P_{n}=\mu\) for \(n=m,m+1,...\). Let \(K=\sum_{i=1}^{m}X_{i}q^{m-i}\) and \(U=T^{m}(X)\), so \(X=(K+U)q^{-m}\). For Lebesgue almost every \(t\in[0,1)\), \[f(t)=q^{m}\mathrm{P}(K=\lfloor q^{m}t\rfloor\,|\,U=q^{m}t-\lfloor q^{m}t\rfloor)f_{m}(q^{m}t-\lfloor q^{m}t\rfloor)\] since \[F(t)=\mathrm{P}((K+U)q^{-m}\leq t)\] \[=F(q^{-m}\lfloor q^{m}t\rfloor)+\int_{0}^{q^{m}t-\lfloor q^{m}t\rfloor}\mathrm{P}(K=\lfloor q^{m}t\rfloor\,|\,U=u)f_{m}(u)\,\mathrm{d}u.\] Thereby the first assertion follows. Suppose that \(c_{j}\) is a constant and \(f=c_{j}\) Lebesgue almost everywhere on \([jq^{-m},(j+1)q^{-m})\) for \(j=0,1,...,q^{m}-1\). Then \[\sum_{j=0}^{q^{m}-1}c_{j}q^{-m}=\sum_{j=0}^{q^{m}-1}\int_{jq^{-m}}^{(j+1)q^{-m}}c_{j}=\int_{0}^{1}f=1,\] and so for Lebesgue almost all \(x\in[0,1)\), (5) gives that \(f_{m}(x)=1\). Therefore, \(P_{m}=\mu\), and hence \(P_{n}=\mu\) for \(n=m,m+1,...\). Consequently, the last assertion follows from (6), using that \(\sum_{i=1}^{m}X_{i}q^{m-i}\) and \((X_{1},...,X_{m})\) are in a one-to-one correspondence. ## 3 Couplings We need some notation for the following results. Let \(I_{\emptyset}=I_{1;0}=[0,1)\) and \(c_{\emptyset}=\inf_{I_{\emptyset}}f\). For \(n=1,2,...\) and \(x_{1},x_{2},...\in\{0,1,...,q-1\}\), let \(k=1+\sum_{i=1}^{n}x_{i}q^{n-i}\) and \[I_{x_{1},...,x_{n}}=I_{k;n}=[(k-1)q^{-n},kq^{-n})\] \[c_{x_{1},...,x_{n}}=c_{k;n}=\inf_{I_{x_{1},...,x_{n}}}f-\inf_{I_{x_{1},...,x_{n-1}}}f.\] Write \(U\sim\mu\) if \(U\) is a uniformly distributed random variable on \([0,1)\). Recall that a function \(f\) is lower semi-continuous at a point \(x\) if for any sequence \(y_{n}\to x\), it holds that \(\liminf_{n}f(y_{n})\geq f(x)\). Note that if \(x=\sum_{n=1}^{\infty}x_{n}q^{-n}\in[0,1)\) is not a base-\(q\) fraction, then lower semi-continuity at \(x\) is equivalent to \[f(x)=\lim_{n\to\infty}\inf_{y\in I_{x_{1},...,x_{n}}}f(y). \tag{7}\] **Theorem 3.1**.: _Suppose \(f\) is lower semi-continuous at Lebesgue almost all points in \([0,1)\). Then there is a coupling between \(X\sim f\) and a non-negative integer-valued random variable \(N\) such that \(T^{N}(X)\sim\mu\) is independent of \((X_{1},...,X_{N})\)._ **Remark 1**.: _Set \(\{0,1,...,q-1\}^{0}=\{\emptyset\}\) so we interpret \(\emptyset\) as no digits. Then \((X_{1},...,X_{N})\) is a discrete random variable with state space \(\cup_{n=0}^{\infty}\{0,1,...,q-1\}^{n}\)._ _Commonly used PDFs are lower semi-continuous almost everywhere. For an example where this condition does not hold, let \(0<\epsilon_{k;n}<q^{-n}\) such that \(a=\sum_{n=0}^{\infty}\sum_{k=1}^{q^{n}}\epsilon_{k;n}<1\). 
Further, let \(J_{k;n}=[(k-1)q^{-n},(k-1)q^{-n}+\epsilon_{k;n}]\), \(G=\cup_{n=0}^{\infty}\cup_{k=1}^{q^{n}}J_{k;n}\), and \(H=[0,1)\setminus G\). Then \(H\) is a Borel set with \(0<\mu(H)\leq 1\), since \(\mu(J_{k;n})=\epsilon_{k;n}\) and so \(\mu(G)\leq a<1\). Hence, the uniform distribution \(\mu_{H}\) on \(H\) is absolutely continuous. Since \(H\) contains no base-\(q\) fraction and the set of base-\(q\) fractions is dense in \([0,1)\), any interval will contain points not in \(H\). Now, any PDF \(f\) for \(\mu_{H}\) will be zero outside \(H\cup A\) for some nullset \(A\) (depending on the version of \(f\)), so for all integers \(n\geq 0\) and \(1\leq k\leq q^{n}\), \(f\) will be zero on \(I_{k;n}\setminus(H\cup A)\neq\emptyset\). Thus the right hand side in (7) is zero, so \(f\) is not lower semi-continuous anywhere._ Proof.: For Lebesgue almost all \(x=\sum_{n=1}^{\infty}x_{n}q^{-n}\in[0,1)\) with \(x_{n}=\lfloor T^{n}(x)q\rfloor\), assuming \(x\) is not a base-\(q\) fraction (recalling that the set of base-\(q\) fractions is a Lebesgue nullset), (7) gives \[f(x) =\inf_{I_{\emptyset}}f+\left(\inf_{I_{x_{1}}}f-\inf_{I_{\emptyset}}f\right)+\left(\inf_{I_{x_{1},x_{2}}}f-\inf_{I_{x_{1}}}f\right)+...\] \[=\sum_{n=0}^{\infty}c_{x_{1},...,x_{n}}. \tag{8}\] Let \(N\) be a random variable such that for \(f(x)>0\), conditionally on \(X=x\), \[\mathrm{P}(N=n\,|\,X=x)=c_{x_{1},...,x_{n}}/f(x),\quad n=0,1,...\] By (8) and since \(c_{x_{1},...,x_{n}}\geq 0\), this is a well-defined conditional distribution. By Bayes' theorem, conditioned on \(N=n\) with \(\mathrm{P}(N=n)>0\), \(X\) follows an absolutely continuous distribution with PDF \[f(x\,|\,n)=c_{x_{1},...,x_{n}}/\mathrm{P}(N=n).\] Therefore, since \(f(x|n)\) is constant on each of the intervals \(I_{k;n}\), \((X_{1},...,X_{n})\) (interpreted as nothing if \(n=0\)) and \(T^{n}(X)\sim\mu\) are conditionally independent given \(N=n\). Consequently, \((X_{1},...,X_{N})\) and \(T^{N}(X)\sim\mu\) are independent. **Corollary 3.2**.: _For the coupling construction in the proof of Theorem 3.1, conditioned on \(X=x\) with \(f(x)>0\), we have_ \[\mathrm{P}(N\leq n\,|\,X=x)=\sum_{k=0}^{n}c_{x_{1},...,x_{k}}/f(x),\quad n=0,1,..., \tag{9}\] _where \(x_{k}=\lfloor T^{k}(x)q\rfloor\) for \(1\leq k\leq n\). Moreover,_ \[\mathrm{P}(N\leq n)=q^{-n}\sum_{k=1}^{q^{n}}\inf_{I_{k;n}}f,\quad n=0,1,... \tag{10}\] **Remark 2**.: _Corollary 3.2 is used in Section 5 to quantify how many digits are needed._ _Since a PDF is only defined up to a set of measure zero, it is possible for a distribution to have several PDFs that are almost everywhere lower semi-continuous but give rise to different constants \(c_{x_{1},...,x_{n}}\). Hence the distribution of \((X_{1},\ldots,X_{N})\) is not uniquely defined. For example, if \(X\sim\mu\), letting \(f\) be the indicator function on \([0,1)\) gives \(N=0\) almost surely, whilst letting \(f\) be the indicator function on \([0,1)\setminus\{x_{0}\}\) for some \(x_{0}\in[0,1)\) gives \(\mathrm{P}(N\leq n)=1-q^{-n}\). By (10), in order to make \(N\) as small as possible, we prefer a version of \(f\) which is as large as possible._ Proof.: The proof of Theorem 3.1 gives immediately (9). Thus, for \(n=0,1,...\), \[\mathrm{P}(N\leq n)=\int_{0}^{1}\mathrm{P}(N\leq n\,|\,X=x)f(x)\,\mathrm{d}x=\sum_{k=0}^{n}\sum_{j=1}^{q^{k}}c_{j;k}q^{-k}.\] So \(\mathrm{P}(N=0)=c_{\emptyset}\) in agreement with (10). 
For \(n=1,2,...\), we have \[\sum_{k=0}^{n}\sum_{j=1}^{q^{k}}c_{j;k}q^{-k} =c_{\emptyset}+\sum_{k=1}^{n}\sum_{(x_{1},...,x_{k})\in\{0,1,...,q-1\}^{k}}c_{x_{1},...,x_{k}}q^{-k}\] \[=\inf_{I_{\emptyset}}f+\sum_{x_{1}\in\{0,1,...,q-1\}}\left(\inf_{I_{x_{1}}}f-\inf_{I_{\emptyset}}f\right)q^{-1}+...\] \[+\sum_{(x_{1},...,x_{n})\in\{0,1,...,q-1\}^{n}}\left(\inf_{I_{x_{1},...,x_{n}}}f-\inf_{I_{x_{1},...,x_{n-1}}}f\right)q^{-n}\] \[=q^{-n}\sum_{(x_{1},...,x_{n})\in\{0,1,...,q-1\}^{n}}\inf_{I_{x_{1},...,x_{n}}}f\] \[=q^{-n}\sum_{j=1}^{q^{n}}\inf_{I_{j;n}}f.\] Thereby (10) follows. **Corollary 3.3**.: _Let the situation be as in Theorem 3.1. The output of the following simulation algorithm is distributed as \(X\sim f\):_ (a) _Draw \(N\) from (10)._ (b) _Conditionally on \(N\), generate a discrete random variable \(K\) with_ \[\mathrm{P}(K=k-1\,|\,N=n)\propto c_{k;n},\quad k=1,...,q^{n},\ n=0,1,...\] (11) (c) _Independently of \((N,K)\) pick a random variable \(U\sim\mu\)._ (d) _Output \((K+U)q^{-N}\)._ Proof.: Let \(a_{n}=\sum_{k=1}^{q^{n}}c_{k;n}\) be the normalizing constant in (11). Conditioned on \(N=n\) with \(\mathrm{P}(N=n)>0\), steps (b) and (c) give that \(U\sim\mu\) and \(K\) are independent, so the conditional distribution of \((K+U)q^{-N}\) is absolutely continuous with a conditional PDF given by \[f(x\,|\,n)=q^{n}c_{k;n}/a_{n}\quad\text{if }x\in I_{k;n}.\] Moreover, we get from (10) that \(\mathrm{P}(N=0)=c_{\emptyset}\) and \[\mathrm{P}(N=n)=\mathrm{P}(N\leq n)-\mathrm{P}(N<n)=a_{n}q^{-n},\quad n=1,2,...\] Therefore, the (unconditional) distribution of \((K+U)q^{-N}\) is absolutely continuous with a PDF which at each point \(x=\sum_{n=1}^{\infty}x_{n}q^{-n}\in[0,1)\) with \(x_{n}=\lfloor T^{n}(x)q\rfloor\) is given by \[\sum_{n=0}^{\infty}f(x\,|\,n)\mathrm{P}(N=n)=\sum_{n=0}^{\infty}q^{n}\left(c_{x_{1},...,x_{n}}/a_{n}\right)a_{n}q^{-n}=\sum_{n=0}^{\infty}c_{x_{1},...,x_{n}}.\] This PDF agrees with (8), so \((K+U)q^{-N}\sim f\). Denote by \(\mathcal{B}\) the class of Borel subsets of \([0,1)\). The total variation distance between two probability measures \(\nu_{1}\) and \(\nu_{2}\) defined on \(\mathcal{B}\) and with PDFs \(g_{1}\) and \(g_{2}\), respectively, is given by \[d_{\mathrm{TV}}(\nu_{1},\nu_{2})=\sup_{A\in\mathcal{B}}|\nu_{1}(A)-\nu_{2}(A)|=\frac{1}{2}\|g_{1}-g_{2}\|_{1}, \tag{12}\] see e.g. Lemma 2.1 in [10]. Then Theorem 3.1 shows the following. **Corollary 3.4**.: _Let the situation be as in Theorem 3.1. Then_ \[d_{\mathrm{TV}}(P_{n},\mu)\leq\mathrm{P}(N>n),\quad n=0,1,... \tag{13}\] **Remark 3**.: _In general the coupling inequality (13) is sharp: For \(n=0,1,...\), let \(b_{n}=1-d_{\mathrm{TV}}(P_{n},\mu)=\int_{0}^{1}\min\{1,f_{n}(t)\}\,\mathrm{d}t\) (with \(f_{0}=f\)). It is well-known that \(b_{n}\) is the maximal number such that there exists a coupling between \(T^{n}(X)\sim P_{n}\) and a uniform random variable \(U\sim\mu\) for which \(T^{n}(X)=U\) with probability \(b_{n}\) (see e.g. Theorem 8.2 in [9]). Thus \(d_{\mathrm{TV}}(P_{n},\mu)=\mathrm{P}(N>n)\) if and only if \(\int_{0}^{1}\min\{1,f_{n}(t)\}\,\mathrm{d}t=q^{-n}\sum_{k=1}^{q^{n}}\inf_{I_{k;n}}f\). In particular, \(d_{\mathrm{TV}}(P_{0},\mu)=\mathrm{P}(N>0)\) if and only if \(X\sim\mu\)._ _It follows from Corollary 3.4 that (7) implies \(\lim_{n\to\infty}d_{\mathrm{TV}}(P_{n},\mu)=0\). In Theorem 4.1 below we show that (7) is not needed for this convergence result._ Proof.: Using Corollary 3.3, let \(X=(K+U)q^{-N}\). 
For \(n=0,1,...\), if \(Q_{n}\) denotes the probability distribution of \(T^{n}(U)\), then \(Q_{n}=\mu\), and so \[d_{TV}(P_{n},\mu)=d_{TV}(P_{n},Q_{n})\leq\mathrm{P}(T^{n}(X)\neq T^{n}(U))\leq\mathrm{P}(N>n),\] where the first inequality is the standard coupling inequality for the coupled random variables \(T^{n}(X)\) and \(T^{n}(U)\), and the last inequality follows since \(N\leq n\) implies \(T^{n}(X)=T^{n}(U)\). Thereby (13) is verified. ## 4 Asymptotic results We need some notation for the following theorem. For a real, measurable function \(g\) defined on \((0,1)\), denote its \(L_{1}\)- and supremum-norm by \(\|g\|_{1}=\int_{0}^{1}|g(t)|\,\mathrm{d}t\) and \(\|g\|_{\infty}=\sup_{x\in(0,1)}|g(x)|\), respectively, and denote the corresponding \(L_{1}\)-space by \(L_{1}(0,1)=\{g\,|\,\|g\|_{1}<\infty\}\). Let \(\bar{L}_{1}(0,1)=\{g\,|\int_{0}^{1}g(t)dt=1,\|g\|_{1}<\infty\}\) be the subset of functions with finite \(L_{1}\)-norm and integral over \([0,1]\) equal one, and \(\bar{L}_{1}^{\prime}(0,1)\subset\bar{L}_{1}(0,1)\) its subset of differentiable functions \(g\) such that \(\|g^{\prime}\|_{\infty}<\infty\). For \(g\in\bar{L}_{1}^{\prime}(0,1)\), \(n\in\mathbb{N}\), \(j=0,1,...,q^{n}-1\), and \(0<x<1\), define \(g^{\prime}_{n,j}(x)=g^{\prime}(x)\) if \(q^{-n}j<x<q^{-n}(j+1)\) and \(g^{\prime}_{n,j}(x)=0\) otherwise, and define \[g_{n}(x)=q^{-n}\sum_{j=0}^{q^{n}-1}g(q^{-n}(j+x)). \tag{14}\] Henceforth, we also think of \(f\) as an element of \(\bar{L}_{1}(0,1)\). **Theorem 4.1**.: _If \(f\in\bar{L}_{1}(0,1)\) and \(g\in\bar{L}_{1}^{\prime}(0,1)\) then_ \[d_{\mathrm{TV}}(P_{n},\mu)\leq\frac{1}{2}\|f-g\|_{1}+\frac{1}{6}q^{-2n}\sum_{j=0}^{q^{n}-1}\|g^{\prime}_{n,j}\|_{\infty}\leq\frac{1}{2}\|f-g\|_{1}+\frac{1}{6}q^{-n}\|g^{\prime}\|_{\infty}. \tag{15}\] _In particular,_ \[\lim_{n\to\infty}d_{\mathrm{TV}}(P_{n},\mu)=0 \tag{16}\] _and we have the following sharper convergence results. If \(f\in\bar{L}_{1}^{\prime}(0,1)\) then \(P_{n}\) converges exponentially fast:_ \[d_{\mathrm{TV}}(P_{n},\mu)\leq\frac{1}{6}q^{-2n}\sum_{j=0}^{q^{n}-1}\|f^{\prime}_{n,j}\|_{\infty}\leq\frac{1}{6}q^{-n}\|f^{\prime}\|_{\infty}. \tag{17}\] _If \(\|f\|_{\infty}<\infty\) and \(f\) is continuous except for finitely many points, then_ \[|f_{n}(x)-1|\to 0\quad\text{uniformly for }x\in(0,1). \tag{18}\] _If \(f\) is twice differentiable then we have the following improvement of (17):_ \[d_{\mathrm{TV}}(P_{n},\mu)=\frac{1}{8}q^{-2n}\left|\sum_{j=0}^{q^{n}-1}f^{\prime}(\xi_{n,j})\right|+O(q^{-2n})\leq\frac{1}{8}q^{-n}\|f^{\prime}\|_{\infty}+O(q^{-2n}) \tag{19}\] _where \(\xi_{n,j}\in(q^{-n}j,q^{-n}(j+1))\) is arbitrary._ Before proving this theorem we need the following lemma. **Lemma 4.2**.: _Let \(f\in\bar{L}_{1}(0,1)\), \(g\in\bar{L}_{1}^{\prime}(0,1)\). For every \(x\in(0,1)\),_ \[|g_{n}(x)-1|\leq q^{-2n}\left(x^{2}-x+\frac{1}{2}\right)\sum_{j=0}^{q^{n}-1}\|g_{n,j}^{\prime}\|_{\infty}\leq\frac{1}{2}q^{-n}\|g^{\prime}\|_{\infty}, \tag{20}\] _and_ \[\int_{0}^{1}|f_{n}(x)-g_{n}(x)|\,\mathrm{d}x\leq\|f-g\|_{1}. \tag{21}\] _If \(g\) is twice differentiable on \((0,1)\) with \(\|g^{\prime\prime}\|_{\infty}<\infty\) then for every \(x\in(0,1)\),_ \[g_{n}(x)-1=q^{-2n}\left(x-\frac{1}{2}\right)\sum_{j=0}^{q^{n}-1}g^{\prime}(\xi_{n,j})+O(q^{-2n}), \tag{22}\] _where each \(\xi_{n,j}\in(q^{-n}j,q^{-n}(j+1))\) is arbitrary._ **Remark 4**.: _Of course, (20) and (22) hold with \(g_{n}\) replaced by \(f_{n}\) if \(f\) is differentiable respectively twice differentiable with \(\|f^{\prime\prime}\|_{\infty}<\infty\). 
For Example 2 below it is useful to realize that in (22), \(q^{-n}\sum_{j=0}^{q^{n}-1}g^{\prime}(\xi_{n,j})\) is a Riemann sum for the integral \(\int_{0}^{1}g^{\prime}(t)\mathrm{d}t\)._ Proof.: Let \(x\in(0,1)\). From (14) we have \[g_{n}(x)-1 =\sum_{j=0}^{q^{n}-1}\int_{q^{-n}j}^{q^{-n}(j+1)}[g(q^{-n}(j+x))-g(t)]\,\mathrm{d}t\] \[=\sum_{j=0}^{q^{n}-1}\int_{0}^{1}q^{-n}[g(q^{-n}(j+x))-g(q^{-n}(j+y))]\,\mathrm{d}y. \tag{23}\] If \(g\) is differentiable on \((0,1)\) with \(\|g^{\prime}\|_{\infty}<\infty\), we get by the mean value theorem, \[|g(q^{-n}(j+x))-g(q^{-n}(j+y))|\leq\|g_{n,j}^{\prime}\|_{\infty}q^{-n}|x-y|,\] which yields the bound \[|g_{n}(x)-1|\leq q^{-2n}\int_{0}^{1}|x-y|\,\mathrm{d}y\sum_{j=0}^{q^{n}-1}\|g_{n,j}^{\prime}\|_{\infty}=q^{-2n}\left(x^{2}-x+\frac{1}{2}\right)\sum_{j=0}^{q^{n}-1}\|g_{n,j}^{\prime}\|_{\infty}. \tag{24}\] Thereby (20) follows. Moreover, \[\int_{0}^{1}|f_{n}(x)-g_{n}(x)|\,\mathrm{d}x \leq\sum_{j=0}^{q^{n}-1}\int_{0}^{1}q^{-n}|f(q^{-n}(j+x))-g(q^{-n}(j+x))|\,\mathrm{d}x\] \[=\|f-g\|_{1} \tag{25}\] whereby (21) follows. If \(g\) is twice differentiable on \((0,1)\) with \(\|g^{\prime\prime}\|_{\infty}<\infty\), the mean value theorem gives \[g(q^{-n}(j+x))-g(q^{-n}(j+y))=g^{\prime}(\xi_{x,y})q^{-n}(x-y)=g^{\prime}(\xi_{n,j})q^{-n}(x-y)+O(q^{-2n}),\] where \(\xi_{x,y}\in(q^{-n}j,q^{-n}(j+1))\) depends on \(x\) and \(y\) and \(\xi_{n,j}\in(q^{-n}j,q^{-n}(j+1))\) is arbitrary. The second equality was obtained by applying the mean value theorem to \(g^{\prime}(\xi_{x,y})-g^{\prime}(\xi_{n,j})\). Inserting this into (23) yields \[g_{n}(x)-1=q^{-2n}\sum_{j=0}^{q^{n}-1}g^{\prime}(\xi_{n,j})\int_{0}^{1}(x-y)\,\mathrm{d}y+O(q^{-2n}),\] which reduces to (22). We are now ready for the proof of Theorem 4.1. Proof.: We have \[d_{\mathrm{TV}}(P_{n},\mu) =\frac{1}{2}\int_{0}^{1}|f_{n}(x)-1|\,\mathrm{d}x\] \[\leq\frac{1}{2}\int_{0}^{1}|f_{n}(x)-g_{n}(x)|\,\mathrm{d}x+\frac{1}{2}\int_{0}^{1}|g_{n}(x)-1|\,\mathrm{d}x\] \[\leq\frac{1}{2}\|f-g\|_{1}+\frac{1}{6}q^{-2n}\sum_{j=0}^{q^{n}-1}\|g^{\prime}_{n,j}\|_{\infty} \tag{26}\] \[\leq\frac{1}{2}\|f-g\|_{1}+\frac{1}{6}q^{-n}\|g^{\prime}\|_{\infty}\] where we get the equality from (12) and the second inequality from (20), (21), and since \(\int_{0}^{1}\left(x^{2}-x+1/2\right)\,\mathrm{d}x=1/3\). Thereby (15) is verified. Taking \(n\to\infty\) in (15) and using that \(\bar{L}^{\prime}_{1}(0,1)\) is dense in \(\bar{L}_{1}(0,1)\), we get (16). Equation (17) follows from (15) by setting \(g=f\). For the proof of (18) we suppose \(f\) is continuous except at \(x_{1},\ldots,x_{m}\in(0,1)\) and set \(x_{0}=0\) and \(x_{m+1}=1\). Let \(\delta>0\) and \[I_{n} =\{j\in\{0,1,\ldots,q^{n}-1\}\mid\exists i\in\{0,1,\ldots,m+1\}:|q^{-n}j-x_{i}|<\delta\},\] \[J_{n} =\{0,1,\ldots,q^{n}-1\}\backslash I_{n}.\] By (23), \[f_{n}(x)-1 =\sum_{j\in I_{n}}\int_{q^{-n}j}^{q^{-n}(j+1)}\left(f(q^{-n}(j+x))-f(t)\right)\mathrm{d}t\] \[+\sum_{j\in J_{n}}\int_{q^{-n}j}^{q^{-n}(j+1)}\left(f(q^{-n}(j+x))-f(t)\right)\mathrm{d}t. \tag{27}\] Given \(\varepsilon>0\), we choose \(\delta\) so that \(\delta<\varepsilon/(6(m+2)\|f\|_{\infty})\). Then, since the cardinality of \(I_{n}\) is at most \((m+2)(2q^{n}\delta+1)\), the first sum in (27) is bounded by \[\left(\frac{2q^{n}\varepsilon}{6\|f\|_{\infty}}+m+2\right)q^{-n}\|f\|_{\infty}=\frac{\varepsilon}{3}+(m+2)q^{-n}\|f\|_{\infty}<\frac{\varepsilon}{2}\] for \(n\) sufficiently large. 
Moreover, for \(n\) large enough, the second sum in (27) is bounded by \(\varepsilon/2\) since \(f\) is uniformly continuous on \((0,1)\setminus\bigcup_{i=0}^{m+1}(x_{i}-\delta/2,x_{i}+\delta/2)\), which is a closed set. Thus, for large enough \(n\), \(|f_{n}(x)-1|<\varepsilon\), which gives (18) since \(\varepsilon>0\) is arbitrary. To prove (19) we use (22) with \(g\) replaced by \(f\). Then \[\int_{A}(f_{n}(t)-1)\,\mathrm{d}t=q^{-2n}\int_{A}\left(t-\frac{1}{2}\right)\,\mathrm{d}t\sum_{j=0}^{q^{n}-1}f^{\prime}(\xi_{n,j})+O(q^{-2n}).\] We have \[\sup_{A\in\mathcal{B}}\left|\int_{A}\left(t-\frac{1}{2}\right)\,\mathrm{d}t\right|=\frac{1}{2}\sup_{A\in\mathcal{B}}\left|\int_{A}(2t-1)\,\mathrm{d}t\right|=\frac{1}{4}\int_{0}^{1}|2t-1|\,\mathrm{d}t=\frac{1}{8}\] where the second identity follows from (12). This gives (19). **Remark 5**.: _In continuation of Remark 3, by Theorem 4.1, \(b_{n}\to 1\) and under weak conditions the convergence is exponentially fast._ ## 5 So how many digits are needed? This section starts with some theoretical statistical considerations and then continues with some specific examples. Consider a parametric model for the probability distribution of \(X\) given by a parametric class of lower semi-continuous densities \(f_{\theta}\) where \(\theta\) is an unknown parameter. By Theorem 3.1 this specifies a parametric model for \((X_{1},...,X_{N})\) which is independent of \(T^{N}(X)\sim\mu\). In practice we cannot expect \(N\) to be observable, but let us imagine it is. Then, according to general statistical principles (see e.g. [1]), statistical inference for \(\theta\) should be based on the sufficient statistic \((X_{1},...,X_{N})\), whilst \(T^{N}(X)\) is an ancillary statistic and hence contains no information about \(\theta\). Moreover, Theorem 4.1 ensures (without assuming that the densities are lower semi-continuous) that \(T^{n}(X)\) is approximately uniformly distributed. Hence, if \(n\) is 'large enough', nearly all information about \(\theta\) is contained in \((X_{1},...,X_{n})\). **Remark 6**.: _For another paper it could be interesting to consider a so-called missing data approach for a parametric model of the distribution of \((X_{1},...,X_{N})\), with an unknown parameter \(\theta\) and treating \(N\) as an unobserved statistic (the missing data): Suppose \(X^{(1)},...,X^{(k)}\) are IID copies of \(X\), with corresponding 'sufficient statistics' \((X_{1}^{(i)},...,X_{N^{(i)}}^{(i)})\), \(i=1,...,k\). The EM-algorithm may be used for estimation of \(\theta\). Or a Bayesian approach may be used, imposing a prior distribution for \(\theta\) and then considering the posterior distribution of \((N^{(1)},...,N^{(k)},\theta)\)._ According to Corollary 3.2, the number of digits we need will in general depend on the realization of \(X=x\). As a measure for this dependence, for \(f(x)>0\) and \(n=0,1,...\), we may consider \(\mathrm{P}(N>n\,|\,X=x)\) as a function of \(x\), which can be calculated from (9). Since \(N\leq n\) implies \(T^{n}(X)\sim\mu\), an overall measure which quantifies the number \(n\) of digits needed is given by \(\mathrm{P}(N>n)\), cf. (10). The use of these measures requires that \(f\) is lower semi-continuous, whilst the bounds in Theorem 4.1 for the total variation distance \(d_{\mathrm{TV}}(P_{n},\mu)\) hold without this condition. 
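As a concrete illustration of how the overall measure \(\mathrm{P}(N\leq n)\) in (10) can be evaluated numerically, the following minimal Python sketch replaces each infimum over \(I_{k;n}\) by a minimum over a fine grid; the function name, the grid size and the example density \(f(x)=2x\) are our choices and not part of the paper.

```python
import numpy as np

def prob_N_le_n(f, q, n, grid=200):
    """Approximate P(N <= n) = q^{-n} * sum_{k=1}^{q^n} inf_{I_{k;n}} f  (eq. (10)).
    Each infimum is replaced by the minimum of f over `grid` points of I_{k;n},
    which is adequate for continuous densities."""
    total = 0.0
    for k in range(q ** n):
        left, right = k * q ** (-n), (k + 1) * q ** (-n)
        t = np.linspace(left, right, grid, endpoint=False)
        total += float(np.min(f(t)))
    return q ** (-n) * total

# Example: the density f(x) = 2x on (0,1); here (10) evaluates to 1 - q^{-n},
# so the printed values increase towards 1.
f = lambda x: 2.0 * x
print([round(prob_N_le_n(f, q=2, n=n), 4) for n in range(5)])
```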
The following Examples 1 and 2 demonstrate how these measures can be used to quantify the number \(n\) of digits needed in order that \(N>n\) (conditioned or not on \(X=x\)) with a small probability or that \(d_{\mathrm{TV}}(P_{n},\mu)\) is small. **Example 1**.: _Any number \(y\neq 0\) can uniquely be written as \(y=sq^{k}(y_{0}+y_{f})\) where \(s=s(y)\in\{\pm 1\}\) is the sign of \(y\), \(k=k(y)\in\mathbb{Z}\) determines the decimal point of \(y\) in base-\(q\), \(y_{0}=y_{0}(y)\in\{1,...,q-1\}\) is the leading digit of \(y\) in base-\(q\), and \(y_{0}+y_{f}\) is the so-called significand of \(y\) in base-\(q\), where \(y_{f}=y_{f}(y)\in[0,1)\) is the fractional part of \(y_{0}+y_{f}\) in base-\(q\). Correspondingly, consider any real-valued random variable \(Y\neq 0\) (or just \(\mathrm{P}(Y=0)=0\)), so (almost surely) \(Y=Sq^{K}(X_{0}+X)\) where \(S=s(Y)\), \(K=k(Y)\), \(X_{0}=y_{0}(Y)\), and \(X=y_{f}(Y)\) are random variables. Let \(X_{1},X_{2},...\) be the digits of \(X\) in the base-\(q\) expansion, cf. (3). We call \(X_{0},X_{1},X_{2},...\) the significant digits of \(Y\) in base-\(q\). By definition \(Y\) satisfies the extended Newcomb-Benford law if_ \[\mathrm{P}(X_{0}=x_{0},...,X_{n}=x_{n})=\log_{q}\left(1+1\bigg{/}\sum_{j=0}^{n}q^{n-j}x_{j}\right) \tag{28}\] _for \(n=0,1,...\) and any \(x_{0}\in\{1,...,q-1\}\) and \(x_{j}\in\{0,1,...,q-1\}\) with \(1\leq j\leq n\). Equivalently, the log-significand of \(Y\) in base-\(q\), \(\log_{q}(X_{0}+X)\), is uniformly distributed on \([0,1)\) (Theorem 4.2 in [2]). Then \(X\) has CDF and PDF given by_ \[F(x)=\sum_{j=1}^{q-1}\log_{q}(j+x)-\log_{q}j,\quad f(x)=\sum_{j=1}^{q-1}\frac{1}{\ln q}\frac{1}{j+x}, \tag{29}\] _for \(0\leq x\leq 1\)._ _The extended Newcomb-Benford law applies to a wide variety of real datasets, see [6, 2] and the references therein. The law is equivalent to appealing scale-invariance properties: Equation (28) is equivalent to \(Y\) having scale-invariant significant digits (Theorem 5.3 in [2]) or just that there exists some \(d\in\{1,...,q-1\}\) such that \(\mathrm{P}(y_{0}(aY)=d)\) does not depend on \(a>0\) (Theorem 5.8 in [2]). Remarkably, for any positive random variable \(Z\) which is independent of \(Y\), if the extended Newcomb-Benford law is satisfied by \(Y\), it is also satisfied by \(YZ\) (Theorem 8.12 in [2])._ _For the remainder of this example, suppose (28) is satisfied. Considering (10) gives for \(n=0,1,...\) that_ \[\mathrm{P}(N\leq n)=\frac{q^{-n}}{\ln q}\sum_{j=1}^{q-1}\sum_{k=1}^{q^{n}}\frac{1}{j+kq^{-n}}.\] _The tail probabilities \(\mathrm{P}(N>n)\) decrease quickly as \(n\) and \(q\) increase, see the left panel in Figure 1 for plots of \(\mathrm{P}(N>n)\) against \(n\) for \(q=2,3,5,10\). The middle panel of Figure 1 shows \(\mathrm{P}(N>1\,|\,X=x)\) as a function of \(x\) for \(q=10\). We see large fluctuations, with probabilities dropping to zero when approaching the right limit of the intervals \(I_{k;1}\), where \(\inf_{I_{k;1}}f\) is attained. To avoid these fluctuations, the right panel of Figure 1 shows an upper bound on \(\mathrm{P}(N>n\,|\,X=x)\) as a function of \(x\) for \(q=10\) and \(n=0,1,2,3\). The upper bound is found by noting that on each \(I_{k;n}\), \(\mathrm{P}(N>n\,|\,X=x)\) is convex decreasing towards zero. Hence an upper bound is given by evaluating at the left end points and interpolating linearly. 
The plot shows that \(\mathrm{P}(N>n\,|\,X=x)\) is very close to zero for all \(x\) already for \(n=2\)._ _This is also in accordance with Theorem 4.1 stating that \(T^{n}(X)\) converges to a uniform distribution on \([0,1)\) and hence the first digit \(X_{n}\) of \(T^{n}(X)\) is approximately uniformly distributed on \(\{0,1,...,q-1\}\) when \(n\) is large. For \(n=1,2,...\) and \(x_{n}\in\{0,1,...,q-1\}\), we have_ \[\mathrm{P}(X_{n}=x_{n})=\log_{q}\left(\prod_{j=1}^{q-1}\prod_{i=1}^{q^{n-1}} \left(1+\frac{1}{jq^{n}+(i-1)q+x_{n}}\right)\right)\] _where \({\rm P}(X_{n}=x_{n})\) is a decreasing function of \(x_{n}\). The left part of Figure 2 shows plots of \({\rm P}(X_{n}=0)-{\rm P}(X_{n}=q-1)\) versus \(n\) for \(q=2,3,5,10\) indicating fast convergence to uniformity and that the convergence speed increases with \(q\). The right part of Figure 2 illustrates the stronger statement in (18) that the PDF \(f_{n}\) of \(T^{n}(X)\) converges uniformly to the uniform PDF._ _To further illustrate the fast convergence, we drew a sample of 1000 observations with CDF (29) and made a \(\chi^{2}\) goodness-of-fit test for uniformity of \(X_{n}\). Considering a significance level of 0.05, the rejection rate for 10.000 repetitions is shown in Table 1. Such a \(\chi^{2}\) test can also be used as a test for uniformity of the remainder \(T^{n-1}(X)\). A more refined test can be performed by basing the goodness-of-fit test on \(2^{k}\) combinations of the first \(k\) digits \((X_{n},\ldots,X_{n+k-1})\). The result is shown in Table 1 for \(k=1,2,3\). When \(n=1\) we always rejected the hypothesis that \((X_{n},\ldots,X_{n+k-1})\) is uniformly distributed, when \(n=2\) the rejection rate decreases as \(k\) grows and it is 0.067 for \(k=3\), and when \(n\geq 3\) the rejection rates are close to 0.05 as expected if the hypothesis is true. When we instead tried with a sample of 100 observations, even when \(n=1\) the test had almost no power for \(k=1,2,3\)._ **Example 2**.: _To illustrate how the convergence rate in Theorem 4.1 depends on the smoothness of \(f\), let \(f(t)=\alpha t^{\alpha-1}\) be a beta-density with shape parameters \(\alpha>0\) and \(1\). Then, \(f\in\bar{L}^{\prime}_{1}(0,1)\) if and only if \(\alpha=1\) or \(\alpha\geq 2\). Of course, \(P_{n}\) and \(\mu\) agree if \(\alpha=1\). For \(q=2\), Figure 2 shows plots of \(d_{\rm TV}(P_{n},\mu)\) and \(\ln(d_{\rm TV}(P_{n},\mu))\) versus \(n\) when \(\alpha=0.1,0. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \(n\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \(k=1\) & 1.000 & 0.094 & 0.050 & 0.054 & 0.052 & 0.051 & 0.054 & 0.055 \\ \(k=2\) & 1.000 & 0.081 & 0.047 & 0.050 & 0.051 & 0.053 & 0.052 & 0.047 \\ \(k=3\) & 1.000 & 0.067 & 0.050 & 0.049 & 0.049 & 0.050 & 0.052 & 0.052 \\ \end{tabular} \end{table} Table 1: Rejection rate for a \(\chi^{2}\) goodness-of-fit test for uniformity of \((X_{n},\ldots,X_{n+k-1})\). \(\ln(d_{\rm TV}(P_{n},\mu))\) (cf. (19)) versus \(n\) when \(\alpha=2,5,10\). 
For the calculation of \(d_{\rm TV}(P_{n},\mu)\) observe that \(f_{n}^{\prime}\) is \(<0\) if \(\alpha<1\) and \(>0\) if \(\alpha>1\), so \(f_{n}(x_{0})=1\) for some unique \(x_{0}\in(0,1)\), and hence since \(F_{n}(0)-0=F_{n}(1)-1=0\),_ \[d_{\rm TV}(P_{n},\mu)=\frac{1}{2}\|f_{n}-1\|_{1}=\frac{1}{2}\bigg{|}\int_{0}^{ x_{0}}(f_{n}(t)-1){\rm d}t\bigg{|}+\frac{1}{2}\bigg{|}\int_{x_{0}}^{1}(f_{n}(t)-1 ){\rm d}t\bigg{|}=|F_{n}(x_{0})-x_{0}|.\] _We used the Newton-Raphson procedure to find \(x_{0}\) (the procedure always converges)._ _The first plot in Figure 2 shows that for all values of \(\alpha\), \(d_{\rm TV}(P_{n},\mu)\) goes to zero, as guaranteed by Theorem 4.1. The second plot indicates that for \(\alpha>1\), \(d_{\rm TV}(P_{n},\mu)\) decays exponentially at a rate independent of \(\alpha\), while for \(\alpha<1\), the decay is also exponential, but with a slower rate. The graphs in the third plot seem to approach zero, indicating that for \(\alpha\geq 2\), the rate of decay is indeed as given by (19), which holds since \(f^{\prime\prime}\) is bounded. In the middle plot, the decay rate also seems to be \(q^{-n}\) for \(\alpha=1.5\), though this is not guaranteed by Theorem 4.1. To see why the rate \(q^{-n}\) also holds for \(1<\alpha<2\), we argue as follows. In (23), (24), and (26), we may refine to the cases \(j=0\) and \(j>0\) (observing that \(\|f_{n,j}^{\prime}\|_{\infty}<\infty\) when \(j>0\)) to obtain the following modification of (17),_ \[d_{\rm TV}(P_{n},\mu)\leq q^{-n}\left(\frac{1}{2}\|f_{n,0}\|_{\infty}+\frac{1 }{6}q^{-n}\sum_{j=1}^{q^{n}-1}\|f_{n,j}^{\prime}\|_{\infty}\right).\] _Furthermore, since \(|f^{\prime}|\) is decreasing for \(\alpha<2\), \(\sum_{j=1}^{q^{n}-1}\|f_{n,j}^{\prime}\|_{\infty}q^{-n}\) is a lower Riemann sum for the improper Riemann integral \(\int_{0}^{1}|f^{\prime}(t)|\,{\rm d}t\), which exists and is finite when \(1<\alpha<2\). Consequently, for every \(x\in(0,1)\),_ \[d_{\rm TV}(P_{n},\mu)\leq q^{-n}\left(\frac{1}{2}\|f_{n,0}\|_{\infty}+\frac{1 }{6}\|f^{\prime}\|_{1}\right).\] _As in Example 1, we tested for uniformity of \(T^{n-1}(X)\) by a \(\chi^{2}\) goodness-of-fit test for uniform distribution of the \(k=3\) first digits \((X_{n},X_{n+1},X_{n+2})\) again using 10.000 replications of samples of 1000 observations from a beta-distribution with \(\alpha=0.1,0.5,1.5,2\). Table 2 shows that for \(\alpha=0.1\), uniformity is rejected in all samples for all \(n\) indicating that the distribution of the remainder remains far from uniform even for \(n=8\). For \(\alpha=0.5\), the rejection rate reaches the 0.05 level for \(n=5\), while for \(\alpha=1.5\), this happens already for \(n=2\) and for \(\alpha=5\) it happens around \(n=3\) or \(n=4\). 
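The total variation computation just described (solve \(f_{n}(x_{0})=1\) by Newton-Raphson and return \(|F_{n}(x_{0})-x_{0}|\)) can be sketched in a few lines of Python; this is only an illustrative reimplementation under our own naming, not the authors' code, and it uses the beta density \(f(t)=\alpha t^{\alpha-1}\) exactly as in this example.

```python
def dtv_beta(alpha: float, q: int, n: int, iters: int = 60) -> float:
    """d_TV(P_n, mu) = |F_n(x0) - x0| for f(t) = alpha*t**(alpha-1),
    where x0 in (0,1) solves f_n(x0) = 1 (Newton-Raphson, as in the text)."""
    m = q ** n
    grid = [j / m for j in range(m)]
    f_n  = lambda x: sum(alpha * (p + x / m) ** (alpha - 1) for p in grid) / m
    df_n = lambda x: sum(alpha * (alpha - 1) * (p + x / m) ** (alpha - 2) for p in grid) / m ** 2
    F_n  = lambda x: sum((p + x / m) ** alpha - p ** alpha for p in grid)

    x0 = 0.5
    for _ in range(iters):
        x0 -= (f_n(x0) - 1.0) / df_n(x0)
        x0 = min(max(x0, 1e-12), 1.0 - 1e-12)   # keep the iterate inside (0,1)
    return abs(F_n(x0) - x0)

# total variation distance for q = 2 and two shape parameters
for n in range(1, 7):
    print(n, round(dtv_beta(2.0, 2, n), 6), round(dtv_beta(0.5, 2, n), 6))
```

For \(\alpha=2\) the output decays like \(q^{-n}/4\), in line with the exponential rate of (19), while for \(\alpha=0.5\) the decay is visibly slower.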
For \(\alpha>1\) close to 1, the results are comparable to those for the Benford law in Example 1, while for large \(\alpha\) and \(\alpha<1\), the rejection rate is higher indicating slower convergence._ \begin{table} \begin{tabular}{l c c c c c c c c} \hline \(n\) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \(\alpha=0.1\) & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ \(\alpha=0.5\) & 1.000 & 1.000 & 0.416 & 0.078 & 0.052 & 0.051 & 0.048 & 0.049 \\ \(\alpha=1.5\) & 1.000 & 0.123 & 0.049 & 0.048 & 0.051 & 0.052 & 0.047 & 0.051 \\ \(\alpha=5\) & 1.000 & 0.932 & 0.059 & 0.049 & 0.048 & 0.048 & 0.049 & 0.050 \\ \end{tabular} \end{table} Table 2: Rejection rate for a \(\chi^{2}\) goodness-of-fit test for uniformity of \((X_{n},\ldots,X_{n+2})\) in a beta-distribution for various values of \(\alpha\). **Remark 7**.: _In conclusion, Examples 1 and 2 demonstrate that the answer to the title of our paper ('How many digits are needed?') of course depend much on \(q\) (in Example 1, the higher \(q\) is, the fewer digits are needed) and on how much \(f\) deviates from the uniform PDF on \([0,1)\) (in Example 2, the more skew \(f\) is, the more digits are needed). Moreover, as this deviation increases or the sample size decreases, the \(\chi^{2}\) goodness-of-fit test as used in the examples becomes less powerful; alternative tests are discussed in [7]._ ## 6 The multivariate case Theorem 4.1 extends as follows. For a given positive integer \(k\), let now \(X=(X_{1},...,X_{k})\) be a \(k\)-dimensional random variable with values in the unit cube \([0,1)^{k}\) so that its CDF \(F(x_{1},...,x_{n})=\mathrm{P}(X_{1}\leq x_{1},...,X_{k}\leq x_{k})\) is absolutely continuous, and denote its multivariate PDF by \(f\). Extend the function \(T\) to be a function \(T:[0,1)^{k}\mapsto[0,1)^{k}\) so that \(T(x_{1},...,x_{k})=(T(x_{1}),...,T(x_{k}))\). For \(n=1,2,...\), denote the multivariate CDF of \(T^{n}(X)\) by \(F_{n}\). For a real Lebesgue integrable function \(g\) defined on \((0,1)^{k}\), let \(\|g\|_{1}=\int_{(0,1)^{k}}|g(t)|\,\mathrm{d}t\) and let \(L_{1}((0,1)^{k})\) be the set of such functions \(g\) (i.e., \(\|g\|_{1}<\infty\)). For a real \(k\)-dimensional function \(g=(g_{1},...,g_{k})\) defined on \((0,1)^{k}\), let \(\|g\|_{\infty}=\sup_{x\in(0,1)^{k}}\sqrt{g_{1}(x)^{2}+...+g_{k}(x)^{2}}\). Define the set \(\bar{L}_{1}((0,1)^{k})=\{g\in L^{1}((0,1)^{k})\,|\,\int_{(0,1)^{k}}g(t)\, \mathrm{d}t=1\}\) and \(\bar{L}^{\prime}_{1}((0,1)^{k})\) as its subset of differentiable functions \(g\) with gradient \[\nabla g(x_{1},\ldots,x_{k})=\left(\frac{\partial g}{\partial x_{1}}(x_{1}, \ldots,x_{k}),\ldots,\frac{\partial g}{\partial x_{k}}(x_{1},\ldots,x_{k})\right)\] such that \(\|\nabla g\|_{\infty}<\infty\). Thus, for \(k=1\), \(\bar{L}^{\prime}_{1}((0,1)^{k})=\bar{L}^{\prime}_{1}(0,1)\) as used in Theorem 4.1. For \(g\in\bar{L}^{\prime}_{1}((0,1)^{k})\), \(n\in\mathbb{N}\), \(j:=(j_{1},...,j_{k})\in\{0,1,...,q-1\}^{k}\), and \(x=(x_{1},...,x_{k})\in(0,1)^{k}\), define \(\nabla g_{n,j}(x)=\nabla g(x)\) if \(q^{-n}j_{i}<x_{i}<q^{-n}(j_{i}+1)\) for \(i=1,...,k\) and \(\nabla g_{n,j}(x)=0\) otherwise, and define \[F_{g}(x)=\int_{0}^{x_{1}}\cdots\int_{0}^{x_{k}}g(t_{1},...,t_{k})\,\mathrm{d }t_{1}\cdots\,\mathrm{d}t_{k}.\] For notational convenience, we can consider \(F\) and \(F_{n}\) to be functions defined on \((0,1)^{k}\), so \(F=F_{f}\). 
Let \(e=(1,\cdots,1)\), that is, the vector with each component equal to \(1\), and as a short hand notation write \(\sum_{j=0}^{(q^{n}-1)e}...\) for \(\sum_{j_{1}=0}^{q^{n}-1}\cdots\sum_{j_{k}=0}^{q^{n}-1}...\), and for a real function \(g\) defined on \((0,1)^{k}\), \(n=1,2,...\), and \(x\in(0,1)^{k}\), let \[g_{n}(x)=q^{-nk}\sum_{j=0}^{(q^{n}-1)e}g(q^{-n}(j+x)).\] Then, as in (5), we see that \(F_{n}\) is absolutely continuous with PDF \(f_{n}\). Finally, let \(P_{n}\) be the probability distribution with CDF \(F_{n}\), \(\mu\) Lebesgue measure on \([0,1)^{k}\), and \(d_{\mathrm{TV}}(P_{n},\mu)\) the total variation distance between these measures (where (12) extends to the multivariate case with obvious modifications). **Theorem 6.1**.: _If \(g\in\bar{L}^{\prime}_{1}((0,1)^{k})\) then_ \[d_{\mathrm{TV}}(P_{n},\mu) \leq\frac{1}{2}\|f-g\|_{1}+\frac{1}{2}\sqrt{\frac{k}{3}}q^{-n(k+1)}\sum_{j=0}^{(q^{n}-1)e}\|\nabla g_{n,j}\|_{\infty} \tag{30}\] \[\leq\frac{1}{2}\|f-g\|_{1}+\frac{1}{2}\sqrt{\frac{k}{3}}q^{-n}\|\nabla g\|_{\infty}. \tag{31}\] _In particular,_ \[\lim_{n\to\infty}d_{\mathrm{TV}}(P_{n},\mu)=0.\] _Furthermore, if \(f\in\bar{L}^{\prime}_{1}((0,1)^{k})\) then \(P_{n}\) converges exponentially fast:_ \[d_{\mathrm{TV}}(P_{n},\mu)\leq\frac{1}{2}\sqrt{\frac{k}{3}}q^{-n(k+1)}\sum_{j=0}^{(q^{n}-1)e}\|\nabla f_{n,j}\|_{\infty}\leq\frac{1}{2}\sqrt{\frac{k}{3}}q^{-n}\|\nabla f\|_{\infty}.\] _Finally, if \(\|f\|_{\infty}<\infty\) and \(f\) is continuous except for finitely many points, then_ \[|f_{n}(x)-1|\to 0\quad\text{uniformly for }x\in(0,1)^{k}. \tag{32}\] Proof.: Let \(x\in(0,1)^{k}\) and \(g\in\bar{L}^{\prime}_{1}((0,1)^{k})\). As in (23), \[g_{n}(x)-1 =\sum_{j=0}^{(q^{n}-1)e}\int_{q^{-n}j_{1}}^{q^{-n}(j_{1}+1)}\cdots\int_{q^{-n}j_{k}}^{q^{-n}(j_{k}+1)}[g(q^{-n}(j+x))-g(t)]\,\mathrm{d}t\] \[=\sum_{j=0}^{(q^{n}-1)e}\int_{(0,1)^{k}}q^{-nk}[g(q^{-n}(j+x))-g(q^{-n}(j+y))]\,\mathrm{d}y.\] By the mean value theorem, \[|g(q^{-n}(j+x))-g(q^{-n}(j+y))|\leq\|\nabla g_{n,j}\|_{\infty}q^{-n}\|x-y\|\] where \(\|\cdot\|\) is the usual Euclidean distance. We estimate \[\int_{(0,1)^{k}}\|x-y\|\,\mathrm{d}y\leq\left(\int_{(0,1)^{k}}\|x-y\|^{2}\,\mathrm{d}y\right)^{1/2}\leq\sqrt{\frac{k}{3}}\] which yields the bound \[|g_{n}(x)-1|\leq q^{-n(k+1)}\sqrt{\frac{k}{3}}\sum_{j=0}^{(q^{n}-1)e}\|\nabla g_{n,j}\|_{\infty}\leq q^{-n}\sqrt{\frac{k}{3}}\|\nabla g\|_{\infty}.\] As in (25) we have \[\int_{(0,1)^{k}}|f_{n}(x)-g_{n}(x)|\,\mathrm{d}x\leq\|f-g\|_{1}.\] Combining the last two estimates gives \[2d_{\mathrm{TV}}(P_{n},\mu) \leq\|f-g\|_{1}+\sqrt{\frac{k}{3}}q^{-n(k+1)}\sum_{j=0}^{(q^{n}-1)e}\|\nabla g_{n,j}\|_{\infty}\] \[\leq\|f-g\|_{1}+\sqrt{\frac{k}{3}}q^{-n}\|\nabla g\|_{\infty}\] whereby (30) follows. To show (32), we assume for simplicity that \(f\) only has one discontinuity at \(x_{0}=(x_{0,1},\ldots,x_{0,k})\in(0,1)^{k}\). The case of more than one discontinuity can be treated as in the proof of Theorem 4.1.
Let \(\delta>0\) and define \[I_{n} =\{(j_{1},\ldots,j_{k})\in\{0,1,\ldots,q^{n}-1\}^{k}\mid\exists i :j_{i}<\lfloor q^{n}\delta\rfloor\lor j_{i}>\lceil q^{n}(1-\delta)\rceil\},\] \[J_{n} =\{(j_{1},\ldots,j_{k})\in\{0,1,\ldots,q^{n}-1\}^{k}\mid\max_{i} |j_{i}-q^{n}x_{0,i}|<\lfloor q^{n}\delta\rfloor\},\] \[K_{n} =\{0,\ldots,q^{n}-1\}^{k}\backslash(I_{n}\cup J_{n}).\] Then \[f_{n}(x)-1 =\sum_{j\in I_{n}}\int_{(0,1)^{k}}q^{-nk}[f(q^{-n}(j+x))-f(q^{-n}( j+y))]\,\mathrm{d}y\] \[+\sum_{j\in J_{n}}\int_{(0,1)^{k}}q^{-nk}[f(q^{-n}(j+x))-f(q^{-n}( j+y))]\,\mathrm{d}y\] \[+\sum_{j\in K_{n}}\int_{(0,1)^{k}}q^{-nk}[f(q^{-n}(j+x))-f(q^{-n} (j+y))]\,\mathrm{d}y. \tag{33}\] If \(\varepsilon>0\) is given, we can choose \(\delta\) such that each term in (33) is less than \(\varepsilon/3\). This follows as in the proof of Theorem 4.1 by noting that the cardinality of \(I_{n}\) is at most \(\delta q^{n(k-1)}\), the cardinality of \(J_{n}\) is at most \(2\delta q^{n}\), and \(f\) is bounded on \((0,1)^{k}\) and uniformly continuous on the closed set \([\delta/2,1-\delta/2]^{k}\backslash C(x_{0})\) where \(C(x_{0})\) denotes the cube of sidelength \(\delta\) centered at \(x_{0}\) ## Acknowledgements Supported by The Danish Council for Independent Research -- Natural Sciences, grant DFF - 10.46540/2032-00005B.
2302.03123
$\mathscr Q$-Sets and Friends: Categorical Constructions and Categorical Properties
This work mainly concerns the -- here introduced -- category of $\mathscr Q$-sets and functional morphisms, where $\mathscr Q$ is a commutative semicartesian quantale. We describe, in detail, the limits and colimits of this complete and cocomplete category and prove that it has a classifier for regular subobjects. Moreover, we prove that it is a $\kappa^+$-locally presentable category, where $\kappa=\max\{|\mathscr Q|, \aleph_0\}$, and describe a hierarchy of semicartesian monoidal closed structures in this category. Finally, we discuss the issue of 'change of basis' induced by appropriate morphisms between the parametrizing quantales involved in the definition of $\mathscr Q$-sets. In a future work we will address such questions in the full subcategory given by all Scott-complete $\mathscr Q$-sets.
José Goudet Alvim, Caio de Andrade Mendes, Hugo Luiz Mariano
2023-02-06T21:15:13Z
http://arxiv.org/abs/2302.03123v1
# \(\mathscr{Q}\)-Sets \(\&\) Friends ###### Abstract This work mainly concerns the - here introduced - category of \(\mathscr{Q}\)-sets and functional morphisms, where \(\mathscr{Q}\) is a commutative semicartesian quantale. We describe, in detail, the limits and colimits of this complete and cocomplete category and prove that it has a classifier for regular subobjects. Moreover, we prove that it is a \(\kappa^{+}\)-locally presentable category, where \(\kappa=\max\{|\mathscr{Q}|,\aleph_{0}\}\), and describe a hierarchy of semicartesian monoidal closed structures in this category. Finally, we discuss the issue of "change of basis" induced by appropriate morphisms between the parametrizing quantales involved in the definition of \(\mathscr{Q}\)-sets. In a future work we will address such questions in the full subcategory given by all Scott-complete \(\mathscr{Q}\)-sets (see [4]). ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Quantales * 2.2 On \(\mathscr{Q}\)-Sets * 2.3 Some Examples * 3 Main Constructions * 3.1 Limits * 3.2 Colimits * 3.3 Subobject Classifier * 4 Local Presentability * 4.1 \(\kappa\)-Compact Objects * 4.2 Accessibility and Presentability * 5 Monoidal Structures * 5.1 Formalizing their Hierarchy * 6 Change of basis ## 1 Introduction ### History _&_ Motivation Sheaf Theory is a well established area of research with applications in Algebraic Topology [8], Algebraic Geometry [10], Geometry [13], Logic [14], and others. A sheaf on a topological space \(X\) is a functor \(F:(\mathcal{O}(X),\subseteq)^{op}\to\mathbf{Set}\) that satisfies certain gluing properties expressed by an equalizer diagram, where \((\mathcal{O}(X),\subseteq)\) denotes the category whose objects are the elements of the set \(\mathcal{O}(X)\) of all open subspaces of \(X\), and the morphisms are given by set inclusions. There are other equivalent ways to express this definition, but the diagrammatic approach makes clear that the elements/points of \(X\) are not necessary. Thus it is straightforward to define sheaves for "spaces without points", that is, functors on the category \((H,\leq)\) for a given locale \(H\). The category of sheaves on locales was studied, for example, by Borceux in [6]. In the 1970s, the topos of sheaves over a locale/complete Heyting algebra \(\mathbb{H}\), denoted as \(\mathbf{Sh}(\mathbb{H})\), was described, alternatively, as a category of \(\mathbb{H}\)-sets [9]. More precisely, in [6], there were three categories whose objects were locale valued sets that are equivalent to the category of sheaves over a locale \(\mathbb{H}\). Nevertheless, there is a non-commutative and non-idempotent generalization of locales called "quantales", introduced by C.J. Mulvey [16]. Quantales show up in logic [23], and in the study of \(C^{*}\)-algebras [20]. Later, more general categories have been proposed, replacing locales by Mulvey's quantales, as studied in [7]. Instead of considering the traditional idempotent non-commutative quantales, which arose from certain \(C^{*}\)-algebras and their relationships with quantum physics, we are following proposals like those in [12] and [19], which have connections with affine, fuzzy, and continuous logic. In this work we consider a class of commutative and integral/semicartesian quantales, which includes the quantales of ideals of commutative unital rings, MV-Algebras, Heyting Algebras and \(([0,1],\leq,\cdot)\) - which is isomorphic to \(([0,\infty],\geq,+)\).
So far as we know, there are three notions of sheaves on _right-sided and idempotent_ quantales: in [5], sheaves on quantales are defined with the goal of forming Grothendieck toposes from quantales. In [15], the sheaf definition preserves an intimate relation with \(\mathscr{Q}\)-sets, an object introduced in the paper as a proposal to generalize \(\Omega\)-sets, defined in [9], for \(\Omega\) a complete Heyting algebra1. Footnote 1: Given a proper notion of morphisms between \(\Omega\)-sets, the resulting category is equivalent to the category of sheaves on \(\Omega\). More recently, in [1] and [21], sheaves are functors that make a certain diagram an equalizer. Herein we study sheaves on _semicartesian_ quantales. Our approach is similar to the last one but, since every idempotent semicartesian quantale is a locale (Proposition 2.4), our axioms and theirs are orthogonal in some sense. Besides, there is an extensive work about sheaves on _involutive quantale_, which goes back to ideas of Bob Walters [22] - which were recently studied by Hans Heymans, Isar Stubbe [11], and Pedro Resende [18] - for instance. This work mainly concerns the -here introduced- category of \(\mathscr{Q}\)-sets and functional morphisms, where \(\mathscr{Q}\) is a commutative semicartesian quantale. In a future work we will address such questions in the full subcategory given by all Scott-complete \(\mathscr{Q}\)-sets (see [4]). ### Main results and the paper's structure 1. We describe, in detail, the limits and colimits of this complete and cocomplete category; 2. We describe generators and prove that it has a classifier for regular subobjects; 3. We prove that it is \(\kappa^{+}\)-locally presentable category, where \(\kappa=max\{|\mathscr{Q}|,\aleph_{0})\}\); 4. We describe a hierarchy of semicartesian monoidal closed structures in this category; 5. We discuss the issue of "change of basis" induced by appropriate morphisms between the parametrizing quantales involved in the definition of \(\mathscr{Q}\)-sets. ## 2 Preliminaries ### Quantales **Definition 2.1:** A _quantale_ is a type of structure \(\mathscr{Q}=(|\mathscr{Q}|,\leq,\otimes)\) for which \((|\mathscr{Q}|,\leq)\) is a complete lattice; \((|\mathscr{Q}|,\otimes)\) is a semigroup2; and, moreover, \(\mathscr{Q}\) is required to satisfy the following distributive laws: for all \(a\in\mathscr{Q}\) and \(B\subseteq\mathscr{Q}\), Footnote 2: _i.e._ the binary operation \(\otimes:\mathscr{Q}\times\mathscr{Q}\to\mathscr{Q}\) (called multiplication) is associative. \[a\otimes\left(\bigvee_{b\in B}b\right) =\bigvee_{b\in B}\big{(}a\otimes b\big{)}\] \[\left(\bigvee_{b\in B}b\right)\otimes a =\bigvee_{b\in B}\big{(}b\otimes a\big{)}\] We denote by \(\mathrm{E}\,\mathscr{Q}\) the subset of \(\mathscr{Q}\) comprised of its idempotent elements. **Remark 2.1:** 1. In any quantale \(\mathscr{Q}\) the multiplication is increasing in both entries; 2. Since \(\bot\) is also the supremum of \(\emptyset\), for any \(a\), \(a\otimes\bot=\bot=\bot\otimes a\) 3. Since \(\top\) is \(\sup\mathscr{Q}\), then \(\top\otimes\top=\sup_{a,b}a\otimes b=\sup\mathbf{img}\otimes\) **Remark 2.2**:: If \((\mathscr{Q},\leq)\) is a complete lattice for which the binary infimum satisfies the above distributive laws, the resulting quantale has \(\top\) as its unit and is - in fact - a locale. Conversely, every locale is a unital quantale in such a manner. 
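For readers who like to experiment, the axioms of Definition 2.1 are easy to check mechanically on a small example. The Python sketch below uses the quantale of ideals of \(\mathbb{Z}/12\mathbb{Z}\) (a concrete instance chosen by us for illustration; ideal quantales reappear in Example 2.3 below), with each ideal represented by the divisor of 12 that generates it.

```python
from itertools import combinations
from math import gcd

Q = [1, 2, 3, 4, 6, 12]             # ideal (d) = dZ/12Z; (1) = top, (12) = (0) = bottom
leq = lambda a, b: a % b == 0       # (a) <= (b)  iff  b divides a

def join(S):                        # supremum = ideal generated by the union = (gcd)
    g = 12                          # the empty join is the bottom element (12)
    for s in S:
        g = gcd(g, s)
    return g

meet = lambda a, b: a * b // gcd(a, b)      # binary infimum = intersection = (lcm)
tensor = lambda a, b: gcd(a * b, 12)        # multiplication of ideals

# Definition 2.1: multiplication distributes over arbitrary suprema
# (one side suffices here, since this instance is commutative).
for a in Q:
    for r in range(len(Q) + 1):
        for B in combinations(Q, r):
            assert tensor(a, join(B)) == join([tensor(a, b) for b in B])

# This instance is commutative, semicartesian and integral (top is a unit).
for a in Q:
    assert tensor(a, 1) == a
    for b in Q:
        assert tensor(a, b) == tensor(b, a)
        assert leq(tensor(a, b), meet(a, b))
print("quantale axioms verified for the ideals of Z/12Z")
```

This is only a finite sanity check, of course; the distributive laws of Definition 2.1 quantify over arbitrary subsets, which here reduces to the 64 subsets of the six-element lattice.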
**Definition 2.2**:: A quantale \(\mathscr{Q}\) is said to be * _bidivisible_ when \[a\leq b\implies\exists\lambda,\rho:b\otimes\rho=a=\lambda\otimes b\] left (right) divisibility means to drop the \(\rho\) (\(\lambda\)) portion of the axiom. * _integral_ when \(\top\otimes a=a=a\otimes\top\). We say it's right-sided when the right equality holds, and left-sided when the left equality holds. * _unital_ when \(\otimes\) has a unit; * _semicartesian_ when \(a\otimes b\leq a\wedge b\); * _commutative_ when \(\otimes\) is; * _idempotent_ when \(a\otimes a=a\); * _linear and strict_ when \(\leq\) is a linear order and \[a\neq\bot\text{ and }(a\otimes b=a\otimes c\text{ or }b\otimes a=c\otimes a)\implies b=c\] * _strong_ when for any \(e\) and \(A\) [12, cf. p. 30], \[e=e\otimes e\implies\left(e\leq\bigvee_{a\in A}a\implies e\leq\bigvee_{a\in A}a\otimes a\right)\] We offer the following diagram to explain some of the relations between those definitions: [diagram not reproduced here; it relates \((R|L)\)-sidedness, \((L|R)\)-divisibility, integrality and the other notions above]. **Example 2.1:** Locales
are - perhaps - the best example of quantales that are commutative, idempotent, integral (and hence both semicartesian and right-sided), divisible and strong (both trivially). Among which, and of special significance to Sheaf Theory, are the locales of open subsets of a topological space \(X\), where the order relation is given by the inclusion, the supremum is the union, and the finitary infimum is the intersection. **Example 2.2:** The extended half-line \([0,\infty]\) with order the inverse order - \(\geq\) -, and the usual sum of real numbers as the monoidal operation. Since the order relation is \(\geq\), the top element is \(0\) and the bottom elements is \(\infty\). We call this the Lawvere quantale due to its relation to Lawvere spaces (related to metric spaces). An isormophic quantale is the quantale given by \([0,1]\) this time with the usual order - and multiplication as the monoidal operation. The isomorphism is given by \[(x\in[0,1])\mapsto(-\ln(x)\in[0,\infty])\] A sub-example is the extended natural numbers \(\mathbb{N}\cup\{\infty\}\), as a restriction of the the Lawvere quantale, which is related to distance on graphs. All of these are unital but not locales. In fact, they are integral too. **Remark 2.3:** When we claim that a quantale isn't a locale, we do not mean to say that the underlying poset isn't a locale. Merely that the monoidal product isn't the poset's meet operation. **Example 2.3:** We list below some more examples of unital quantales that are not locales: 1. The set \(\mathcal{I}(R)\) of ideals of a commutative and unital ring \(R\) with order \(\subseteq\), and the multiplication as the multiplication of ideals. The supremum is the sum of ideals, the top element is \(R\) and the trivial ideal is the bottom; 2. The set \(\mathcal{RI}(R)\) of right (or left) ideals of an unital ring \(R\) with the same order and multiplication of the above example. Then the supremum and the top and the bottom elements are also the same of \(\mathcal{I}(R)\); 3. The set of closed right (or left) ideals of a unital \(C^{*}\)-algebra, the order is the inclusion of closed right (or left) ideals, and the multiplication is the topological closure of the multiplication of the ideals. For more details and examples we recommend [20]. **Example 2.4** (The "slice" construction): Given a quantale \(\mathscr{Q}\), we can form intervals between two extremes \(a\leq b\): \[[a,b]=\{p\in\mathscr{Q}\mid a\leq p\leq b\}\] This is evidently a complete lattice and the inclusion \([a,b]\hookrightarrow\mathscr{Q}\) preserves non empty sups and non empty infs. Now, suppose \(\mathscr{Q}\) is (commutative) semicartesian and unital, and consider - given any \(e\in Idem(\mathscr{Q})\) and \(b\in\mathscr{Q}\) such that \(e\leq b\) - the set \([e,b]\) is such that \(e\leq x,y\leq b\implies e=e\otimes e\leq x\otimes y\leq b\otimes b\leq b \otimes\top=b\) hence that \(\otimes\upharpoonright[e,b]\) is a subsemigroup. Moreover, that this has the structure of a quantale \(x\otimes\bigvee_{i}y_{i}=\bigvee_{i}x\otimes y_{i}\): there is not to check concerning the distributivity of \(x\in[e,b]\) with non-empty sups; for empty sups, we just have to notice that \(e=e\otimes e\leq x\otimes e\leq\top\otimes e=e\) **Proposition 2.1** (The "smooth" slice construction [2]): **:** The slice construction above obviously does not preserve unitality, integrality, (although it preserves semicartesian-ness). 
When \(\mathscr{Q}\) is (left -- right)-divisible and commutative (so that it is immediately also integral) we can adjust \(\otimes\) on each slice so as to remain divisible and commutative: If \(a,a^{\prime}\in[0,b]\), we can define \(\otimes_{b}\) as follows \[b\otimes(b\to a)\otimes(b\to a^{\prime})\] Proof.: It is not hard to see that in any quantale \(\mathscr{Q}\), \[a\otimes(a\to(a\otimes b))=a\otimes b\] and in left-divisible quantales, one can then show that \[b\leq a\implies a\otimes(a\to b)=b\] and similarly for right-divisible quantales (with the appropriate residue). Since we are in a commutative setting - which is important later -, we can just assume the above to hold. First, let us show that \(\otimes_{b}\) as defined is indeed associative. We do this by deferring associativity to \(\otimes\): \[x\otimes_{b}(y\otimes_{b}z) =b\otimes(b\to x)\otimes(b\to\overbrace{[b\otimes(b\to y)\otimes(b\to z)]}^{y \otimes_{b}z})])\] \[=(b\to x)\otimes b\otimes(b\to[b\otimes(b\to y)\otimes(b\to z)])\] \[=(b\to x)\otimes[b\otimes(b\to y)\otimes(b\to z)]\] \[=b\otimes(b\to x)\otimes(b\to y)\otimes(b\to z)\] Now, we can repeat that but swapping \(x\) and \(z\); the result is still the same - of course -, yielding \[x\otimes_{b}(y\otimes_{b}z)=z\otimes_{b}(y\otimes_{b}x)\] But commutativity gives \[z\otimes_{b}(y\otimes_{b}x)=z\otimes_{b}(x\otimes_{b}y)=(x\otimes_{b}y) \otimes_{b}z\] Also, notice that \(\otimes_{b}\) is commutative. Now suppose \(y,x_{i}\leq b\), then \[\left(\bigvee_{i}x_{i}\right)\otimes_{b}y =b\otimes\left(b\to\bigvee_{i}x_{i}\right)\otimes(b\to y)\] \[=\left(\bigvee_{i}x_{i}\right)\otimes(b\to y)\] \[=\bigvee_{i}x_{i}\otimes(b\to y)\] \[=\bigvee_{i}b\otimes(b\to x_{i})\otimes(b\to y)\] \[=\bigvee_{i}x_{i}\otimes_{b}y\] which, together commutativity, then gives full distributivity. Lastly, we should prove divisibility. Suppose \(x\leq a\leq b\), so that we may try and find a \(y\leq b\) such that \(x=y\otimes_{b}a\) - and commutativity gives full bidivisibility. Divisibility on \(\mathscr{Q}\) gives us that \(a\otimes(a\to x)=x\). Let's then consider \[y=b\otimes(a\to x)\] Bidivisibility gives \(y\leq b\) since it implies that \(\mathscr{Q}\) is semicartesian and obviously \(b\wedge(a\to x)\leq b\). Now, to show that the tensor has the desired property: \[a\otimes_{b}[b\otimes(a\to x)] =b\otimes(b\to a)\otimes(b\to[b\otimes(a\to x)])\] \[=(b\to a)\otimes b\otimes(b\to[b\otimes(a\to x)])\] \[=(b\to a)\otimes[b\otimes(a\to x)]\] \[=b\otimes(b\to a)\otimes(a\to x))\] \[=a\otimes(a\to x))\] \[=x\] **Proposition 2.2**:: Consider the category \(\mathbf{Quan}_{c,d}\) of commutative and divisible quantales and suprema, \(\top\) and \(\otimes\) preserving morphisms between them. Now take a one such quantale \(\mathscr{Q}\). The smooth slice construction is, in fact, a functor \[\mathscr{Q}/\underline{\phantom{\cdot}}:\mathscr{Q}^{\mathrm{op}}\to\mathbf{Quan }_{c,d}\] Proof.: Take \(a\leq b\). First we need to provide a morphism \(\mathscr{Q}/b\to\mathscr{Q}/a\) corresponding to the fact that \(a\leq b\). For that purpose, \[(x\in\mathscr{Q}/b)\xmapsto^{\mathscr{Q}/(a\leq b)}(a\otimes_{b}x\in\mathscr{ Q}/a)\] We know that \(b\mapsto a\otimes_{b}b=a\) and hence it preserves \(\top\); we also know that \[\bigvee_{i}x_{i}\mapsto a\otimes_{b}\bigvee_{i}x_{i}=\bigvee_{i}a\otimes_{b}x_{i}\] and hence it preserves suprema. 
Finally, take \(x\otimes_{b}y\); we know that it maps to \[a\otimes_{b}(x\otimes_{b}y)=b\otimes(b\to a)\otimes(b\to x)\otimes(b\to y)\] \[(a\otimes_{b}x)\otimes_{a}(a\otimes_{b}y) =a\otimes[a\to(a\otimes_{b}x)]\otimes[a\to(a\otimes_{b}y)]\] \[=a\otimes[a\to(b\otimes(b\to a)\otimes(b\to x))]\otimes[a\to(a\otimes_{b}y)]\] \[=(b\otimes(b\to a)\otimes(b\to x))\otimes[a\to(a\otimes_{b}y)]\] \[=a\otimes(b\to x)\otimes[a\to(a\otimes_{b}y)]\] \[=(b\to x)\otimes(a\otimes_{b}y)\] \[=(b\to x)\otimes b\otimes(b\to a)\otimes(b\to y)\] \[=b\otimes(b\to x)\otimes(b\to a)\otimes(b\to y)\] This proves it is a morphism; now it remains to be seen that \(\mathscr{Q}/\_\) is actually functorial. The first bit of functoriality is straightforward: \(\mathscr{Q}/(b\leq b)\) is \(b\otimes_{b}\_\), which is the identity since \(b\) is the unit. Now take \(a\leq b\leq c\): \[[\mathscr{Q}/(a\leq b)\circ\mathscr{Q}/(b\leq c)](x) =a\otimes_{b}(b\otimes_{c}x)\] \[=b\otimes(b\to a)\otimes(b\to[c\otimes(c\to b)\otimes(c\to x)])\] \[=b\otimes(b\to a)\otimes(b\to[b\otimes(c\to x)])\] \[=(b\to a)\otimes[b\otimes(c\to x)]\] \[=a\otimes(c\to x)\] **Example 2.5**:: There is also a notion of product of quantales whose order and product are given point-wise. It is quite trivial to see that the point-wise identity will be the identity, the point-wise product will be the product, etc. **Remark 2.4**:: The last two examples introduced in 2.3 are neither commutative nor semicartesian. The first is not idempotent but the second is, and both are right-sided (resp. left-sided) quantales [20, cf.]. **Example 2.6:** The main examples of strong quantales are Heyting algebras, \(([0,1],\leq,\cdot)\), and strict linear quantales. Some MV-algebras, like Chang's MV-algebra \(([0,1],\wedge,\vee,\oplus,\otimes,0,1)\), are not strong. **Remark 2.5:** (i) The class of commutative and semicartesian quantales is closed under arbitrary products and slices \([a,b]\) whenever \(a\) is idempotent; (ii) the class of strong quantales is closed under arbitrary products and under interval constructions; (iii) as mentioned, commutative divisible quantales are closed under smooth slices. **From now on, we assume all quantales to be commutative.** **Definition 2.3:** Let \(\mathscr{Q}\) be a quantale; we define an alternative partial order \(\preceq\) given by \[a\preceq b\iff a=a\otimes b\] **Proposition 2.3** (Properties for Semicartesian Quantales): Given a semicartesian quantale, 1. Let \(\mathscr{Q}\) be a unital quantale, then it is integral. 2. \(a\preceq b\leq c\) implies \(a\preceq c\); 3. If \(e\in\mathrm{E}\,\mathscr{Q}\), \((e\preceq a)\iff(e\leq a)\). **Proposition 2.4:** If \(\mathscr{Q}\) is semicartesian and idempotent, it is in fact a complete distributive lattice and \(\otimes=\wedge\). In other words, it is a locale. Proof.: Suppose \(\mathscr{Q}\) is in fact idempotent; we have - because \(\otimes\) is increasing in both arguments - that \[a\leq b\text{ and }a\leq c\implies a=a\otimes a\leq b\otimes c.\] Hence, if \(a\) is less than both \(b\) and \(c\), then it must be smaller than \(b\otimes c\); but since \(\mathscr{Q}\) is semicartesian, by Remark 2.3 above, \(\otimes\leq\wedge\). This means that \(b\otimes c\) is a lower bound for \(b\) and \(c\), but what we had gotten from idempotency means it's the greatest such lower bound. Thus the multiplication satisfies the universal property of infima. The above is just a particular case of [17, Proposition 2.1].
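Before moving on to \(\mathscr{Q}\)-sets, the smooth slice tensor of Proposition 2.1 admits a quick numerical sanity check on the divisible quantale \(([0,1],\leq,\cdot)\) of Example 2.2, whose residuum is \(b\to a=\min(1,a/b)\). The sketch below (helper names ours) spot-checks the unit law, associativity, and divisibility inside a slice.

```python
import random

res = lambda b, a: 1.0 if b == 0 else min(1.0, a / b)     # residuum  b -> a  in ([0,1], <=, *)
slice_tensor = lambda b, x, y: b * res(b, x) * res(b, y)  # x (x)_b y = b*(b->x)*(b->y)

random.seed(0)
for _ in range(10_000):
    b = random.uniform(0.1, 1.0)
    x, y, z = (random.uniform(0.0, b) for _ in range(3))
    # b is the unit of (x)_b ...
    assert abs(slice_tensor(b, b, y) - y) < 1e-9
    # ... and (x)_b is associative (commutativity is clear from the formula)
    assert abs(slice_tensor(b, x, slice_tensor(b, y, z))
               - slice_tensor(b, slice_tensor(b, x, y), z)) < 1e-9
    # divisibility inside the slice: for x <= a <= b,  x = a (x)_b (b * (a -> x))
    a = random.uniform(x, b)
    assert abs(slice_tensor(b, a, b * res(a, x)) - x) < 1e-9
print("smooth slice checks passed on ([0,1], <=, *)")
```

On this quantale the formula simplifies to \(x\otimes_{b}y=xy/b\) for \(x,y\leq b\), which makes the assertions easy to verify by hand as well.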
### On \(\mathscr{Q}\)-Sets **Remark 2.6:** Hereon we are working exclusively with commutative semicartesian quantales, as opposed to ones without those properties. Given a quantale \(\mathscr{Q}\), one may form - roughly speaking - a \(\mathscr{Q}\)-space, wherein distances between points are measured by elements of \(\mathscr{Q}\) as opposed to - say - \([0,\infty)\) as we often do. This definition is made precise in the notion of a \(\mathscr{Q}\)-set. **Definition 2.4:** A \(\mathscr{Q}\)-set is a set endowed with a \(\mathscr{Q}\)-distance operation usually denoted by \(\delta\). _I.e._ a \(\mathscr{Q}\)-set is a set \(X\) with a map \(\delta:X^{2}\to\mathscr{Q}\) satisfying: 1. \(\delta(x,y)=\delta(y,x)\); 2. \(\delta(x,y)\otimes\delta(y,z)\leq\delta(x,z)\); 3. \(\delta(x,x)\otimes\delta(x,y)=\delta(x,y)\). It is usual to denote \(\delta(x,x)\) simply by the "extent of \(x\)", written as \(\operatorname{E}x\). A couple of things might jump out to the reader in the above definition. (i) \(\delta\) is symmetric, even though we have thrown out all but the vaguest notions tying ourselves to metric spaces; (ii) Why is the triangle inequality upside down? (iii) \(\operatorname{E}x\otimes\delta(x,y)=\delta(x,y)\): why not just ask that \(\operatorname{E}x=\top\)? Those questions are all valid - and answering the first and last ones differently has been done in the past and is the main difference between \(\mathscr{Q}\)-sets and \(\mathscr{Q}\)-enriched categories from a definitional perspective. The question of the order being inverse is more one of sanity: since we treat a \(\mathscr{Q}\)-set as a set with \(\mathscr{Q}\)-valued equality, it makes sense to think that \(\operatorname{E}x\) is the maximally valid equality to \(x\), and hence the triangular inequality needs to be turned upside down - and turned into the transitivity of equality. **Remark 2.7:** When we speak of properties of the type \(P(\vec{x})\leq Q(\vec{x})\) in \(\mathscr{Q}\)-sets, it is often more insightful to think of the logically equivalent (but notationally less helpful) statement \[P(\vec{x})\to Q(\vec{x})\ (=\top)\] There are two main category structures that one can canonically endow the collection of all \(\mathscr{Q}\)-sets with. One is taking maps to be co-contractions (_i.e._ they make \(\delta\) bigger) - the other is to consider well behaved \(\mathscr{Q}\)-valued relations between the underlying sets. **Definition 2.5:** A functional morphism \(f:X\to Y\) is a function \(f\) between the underlying sets of \(X\) and \(Y\) such that \(f\) increases \(\delta\) and preserves \(\operatorname{E}\); that is to say \[\delta_{X}\leq\delta_{Y}\circ(f\times f)\] \[\operatorname{E}_{X}=\operatorname{E}_{Y}\circ f\] **Remark 2.8:** There is a suitable notion of morphism between \(\mathscr{Q}\)-sets which is carried by "\(\mathscr{Q}\)-valued" relations. This notion isn't explored in this paper, but such maps are called "relational morphisms". The reader should beware that we don't often distinguish between \(\delta_{X}\) and \(\delta_{Y}\) and instead rely on suggestively named variables so as to indicate their type and hence the \(\delta\) they refer to. In other words, the reader is expected to be familiar with Koenig lookup3. Footnote 3: Which, to quote a great website – cppreference.com – Argument-dependent lookup, also known as ADL, or Koenig lookup, is the set of rules for looking up the unqualified function names in function-call expressions, including implicit function calls to overloaded operators.
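To make Definitions 2.4 and 2.5 concrete, here is a small executable reading over the quantale \(([0,1],\leq,\cdot)\) of Example 2.2: a finite pseudo-metric space becomes a \(\mathscr{Q}\)-set via \(\delta(x,y)=e^{-d(x,y)}\) (compare Example 2.12 below and the isomorphism with the Lawvere quantale), and any 1-Lipschitz map yields a functional morphism. The concrete points and helper names are our own.

```python
import math
from itertools import product

X = [0.0, 1.0, 2.5, 4.0]                      # a finite pseudo-metric space (points on R)
d = lambda x, y: abs(x - y)
delta = lambda x, y: math.exp(-d(x, y))       # Q-valued "equality", Q = ([0,1], <=, *)
EPS = 1e-12

for x, y, z in product(X, repeat=3):
    assert abs(delta(x, y) - delta(y, x)) < EPS                  # 1. symmetry
    assert delta(x, y) * delta(y, z) <= delta(x, z) + EPS        # 2. reversed triangle law
    assert abs(delta(x, x) * delta(x, y) - delta(x, y)) < EPS    # 3. extent law (here E x = 1)

# A functional morphism increases delta and preserves extents;
# any 1-Lipschitz map between pseudo-metric spaces does both here.
f = lambda x: x / 2.0                          # 1-Lipschitz, so delta can only grow
for x, y in product(X, repeat=2):
    assert delta(x, y) <= delta(f(x), f(y)) + EPS
    assert abs(delta(f(x), f(x)) - delta(x, x)) < EPS
print("Q-set axioms and functional-morphism conditions hold")
```

In this example all extents equal \(\top=1\); Examples 2.8-2.13 below show that non-trivial extents arise naturally for other quantales.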
We denote by \(\mathscr{Q}\)-\(\mathbf{Set}_{r}\) the category \(\mathscr{Q}\)-sets and relational morphisms between them and by \(\mathscr{Q}\)-\(\mathbf{Set}_{f}\) the category with the same objects but functional morphisms between them instead. Since we won't be tackling \(\mathscr{Q}\)-\(\mathbf{Set}_{r}\), we take \(\mathscr{Q}\)-\(\mathbf{Set}\) to simply mean \(\mathscr{Q}\)-\(\mathbf{Set}_{f}\). **Definition 2.6**:: Instead of proving the category axioms for functional morphisms we promised to prove a stronger result - which is incidently useful for another paper of ours in the works - which is to prove that \(e\)-morphisms form a category (given a generic commutative unital quantale) and that functional morphisms form a wide subcategory of \(I\)-morphisms (for \(I\) the monoidal unit). So, let us define \(e\)-morphisms: given an idempotent element \(e\) of \(\mathscr{Q}\), a \(e\)-morphism is a functional morphism "up to error \(e\)": \[e\otimes\delta(x,x^{\prime})\leq\delta(f(x),f(x^{\prime}))\] \[\operatorname{E}f(x)=e\otimes\operatorname{E}f(x)\] **Proposition 2.5**:: We claim that the collection of \(\langle e,\varphi\rangle\) where \(\varphi\) is an \(e\)-morphism constitutes a category under the obvious composition laws. Furthermore, the identity function is a \(I\)-morphism where \(I\) is the unit of the quantale, and further still: \(I\)-morphisms are closed under composition and form a subcategory which is definitionally equal to \(\mathscr{Q}\)-\(\mathbf{Set}_{f}\). Proof.: Firstly, the obvious composition takes an \(e\)-morphism \(f\) and an \(e^{\prime}\)-morphism \(g\) to a \((e\otimes e^{\prime})\) morphism \(g\circ f\). Associativity is due to functional (in \(\mathbf{Set}\), that is) \(\circ\) associativity and the fact that \(\otimes\) makes \(\mathscr{Q}\) a semigroup. The fact that \(g\circ f\) is a \((e\otimes e^{\prime})\)-morphism is rather obvious and the proof is ommited. The identity is evidently a \(I\)-morphism - and of course that composing \(I\)-morphisms gives a \(I\otimes I=I\)-morphism. ### Some Examples **Example 2.7**:: The empty set is - vacuously - a \(\mathscr{Q}\)-set. **Example 2.8:** The set of idempotent elements of \(\mathscr{Q}\) will be denoted \(\operatorname{E}\mathscr{Q}\). Let \(X\subseteq\operatorname{E}\mathscr{Q}\), then \((X,\otimes)\) is a \(\mathscr{Q}\)-set. It is trivial to see that \((e,e^{\prime})\in\operatorname{E}\mathscr{Q}\times\operatorname{E}\mathscr{Q} \mapsto(e\otimes e^{\prime})\) satisfies all \(\mathscr{Q}\)-set laws, but as a first non-trivial example, we provide the details: \[\delta(e,e^{\prime}) =e\otimes e^{\prime}\] \[=e^{\prime}\otimes e\] \[=\delta(e^{\prime},e)\] \[\delta(e,e^{\prime})\otimes\delta(e^{\prime},e^{\prime\prime}) =e\otimes e^{\prime}\otimes e^{\prime}\otimes e^{\prime\prime}\] \[\leq e\otimes e^{\prime\prime}\] \[=\delta(e^{\prime},e)\] \[\delta(e,e^{\prime})\otimes\operatorname{E}e= e\otimes e^{\prime}\otimes e\otimes e\] \[=e\otimes e^{\prime}\] \[=\delta(e,e^{\prime})\] Note that for any \(e\in X\) satisfies \(\delta(e,e)=\operatorname{E}e=e\) and that \(e\otimes e^{\prime}\) is the infimum of \(\{e,e^{\prime}\}\) in the poset \(\operatorname{E}\mathscr{Q}\). **Remark 2.9:** One cannot use the whole of \(\mathscr{Q}\) in \(\operatorname{E}\mathscr{Q}\)'s stead, as \(\delta(x,y)\otimes\operatorname{E}x\) would not hold. The only reason it holds in the above example is because for idempotents \(\otimes=\wedge\). 
However, under conditions, one can obtain a \(\mathscr{Q}\)-set where the underlying set is \(\mathscr{Q}\) itself. **Example 2.9:** Suppose that \(\mathscr{Q}\) is a quantale with "idempotent upper approximations"4: Footnote 4: In [21] are described sufficient conditions for \(\mathscr{Q}\) to have such a property. \[\forall q\in\mathscr{Q}:\exists q^{+}\in\operatorname{E}\mathscr{Q}:q^{+}= \min\left\{e\in\operatorname{E}\mathscr{Q}\mid q\preceq e\right\}\] Then \[\delta(x,y)=\begin{cases}x\otimes y,&x\neq y;\\ x^{+},&x=y.\end{cases}\] defines a \(\mathscr{Q}\)-set structure on \(\mathscr{Q}\) itself. **Example 2.10:** Much akin to how Lawvere's quantale is a Lawvere space, a (integral and commutative) quantale \(\mathscr{Q}\) is a \(\mathscr{Q}\)-set. This is achieved with the following: \[\delta(x,y)=(x\to y)\wedge(y\to x)\] which is roughly equivalent to \(|x-y|\) for real numbers. This isn't necessarily the best \(\mathscr{Q}\)-set structure we can give them, as \(\operatorname{E}x=\top\) for any \(x\). Ways to mitigate this phenomenon, which is specially strange for \(\bot\), involve taking into account idempotents above \(x\). An important quantallic property is the existence of an operation \((\_)^{-}\) taking an element \(x\) to the value \(\sup\left\{e\in\operatorname{E}\mathscr{Q}\ |\ e\preceq x\right\}\). Multiplying \(\delta(x,y)\) by \(x^{-}\otimes y^{-}\) guarantees - for instance - that the above construction coincides with the terminal object when \(\mathscr{Q}\) is a locale. Another way to correct this, is to incorporate \(\operatorname{E}\mathscr{Q}\) more directly, considering the space with underlying set \(\mathscr{Q}\times\operatorname{E}\mathscr{Q}\) and \(\delta\) given by \[\delta((x,e),(y,a))=a\otimes e\otimes\left[(x\to y)\wedge(y\to x)\right]\] We write this \(\mathscr{Q}\)-set as \(\mathscr{Q}_{\operatorname{E}}\). **Example 2.11**:: A construction that is explored in this work [3] but is suited to be mentioned here is \(X\boxtimes X\), given by the underlying set \(|X|\times|X|\) and with \(\delta\) given by the product of the \(\delta\) of the coordinates. The reason this construction is relevant here is because \(\delta:X\boxtimes X\to\mathscr{Q}_{\operatorname{E}}\) in a natural way: \[(x,y)\mapsto(\delta(x,y),\operatorname{E}x\otimes\operatorname{E}y)\] And this happens to be a functional morphism. **Example 2.12**:: Suppose \((X,d)\) is a pseudo-metric space, then \((X,d)\) is a \([0,\infty]\)-set where \(\delta(x,y)\neq\bot=\infty,\forall x,y\in X\). **Example 2.13**:: Given a commutative ring \(A\), let the set of its (left) ideals be denoted \(\mathscr{I}_{A}\). \(\mathscr{I}_{A}\) is a quantale. Given a left \(A\)-module \(M\), we can endow it with the structure of a \(\mathscr{I}_{A}\)-set: \[\delta(x,y)=\bigvee\left\{I\in\mathscr{I}_{A}\ |\ I\cdot x=I\cdot y\right\}\] In fact, that supremum is attained at with a particular ideal. Moreover, \(Ex=A=\max\mathscr{I}_{A}\). **Remark 2.10** (Completeness):: As mentioned previously, there is a category of \(\mathscr{Q}\)-sets with relational morphisms between them. It happens that this category is (equivalent to) a reflective subcategory of \(\mathscr{Q}\)-\(\mathbf{Set}\). The objects of this reflective subcategory are called "Scott complete" \(\mathscr{Q}\)-sets. The notion of completeness has to do with singletons, which are \(\mathscr{Q}\)-valued distributions which "measure a point". 
And being Scott-complete the same as saying that all points we measure are actually there, and we don't measure any points more than once. This, in a sense, is a similar condition to that of a space being sober. There is at least one different notion of completeness, called gluing completeness - which is related to compatible local data having exactly one gluing. These notions, and the reflective subcategories they define are explored in a different article in development [4]. ## 3 Main Constructions In the present section, we provide the basic constructions in the category of all \(\mathscr{Q}\)-sets and functional morphisms: limits, colimits, (regular) subject classifier, etc. We start with the following: **Proposition 3.1** (Separating Family): **:** For each \(e\in\operatorname{E}\mathscr{Q}\) we can consider the singleton set \(S_{e}=\{e\}\) endowed with the natural \(\mathscr{Q}\)-set structure described in Example2.8. Then the set of such \(\mathscr{Q}\)-sets is a separating family for the category of \(\mathscr{Q}\)-sets. Proof.: Indeed, for any pair of morphisms \(f,g:X\to Y\), if \(f\neq g\) then must be some \(x\in X\) such that \(f(x)\neq g(x)\). Taking \(e=\{\operatorname{E}x\}\in\operatorname{E}\mathscr{Q}\) and the function \(s_{x}:S_{e}\to X\) given by \((e\in\{e\})\mapsto(x\in X)\), then it is a functional morphism and obviously separates \(f\) from \(g\). ### Limits **Proposition 3.2** (Terminal Object): **:** The terminal object is \(\top=(\operatorname{E}\mathscr{Q},\otimes)\). Proof.: The set of idempotent elements of \(\mathscr{Q}\), \(\operatorname{E}\mathscr{Q}\), endowed with the natural \(\mathscr{Q}\)-set structure described in Example2.8 ( \(\delta(e,e^{\prime})=e\otimes e^{\prime}\) ) must be the terminal object because \(\operatorname{E}e=e\). For each \(\mathscr{Q}\)-set \((X,\delta)\), the unique morphism with codomain \(\top\) is defined as: \[!:X\to\top\] \[f(x)=\operatorname{E}x\] Since functional morphisms preserve extents, one has that \(\operatorname{E}f(x)=\operatorname{E}x\) - however, \(\operatorname{E}f(x)=f(x)\) and thus \(f(x)=\operatorname{E}x\). This proves that there is at most one morphism \(X\to\operatorname{E}\mathscr{Q}\). 
On the other hand, \(\delta(x,y)\leq\operatorname{E}x\otimes\operatorname{E}y\) which just happens to make \(x\mapsto\operatorname{E}x\) a functional morphism: \[\delta_{X}(x,x^{\prime}) \leq\operatorname{E}x\otimes\operatorname{E}x^{\prime}=\delta(x,x^{\prime})\] \[\operatorname{E}x =\operatorname{E}x\otimes\operatorname{E}x=\operatorname{E}x\] **Proposition 3.3** (Non-empty products): **:** The product of \(\mathscr{Q}\)-sets \((X_{i})_{i\in I}\) is: _Proof._ \[\prod_{i\in I}X_{i}=\left(\left\{(x_{i})_{i\in I}\in\prod_{i\in I}|X_{i}|\ \right|\,\forall i,j\in I(\operatorname{E}x_{i}= \operatorname{E}x_{j})\right\},\delta\right)\] \[\delta((x_{i})_{i\in I},(y_{i})_{i\in I})=\bigwedge_{i\in I}\delta_{i}(x_{i}, y_{i})\] **Remark 3.1**: **:** Since \[(x_{i})_{i\in I}\in\prod_{i\in I}X_{i}\implies\forall i,j\in I:\operatorname{ E}x_{i}=\operatorname{E}x_{j}\] we can conclude that \[\operatorname{E}(x_{i})_{i\in I}=\bigwedge_{i\in I}\operatorname{E}x_{i}= \bigwedge_{i}\operatorname{E}x_{i}\] _Proof: \(\mathcal{Q}\)-set._ \[\delta((x_{i})_{i\in I},(y_{i})_{i\in I}) =\bigwedge_{i\in I}\delta_{i}(x_{i},y_{i})\] \[=\bigwedge_{i\in I}\delta_{i}(x_{i},y_{i})\] \[=\delta((y_{i})_{i\in I},(x_{i})_{i\in I})\] \[\delta((x_{i})_{i\in I},(y_{i})_{i\in I})\otimes\delta((x_{i})_{i \in I},(x_{i})_{i\in I})\] \[=\bigwedge_{i\in I}\delta(x_{i},y_{i})\otimes\bigwedge_{j\in I} \delta(x_{j},x_{j})\] \[=\bigwedge_{i\in I}\delta(x_{i},y_{i})\otimes\mathrm{E}(x_{j})_{ j\in J}\] \[=\bigwedge_{i\in I}\delta(x_{i},y_{i})\otimes\mathrm{E}(x_{i})\] \[=\bigwedge_{i\in I}\delta(x_{i},y_{i})\] \[\delta((x_{i})_{i\in I},(y_{i})_{i\in I})\otimes\delta((y_{i})_{j \in I},(x_{i})_{z\in I}) =\bigwedge_{i\in I}\delta(x_{i},y_{i})\otimes\bigwedge_{j\in I} \delta(y_{j},z_{j})\] \[\leq\bigwedge_{i\in I}\delta(x_{i},y_{i})\otimes\delta(y_{i},z_{ i})\] \[\leq\bigwedge_{i\in I}\delta(x_{i},z_{i})\] \[=\delta((x_{i})_{i\in I},(y_{i})_{i\in I})\] _Proof: Projections._ \[\pi_{i}:\prod_{i\in I}X_{i}\rightarrow X_{i}\] \[\pi_{i}((x_{j})_{j\in I})=x_{i}\] which are morphisms because \[\delta_{i}(\pi_{i}((x_{j})_{j\in I}),\pi_{i}((y_{j})_{j\in I})) =\delta_{i}(x_{i},y_{i})\] \[\geq\bigwedge\delta_{j}(x_{j},y_{j})\] \[=\delta((x_{j})_{j\in I},(y_{j})_{j\in I})\] \[\delta_{i}(\pi_{i}((x_{j})_{j\in I}),\pi_{i}((x_{j})_{j\in I})) =Ex_{i}\] \[=\bigwedge_{j\in I}Ex_{j}\] \[=\delta((x_{j})_{j\in I},(x_{j})_{j\in I})\] Proof:: Universality.: Let \(f_{i}:A\to X_{i}\) a family of morphisms. We define: \[h:A \to\prod_{i\in I}X_{i}\] \[h(a) =(f_{i}(a))_{i\in I}\] Then \(h\) satisfies the universal property: \[\pi_{i}\circ h(a) =\pi_{i}((f_{i}(a))_{i\in I})=f_{i}(a)\] \[\pi_{i}\circ h =f_{i}\] **Proposition 3.4**:: The equalizer of \(f,g:(X,\delta)\to(Y,\delta^{\prime})\) is: \[\operatorname{Eq}(f,g)=\left(\left\{x\in X\ |\ f(x)=g(x)\right\},\delta\restriction_{ \left\{x\in X\ |\ f(x)=g(x)\right\}}\right)\] Proof.: It is obviously a \(\mathscr{Q}\)-set. It obviously equalizes the pair, now, for a given \(\alpha:A\to X\) also equalizing the pair, Realize that \(\alpha\)'s image must be entirely contained in \(\operatorname{Eq}(f,g)\) as \[[f\circ\alpha(a)=g\circ\alpha(a)]\iff a\in\operatorname{Eq}(f,g)\] and hence the restriction of codomain yields the unique morphism that makes the diagram commute and establishes universality. **Remark 3.2**:: Considering that limits are given as equalizers of products, the category has been shown to be complete. **Proposition 3.5** (Monomorphisms): A morphism is a monomorphism iff it is a injective as a function. 
Proof.: Injectivity implying monomorphicity is trivial. The only reason the other direction isn't trivial is that, in principle, there might not be enough morphisms to witness a non-injective morphism not being a mono. But this is easy: suppose \(f(x)=f(y)\), consider the \(\mathscr{Q}\)-set \(\{*_{e}\}\) for \(e=\operatorname{E}x=\operatorname{E}y\), with \(\operatorname{E}*_{e}=e\). This \(\mathscr{Q}\)-set (eg. 2.8) can be included in our object by sending \(*_{e}\) to either \(x\) or \(y\); if \(f\) is monic, then both we would have that \(x=y\) thus establishing injectivity. **Proposition 3.6** (Regular monomorphisms): The class of regular monomorphisms coincides with the class of injective morphisms that preserve \(\delta\). Which are precisely the \(\delta\)-preserving injective functions between the underlying sets. Proof.: A monomorphism is regular when it is an equalizer of a pair of parallel arrows. Suppose \(g,h:A\to B\), one way to conceive of an equalizer is as being the maximal subobject of \(A\) making the diagram commute. It is quite trivial to see that subobjects in general _must_ be subsets with a point-wise smaller \(\delta\). Hence, the largest subobject of \(A\) that equalizes the pair is simply the subset of \(A\) where the functions agree on, with the largest possible \(\delta\): that being \(A\)'s. Hence, we see a pattern where equalizers - in general - are simply subsets with \(\delta\) coming from a restriction of the original object. Importantly, though, we refer to "regular subobjects" as monomorphisms that preserve \(\delta\) - as they have been equivalently characterized. The skeptical reader might question that we have merely shown that regular monos preserve \(\delta\), as opposed to showing such to be a sufficient condition. In which case, given the fact that monos are injective functions, \(\delta\)-preserving monos are simply subsets with its superset's \(\delta\); consider one such mono: \(f:A\mapsto X\) (we may think that \(A\subseteq X\) and \(\delta_{A}=\delta_{X_{|A\times A}}\)), one takes \(X_{f}=(X\amalg X)/\sim_{f}\) with \(\sim_{f}\) defined so as to exactly identify both copies of \(A\). \[\delta(\llbracket(x,i)\rrbracket\,,\llbracket(y,j)\rrbracket)=\begin{cases} \delta(x,y),&i=j\\ \bigvee_{a\in A}\delta(x,f(a))\otimes\delta(f(a),y),&i\neq j\end{cases}\] Proof.: \(X_{f}\) is a \(\mathscr{Q}\)-set. Take \(i,j,k\) to be different indices \[\delta(\llbracket(y,i)\rrbracket\,,\llbracket(y^{\prime},i) \rrbracket) =\delta(y,y^{\prime})\] \[=\delta(y^{\prime},y)\] \[=\delta(\llbracket(y^{\prime},i)\rrbracket\,,\llbracket(y,i) \rrbracket)\] \[\delta(\left[\!\left[(y,i)\right]\!\right],\left[\!\left[(y^{\prime},i) \right]\!\right])\otimes\delta(\left[\!\left[(y^{\prime},i)\right]\!\right],\left[\! 
\left[(y^{\prime\prime},i)\right]\!\right]) =\delta(y,y^{\prime})\otimes\delta(y^{\prime},y^{\prime\prime})\] \[\leq\delta(y,y^{\prime\prime})\] \[=\delta(\left[\!\left[(y,i)\right]\!\right],\left[\!\left[(y^{ \prime\prime},i)\right]\!\right])\] \[\delta(\left[\!\left[(y,i)\right]\!\right],\left[\!\left[(y^{\prime},i)\right] \!\right]) =\delta(y,y^{\prime})\otimes\delta(y,y)\] \[=\delta(y,y^{\prime})\] \[=\delta(\left[\!\left[(y,i)\right]\!\right],\left[\!\left[(y^{ \prime},i)\right]\!\right])\] \[\delta(\left[\!\left[(y,i)\right]\!\right],\left[\!\left[(y^{ \prime},j)\right]\!\right]) =\bigvee_{x\in X}\delta(y,f(x))\otimes\delta(f(x),y^{\prime})\] \[=\bigvee_{x\in X}\delta(y^{\prime},f(x))\otimes\delta(f(x),y)\] \[=\delta(\left[\!\left[(y^{\prime},j)\right]\!\right],\left[\! \left[(y,i)\right]\!\right])\] \[\delta(\left[\!\left[(y,i)\right]\!\right],\left[\!\left[(y^{ \prime},j)\right]\!\right])\otimes\delta(\left[\!\left[(y^{\prime},j)\right] \!\right],\left[\!\left[(y^{\prime\prime},j)\right]\!\right]) =\bigvee_{x\in X}\delta(y,f(x))\otimes\delta(f(x),y^{\prime}) \otimes\delta(y^{\prime},y^{\prime\prime})\] \[\leq\bigvee_{x\in X}\delta(y,f(x))\otimes\delta(f(x),y^{\prime}1)\] \[=\delta(\left[\!\left[(y,i)\right]\!\right],\left[\!\left[(y^{ \prime\prime},j)\right]\!\right])\] \[\delta(\left[\!\left[(y,i)\right]\!\right],\left[\!\left[(y^{ \prime},j)\right]\!\right])\otimes\delta(\left[\!\left[(y^{\prime},j)\right] \!\right],\left[\!\left[(y^{\prime\prime},k)\right]\!\right]) =\bigvee_{x\in X}\bigvee_{x^{\prime}\in X}\delta(y,f(x))\otimes \delta(f(x),y^{\prime})\otimes\delta(y^{\prime},f(x^{\prime}))\otimes\delta( f(x^{\prime}),y^{\prime\prime})\] \[\leq\bigvee_{x\in X}\bigvee_{x^{\prime}\in X}\delta(y,f(x)) \otimes\delta(f(x),f(x^{\prime}))\otimes\delta(f(x^{\prime}),y^{\prime\prime})\] \[=\bigvee_{x\in X}\delta(y,f(x))\otimes\delta(f(x),y^{\prime\prime})\] \[=\delta(\left[\!\left[(y,i)\right]\!\right],\left[\!\left[(y^{ \prime\prime},k)\right]\!\right])\] _Proof: Regularity._ Define \(g_{0},g_{1}:Y\to X_{f}\) as \(g_{i}(y)=(y,i)\). Both are morphisms: \[\delta(g_{i}(y),g_{i}(y^{\prime}))=\delta((y,i),(y^{\prime},i))=\delta(y,y^{ \prime})\] The equalizer of \(g_{0}\) and \(g_{1}\) are precisely \(\mathbf{img}\,f\): \[g_{0}(y)=g_{1}(y) \implies\,\llbracket(y,0)\rrbracket=\llbracket(y,1)\rrbracket\] \[\implies\,y\in Imf\] Thus, \(f\) is regular. ### Colimits **Proposition 3.7** (Initial object):: The initial object is \(\bot=(\varnothing,\varnothing)\) Proof.: This is a \(\mathscr{Q}\)-set by vacuity (Example 2.7). The empty function is a morphism again by vacuity, and it obviously must be the initial object, as morphisms are - in particular - functions as well. **Proposition 3.8** (Non-empty coproducts):: The coproduct of \(\mathscr{Q}\)-sets \((X_{i})_{i\in I}\) is: \[\coprod_{i\in I}X_{i} =(\coprod_{i\in I}|X_{i}|,\delta)\] \[\delta((x,i),(y,i)) =\begin{cases}\delta_{i}(x,y),&i=j\\ \bot,&i\neq j.\end{cases}\] Proof: It is a \(\mathscr{Q}\)-set.: Suppose that \(i\neq j\neq k\neq i\). 
Without loss of generality: \[\delta((x,i),(y,i)) =\delta_{i}(x,y)\] ( \[\delta\] is symmetric) \[=\delta_{i}(y,x)\] \[=\delta((y,i),(x,i))\] \[\delta((x,i),(y,j)) =\bot\] \[=\delta((y,j),(x,i))\] \[\delta((x,i),(y,i))\otimes E(x,i) =\delta_{i}(x,y)\otimes Ex\] (extensionality) \[=\delta_{i}(x,y)\] \[=\delta((x,i),(y,i))\] \[\delta((x,i),(y,j)\otimes E(x,i) =\bot\otimes Ex\] \[=\bot\] \[=\delta((x,i),(y,j))\] \[\delta((x,i),(y,i))\otimes\delta((y,i),(z,i)) =\delta_{i}(x,y)\otimes\delta_{i}(y,z)\] (triangular inequality) \[\leq\delta_{i}(x,z)\] \[=\delta((x,i),(z,i))\] \[\delta((x,i),(y,j)\otimes\delta((y,j),(z,j)) =\bot\otimes\delta((y,j),(z,j)\] \[=\bot\] \[\leq\delta((x,i),(z,j))\] \[\delta((x,i),(y,j)\otimes\delta((y,j),(z,k)) =\bot\otimes\bot\] \[=\bot\] \[\leq\delta((x,i),(z,k))\] Proof: Coprojections.: \[\underline{u}_{i} :X_{i}\rightarrow\coprod_{i\in I}X_{i}\] \[\underline{u}_{i}(x_{i})=(x_{i},i)\] By construction those are obviously morphims. _Proof: Universality._ Let \(f_{i}:X_{i}\to A\) a family of morphisms. We define: \[h:\coprod_{i\in I}X_{i} \to A\] \[h(x,i) =f_{i}(x)\] Then \(h\) satisfies the universal property: \[f_{i}\circ\uplus_{i}(x,i) =f_{i}(x) =h(x,i)\] \[f_{i}\circ\uplus_{i} =h\] **Example 3.1:** Given a nonempty index set \(I\), we have a \(\mathscr{Q}\)-set \(\coprod_{i\in I}\top\), given by \[\delta((e,i),(e^{\prime},i^{\prime}))=\begin{cases}e\otimes e^{\prime},& \text{if }i=i^{\prime};\\ \bot,&\text{otherwise.}\end{cases}\] **Proposition 3.9** (Coequalizers): **:** For \(f,g:(X,\delta)\to(Y,\delta^{\prime})\), we define the equivalence relation \(\sim\subseteq|Y|\times|Y|\) as the transitive closure of \(f(x)\sim f(x)\sim g(x)\sim g(x)\). The coequalizer of \(f,g:(X,\delta)\to(Y,\delta^{\prime})\) is: \[\text{coEq}(f,g) =\begin{pmatrix}Y\diagup,\delta\end{pmatrix}\] \[\delta(\left[\![y]\!\right],\left[\![y^{\prime}]\!\right]) =\bigvee_{\begin{subarray}{c}a\sim y\\ a^{\prime}\sim y^{\prime}\end{subarray}}\delta(a,a^{\prime})\] _Proof: \(\mathscr{Q}\)-set._ \[\delta(\left[\![y]\!\right],\left[\![y^{\prime}]\!\right]) =\bigvee_{\begin{subarray}{c}a\sim y\\ a^{\prime}\sim y^{\prime}\end{subarray}}\delta(a,a^{\prime})=\bigvee_{ \begin{subarray}{c}a\sim y\\ a^{\prime}\sim y^{\prime}\end{subarray}}\delta(a^{\prime},a)\] \[=\delta(\left[\![y^{\prime}]\!\right],\left[\![y]\!\right])\] \[\delta(\left[\![y\right],\left[\![y^{\prime}\right]\!]) \geq\delta(\left[\![y\right],\left[\![y^{\prime}\right]\!])\otimes \mathrm{E}\left[\![y\right]\!]\] \[=\delta(\left[\![y\right],\left[\![y^{\prime}\right]\!])\otimes \bigvee_{\begin{subarray}{c}a\sim y\\ a^{\prime}\sim y\end{subarray}}\delta(a^{\prime},a)\] \[=\bigvee_{\begin{subarray}{c}a\sim y\\ a^{\prime}\sim y^{\prime}\end{subarray}}\delta(a^{\prime},a)\otimes\bigvee_{ a\sim y}Ea\] \[\geq\bigvee_{\begin{subarray}{c}a\sim y\\ a^{\prime}\sim y^{\prime}\end{subarray}}\delta(a^{\prime},a)\otimes Ea\] \[=\bigvee_{\begin{subarray}{c}a\sim y\\ a^{\prime}\sim y^{\prime}\end{subarray}}\delta(a^{\prime},a)\] \[=\delta(\left[\![y\right],\left[\![y^{\prime}\right]\!])\] **Proposition 3.10** (Epimorphisms): Epimorphisms are precisely the surjective morphisms. Suppose that \(f:X\to Y\) is surjective. Then, \(\forall y\in Y:\exists x\in X:f(x)=y\). Let \(g,h:Y\to A\). Suppose that \(\forall x\in X:g\circ f(x)=h\circ f(x)\). Let \(y\in Y\). There is a \(x\in X\) such that \(f(x)=y\). Then \(g(y)=g\circ f(x)=h\circ f(x)=h(y)\). 
Proof.: Take some \(f:X\to Y\) and suppose \[\exists y_{0}\in Y:\forall x\in X:f(x)\neq y_{0}\] Let \(Y_{f}=\left(\left(Y\amalg Y\right)\diagup,\delta\right)\), as generated by \[y\in\mathbf{img}\,f\implies\,\uplus_{l}(y)\sim\uplus_{r}(y)\] Let \(g,h:Y\to Y_{f}\) as \(g(y)=\llbracket(y,0)\rrbracket\) and \(h(y)=\llbracket(y,1)\rrbracket\). Then \(g\neq h\) but \(g\circ f=h\circ f\). ### Subobject Classifier The goal of this short subsection is to establish that the category \(\mathscr{Q}\)-sets has a (almost trivial) classifier for the regular subobjects. **Remark 3.3**:: Note that the category is not balanced, since there are many bijective morphisms -in particular, morphisms that are mono+epi- that are not isomorphisms, that coincides with the bijective morphisms that preserves \(\delta\). **Proposition 3.11** (Regular Subobject Classifier):: Let \(\Omega=(\top\,\dot{\cup}\top,\delta)\) where: \[\delta((e,i),(e^{\prime},j))=e\otimes e^{\prime}\] and consider the morphism \(t:\top\to\Omega\) that includes \(\top\) in the second copy of \(\top\) in \(\Omega\): \(t(e)=(e,1)\). Then \(t:\top\to\Omega\) is a classifier for the regular subobjects. Proof.: Note first that \(\Omega\) is a well-defined \(\mathscr{Q}\)-set and that the identity map \(\top\coprod\top\to\Omega\) is a bijective morphism that almost never is an isomorphims (is isomorphism iff \(\mathscr{Q}\) has a unique idempotent member). Moreover, note that \(t:\top\to\Omega\) is a regular monomorphism. For each regular monomorphism \(f:X\to Y\), we define \(\chi_{f}:Y\to\Omega\) as \[\begin{cases}(\operatorname{E}y,1),&y=f(x),\\ (\operatorname{E}y,0),&y\in Y\setminus f[X]\end{cases}\] It is evident that this is a morphism, as this is akin to the terminal arrow, but we plug in an extra tag that doesn't interfere with \(\delta\) but allows us to keep track of the element's provenance. **Claim \(\chi_{f}\circ f=t\circ!_{X}\)**: \[\chi_{f}(f(x))=(Ef(x),1)=(Ex,1)=t(Ex)=t(!_{X}(x))\] **Claim**: The commutative diagram above \[(\Omega\stackrel{{ t}}{{\leftarrow}}\top\stackrel{{ [\chi]}}{{\leftarrow}}X\stackrel{{ f}}{{\rightarrow}}Y\stackrel{{ \check{f}}}{{\rightarrow}}\Omega)\] is a pullback square. Let \(u:X\rightarrow\top\underset{\Omega}{\times}Y\) be the unique morphism given by the universal property of pullbacks. We will show that \(u\) is a bijective morphism that preserves \(\delta\)s, thus it is an isomorphism. Note that \(u(x)=(E_{X}x,f(x))\) for each \(x\in X\), since \(E_{\top}(E_{X}x)=E_{X}x=E_{Y}f(x)\) and \(t(E_{X}x)=\chi_{f}(f(x))\). \(u\) is injective: If \(u(x)=u(x^{\prime})\) then, \(x=x^{\prime}\) since \(f\) is injective. \(u\) is surjective: if \((e,y)\in\top\underset{\Omega}{\times}Y\), then \(E_{\top}(e)=e=E_{Y}y\) and \((e,1)=t(e)=\chi_{f}(y)\) thus, by the definition of \(\chi_{f}\), \(y=f(x)\) for some (unique) \(x\in X\). Then \(e=E_{Y}y=E_{Y}f(x)=E_{X}x\) and \((e,y)=(E_{X}x,f(x))\). \(u\) preserves \(\delta\)s: \[\delta(u(x),u(x^{\prime}))=\delta((E_{X}x,f(x)),(E_{X}x^{\prime},f(x^{\prime}) ))=\] \[\delta_{\top}(E_{X}x,E_{X}x^{\prime})\wedge\delta_{Y}(f(x),f(x^{\prime}))= \delta_{\top}(E_{X}x,E_{X}x^{\prime})\wedge\delta_{X}(x,x^{\prime})=\] \[(E_{X}x\otimes E_{X}x^{\prime})\wedge\delta_{X}(x,x^{\prime})=\delta_{X}(x,x^{ \prime})\] finishing the proof of the claim. 
**Claim**: \(\chi_{f}\) is the unique arrow \(\check{f}:Y\rightarrow\Omega\) such that the diagram \[(\Omega\stackrel{{ t}}{{\leftarrow}}\top\stackrel{{ [\chi]}}{{\leftarrow}}X\stackrel{{ f}}{{\rightarrow}}Y \stackrel{{\check{f}}}{{\rightarrow}}\Omega)\] is a pullback. Let \(x\in X\). By the commutativity of the diagram, \(\check{f}(f(x))=t(!_{X}(x))=(E_{X}x,1)=(E_{Y}f(x),1)\). Let \(y\in Y\setminus f[X]\). Consider the \(\mathscr{Q}\)-set \(Z=\{e\}\), where \(e=E_{Y}y\), as described in Example 2.8, and the (well defined) morphism \(g:Z\to Y\) such that \(g(E_{Y}y)=y\). If \(\check{f}(g(e))=t(!_{Z}(e))=t(e)=(e,1)\), then there is a unique morphism \(v:Z\to X\) such that \(f(v(e))=g(e)=y\), contradicting \(y\notin f[X]\). Thus \(\check{f}(y)=\check{f}(g(e))\neq(e,1)\). But \(E_{\Omega}\check{f}(y)=E_{\Omega}\check{f}\circ g(e)=E_{Z}e=e\) and \(\check{f}(y)\in\{(e^{\prime},0),(e^{\prime},1)\}\) for some idempotent \(e^{\prime}\) such that \(e^{\prime}=e^{\prime}\otimes e^{\prime}=E_{\Omega}\check{f}(y)\). This means that \(\check{f}(y)=(e,0)=(E_{Y}y,0)\), finishing the proof of the claim.

## 4 Local Presentability

Needless to say, a category being locally presentable is a very strong property, in that it - for instance - allows us to construct right adjoints to any functor that "ought" to have them (i.e. any cocontinuous one) when the categories in question are locally presentable. It also reflects positively into the slices of the category in question.

Consider a setting in which we have an object \(X\) of \(\mathscr{C}\) - a sufficiently cocomplete category - and a diagram \(D:\mathscr{X}\to\mathscr{C}\). There is a canonical map, given by universality of the coprojections5 \(\underline{u}_{k}:D_{k}\to\operatorname{colim}_{k}D_{k}\):

Footnote 5: Which the reader may read as "ip", as it is the dual of pi.

\[\operatorname{colim}_{k}\operatorname{\mathbf{hom}}(X,D_{k})\xrightarrow{ \varphi}\operatorname{\mathbf{hom}}(X,\operatorname{colim}_{k}D_{k})\]

For \(\operatorname{\mathbf{hom}}(X,\underline{\phantom{\mathbf{hom}}})\) to preserve colimits it then suffices that this canonical map be a natural isomorphism - and hence simply an isomorphism pointwise, as it is already natural. This is how we shall go about showing that \(\mathscr{Q}\)-\(\mathbf{Set}\) is accessible; being accessible and cocomplete, it must then be locally presentable. For accessibility, we need a regular cardinal \(\kappa\) - to be determined - and an (essentially) small collection of \(\kappa\)-compact objects which generate \(\mathscr{Q}\)-\(\mathbf{Set}\) under \(\kappa\)-directed colimits. First, then, we are going to search for one such class of objects, then show they generate the category appropriately.

### \(\kappa\)-Compact Objects

To determine them, we must obviously settle on _some_ regular cardinal. It turns out that \(\left|\mathscr{Q}\right|^{+}\) - the successor cardinal of \(\mathscr{Q}\)'s cardinality - suffices. Thus defined, it is reasonably straightforward to present an essentially small class of \(\kappa\)-compact objects. Consider \(X\) a \(\mathscr{Q}\)-set such that its carrier set has cardinality less than \(\kappa\). It - we shall show - is compact with respect to \(\kappa\). This, of course, is to show that \(\operatorname{\mathbf{hom}}(X,\underline{\phantom{\mathbf{hom}}})\) preserves \(\kappa\)-directed colimits. This we do in two steps: showing surjectivity and injectivity of \(\varphi\) for our \(X\).
**Lemma 4.1** (\(\varphi\) is surjective): \(\operatorname{Suppose}D:\langle I,\leq\rangle\to\mathscr{Q}\)-\(\mathbf{Set}\) is a \(\kappa\)-directed diagram - or it could just be \(\kappa\)-filtered it really doesn't matter either way. To show our claim, it would suffice that any arrow \(X\to\operatorname{colim}_{k}D_{k}\) actually factors through some (dependent on it) \(D_{i}\) - as we can then take the canonical maps from those \(\operatorname{\mathbf{hom}}\)s into \(\operatorname{colim}_{k}\operatorname{\mathbf{hom}}(X,D_{k})\) and have a section for \(\varphi\) - showing it is surjective. We do this by constructing an index set \(J\) such that a given \(f\) must factor through any \(i\in I\) greater than all \(j\in J\). Since our construction will ensure that \(|J|\leq\kappa\) and the partial order \(I\) (or domain category) is \(\kappa\)-directed (-filtered) - we will have at least one such \(i\) and we will have factored \(f\) appropriately. Proof.: The reader must forgive the following proof, as it isn't quite as insightful as to move or impart the reader with any deep beauty, but it is baroque enough to possibly deeply confuse them. Take some \(f:X\to\operatorname{colim}_{k}D_{k}\) for the rest of the proof. Defining \(J\):Given \(x,y\in X\), consider the set \[\Delta(x,y)=\{\delta_{i}(a,b)\ |\ i\in I\quad a,b\in D_{i}\quad\uplus_{i}(a)=f(x) \quad\uplus_{i}(b)=f(y)\}\] (Note that \(\Delta(x,y)\) is no emptier than \(X\), as \(f(x)\) must "be" (as in, up to \(\uplus\)) in some \(D_{i}\), and \(f(y)\), in some \(D_{j}\). And thus there is some \(k\) greater than both must "be" in at the same time.) If we admit the axiom of choice (which we do, and must to if we even want to start talking about things like \(|\mathscr{Q}|\) for non-special quantales), we can choose a a \(\Xi(x,y)\) \[\Xi(x,y)=\{(i,a,b)\ |\ i\in I\quad a,b\in D_{i}\quad\uplus_{i}(a)=f(x)\quad \uplus_{i}(b)=f(y)\}\] such that \(\Xi(x,y)\cong\Delta(x,y)\) and the isomorphism is given by the map \[(i,a,b)\mapsto\delta_{i}(a,b)\] And now, if we take the projection of \(\Xi(x,y)\) into the first coordinate, corresponding to the index \(i\) that \(a,b\) inhabit, we get a set we call \(\Gamma(x,y)\): \[\Gamma(x,y)=\pi_{I}[\Xi(x,y)]\] Now, the reader may be assured: if anything \(\Gamma(x,y)\) must have cardinality stricly less than \(\kappa\). As \(\Delta(x,y)\) is a subset of \(\mathscr{Q}\), which is itself smaller than \(\kappa\) - and we have only ever decreased its size by applying the above constructions. And so, we may indeed proceed with defining \(J\) \[J=\bigcup_{x,y}\Gamma(x,y)\] Since \(|X|<\kappa\) so does \(|X\times X|\) (provided, say, \(\kappa\) is infinite. We'd force \(\kappa\geq\omega\) otherwise, no harm done. Since \(|X\times X|<\kappa\) it follows that we are doing a union of less than \(\kappa\) sets of cardinalities that are lesser than \(\kappa\) - and regularity _is_ the fact that that itself must be smaller than \(\kappa\). And hence \(|J|\) is in fact still smaller than \(\kappa\). Let, therefore, \(\gamma\in I\) be some element greater than all \(j\in J\). Obviously the same can be done for filtered diagrams as opposed to posets. Again: no harm is done to the argument. 
Factoring \(f\) through \(D_{\gamma}\):Recall the construction for colimits, we may regard colimits as appropriate quotients of disjoints unions, this is now be centrally useful: Given \(x\in X\), its image under \(f\) is an equivalence class \(\llbracket(a,i)\rrbracket\) for an \(a\in D_{i}\) an elected representative (not necessarily democratically, the Axiom of Choice doesn't imply that everyone necessarily has a say). Hence, it is evident that by taking the indices of those representatives we shall have a set \(J^{\prime}\) of cardinalty smaller than \(\kappa\), to which is associated an element \(\gamma^{\prime}\)--greater than all its elements--such that we have a function \(X\xrightarrow{\bar{f}}D_{\gamma^{\prime}}\) taking \(x\) to \(D(i\leq\gamma^{\prime})(a)\), which factors \(f\). Betrayal!However, this function is _not_ necessarily a functional _morphism_ because we aren't taking \(\delta\) into account! \(\delta(x,y)\) might not be less than \(\delta(\bar{f}(x),\bar{f}(y))\). It _obviously_ factors \(f\)... in **Set**. Despite much fear and trembling (some loathing too), or largely because of them, it is possible to enhance it into a _morphism_ that factors it. This is done with the help of our old friend \(\gamma\). Redemption:A convenient fact left out (for dramatic purposes) of the first part of the proof is that \[\delta(f(x),f(y))=\sup\Delta(x,y)\] Which is because how \(\delta\) is defined on colimits: \[\delta(\llbracket a\rrbracket\,,\llbracket b\rrbracket)=\bigvee_{i\in I}\bigvee _{\begin{subarray}{c}\alpha\in\llbracket a\rrbracket\cap D_{i}\\ \beta\in\llbracket b\rrbracket\cap D_{i}\end{subarray}}\delta_{i}(\alpha,\beta)\] But also recall that \(D_{\gamma}\) is "above" all \(D_{j}\) for \(j\in J\): \[\delta_{j}(u,v)\leq\delta_{\gamma}(D(j\leq\gamma)(u),D(j\leq\gamma)(v))\] Hence, if \(\uplus_{j}(a)=f(x)\) and \(\uplus_{j}(b)=f(y)\) for some \(j\), it must follow that \[\delta_{j}(a,b)\leq\delta(D(j\leq\gamma)(a),D(j\leq\gamma)(b))\] But taking the supremum of the \((i,a,b)\) that do that is just \(\delta(f(x),g(y)\). We also know that it cannot grow any more than that and so we obtain: \[\delta(D(j\leq\gamma)(a),D(j\leq\gamma)(b))=\delta(f(x),f(y))\] This tells us that \(\gamma\) can do all that \(\gamma^{\prime}\) could - in that we can find preimages of \(f(x)\) under \(\uplus\) for every \(x\) - but also we can do so in such a way that is a _morphism_. We only need to concerns ourselves with \(\delta\), as the extent is trivially always preserved. This means we have indeed factored \(f\) as \[X\xrightarrow{\bar{f}}D_{\gamma}\xrightarrow{\uplus_{\gamma}}\operatorname{ colim}_{k}D_{k}\] and that tells us: \[\varphi(\llbracket\!\llbracket f\rrbracket\!\rrbracket)=\uplus_{\gamma}\circ\bar{f}=f\] **Lemma 4.2** (\(\varphi\) is injective).: The converse of the above holds as well: \(\varphi\) is injective. Proof.: Take \(X\) and \(D\) as above, we ought to show that \(\varphi\) is injective and thus that if \(\varphi(\llbracket\!\llbracket(f,i)\rrbracket\!\rrbracket)=\varphi(\llbracket \!\llbracket(g,j)\rrbracket\!\rrbracket)\) we ought to be able to show that \(\llbracket\!\llbracket(f,i)\rrbracket\!\rrbracket=\llbracket\!\llbracket(g,j)\rrbracket\!\rrbracket\). That is bound to be fun. Suppose, then, we do have such a pair. It follows definitionally that \[\uplus_{i}\circ f=\uplus_{j}\circ g\] Since these functions are extensionally the same, we have that for each \(x\in X\), \(f(x)\sim g(x)\). 
Where this equivalence is the equivalence the symmetric transitive closure of \[(a\in D_{i})\sim([D(i\leq j)](a)\in D_{j})\] defined on \(\coprod_{i}D_{i}\). Hence, it amounts to saying that \((a,i)\sim(b,j)\iff\text{there is a messy zig-zag diagram, as below, connecting them Consequently, to each \(x\) there is (at least one) finite diagram as above connecting \(f(x)\) and \(g(x)\). Each of those elected diagrams concerns only finitely many \(i\) in \(I\) - and there are less than \(\kappa\)\(x\) in \(X\). And thus if we take them all together, we will still have what amounts to less than \(\kappa\) indices. We call one such collection of indices simply \(J\), nevermind which one exactly. We take some \(\gamma\) greater than all \(j\) in \(J\), and once again consider \(D_{\gamma}\). Since The diagram formed by \(D\upharpoonright(J\cup\{\gamma\})\) is commutative, we have embedded the messy zig-zag for _every_\(x\) in a beautifully commutative way: Hence, \(g(x)\) and \(f(x)\) get identified _for every \(x\) at the same time_ in this particular \(D_{\gamma}\). In particular, we may simply take \(\bar{f}\) to be the morphism taking \(x\) to \(\bar{x}\), which coincides with \(\bar{g}\) - doing the same. But \[\mathbf{hom}(X,D(j\leq\gamma))(g) =[D(g\leq\gamma)]\circ g\] \[=\bar{g}\] \[=\bar{f}\] \[=[D(i\leq\gamma)]\circ f\] \[=\mathbf{hom}(X,D(i\leq\gamma))(f)\] And hence, we have a messy zig-zag (this time in \(\mathbf{Set}\)) connecting those arrows, and hence in the colimit they are actually the same: \[\llbracket(f,i)\rrbracket=\llbracket(g,j)\rrbracket\] and this is what we set out to prove. **Theorem 4.1** (\(|X|<\kappa\) implies \(\kappa\)-compactness)**:: Which is obvious in the light of the lemmas above. ### Accessibility and Presentability **Theorem 4.2** (\(\mathscr{Q}\)-\(\mathbf{Set}\) is indeed \(\kappa\)-accessible)**:: "Trivial". Proof.: Unsurprisingly, if you take \(Y\) some \(\mathscr{Q}\)-set and take the inclusion poset of \(\mathscr{P}^{<\kappa}(|Y|)\) which is always (due to regularity) \(\kappa\)-directed. In more precise terms, those "small" parts of \(Y\) get mapped to... themselves, with \(\delta\) given by restriction. It is immediately evident that the colimit of this diagram is \(Y\): Take some \(y\in Y\), it corresponds to the equivalence class containing manifold copies of itself but tagged with whichever subset that happened to contribute its membership to the disjoint union. Take a pair \(y,y^{\prime}\), since \(\delta\) on subsets will be given by restriction, the \(\delta\) of their corresponding classes will just be their \(\delta\). So the colimit is evidently just isomorphic to \(Y\). **Theorem 4.3** (Local Presentability)**:: Since we already know that \(\mathscr{Q}\)-\(\mathbf{Set}\) is a cocomplete category, we have actually shown that it is locally presentable. Since accessible cocomplete categories are invariably locally presentable. ## 5 Monoidal Structures It is already known, thanks to the construction of limits in a previous section, that \(\mathscr{Q}\)-\(\mathbf{Set}\) has a "reasonable" monoidal category structure. This product, however, doesn't often have an exponential associated to it: if it did, \(\_\times X\) would be cocomplete. Recall the construction of the categorical products: \[X\times Y=\{(x,y)\ |\ \operatorname{E}_{X}x=\operatorname{E}_{Y}y\}\] with \(\delta((x,y),(a,b))=\delta_{X}(x,a)\wedge\delta(y,b)\). 
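To make the recalled product construction concrete, here is a minimal Python sketch over a small quantale given by explicit finite data. The class and function names (`Quantale`, `QSet`, `qset_product`) are purely illustrative and not part of any library; the user is assumed to supply the carrier, the order and the multiplication, and the meet is computed lattice-theoretically.

```python
class Quantale:
    """A small quantale given by explicit data: finite carrier, order, multiplication."""
    def __init__(self, elements, leq, tensor):
        self.elements = list(elements)
        self.leq = leq        # leq(a, b) -> bool: the partial order
        self.tensor = tensor  # tensor(a, b): the quantale multiplication

    def join(self, subset):
        """Least upper bound of a (possibly empty) subset of elements."""
        subset = list(subset)
        ubs = [u for u in self.elements if all(self.leq(s, u) for s in subset)]
        return next(u for u in ubs if all(self.leq(u, v) for v in ubs))

    def meet(self, a, b):
        """Greatest lower bound of two elements (the lattice 'and' used by the product)."""
        lbs = [l for l in self.elements if self.leq(l, a) and self.leq(l, b)]
        return next(l for l in lbs if all(self.leq(m, l) for m in lbs))


class QSet:
    """A Q-set: a carrier together with a Q-valued 'partial equality' delta."""
    def __init__(self, carrier, delta):
        self.carrier = list(carrier)
        self.delta = delta    # delta(x, y) -> quantale element

    def extent(self, x):
        return self.delta(x, x)


def qset_product(Q, X, Y):
    """Categorical product: pairs with equal extents, delta the pointwise meet."""
    carrier = [(x, y) for x in X.carrier for y in Y.carrier
               if X.extent(x) == Y.extent(y)]

    def delta(p, q):
        (x, y), (a, b) = p, q
        return Q.meet(X.delta(x, a), Y.delta(y, b))

    return QSet(carrier, delta)
```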
Since coequalizers have to do with taking suprema, the product being cocomplete would mean that something like \[a\wedge\bigvee_{i}b_{i}=\bigvee_{i}a\wedge b_{i}\] would have to hold. This would make \(\mathscr{Q}\)'s underlying lattice a locale--although it does not force \(\otimes=\wedge\), of course. The question the arises: are there monoidal closed (and semicartesian) category structures naturally defined over \(\mathscr{Q}\)-\(\mathbf{Set}\) and how are those different structures related to each other? We offer some such structures - arranged in a hierarchy of monoidal products related to \(\otimes\). From the product construction, we can spot two extension points: we can change \(\delta\) to use \(\otimes\) as opposed to \(\wedge\) and we can use some other relation \(\sim\) between \(\operatorname{E}_{X}\) and \(\operatorname{E}_{Y}\) as opposed to \(=\). The former we always take, the latter requires some consideration so that desirable categorical properties may still hold. **Definition 5.1** (Locallic Congruence): A locallic congruence is an equivalence relation on a locale such that \[[\forall i\in I:a\sim b_{i}]\implies a\sim\bigvee_{i\in I}b_{i}\] \[[a\sim b,a\sim c]\implies[a\sim(b\wedge c)]\] We say "a locallic congruence over a \(\mathscr{Q}\)" for a quantale \(\mathscr{Q}\) meaning one such congruence over its locale of idempotent elements. **Definition 5.2** (Congruential Tensor): Our tensors come from locallic congruences on \(\mathscr{Q}\). Namely, taking one such congruence \(\sim\), we define the operation \[X\otimes Y=\{(x,y)\ |\ \operatorname{E}_{X}x\sim\operatorname{E}_{Y}y\}\] \[\delta((x,y),(a,b))=\delta_{X}(x,a)\otimes\delta(y,b)\] We should also define its action on morphisms: which is to take \((f:X\to Y,\ g:A\to B)\) to \[(x,a)\stackrel{{ f\otimes g}}{{\longmapsto}}(f(x),g(a))\] Functoriality is trivial. We claim the above defines an obvious functor, that this is actually associative and commutative, that it is semicartesian, cocomplete, and that it has a unit. We shall first provide the definition for the unit, and then proceed with the appropriate proofs for our claims. **Definition 5.3** (Congruential Tensor Unit):: Given the above \(\sim\), it defines equivalence classes on \(\operatorname{E}\mathscr{Q}\), and since \(\sim\) is closed under suprema: \[a\sim\sup\,\llbracket a\rrbracket\] And so, the set \(\operatorname{E}\mathscr{Q}\) is in bijection with the following regular subterminal, given by \[\{\sup\,\llbracket a\rrbracket\,\mid\,a\in\operatorname{E}\mathscr{Q}\}\] which is what we take to be \(1\) - the claimed unit for \(\otimes\). **Remark 5.1**:: In this section, we shall - for notational reasons - be using \(\otimes\) as a functor to denote a generic congruential tensor - coming from some fixed but generic \(\sim\). Where necessary/convenient, we may specify the relation to disambiguate. Later own, though, we shall use \(\otimes\) referring to the minimal tensor, given by \(a\sim b\iff a=b\). Similarly, we shall refer to the maximal tensor, given by the chaotic relation by the symbol \(\boxtimes\). **Lemma 5.1** (\(\otimes\) is commutative):: Proof.: Since \(\otimes:\mathscr{Q}\times\mathscr{Q}\to\mathscr{Q}\)--the actual algebraic operation on the quantale--is taken to be commutative, and the fibration over the extents is over a symmetric relation, it is obvious that it will be commutative. 
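Reusing the toy `Quantale` and `QSet` classes sketched above, the congruential tensor of Definition 5.2 changes exactly the two extension points just identified: \(\delta\) uses the quantale multiplication instead of \(\wedge\), and the extents only need to be \(\sim\)-related rather than equal. The sketch below assumes the congruence is supplied as a boolean predicate on idempotents and does not verify that it is in fact a locallic congruence.

```python
def congruential_tensor(Q, sim, X, Y):
    """The tensor of Definition 5.2: pairs whose extents are ~-related,
    with delta computed by the quantale multiplication rather than the meet.

    `sim(e, f)` is assumed to implement a locallic congruence on the
    idempotents of Q; nothing here checks that assumption.
    """
    carrier = [(x, y) for x in X.carrier for y in Y.carrier
               if sim(X.extent(x), Y.extent(y))]

    def delta(p, q):
        (x, y), (a, b) = p, q
        return Q.tensor(X.delta(x, a), Y.delta(y, b))

    return QSet(carrier, delta)


# The two extremes singled out in Remark 5.1:
minimal_sim = lambda e, f: e == f   # equality, giving the "minimal" tensor
chaotic_sim = lambda e, f: True     # the chaotic relation, giving the "maximal" tensor
```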
**Lemma 5.2** (\(\otimes\) is associative):: Proof.: Associativity has a canonical isomorphism we ought to consider - that being what we would do in **Set** if we were given the task there: \((a,(b,c))\mapsto((a,b),c)\). We ought to show that this is an isomorphism, instead of just a function between the underlying sets. This is achieved by realizing that \[(a,(b,c))\in A\otimes(B\otimes C)\iff\operatorname{E}a\sim\operatorname{E}b \sim\operatorname{E}c\iff((a,b),c)\in(A\otimes B)\otimes C\] And we do _that_ using \(\wedge\)-congruence: \[(a,(b,c))\in A\otimes(B\otimes C) \implies[\operatorname{E}b\sim\operatorname{E}c]\text{ and }[ \operatorname{E}a\sim\operatorname{E}b\otimes\operatorname{E}c]\] \[\implies[\operatorname{E}b\sim\operatorname{E}b\otimes\operatorname{E }c]\text{ and }[\operatorname{E}a\sim\operatorname{E}b\otimes\operatorname{E}c]\] \[\implies[\operatorname{E}a\sim\operatorname{E}b\sim\operatorname{E }c]\] Obviously, mutatis mutandis, one can prove the same for \(((a,b),c)\in(A\otimes B)\otimes C\) - this can also be seen as a consequence of the above and commutativity. Obviously, if the extent equivalence chain holds, then one can form the triples in either shape, so the logical equivalence holds. We have shown that the obvious associator is in fact a function, but it is also bound to be an isomorphism since it is evidently a bijection and preserves \(\delta\) thanks to the associativity of \(\otimes\) as a \(\mathscr{Q}\) operation. We call the above associator "\(\alpha\)". It is easy to see that \(\alpha\) is natural and indeed satisfies the pentagon identity. For the skeptical readers, a proof can be sketched: the forgetful functor back into \(\mathbf{Set}\) does not make commutative any diagram that didn't already enjoy the property - and the forgetful image of our pentagon identity is the restriction of a commutative diagram in set (the pentagon identity for the set-theoretical cartesian product). **Lemma 5.3** (\(\otimes\) has \(1\) as a unit)**::**: __ Proof.: We haven't given \(1\)'s \(\delta\) but have left it implicit in saying it is a regular subterminal - meaning it is a subset of \(\operatorname{E}Q\) with \(\delta\) defined as \(\wedge\). This suffices to show that it is indeed a \(\mathscr{Q}\)-set. Suppose now we have some \(\mathscr{Q}\)-set \(X\) and let us consider its product with \(1\): \[X\otimes 1=\{(x,\sup\,\llbracket e\rrbracket)\ |\ e\in\operatorname{E}\mathscr{Q},\ \operatorname{E}_{X}x\sim\operatorname{E}_{1}\sup\,\llbracket e\rrbracket\}\] And hence, we know that \(\operatorname{E}_{X}x\sim e\) if and only if \((x,\sup\,\llbracket e\rrbracket)\in X\otimes 1\). Since \(\sim\) is symmetric, it follows that the only element that can ever get paired with \(x\) is \(\sup\,\llbracket E\,x\rrbracket\). And hence \(|X|\) is in bijection with \(|X\otimes 1|\). 
This bijection is naturally \(\delta\)-preserving: \[\delta(x,y) =\delta(x,y)\otimes\operatorname{E}x\otimes\operatorname{E}y\] \[=\delta(x,y)\otimes\operatorname{E}x\otimes\operatorname{E}y \otimes\sup\,\llbracket\operatorname{E}x\rrbracket\otimes\sup\,\llbracket \operatorname{E}y\rrbracket\] \[=\delta(x,y)\otimes\sup\,\llbracket\operatorname{E}x\rrbracket \otimes\sup\,\llbracket\operatorname{E}y\rrbracket\] \[=\delta(x,y)\otimes\delta_{1}(\sup\,\llbracket\operatorname{E}x \rrbracket,\sup\,\llbracket\operatorname{E}y\rrbracket)\] \[=\delta((x,\sup\,\llbracket\operatorname{E}x\rrbracket),(y,\sup \,\llbracket\operatorname{E}y\rrbracket))\] One has projections \(X\otimes 1,1\otimes X\to X\) given by forgetting the second coordinate (or just taking the inverse of the bijection we have established above). Those are what will become our unitors: The morphisms \(\rho:X\otimes 1\to X\) and \(\lambda:1\otimes X\to X\) are both obviously natural in \(X\) and we won't spend any time proving it here. **Theorem 5.1** (\((\mathscr{Q}\)**-Set\(,\otimes,1)\)** is a symmetric monoidal category)**::**: In light of what we've established, we would have to show that the associator and unitor satisfy the triangle identity and that we have a braiding satisfying one hexagonal identity and the symmetry condition. Proof.: We establish the unitor-associator triangle identity by tracing around an element around the path, noting that \(\operatorname{E}(x,\sup\left[\operatorname{E}x\right])=\operatorname{E}x\) and hence \(\operatorname{E}x\sim\operatorname{E}y\). The braiding in question is simple: \((x,y)\mapsto(y,x)\) which can be formed as \(\sim\) is symmetric. Moreover, it is obviously \(\delta\)-preserving and bijective, making it an isomorphism. It is trivially natural on \(X\) and \(Y\), and swapping \(X\) for \(Y\) and vice versa evidently shows that the symmetry condition holds. So all that remains is to show the hexagon braiding identity. For now, let's give a name to the braiding: \(\beta:X\otimes Y\to Y\otimes Y\): Here, the reader is invited to trace an element's orbit along those paths, there are no traps - we swear. **Theorem 5.2** (\(\otimes\) is cocomplete in either entry):: Proof.: Showing it directly may be painful, so instead we opt to do it in two steps: proving it preserves coproducts and proving it preserves coequalizers. Since all colimits are coequalizers of coproducts, preserving both means to preserve all. Coproduct PreservationTake \(X_{i}\) for \(i\in I\) an indexed set of \(\mathscr{Q}\)-sets. We can inspect \((\coprod_{i}X_{i})\otimes Y\) and its elements are \[((x,i),y)\ x\in X_{i}\text{ st. }\operatorname{E}_{i}x\sim y\] Inspecting the elements of \(\coprod_{i}(X_{i}\otimes Y)\) yields \[((x,y),i)\ x\in X_{i}\text{ st. }\operatorname{E}_{i}x\sim y\] So it's just a shuffling of the label saying which index that instance of \(x\) belongs to. Completely immaterial. 
Their \(\delta\)s just as similar: \[\delta(((x,i),y),((x^{\prime},i^{\prime}),y^{\prime})) =\delta((x,i),(x^{\prime},i^{\prime}))\otimes\delta(y,y^{\prime})\] \[=\begin{cases}\delta(y,y^{\prime})\otimes\delta_{i}(x,x^{\prime} ),&i=i^{\prime},\\ \delta(y,y^{\prime})\otimes\bot,&i\neq i^{\prime}.\end{cases}\] \[\delta(((x,y),i),((x^{\prime},y^{\prime}),i^{\prime})) =\begin{cases}\delta_{X_{i}\otimes Y}((x,y),(x^{\prime},y^{\prime })),&i=i^{\prime},\\ \bot,&i\neq i^{\prime}.\end{cases}\] \[=\begin{cases}\delta_{i}(x,x^{\prime})\otimes\delta(y,y^{\prime}) &i=i^{\prime},\\ \bot,&i\neq i^{\prime}.\end{cases}\] Since \(\bot\) is absorbing, they are all the same. Coequalizer PreservationTake \(f_{i}:A\to B\) for a pair of fixed \(A\) and \(B\), let's consider the coequalizer which, as we know, is given by the quotient of the reflexive and transitive closure of the relation given by \[f_{i}(a)\approx f_{j}(a)\] So consider \(X\otimes f_{i}:X\otimes A\to X\otimes B\). First we note that an element of their coequalizer must be some \(\llbracket(x,b)\rrbracket_{\approx^{\prime}}\) with \(\approx^{\prime}\) generated by \[(X\otimes f_{i})(x,a)=(x,f_{i}(a))\approx^{\prime}(x,f_{j}(a))=(X\otimes f_{j} )(x,a)\] hence it is obvious (since it's the same \(x\) throughout the line and \(\approx^{\prime}\) is generated freely by the above conditions, that an equivalence class of the relation as defined above will and can only ever have the same \(x\) in all of its members. consider the projection on the second coordinate, ie. \(\pi_{B}\,\llbracket(x,b)\rrbracket\) as a subset of \(B\). It is evident that all \(b^{\prime}\) in such an assembly will be such that \(b^{\prime}\approx b\) - by definition. It is also true that \(\operatorname{E}b^{\prime}=\operatorname{E}b\) since the equivalence relation requires them to have come from the same point by possibly different - and zigzagging (but that doesn't matter, since extent will be preserved) - paths. Since the same will hold for the broader subset that is the equivalence class \(\llbracket b\rrbracket_{\approx}\), we can confidently say that \(\operatorname{E}x\sim\operatorname{E}\,\llbracket b\rrbracket_{\approx}\). Thus, we can safely hoist the \(X\) component out of each equivalence class \(\llbracket(x,b)\rrbracket\) and obtain an element \((x,\llbracket b\rrbracket_{\approx})\in X\otimes C\). Similarly, any such element will necessarily be such that we can form \((x,b^{\prime})\) for every \(b^{\prime}\in\llbracket b\rrbracket_{\approx}\) and hence comes from an element \(\llbracket(x,b)\rrbracket_{\approx^{\prime}}\) in the manner described above. This surjection we have shown is obviously also injective. So for an isomorphism, all that we would have to show is \(\delta\)-preservation. 
\[\delta((x,\llbracket b\rrbracket),(y,\llbracket\beta\rrbracket)) =\delta(x,y)\otimes\delta(\llbracket b\rrbracket\,,\llbracket \beta\rrbracket)\] \[=\delta(x,y)\otimes\bigvee_{\begin{subarray}{c}u\in\llbracket b \rrbracket\\ v\in\llbracket\beta\rrbracket\end{subarray}}\delta(u,v)\] \[=\bigvee_{\begin{subarray}{c}u\in\llbracket b\rrbracket\\ v\in\llbracket\beta\rrbracket\end{subarray}}\delta(x,y)\otimes\delta(u,v)\] \[=\bigvee_{\begin{subarray}{c}u\in\llbracket b\rrbracket\\ v\in\llbracket\beta\rrbracket\end{subarray}}\delta((x,u),(y,v))\] \[=\delta(\llbracket(x,u)\rrbracket\,,\llbracket(y,v)\rrbracket)\]

**Theorem 5.3** (\((\mathscr{Q}\)-\(\mathbf{Set},\otimes)\) is monoidal closed):

Proof.: \(X\otimes\_\) is a cocomplete endofunctor on a locally presentable category, and every cocontinuous functor between locally presentable categories admits a right adjoint; that right adjoint provides the internal hom.

### Formalizing their Hierarchy

As previously seen, an equivalence relation \(\sim\) gives rise to a tensor \(\widetilde{\otimes}\), which we had heretofore neglected to mark with the relation that originated it. Since relations are ordered by inclusion (and hence by implication), it would be strange if some similar hierarchy did not connect their tensorial spawn. To that effect, we have introduced a notion of morphism between monoidal categories called "translax" monoidal functors - which are neither lax nor oplax functors but do indeed form a 2-category with objects being monoidal categories.

**Definition 5.4** (Translax Monoidal Functor): Roughly speaking, a translax monoidal functor is akin to a monoidal functor, but the arrow associated to the units goes in the reverse (hence "trans") direction to the one associated to the monoidal product. In more formal terms, given monoidal categories \(\mathscr{A}\) and \(\mathscr{B}\), a translax monoidal functor \(F:\mathscr{A}\to\mathscr{B}\) is a functor endowed with a "covariant" tensor transformation and a "contravariant" unit map satisfying some coherence conditions. This is to say, a functor \(F\) between the underlying categories and

\[\mu:F(\_\otimes_{\mathscr{A}}\_) \to F(\_)\otimes_{\mathscr{B}}F(\_)\] \[\varepsilon:1_{\mathscr{B}} \to F(1_{\mathscr{A}})\]

satisfying the expected coherence conditions: an associativity square relating \(\mu\) to the associators of \(\mathscr{A}\) and \(\mathscr{B}\), and unit conditions relating \(\varepsilon\), \(\mu\) and the unitors, with \(\varepsilon\) pointing in the reversed direction.

Now note that if \(\sim\) implies \(\approx\), then \(\operatorname{E}x\sim\operatorname{E}y\) entails \(\operatorname{E}x\approx\operatorname{E}y\), so the underlying set of \(X\,\widetilde{\otimes}\,Y\) is contained in that of \(X\overset{\approx}{\otimes}Y\), with \(\delta\) computed by the same formula. Consequently, we have maps - trivially natural in \(X\) and \(Y\) - given "functorially". This is to say: if we take the category of functors \(\mathscr{Q}\)-\(\mathbf{Set}\times\mathscr{Q}\)-\(\mathbf{Set}\to\mathscr{Q}\)-\(\mathbf{Set}\) and natural transformations between them, the mapping taking \(\sim\) to \(\widetilde{\otimes}\) and taking \(\sim\,\leq\,\approx\) to \(\widetilde{\otimes}\hookrightarrow\overset{\approx}{\otimes}\) is trivially functorial, as the maps are just inclusions.

**Theorem 5.5**: There is a functor from the implication category of locallic congruences over \(\mathscr{Q}\) to the 2-category of monoidal categories with translax functors. That functor takes \(\sim\) to its associated tensor product.

Proof.: We already know that the product inclusion is a natural transformation - functorially dependent on \(\sim\) - of the appropriate type. And we know that the unit inclusions are contravariantly functorial with respect to \(\sim\). Therefore, the obvious choice of \(F\) is \(\mathbf{id}_{\mathscr{Q}\text{-}\mathbf{Set}}\). What remains is to show that the above choices jointly form a translax functor.
This verification, however, is rather dull to read and to transcribe from our notes - we shall omit it since it is quite trivial.

## 6 Change of basis

There are plenty of possible definitions for morphisms between quantales. The most basic one that is remotely useful is that of a function which is simultaneously an order morphism and a semigroup morphism. There are many examples of such morphisms, such as subquantale inclusions and projections. There are also nontrivial ones: for instance, when \(\mathscr{Q}\) is semicartesian and commutative one may form

\[q\mapsto q^{-}=\max\left\{e\in\operatorname{E}Q\ |\ e\preceq q\right\}\]

which happens to be right adjoint to the inclusion of idempotents into the quantale. This means that such morphisms would relate locallic-sets (which are topos-adjacent) and our \(\mathscr{Q}\)-sets. Thus, for now, let us explore these simple morphisms - which are just non-decreasing functions that preserve products - and the obvious functor that arises from them.

**Definition 6.1**: Given \(f:\mathscr{P}\to\mathscr{Q}\), one defines \(f_{*}:\mathscr{P}\text{-}\mathbf{Set}\to\mathscr{Q}\text{-}\mathbf{Set}\) to be the functor

\[f_{*}(X,\delta)=(X,f\circ\delta)\]

with the trivial action on morphisms. As defined, this obviously preserves the identity. Moreover, it is basically immediate - provided \((X,f\circ\delta)\) is indeed always a \(\mathscr{Q}\)-set whenever \((X,\delta)\) is a \(\mathscr{P}\)-set - that the action is functorial: we are just composing functions after all. To see the claim that remains, simply realize that all \(\mathscr{Q}\)-set axioms are either trivially valid (symmetry) or depend only on \(\leq\) and \(\otimes\) being preserved.
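In the same illustrative Python setting as the earlier sketches, the change-of-basis functor of Definition 6.1 is literally post-composition with \(f\). The sketch assumes \(f\) is supplied as a plain function that is monotone and preserves the multiplication, since, as just argued, those are exactly the properties the \(\mathscr{Q}\)-set axioms need; nothing below checks those assumptions.

```python
def change_of_base(f, X):
    """f_*(X, delta) = (X, f o delta): same carrier, delta post-composed with f.

    `f` is assumed to be order-preserving and to preserve the quantale
    multiplication; symmetry of the new delta is automatic, and the
    extensionality/triangle axioms follow from those two properties.
    """
    return QSet(X.carrier, lambda x, y: f(X.delta(x, y)))


def change_of_base_on_morphisms(f, phi):
    """The action on morphisms is trivial: the same underlying function."""
    return phi
```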
2305.13397
The Spin-Orbit Misalignment of TOI-1842b: The First Measurement of the Rossiter-McLaughlin Effect for a Warm Sub-Saturn around a Massive Star
The mechanisms responsible for generating spin-orbit misalignments in exoplanetary systems are still not fully understood. It is unclear whether these misalignments are related to the migration of hot Jupiters or are a consequence of general star and planet formation processes. One promising method to address this question is to constrain the distribution of spin-orbit angle measurements for a broader range of planets beyond hot Jupiters. In this work, we present the sky-projected obliquity ($\lambda=-68.1_{-14.7}^{+21.2} \,^{\circ}$) for the warm sub-Saturn TOI-1842b, obtained through a measurement of the Rossiter-McLaughlin effect using WIYN/NEID. Using the projected obliquity, the stellar rotation period obtained from the TESS light curve, and the projected rotation velocity from spectral analysis, we infer the 3D spin-orbit angle ($\psi$) to be $\psi=73.3^{+16.3}_{-12.9} \,^{\circ}$. As the first spin-orbit angle determination made for a sub-Saturn-mass planet around a massive ($M_{\rm *}=1.45 \,{\rm M_\odot}$) star, our result presents an opportunity to examine the orbital geometries for new regimes of planetary systems. When combined with archival measurements, our observations of TOI-1842b support the hypothesis that the previously established prevalence of misaligned systems around hot, massive stars may be driven by planet-planet dynamical interactions. In massive stellar systems, multiple gas giants are more likely to form and can then dynamically interact with each other to excite spin-orbit misalignments.
Kyle Hixenbaugh, Xian-Yu Wang, Malena Rice, Songhu Wang
2023-05-22T18:18:08Z
http://arxiv.org/abs/2305.13397v1
# The Spin-Orbit Misalignment of TOI-1842b ###### Abstract The mechanisms responsible for generating spin-orbit misalignments in exoplanetary systems are still not fully understood. It is unclear whether these misalignments are related to the migration of hot Jupiters or are a consequence of general star and planet formation processes. One promising method to address this question is to constrain the distribution of spin-orbit angle measurements for a broader range of planets beyond hot Jupiters. In this work, we present the sky-projected obliquity (\(\lambda=-68.1^{+21.2}_{-14.7}\)\({}^{\circ}\)) for the warm sub-Saturn TOI-1842b, obtained through a measurement of the Rossiter-McLaughlin effect using WIYN/NEID. Using the projected obliquity, the stellar rotation period obtained from the _TESS_ light curve, and the projected rotation velocity from spectral analysis, we infer the 3D spin-orbit angle (\(\psi\)) to be \(\psi=73.3^{+16.3}_{-12.9}\)\({}^{\circ}\). As the first spin-orbit angle determination made for a sub-Saturn-mass planet around a massive (\(\,M_{*}=1.45\)\(\mathrm{M}_{\odot}\)) star, our result presents an opportunity to examine the orbital geometries for new regimes of planetary systems. When combined with archival measurements, our observations of TOI-1842b support the hypothesis that the previously established prevalence of misaligned systems around hot, massive stars may be driven by planet-planet dynamical interactions. In massive stellar systems, multiple gas giants are more likely to form and can then dynamically interact with each other to excite spin-orbit misalignments. planetary alignment (1243), exoplanet dynamics (490), star-planet interactions (2177), exoplanets (498), planetary theory (1258), exoplanet systems (484) ## 1 Introduction Observed trends in stellar obliquity, the angle between a star's spin axis and the net orbital angular momentum vector of its companion planets, have provided insights into the prevalence of different formation mechanisms that shape the demographics of exoplanet systems (Schlauffman, 2010; Winn et al., 2010; Albrecht et al., 2012; Wang et al., 2021; Albrecht et al., 2021; Rice et al., 2022). Measurements of the Rossiter-McLaughlin (R-M, Rossiter, 1924; McLaughlin, 1924) effect have revealed that a significant fraction of hot Jupiters are spin-orbit misaligned (see Winn & Fabrycky, 2015 and Albrecht et al., 2022 for comprehensive reviews). However, the origins and evolution of spin-orbit misalignments remain unclear. Perhaps the most compelling observational pattern in stellar obliquity measurements thus far is the discovery that hot Jupiters around cool, low-mass stars are preferentially aligned. In contrast, hot Jupiters around hot, massive stars span a wide range of spin-orbit angles (Winn et al., 2010; Schlaufman, 2010; Albrecht et al., 2012; Wang et al., 2021), known as the \(T_{\mathrm{eff}}\) vs. obliquity relationship. This has conventionally been explained as a signature of tidal realignment, which operates with higher efficiency in cool, low-mass stars with hot Jupiters. Cool, low-mass stars, which have thick convective envelopes, may realign with the orbital planes of their close-in Jupiter companions through tidal interactions on a timescale \(\tau_{\Psi}\propto(\,M_{\mathrm{P}}/\,M_{*})^{-2}(a/R_{*})^{6}\)(e.g. Albrecht et al., 2012, 2022) that is often shorter than the system lifetime. 
In such tidal interactions, the planet realigns the outer convective layer of the star to the planet's orbital axis (Winn et al., 2010; Albrecht et al., 2022). A growing sample of constraints for parameters such as stellar obliquity and eccentricity has recently enabled further studies examining the robustness of this trend in different parameter regimes. For example, Rice et al. (2022) demonstrated that the \(T_{\rm eff}\) vs. obliquity trend has so far only clearly held for hot Jupiters on circular (\(e=0\)) orbits. It remains unknown whether the observed \(T_{\rm eff}\) vs. obliquity relationship (which can be extended to a \(\,M_{*}\) vs. obliquity relationship, as stellar mass and temperature are correlated for main-sequence stars) also extends to other planet demographics beyond \(e=0\) hot Jupiters. Previous obliquity measurements have focused on hot, massive planets around both hot and cool stars. Recent progress has been made for measuring obliquity for lower-mass planets and warm planets around cool stars (Schlaufman, 2010; Sanchis-Ojeda et al., 2013; Wang et al., 2018; Anisman et al., 2020; Dong et al., 2022; Wang et al., 2022; Rice et al., 2022). Making R-M measurements for lower-mass planets - particularly those on wide orbits around hot, massive stars, in a poorly populated region of parameter space - is vital to test whether the \(\,M_{*}/T_{\rm eff}\) vs. obliquity relationship extends to populations other than \(e=0\) hot Jupiters. Testing this will shed new light onto the understanding of the origins and evolution of spin-orbit misalignments. In this work, we present the fifth result from the Stellar Obliquities in Long-period Exoplanet Systems (SOLES) survey (Rice et al., 2021; Wang et al., 2022; Rice et al., 2022; Rice et al., 2023): a measurement of the sky-projected spin-orbit angle (\(\lambda=-68.1^{+21.2}_{-14.7}\)\({}^{\circ}\)) of TOI-1842b (Wittenmyer et al., 2022), a warm (\(P=9.5740\pm 0.0001\) days) sub-Saturn (\(\,M_{\rm P}=0.19^{+0.06}_{-0.04}\,\rm M_{J}\)) on a slightly eccentric orbit (\(e=0.13^{+0.07}_{-0.09}\)) around a massive (\(\,M_{*}=1.45^{+0.07}_{-0.14}\,\rm M_{\odot}\)) star. This is the first Rossiter-McLaughlin measurement of a sub-Saturn-mass (\(\,M_{\rm P}<0.3\,\rm M_{J}\)) planet around a high-mass star (\(\,M_{*}>1.2\,\rm M_{\odot}\)). We observed the R-M effect using the NEID spectrograph (Schwab et al., 2016) on the WIYN 3.5 m telescope. The measured obliquity helps demonstrate the framework for an alternative explanation for the origins and evolution of spin-orbit misalignments based on the idea of dynamical excitement (Wu et al., 2023). In this framework, initial planet multiplicity is the key tracer of misalignments, and the ability for planet-planet interactions is the key mechanism that is capable of exciting obliquity. The present paper is structured as follows: the R-M measurement is detailed in Section 2, followed by a description of the methods utilized to acquire the stellar parameters for the studied system are outlined in Section 3. In Section 4 the global model employed to measure the planet's spin-orbit angle is described. Finally, the implications of the results are discussed in Section 5. ## 2 Observations TOI-1842 was observed on May 9, 2022 with the high-resolution (\(R\sim 110,000\)) WIYN/NEID spectrograph. A total of 20 radial-velocity (RV) measurements were taken during the transit of TOI-1842b, with 1200-second exposure times, from 03:35-10:17 UT. 
The atmospheric conditions during the observations included a seeing of \(0.8-1.4^{\prime\prime}\) and an airmass range of \(\rm z=1.1-2.5\). At a wavelength of 5500 Å, the NEID spectra had a typical signal-to-noise ratio of 66 pixel\({}^{-1}\). The spectra from NEID were processed using the NEID Data Reduction Pipeline1, and the resulting RVs were obtained from the NExScI NEID Archive2. The NEID RV data obtained in this work are provided in Table 1 and displayed in the top panel of Figure 1.

Footnote 1: More information can be found here: [https://neid.ipac.caltech.edu/docs/NEID-DRP/](https://neid.ipac.caltech.edu/docs/NEID-DRP/)

Footnote 2: [https://neid.ipac.caltech.edu/](https://neid.ipac.caltech.edu/)

Table 1: NEID radial velocities during a transit of TOI-1842b (columns: Time (BJD\({}_{\rm TDB}\)), RV (m/s), \(\sigma_{\rm RV}\) (m/s)).

## 3 Stellar Parameters

We determine the best-fit stellar atmospheric parameters (\(T_{\rm eff},~{}\log g,\) [M/H], and \(v\sin i_{*}\)) for each individual spectrum obtained from NEID for TOI-1842 by applying the iSpec code (Blanco-Cuaresma et al., 2014; Blanco-Cuaresma, 2019) with MARCS atmosphere models and the GES atomic line list to generate synthetic spectra. We employed the synthetic spectral fitting technique, which minimizes the chi-square value for the difference between synthetic and observed spectra using a non-linear least-squares (Levenberg-Marquardt) fitting algorithm (Markwardt, 2009). The best-fit parameters for each individual spectrum are combined into a distribution. We take the median of the distribution as the parameter value, and derive uncertainties from the scatter of the distribution for each parameter.

We then used EXOFASTv2 (Eastman et al., 2019) to perform a spectral energy distribution (SED) fit of TOI-1842 by combining the MESA Isochrones & Stellar Tracks (MIST; Choi et al., 2016; Dotter, 2016) model with broadband photometry from multiple catalogs including Gaia DR2 (Gaia Collaboration et al., 2018), 2MASS (Cutri et al., 2003), and AllWISE (Cutri et al., 2013). We applied Gaussian priors to \(T_{\rm eff}\) and [M/H] based on the values obtained from the iSpec analysis, as well as the corrected stellar parallax from Gaia DR2 (Stassun and Torres, 2018). We also enforced an upper limit on the \(V\)-band extinction using the Galactic dust maps of Schlafly and Finkbeiner (2011). The resulting stellar parameters are presented in Table 2.

## 4 Obliquity Modeling

We used allesfitter (Günther and Daylan, 2020) to model the sky-projected spin-orbit angle, \(\lambda\), for TOI-1842b by performing a simultaneous global fit on light curves from _TESS_ Sectors 23 and 50, in-transit RVs from NEID, and published out-of-transit RVs from MINERVA and NRES obtained from Wittenmyer et al. (2022). The fitted parameters in our analysis are listed in Table 2. Each parameter was initialized with uniform priors, while initial guesses for \(P\), \(T_{0},~{}R_{\rm P}/\,R_{*}\), \((\,R_{*}+\,R_{\rm P})/a\), \(K\), \(e\cos\omega_{*}\), and \(e\sin\omega_{*}\) were obtained from the TOI-1842b planet discovery paper (Wittenmyer et al., 2022). Additionally, we fitted the transformed limb darkening coefficients for _TESS_ (\(q_{\rm 1:TESS}\), \(q_{\rm 2:TESS}\)) and NEID (\(q_{\rm 1:NEID}\), \(q_{\rm 2:NEID}\)), all of which were initialized with a value of 0.5 and sampled from a uniform distribution between 0 and 1.
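The transformed coefficients just mentioned relate to the physical quadratic-law coefficients reported in Table 2. The text does not spell out the transformation, so the short sketch below assumes the reparameterization commonly used for this purpose, \(u_{1}=2\sqrt{q_{1}}\,q_{2}\) and \(u_{2}=\sqrt{q_{1}}(1-2q_{2})\); the function name is illustrative and this is not the allesfitter implementation itself.

```python
import numpy as np

def q_to_u(q1, q2):
    """Convert transformed limb-darkening coefficients (q1, q2), each sampled
    uniformly on [0, 1], to physical quadratic-law coefficients (u1, u2).

    Assumes u1 = 2*sqrt(q1)*q2 and u2 = sqrt(q1)*(1 - 2*q2); this is our
    reading of the "transformed limb darkening coefficients" used in the fit.
    """
    q1, q2 = np.asarray(q1, dtype=float), np.asarray(q2, dtype=float)
    u1 = 2.0 * np.sqrt(q1) * q2
    u2 = np.sqrt(q1) * (1.0 - 2.0 * q2)
    return u1, u2

# With the TESS posterior medians quoted in Table 2, q_to_u(0.47, 0.34)
# gives (u1, u2) of order (0.47, 0.22), consistent with the tabulated values.
```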
For each spectrograph used, a free parameter was introduced to account for potential radial velocity offsets. To account for instrument-specific effects, jitter terms were modeled and added in quadrature to the uncertainties. The bounds on the free parameter \(\lambda\) were established as \(-180^{\circ}\) to \(+180^{\circ}\). We used the Affine Invariant Markov Chain Monte Carlo (MCMC, Goodman and Weare, 2010) method (implemented with emcee; Foreman-Mackey et al., 2013) with 100 walkers, each with 400,000 accepted steps, to sample the posterior distributions of all model parameters. The best-fit parameters and their \(1\,\sigma\) uncertainties are reported in Table 2 and are typically within \(1\,\sigma\) agreement with the values presented in Wittenmyer et al. (2022), except for \(T_{\rm eff}\), where we derive a lower value. Figure 1 shows the best-fit R-M model from our global fit, along with the corresponding residuals. The analysis reveals that TOI-1842b is misaligned, with sky-projected spin-orbit angle \(\lambda=-68.1^{+21.2}_{-14.7}\,^{\circ}\).

Figure 1: The measured Rossiter-McLaughlin effect for TOI-1842 b suggests a significantly misaligned orbit with a sky-projected obliquity of \(\lambda=-68.1^{+21.2}_{-14.7}\,^{\circ}\). The blue points with black error bars represent the measured in-transit radial velocities and their errors. The median model of the R-M effect is shown in red, with the corresponding \(1\,\sigma\) uncertainty shown in lighter red. This model was obtained from a global fit of all available radial velocity and transit data.

To derive the stellar spin velocity and constrain the spin-orbit angle along the line of sight, we performed a periodogram analysis (Zechmeister and Kürster, 2009) on the TOI-1842 _TESS_ light curve (Sectors 23 and 50) with the transits masked out. This resulted in a rotational period (\(\,P_{\rm rot}\)) of \(11.350\pm 0.006\) days with a false-alarm probability (FAP) of less than 0.001. We adopted \(\,P_{\rm rot}\) to be \(11.350\pm 1.135\) d, since latitudinal differential rotation limits the achievable precision of \(\,P_{\rm rot}\) measurements to roughly 10% (Epstein and Pinsonneault, 2014; Aigrain et al., 2015). Based on this value, the stellar equatorial rotation velocity is calculated as \(v=\frac{2\pi\,R_{*}}{P_{\rm rot}}=9.0\pm 1.0\,\rm km/s\).

To derive the stellar inclination, we employ an MCMC method on \(\,R_{*}\), \(P_{\rm rot}\), and \(\cos i_{*}\) to account for the interdependent variables \(v\) and \(v\sin i_{*}\) (Masuda and Winn, 2020; Hjorth et al., 2021). Gaussian priors on \(\,R_{*},\ P_{\rm rot}\), and \(v\sqrt{(1-\cos^{2}i_{*})}\) were adopted. The likelihood function was taken to be

\[\begin{split}\mathcal{L}&=\left(\frac{R_{*}/R_{\odot }-2.02}{0.07}\right)^{2}+\left(\frac{P_{\rm rot}-11.350\ {\rm d}}{1.135\ {\rm d}}\right)^{2}\\ &\quad+\left(\frac{v\sqrt{(1-\cos^{2}i_{*})}-6.03\ {\rm km/s}}{0.89\ {\rm km/s}} \right)^{2}.\end{split} \tag{1}\]

We ran the MCMC process for 20,000 steps and 100 walkers, obtaining 50 independent samples, which converged. The resulting stellar inclination is \(i_{*}=46.4^{+12.3}_{-10.1}\,^{\circ}\). Then, the true stellar obliquity \(\psi\) can be derived through the equation below (Albrecht et al., 2022, eq. 1),

\[\cos\psi=\cos i_{*}\cos i+\sin i_{*}\sin i\cos\lambda \tag{2}\]

where \(i_{*}\) is the stellar inclination and \(i\) is the orbital inclination. The resulting 3D obliquity (\(\psi\)) of TOI-1842 was determined to be \(\psi=73.3^{+16.3}_{-12.9}\,^{\circ}\), indicating its misalignment.
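The chain from \(P_{\rm rot}\) and \(R_{*}\) to \(\psi\) can be reproduced with a short Monte Carlo propagation of Equations (1)-(2). The sketch below is a simplified stand-in for the MCMC fit described above: it samples the Gaussian constraints directly, symmetrizes the quoted asymmetric error bars, and discards unphysical samples with \(\sin i_{*}>1\). All input values are taken from the text and Table 2.

```python
import numpy as np

RSUN_KM, DAY_S = 6.957e5, 86400.0
rng = np.random.default_rng(0)
n = 100_000

# Gaussian constraints quoted in the text / Eq. (1)
r_star = rng.normal(2.02, 0.07, n)        # R* [R_sun]
p_rot  = rng.normal(11.350, 1.135, n)     # P_rot [d]
vsini  = rng.normal(6.03, 0.89, n)        # v sin i* [km/s]

# Equatorial velocity v = 2 pi R* / P_rot (the text quotes 9.0 +/- 1.0 km/s)
v_eq = 2.0 * np.pi * r_star * RSUN_KM / (p_rot * DAY_S)

# Stellar inclination from sin i* = (v sin i*) / v, keeping physical samples only
sin_i_star = vsini / v_eq
ok = (sin_i_star > 0.0) & (sin_i_star < 1.0)
i_star = np.arcsin(sin_i_star[ok])

# Orbital inclination and projected obliquity from the global fit (symmetrized errors)
i_orb = np.deg2rad(rng.normal(87.0, 2.0, ok.sum()))
lam   = np.deg2rad(rng.normal(-68.1, 18.0, ok.sum()))

# Eq. (2): cos(psi) = cos(i*) cos(i) + sin(i*) sin(i) cos(lambda)
cos_psi = np.cos(i_star) * np.cos(i_orb) + np.sin(i_star) * np.sin(i_orb) * np.cos(lam)
psi = np.rad2deg(np.arccos(cos_psi))
print(np.percentile(psi, [16, 50, 84]))   # roughly consistent with psi ~ 73 deg
```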
## 5 Discussion It has been observed that hot Jupiters orbiting low-mass (\(\,M_{*}<1.2\ {\rm M}_{\odot}\)), cool (\(T_{\rm eff}\lesssim 6250\,{\rm K}\)) stars tend to be spin-orbit aligned, while those orbiting hot, massive stars show a wider range of spin-orbit misalignments (Winn et al., 2010; Schlaufman, 2010; Albrecht et al., 2012; Wang et al., 2021). This trend has been attributed to tides, with the explanation being that cool stars with deep convective envelopes and slower rotation rates, below the Kraft break (\(T_{\rm eff}\lesssim 6250\,{\rm K}\), Kraft, 1967), undergo faster tidal dissipation, resulting in their realignment with respect to their companion hot Jupiters' orbits (Albrecht et al., 2012). In contrast, hot, massive star systems are thought to retain their primordial obliquity due to slower tidal dissipation (Winn et al., 2010; Albrecht et al., 2012; Winn and Fabrycky, 2015; Albrecht et al., 2022). Additionally, previous works have suggested that Saturn-mass (\(M_{p}\sim 0.2-0.4\ {\rm M}_{\rm J}\)) and, by extension, sub-Saturn planets, may be more commonly misaligned than higher-mass Jovian planets (Schlaufman, 2010; Sanchis-Ojeda et al., 2013; Wang et al., 2018; Anisman et al., 2020; Dong et al., 2022; Rice et al., 2022). Sub-Saturns may be relatively susceptible to planet-planet scattering (Rasio and Ford, 1996; Raymond et al., 2010) and/or secular misalignment mechanisms (e.g. Petrovich et al., 2020). As sub-Saturns are less massive than Jovian planets, their host stars can less easily realign, as tidal dissipation timescales scale with \(\,M_{\rm P}^{-2}\)(Albrecht et al., 2012, 2022). However, recent research has found that warm Jupiters tend to be preferentially aligned (Rice et al., 2022). Given that warm Jupiters have longer orbital periods and are "tidally detached", meaning they cannot tidally realign their host star on the same timescale as the lifetime of the system, this challenges tidal dissipation as the origin for the alignment of warm Jupiters and suggests that they may be primordially spin-orbit aligned (Rice et al., 2022; Albrecht et al., 2022; Davies, 2019). One possibility is that hot and warm Jupiters may have formed through distinct mechanisms, with warm Jupiters forming through a more tranquil process and being initially aligned with their host star's equator, while hot Jupiters form through a more tumultuous process and are therefore misaligned (Dawson and Johnson, 2018; Rice et al., 2022). It is only cool, low-mass stars with orbiting hot Jupiters that may have undergone tidal realignment, leading to the observed trend (Winn et al., 2010; Anderson et al., 2021; Albrecht et al., 2022). We, however, propose a potential alternative for the observed relation between \(\,M_{*}/T_{\rm eff}\) and their obliquities. Our framework is based on the ability of systems to produce multiple compact planets and **have** their obliquity excited through post-disk planet-planet interactions. The currently observed stellar obliquity distribution may be a result of the planet-planet interactions, with tides playing a lesser role in altering obliquities. Throughout this section, we define warm Jupiters as planets with \(M_{p}>0.3\,{\rm M}_{\rm J}\) and star-planet separation \(a/R_{*}>11\). Accordingly, we define hot Jupiters as their closer-in analogues with \(M_{p}>0.3\,{\rm M}_{\rm J}\) and star-planet separation \(a/R_{*}\leq 11\), and we define Saturn-mass planets as those with \(M_{p}<0.3\ {\rm M}_{\rm J}\). 
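For concreteness, the working definitions just given translate directly into a small classifier; the sketch below uses illustrative names and is not tied to any particular catalogue.

```python
def classify(m_p_mjup, a_over_rstar):
    """Classify a planet using the working definitions adopted in this section:
    hot Jupiter:   M_p > 0.3 M_J and a/R* <= 11
    warm Jupiter:  M_p > 0.3 M_J and a/R* > 11
    Saturn-mass ('sub-Saturn' here): M_p < 0.3 M_J
    """
    if m_p_mjup < 0.3:
        return "sub-Saturn"
    return "hot Jupiter" if a_over_rstar <= 11 else "warm Jupiter"

# TOI-1842b (M_p ~ 0.19 M_J) falls in the sub-Saturn class regardless of a/R*.
```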
It has been demonstrated that disk and stellar mass are correlated, with disk-to-stellar mass ratios on the order of a few percent (Williams and Cieza, 2011; Andrews et al., 2013; Andrews, 2020). Consequently, massive stars are associated with more massive protoplanetary disks (Andrews et al., 2013; Pascucci et al., 2016) capable of forming multiple Jupiter-mass planets (Johnson et al., 2010; Ghezzi et al., 2018; Yang et al., 2020). Cool, low-mass stars have lower-mass disks that may only form a single Jupiter-mass planet (Andrews et al., 2013; Ansdell et al., 2016; Pascucci et al., 2016; Dawson and Johnson, 2018; Yang et al., 2020). Thus, the observed \(\,M_{*}/T_{\rm eff}\) and obliquity relationship could be explained as **an indication that** hot stars are more capable of producing multiple Jupiters that interact and result in a wide range of obliquities. On the other hand, cool, low-mass stars lack the material to form multiple Jupiter-mass planets; thus these systems stay in a stable, aligned configuration with a single Jupiter. This can be seen in the top panels of Figure 2. Our hypothesis applies to low-mass planets around both cool and hot stars, with a wide range of spin-orbit angles expected for both types of stars, since both are capable of producing multiple low-mass planets, such as sub-Saturns. The bottom panels of Figure 2 show that sub-Saturns around cool stars span a wide range of stellar obliquities, in support of our hypothesis. However, TOI-1842b is the first sub-Saturn orbiting a hot, massive star with an R-M measurement. Its orbital misalignment supports our explanation. Further measurements in this population are needed to make a strong conclusion. To further explain our hypothesis, we present the four quadrants of star and planet mass and their observed obliquities in Figure 2. 
Throughout this section and in Figure 2, we define "misaligned" systems as those with \(|\lambda|>10^{\circ}\) and \(\lambda\) differing from \(0\,^{\circ}\) at a \(3\,\sigma\) level, following the definition in Wang et al. (2022). \begin{table} \begin{tabular}{l c c} \hline \hline & NEID Spectrum & MIST+SED \\ & iSpec & EXOFASTv2 \\ \hline Stellar Parameters: & & \\ \(M_{*}\) (M\({}_{\odot}\)) & - & \(1.45^{+0.07}_{-0.14}\) \\ \(R_{*}\) (R\({}_{\odot}\)) & - & \(2.03\pm 0.07\) \\ \(\log g\) (cgs) & \(4.19\pm 0.28\) & \(3.98^{+0.04}_{-0.15}\) \\ \([M/H]\) (dex) & \(0.09\pm 0.10\) & \(0.27^{+0.15}_{-0.15}\) \\ \(T_{\rm eff}\) (K) & \(5931\pm 174\) & \(6033^{+95}_{-93}\) \\ \(v\sin i_{*}\) (km/s) & \(6.03\pm 0.89\) & - \\ \hline \hline & Priors for global fit & Global fit 1: NEID \\ \hline Stellar Parameters: & & \\ \(v\sin i_{*}\) (km/s) & \(\mathcal{U}(4.3;0.0;20)\) & \(6.21^{+3.64}_{-1.49}\) \\ \(P_{\rm rot}\) (days) & - & \(11.350\pm 1.135\) \\ \(i_{*}\) (deg) & - & \(46.4^{+12.3}_{-11.3}\) \\ \(\psi\) (deg) & - & \(73.3^{+10.3}_{-12.9}\) \\ Planetary Parameters: & & \\ **TOI-1842b:** & & \\ \(\lambda_{\rm b}\) (deg) & \(\mathcal{U}(0;-180;+180)\) & \(-68.1^{+21.2}_{-14.7}\) \\ \(P_{\rm b}\) (days) & \(\mathcal{U}(9.5739;8.5739;10.5739)\) & \(9.5740\pm 0.0001\) \\ \(R_{P;b}\) (\(R_{J}\)) & - & \(1.06^{+0.07}_{-0.06}\) \\ \(M_{P;b}\) (\(M_{J}\)) & - & \(0.19^{+0.04}_{-0.04}\) \\ \(T_{0;b}\) (BJD) \(-2459300\) & \(\mathcal{U}(25.871;24.871;26.871)\) & \(25.871\pm 0.005\) \\ \(i_{b}\) (deg) & - & \(87.0^{+1.7}_{-2.3}\) \\ \(e_{b}\) & - & \(0.13^{+0.16}_{-0.16}\) \\ \(\omega_{b}\) (deg) & - & \(100.5^{+0.02}_{-37.5}\) \\ \(\cos i_{\rm b}\) & \(\mathcal{U}(0.0;0.0;1.0)\) & \(0.05^{+0.04}_{-0.03}\) \\ \(K_{\rm b}\) (m s\({}^{-1}\)) & \(\mathcal{U}(15.9;0.0;1000.0)\) & \(17.3\pm 3.0\) \\ \(R_{\rm P}/R_{*}\) & \(\mathcal{U}(0.052;0.0;1.0)\) & \(0.0540^{+0.0022}_{-0.026}\) \\ \((R_{*}+R_{\rm b})/a_{\rm b}\) & \(\mathcal{U}(0.028;0.0;1.0)\) & \(0.08\pm 0.02\) \\ \(\sqrt{e_{\rm b}}\cos\omega_{\rm b}\) & \(\mathcal{U}(0.0;-1.0;1.0)\) & \(-0.01^{+0.19}_{-0.17}\) \\ \(\sqrt{e_{\rm b}}\sin\omega_{\rm b}\) & \(\mathcal{U}(0.0;-1.0;1.0)\) & \(0.3^{+0.27}_{-0.3}\) \\ Transformed limb darkening coefficients: & & \\ \(q_{1;\rm NEID}\) & \(\mathcal{U}(0.5;0;1)\) & \(0.39^{+0.37}_{-0.28}\) \\ \(q_{2;\rm NEID}\) & \(\mathcal{U}(0.5;0;1)\) & \(0.39^{+0.37}_{-0.28}\) \\ \(q_{1;\rm TESS}\) & \(\mathcal{U}(0.5;0;1)\) & \(0.47^{+0.31}_{-0.33}\) \\ \(q_{2;\rm TESS}\) & \(\mathcal{U}(0.5;0;1)\) & \(0.34^{+0.33}_{-0.22}\) \\ Physical limb darkening coefficients: & & \\ \(u_{1;\rm NEID}\) & - & \(0.40^{+0.49}_{-0.20}\) \\ \(u_{2;\rm NEID}\) & - & \(0.10^{+0.36}_{-0.36}\) \\ \(u_{1;\rm TESS}\) & - & \(0.46^{+0.31}_{-0.30}\) \\ \(u_{2;\rm TESS}\) & - & \(0.21^{+0.37}_{-0.39}\) \\ \hline \end{tabular} \end{table} Table 2: Priors and posteriors for the TOI-1842 b system global fitting. * **Jupiters around Cool Stars:** Cool, low-mass stars tend to have less massive protoplanetary disks, which may only be capable of forming a single Jupiter-mass planet (Andrews et al., 2013; Ansdell et al., 2016; Pascucci et al., 2016; Dawson and Johnson, 2018; Yang et al., 2020). In these cases, there is no potential perturber in the same systems with sufficient mass to excite the spin-orbit angle of the lone Jupiter, resulting in aligned Jupiters around cool, low-mass stars as seen in the top left panel of Figure 2. 
* **Jupiters around Hot Stars:** Hot, massive stars tend to have more massive protoplanetary disks that are capable of forming multiple Jupiter-mass planets (Johnson et al., 2010; Ghezzi et al., 2018; Yang et al., 2020). The presence of multiple Jupiters presents the opportunity for interactions, such as Jupiter-Jupiter scattering (Rasio and Ford, 1996; Chatterjee et al., 2008) or secular interactions (Wu and Lithwick, 2011; Petrovich, 2015; Naoz et al., 2011), that can excite the spin-orbit angles of the Jupiters. As a result, Jupiters around hot, massive stars may be found to be spin-orbit misaligned due to the planet-planet interactions with other massive planets initially present in the same systems. This framework for Jupiters around hot, massive stars can be seen in the top right panel of Figure 2 and matches the \(T_{\rm eff}\) vs. obliquity relationship. * **Sub-Saturns around Cool Stars:** Despite the fact that cool, low-mass stars tend to have less massive protoplanetary disks, it is still possible for these disks to initially form multiple lower-mass planets, such as sub-Saturns. This scenario is more similar to the presence of multiple Jupiters around hot stars, as both situations involve the potential for interactions among compact multiple planets initially present in the same systems through mechanisms like planet-planet scattering or secular interactions, which can excite their spin-orbit angles. This can be seen through the mixed obliquity distribution for sub-Saturns around cool stars in the bottom left panel of Figure 2. * **Sub-Saturns around Hot Stars:** Given the similarity of the presence of multiple lower-mass planets around cool, low-mass stars and multiple Jupiters around hot, massive stars in terms of the potential for planet-planet interactions and exciting their spin-orbit angles, it is reasonable to expect that a significant fraction of low-mass planets around hot, massive stars may also exhibit spin-orbit misalignment. Once again, this is due to post-disk planet-planet interactions in a protoplanetary disk capable of producing multiple low-mass planets. However, current observations in this area are limited, with TOI-1842b being the first measurement in this regime. Nevertheless, the misalignment of TOI-1842b supports this hypothesis, as shown in the bottom right panel of Figure 2. Further observations of low-mass planets around hot, massive stars will provide additional evidence to examine this region of parameter space. Doing so will be vital for further tests of this hypothesis. Figure 2: Stellar mass vs. planet mass for systems obtained from Albrecht et al. (2022) and the TEPCat catalogue (Southworth, 2011). Unfilled circles denote aligned planets, while triangles denote misaligned planets. TOI-1842 is plotted in red. The trend toward alignment is confined to Jupiter-mass planets orbiting cool, low-mass stars, with all other mass combinations showing mixed alignment. Misaligned systems all fall below the red dashed line representing the \(1\sigma\) upper error to a linear fit performed on the misaligned systems. There appears to be an upper limit when looking at the misaligned systems in planet mass vs. stellar mass space. We place an empirical upper limit on these systems by performing a linear fit to these misaligned systems. The \(1\sigma\) upper error line of this fit is plotted as a dashed red line in Figure 2. We note that all misaligned systems in the data fall below this \(1\sigma\) line. 
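As a rough illustration of how such a boundary can be drawn, the sketch below applies the misalignment criterion defined above and fits a line, with its \(1\sigma\) upper envelope, to the misaligned systems in stellar-mass versus planet-mass space. The catalogue values, column layout, and fitting choices here are invented placeholders and do not reproduce the exact procedure used to make Figure 2.

```python
import numpy as np

def is_misaligned(lam_deg, lam_err_deg):
    """Misaligned if |lambda| > 10 deg and lambda differs from 0 deg at >3 sigma."""
    lam = np.abs(np.asarray(lam_deg))
    err = np.asarray(lam_err_deg)
    return (lam > 10.0) & (lam > 3.0 * err)

def upper_envelope_fit(m_star, m_planet):
    """Linear fit m_planet = a*m_star + b and its 1-sigma upper line."""
    (a, b), cov = np.polyfit(m_star, m_planet, deg=1, cov=True)
    grid = np.linspace(m_star.min(), m_star.max(), 100)
    fit = a * grid + b
    # Propagate the parameter covariance of the fit onto the line itself.
    sigma = np.sqrt(grid**2 * cov[0, 0] + cov[1, 1] + 2 * grid * cov[0, 1])
    return grid, fit, fit + sigma  # the last array plays the role of the dashed red line

# Hypothetical catalogue arrays: obliquities, their errors, and masses (M_sun, M_J).
lam = np.array([2.0, 45.0, 85.0, 60.0])
lam_err = np.array([5.0, 8.0, 10.0, 12.0])
m_star = np.array([1.00, 1.35, 1.45, 1.50])
m_planet = np.array([1.20, 0.30, 0.19, 0.50])

mask = is_misaligned(lam, lam_err)
grid, fit, upper = upper_envelope_fit(m_star[mask], m_planet[mask])
```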
Above this line, we expect systems to be aligned as the planet-to-stellar mass ratio is higher and it is unlikely these systems can produce multiple planets capable of interacting to excite obliquity. Below this line, we expect mixed alignment as the ratio of planet-to-stellar mass is low enough to produce multiple planets for which obliquity excitation works in the presented hypothesis. Although our framework proposes an alternative explanation for the observed spin-orbit angle distributions, the sharp cutoff of the obliquity distribution at the Kraft break (see top panel of Figure 2 in this paper, and Figure 8 in Albrecht et al., 2022) suggests that tidal dissipation could play a role in shaping the obliquity ranges of cool stars hosting hot Jupiters. Our hypothesis complements tidal theory and implies that initial degrees of obliquity for these cool stars may be smaller than those for hot stars. If our framework is correct, we anticipate that the obliquity distribution of warm Jupiters will show an increasing spread with a rising stellar mass. This is because the original obliquity distribution would show larger scatter with stellar mass due to an increased probability of multi-planet interactions that can excite obliquity. For warm Jupiters around hot stars, this obliquity should not be tidally damped, since the timescale for tidal re-alignment for warm Jupiters is expected to be longer than the lifetime of the system. ## 6 Acknowledgements We thank Lauren Weiss and Ji Wang for their insightful discussion at GLEAM 2022. Additionally, we thank Francisco Aros for his discussion during an Indiana University Astronomy Department Tea Talk. We also acknowledge the contributions of Brandon Radzom and Armaan Goyal for conversations regarding statistical methods. We are also grateful for general discussion with Jiayin Dong related to this work. We also thank Heidi Schweiker, Sarah E. Logsdon, and the NEID Queue Observers and WIYN Observing Associates for their skillful execution of our NEID observations, as well as the NEID-SpecSoft Team for their data reduction pipeline. M.R. and S.W. thank the Heising-Simons Foundation for their generous support. This research was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute. Software: numpy (Oliphant, 2006; Walt et al., 2011; Harris et al., 2020), matplotlib (Hunter, 2007), pandas (McKinney et al., 2010), scipy (Virtanen et al., 2020), allesfitter (Gunther & Daylan, 2020), emcee (Foreman-Mackey et al., 2013). Facilities: WIYN/NEID.
2301.09638
In situ Biological Particle Analyzer based on Digital Inline Holography
Obtaining in situ measurements of biological microparticles is crucial for both scientific research and numerous industrial applications (e.g., early detection of harmful algal blooms, monitoring yeast during fermentation). However, existing methods are limited to offer timely diagnostics of these particles with sufficient accuracy and information. Here, we introduce a novel method for real-time, in situ analysis using machine learning assisted digital inline holography (DIH). Our machine learning model uses a customized YOLO v5 architecture specialized for the detection and classification of small biological particles. We demonstrate the effectiveness of our method in the analysis of 10 plankton species with equivalent high accuracy and significantly reduced processing time compared to previous methods. We also applied our method to differentiate yeast cells under four metabolic states and from two strains. Our results show that the proposed method can accurately detect and differentiate cellular and subcellular features related to metabolic states and strains. This study demonstrates the potential of machine learning driven DIH approach as a sensitive and versatile diagnostic tool for real-time, in situ analysis of both biotic and abiotic particles. This method can be readily deployed in a distributive manner for scientific research and manufacturing on an industrial scale.
Delaney Sanborn, Ruichen He, Lei Feng, Jiarong Hong
2023-01-14T05:32:09Z
http://arxiv.org/abs/2301.09638v1
_In situ_ Biological Particle Analyzer based on Digital Inline Holography #### Abstract Obtaining _in situ_ measurements of biological microparticles is crucial for both scientific research and numerous industrial applications (e.g., early detection of harmful algal blooms, monitoring yeast during fermentation). However, existing methods are limited to offer timely diagnostics of these particles with sufficient accuracy and information. Here, we introduce a novel method for real-time, _in situ_ analysis using machine learning assisted digital inline holography (DIH). Our machine learning model uses a customized YOLO v5 architecture specialized for the detection and classification of small biological particles. We demonstrate the effectiveness of our method in the analysis of 10 plankton species with equivalent high accuracy and significantly reduced processing time compared to previous methods. We also applied our method to differentiate yeast cells under four metabolic states and from two strains. Our results show that the proposed method can accurately detect and differentiate cellular and subcellular features related to metabolic states and strains. This study demonstrates the potential of machine learning driven DIH approach as a sensitive and versatile diagnostic tool for real-time, _in situ_ analysis of both biotic and abiotic particles. This method can be readily deployed in a distributive manner for scientific research and manufacturing on an industrial scale. _Keywords_: Holograms, Imaging, Plankton, Yeast, Machine Learning ## I Introduction Microparticles are ubiquitous in nature (dust, liquid droplets, sand, spores, fungi, bacteria etc.), and commonly appear in many industrial applications (e.g., manufacturing, food, cosmetics, pharmaceutical). Particularly, biological particles, which are derived from biological organisms including bacteria, fungi, algae, and cells, play important roles in the environment, human health, and industrial production. Technologies that can accurately characterize these particles (concentration, size, shape, composition, viability, etc.) _in situ_ - that is, in their natural environment or during industrial processes - in a timely fashion are critical. These technologies have numerous applications. For example, in medical diagnosis, the ability to analyze blood cells and detect abnormalities such as circulating tumor cells (CTCs) can help determine the course of certain cancers and corresponding treatments (Yu et al., 2014). In the production of alcoholic beverages, such as beer and wine, the concentration, viability, and vitality of yeast cells need to be closely monitored for fermentation control to achieve the desired taste profile of the final product (Mochaba et al., 1998; Heggart et al., 2000). It is also important to detect and identify any type of wild yeast contamination during the process to avoid spoilage. In the field of environmental science, long-term _in situ_ monitoring of different algal species and their concentrations in aquatic environments can help researchers understand the causes and dynamics of harmful algal blooms (HABs) and facilitate early detection and proper mitigation strategies to reduce their detrimental environmental and economic impacts (Ho & Michalak, 2015). Currently, the commonly used tools for biological particle analysis are based on light or acoustic scattering or the Coulter principle (Maltsev & Semyanov, 2013; Baddour et al., 2006; Sun & Morgan, 2010). 
Light and acoustic scattering methods capture forward, side, or backward scattered signals from the particles in the sample volume and derive their concentration and size distribution based on the assumption of their shapes and scattering properties. The coulter method detects the momentary changes in impedance as a voltage pulse when suspended particles go through the orifice in the electrolyte solution. The pulses can be used to measure particle counts and volume (size) represented in terms of equivalent spherical diameter. Although these tools provide rapid measurements of particle counts and size distribution with high throughputs, they lack fidelity for the characterization of biological particles due to the complex scattering properties associated with their shapes (many non-spherical) and non-uniform internal structures. As a result, these methods often cannot distinguish different types of particles (e.g., fungi, mold, strains of bacteria), nor can they provide additional important information, such as morphology or other physiological characteristics (e.g., viability or vitality). Autofluorescence spectroscopy is a recently developed technique for particle analysis that utilizes laser-induced fluorescence (LIF) to detect molecules that absorb laser light and emit it at a higher wavelength (Croce & Bottiroli, 2014). While LIF can differentiate between biological and inert particles, it is not able to identify specific types of microorganisms or distinguish between live and dead cells, which is important for many industrial applications. Other methods, such as combining light scattering with flow cytometry, provide additional viability counting (Davey & Kell, 1996; Shapiro 2004). However, these methods also cannot classify different types of particles and require special reagents to stain cells, limiting their use in the _in situ_ monitoring, especially in natural environments. In contrast, many laboratory-based particle analysis tools can detect and identify individual biological particles by obtaining additional information. For example, microscopic imaging is a commonly used method for obtaining morphological information. In conjunction with fluorescent staining, it can differentiate biological particles with similar morphological features with high specificity and determine viability (Coling & Kachar, 1998; Stephens & Allan, 2003). However, these methods are labor-intensive and low throughput, requiring complicated sample preparation, which limits their use for _in situ_ measurements or inline monitoring. To address these limitations, several label-free methods have been developed for biological particle analysis in recent years. Raman spectroscopy, for example, has been used to rapidly identify pathogen bacteria using the unique molecular compositions that result in subtle differences in their corresponding Raman spectra (Strola et al., 2014; Ho et al., 2019). However, this method requires samples with a high concentration of pure cells and cannot distinguish different organisms in mixed samples. Hyperspectral microscopic imaging can classify single cells of foodborne pathogens (Yoon et al., 2009; Eady et al., 2015; Kang et al., 2020), while quantitative phase imaging (QPI) has been used to extract detailed information about the biochemical composition of various biological particles (Popescu 2011), such as change of polyhydroxyalkanoates (Choi et al., 2021) and chromosomes (Sung et al., 2012) in individual live bacterial cells. 
These methods, however, require complicated optical setups and computationally intensive postprocessing, making them difficult to use for _in situ_ particle analysis. Since the beginning of the 21st century, digital inline holography (DIH) has emerged as a compact, label-free approach for the _in situ_ characterization of particles (Katz & Sheng, 2010; Kaikkonen et al., 2014; Nayak et al., 2019; Sauvageat et al., 2020). This method utilizes a coherent light source, such as laser, to illuminate a three-dimensional (3D) sample volume. A digital sensor, such as a camera, records (without focusing) the interference pattern generated by the scattered light from individual particles and non-scattered portion of the illumination beam (referred to as holograms). The recorded hologram contains the phase and intensity information of the sample, which can be used to derive the 3D location, size, morphology, and optical density of particles through digital reconstruction using different diffraction formulas (e.g., Fresnel and Rayleigh-Somerfield). In comparison to conventional microscopic imaging, DIH offers orders of magnitude larger depth of field and richer information about particle properties, as the optical properties of the particles can potentially be correlated with their biochemical compositions (Beuthan et al., 1996; Choi et al., 2010; Bista et al., 2011). However, conventional DIH has several issues related to data processing, such as high computational cost and low signal-to-noise ratio due to noises from cross interference between particle signals, which limits its widespread adoption as an _in situ_ tool. To address the challenges of DIH data processing, machine learning (ML) has been recently introduced. For example, Shao et al. (2020, 2020) proposed a modified U-net architecture for the fast extraction of 3D particle positions and size distribution directly from the holograms without conventional reconstruction steps. Other studies have implemented ML models for classifying cancer cells with molecule specific microbeads attached (Kim et al., 2018) and different species of plankton (Guo et al., 2021) from raw holograms. While these machine learning approaches have improved processing speed, they still require preprocessing steps such as object detection and segmentation before classification, adding complexity and computational burden. As a result, the implementation of DIH for real-time _in situ_ data processing remains a challenge. To address this challenge, we introduce a real-time hologram analysis approach based on the powerful one-stage detection and classification machine learning architecture, You Only Look Once (YOLO). YOLO has been widely used in computer vision with fast processing speed while maintaining high accuracy (\(>80\%\)) (Redmon et al., 2016). The YOLO architecture performs both object localization and object classification simultaneously, making it exceed the traditional deformable part model (DPM) and two-stage CNN based methods like R-CNN in object detection tasks with a 10x faster processing speed (Yan et al., 2014; Glenn et al., 2021). However, YOLO models are typically designed for conventional photographic objects (e.g., animals, license plate, plant) which are always in focus or close to focus and are composed of many well-defined features like contours and texture. In contrast, objects in holograms produce diffraction patterns that change significantly based on their 3D positions. 
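For context, a conventional DIH pipeline would numerically refocus the recorded hologram before any particle analysis, typically with the angular spectrum (Rayleigh-Sommerfeld) propagator. A minimal sketch of that reconstruction step is given below; the wavelength, pixel size, and depth are placeholder values, and this is exactly the computationally expensive step that the approach introduced in this work avoids by operating on the raw (enhanced) holograms directly.

```python
import numpy as np

def angular_spectrum_reconstruct(hologram, wavelength, dx, z):
    """Numerically refocus a hologram to depth z via the angular spectrum method."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)   # spatial frequencies along x [1/m]
    fy = np.fft.fftfreq(ny, d=dx)   # spatial frequencies along y [1/m]
    FX, FY = np.meshgrid(fx, fy)
    # Free-space propagation kernel; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(1j * 2 * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    kernel[arg < 0] = 0.0
    field = np.fft.ifft2(np.fft.fft2(hologram) * kernel)
    return np.abs(field)            # in-focus intensity at depth z

# Placeholder values (405 nm illumination, 0.34 um pixel, refocus to 100 um);
# the array below merely stands in for an enhanced hologram.
holo = np.random.rand(512, 512)
refocused = angular_spectrum_reconstruct(holo, 405e-9, 0.34e-6, 100e-6)
```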
It remains to be investigated whether and how the YOLO model can be adapted for hologram processing to develop accurate, robust, and real-time DIH for _in situ_ biological particle analysis. This paper is organized as follows. In the Materials and Methods section, we introduced and described a customized YOLO model for detecting and classifying individual biological particles from enhanced holograms without any additional steps. In the Results section, we demonstrated the effectiveness of our approach by applying it to classify 10 different species of plankton and differentiate yeast cells under different metabolic states and different strains during fermentation. Finally, in the Conclusions and Discussions section, we summarized our findings and discussed their implications. ## II Materials and Methods The DIH setup used in this study is illustrated in Fig. 1a. It uses a collimated laser as the illumination light source. The beam passes through the sample volume and the digital camera with a microscopic objective captures the fringe patterns (holograms) generated by the interference between the scattered light from the sample and non-scattered portion of the illumination beam. The recorded holograms are then passed to the processing board (e.g., CPU or GPU) for analysis. Raw holograms are enhanced through a moving window background subtraction to eliminate the noise associated with slow variation in background over time due to, for example, the change in light source intensity during the recording. The background for each hologram is calculated as the average intensity of the 50 consecutive holograms immediately prior to it. The enhanced holograms are obtained by subtracting the background from the raw hologram. A modified YOLOv5 machine learning architecture was used to detect and classify different biological particles in the enhanced holograms. The proposed YOLO architecture consists of three components (Fig. 1b): a backbone that extracts a collection of features (e.g., edges and corners of objects) from the input holograms, a neck that combines feature maps from different scales to generate a feature pyramid, and a prediction head that localizes and classifies individual objects based on the feature pyramid. The backbone is comprised of five convolution layers with a 3x3 kernel size. A shallower layer is used to accommodate the smaller size of the biological particles in our application. Small objects occupy few pixels, and the key features are progressively lost when passing through many convolutional or pooling layers (Nguyen et al., 2020). Reducing the number of convolutional layers in the backbone can also increase the efficiency of the model while maintaining the same level of accuracy. A maximum pooling layer is used after each convolution layer to reduce the dimensions of the feature map by summarizing the features presented in a local region. The pooling layer in our model reduces computational cost by decreasing the input image size by half, while also improving robustness to input variance using a filtered feature representation that helps prevent overfitting. The extracted feature map is then passed through three additional convolution layers in the neck with kernel size of 3x3, 3x3, and 1x1 respectively. The output of the downscaled feature map is fed into the prediction head through two separate pathways. 
In one of these pathways, an up-sampling layer is employed to generate the upscaled feature map, which is then combined with the original down-sampled feature map from the backbone via a concatenate layer. At the end of each pathway, a fully connected layer serves as the prediction head, which predicts the location of the objects, the confidence score associated with each object, and the class probabilities. Leaky ReLU activation function (Maas et al., 2013) is utilized after each convolution layer in the YOLO architecture, and stochastic gradient descent (SGD) is used for optimization during the training of the model. The Binary Cross Entropy with Logits Loss function is used for the loss calculation of class probability and object score. A non-maximum suppression step (Neubeck and Van Gool, 2006) is applied as a post-processing step to refine the initial prediction result and provide the final location and classification of each object in the input holograms. The proposed machine learning method is implemented using PyTorch and optimized using TensorRT. All test cases were run on a Nvidia V100 Tensor Core GPU. In this study, we used three datasets to evaluate the performance of our method. These datasets consist of 10 different plankton species, ale yeast under four different metabolic states during fermentation, and two strains of yeasts (ale and lager) under the same fermentation condition. The plankton dataset was obtained from Guo et al. (2021) and included holograms of 10 plankton species captured by a submersible digital holographic imaging system (HOLOCAM) at two different locations. The plankton species included in the dataset are _Chaetoceros debilis, Diatom sp., Ditylum brightwellii, Chaetoceros concavicornis, Thalassiosira sp., Copepod, Copepod nauplius, cf. Strombidium sp., Tripos cf. muelleri, and Tripos cf. furca_ (classified as type 0 to 9, respectively). The HOLOCAM used a 660 nm pulsed Nd-YAG laser as the illumination source and a 2048 \(\times\) 2048 pixel CCD camera to record the holograms. Fig. 5a shows sample holograms and their corresponding in-focus images. The plankton species captured in the holograms are typically on the scale of several hundreds of microns, and the field of view (FOV) of the images is 9.4 x 9.4 mm with a resolution of 4.59 \(\upmu\)m per pixel, which is sufficient to resolve unique features of the plankton species for classification (e.g., size, shape, diffraction pattern). To create the dataset, single cells were cropped from the enhanced holograms in the study by Guo et al. (2021) and divided into training database (70%), validation database (20%) and test database (10%). Table 1 summarizes the number of single cells of each sampled plankton species in the training and test datasets. The validation cell numbers for each species are included in the training dataset summary. Differences in the number of plankton cells represented in each class reflect the natural distribution of different species in the sampled water body during the HOLOCAM deployments. Additional details about the plankton dataset can be found in Guo et al. (2021). Figure 1: (a) Schematic showing DIH setup for imaging particles in a 3D volume. (b) Schematic illustration of the proposed machine learning method for particle detection from holograms. Note that the input of the method is the hologram without reconstruction. The architecture of the customized YOLOv5 is described by the flowchart in the boxes with dashed lines. 
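To make the pipeline described above more concrete, the sketch below combines the moving-window background subtraction with a small single-stage detector: a five-stage convolution/pooling backbone, a short neck, and two prediction pathways, one of which is upsampled and concatenated with an earlier backbone feature map. It is only a schematic; the channel widths, head layout (1x1 convolutions rather than fully connected layers), anchor handling, and loss terms are placeholders and do not reproduce the authors' customized YOLOv5 implementation.

```python
import torch
import torch.nn as nn

def enhance(holograms):
    """Moving-window enhancement: subtract the mean of the 50 frames prior to the last one."""
    background = holograms[-51:-1].mean(dim=0)
    return holograms[-1] - background

class TinyHoloDetector(nn.Module):
    """Schematic single-stage detector (not the authors' exact architecture)."""
    def __init__(self, num_classes, ch=16):
        super().__init__()
        def block(cin, cout):  # conv 3x3 + LeakyReLU + 2x2 max-pool
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.LeakyReLU(0.1), nn.MaxPool2d(2))
        self.backbone = nn.ModuleList([block(1, ch), block(ch, ch), block(ch, ch),
                                       block(ch, ch), block(ch, ch)])
        self.neck = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
                                  nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
                                  nn.Conv2d(ch, ch, 1), nn.LeakyReLU(0.1))
        out_ch = 5 + num_classes                 # box (4) + objectness (1) + class scores
        self.head_coarse = nn.Conv2d(ch, out_ch, 1)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.head_fine = nn.Conv2d(2 * ch, out_ch, 1)

    def forward(self, x):
        feats = []
        for stage in self.backbone:
            x = stage(x)
            feats.append(x)
        neck = self.neck(feats[-1])
        coarse = self.head_coarse(neck)          # predictions on the low-resolution grid
        fine = self.head_fine(torch.cat([self.up(neck), feats[-2]], dim=1))
        return coarse, fine

frames = torch.rand(60, 1, 256, 320)             # stand-in sequence of raw holograms
x = enhance(frames).unsqueeze(0)                 # (1, 1, H, W) enhanced hologram
coarse, fine = TinyHoloDetector(num_classes=4)(x)
```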
Synthetic holograms were generated by combining multiple randomly selected single-particle holograms from the database created by Guo et al. (2021) to create new training, validation, and test datasets. These datasets were used to train a YOLO model that can detect different types of plankton species. \begin{table} \begin{tabular}{c c c c c c c c c c} & 01 & 02 & 03 & 04 & 05 & 06 & 07 & 08 & 09 & 10 \\ Train & 2754 & 7299 & 15120 & 5832 & 4770 & 3339 & 1854 & 5589 & 8208 & 21150 \\ Test & 306 & 811 & 1680 & 648 & 530 & 371 & 206 & 621 & 912 & 2350 \\ \end{tabular} \end{table} Table 1: Summary of the number of plankton cells used for training and testing. The yeast dataset was generated by culturing two different strains of yeast cells, _Saccharomyces cerevisiae_ (ale yeast, Safale US-05) and _Saccharomyces pastorianus_ (lager yeast, Lallemand Diamond), under certain fermentation conditions. For metabolic state analysis, dry ale yeast was dissolved in sterile YPD media (Sigma Aldrich) and cultured overnight at 30 \({}^{\circ}\)C for 16 hours. The overnight culture was then centrifuged to remove the liquid and diluted into fresh YPD to an initial optical density of approximately 0.5 at 600 nm. The diluted culture was divided into 20 milliliter samples which were used to capture holograms at 0, 1 and 4 hours, corresponding to the start, lag (cells are adapting to their new environment and division has not yet begun), and log (cells start to divide and cell number rapidly increases) phases on the growth curve (Fig. 2). An extra 20 milliliter sample was imaged at 54 hours as the dead group, making a total of four groups. Figure 2: Growth curve of the ale yeast (red dot), and lager yeast (blue dot) strain fermented under 30 \({}^{\circ}\)C conditions. The cell concentration is estimated as the optical density (OD) measured at a wavelength of 600 nm. The holograms were captured using a FLIR camera (Blackfly S USB3) with a purple laser diode (405 nm wavelength) and a 10X objective lens. The size of the captured holograms is 1440 x 1080 pixel with a resolution of 0.34 \(\upmu\)m/pixel. According to the Nyquist sampling theorem (Nyquist, 1928), this resolution can resolve features in the images that are larger than 0.68 \(\upmu\)m, which is sufficient to distinguish the cellular features of yeast cells, typically 5-10 \(\upmu\)m in diameter, as well as subcellular features such as vacuoles that are 1-5 \(\upmu\)m in size. During imaging, the yeast sample flowed through a customized microfluidic channel with sheath and main flows, with a flow rate of 8 \(\upmu\)L/min for the sheath (distilled water) and 0.4 \(\upmu\)L/min for the main flow (yeast sample). The width of the imaging channel was 1 mm and the depth was 0.5 mm. The fringe patterns (i.e., the spacing and width of fringes) change with the distance between yeast cells and the hologram recording plane. If the focal plane of the cells is in the center of the 0.5 mm deep channel, the cells could appear anywhere from 0 to about 250 \(\upmu\)m from the focal plane (given varying focal distances). To train a machine learning model which can detect and characterize cells regardless of the size and fringe pattern differences associated with the recording depth, we included consistent numbers of cell images varying near and far from the recording plane for each class. 
Since most of the cells flowing in our microfluidic device are concentrated near the center of the channel by the sheath flow, we manually adjusted the z-position of the microfluidic channel to obtain holograms at varying distances from the focal plane. A distribution of particle distances from the hologram plane for each class in the yeast metabolic state dataset is presented in Fig. 3. Figure 3: The distribution of imaging distance from focal plane for yeast cells included in the metabolic dataset. Different colors represent cells from the four metabolic states (0-h, 1-h, 4-h, and dead) respectively. Two imaging distances were used for each class and can be identified by distribution peaks. For each metabolic state, 1000 holograms were captured at varying positions along the imaging channel at a frame rate of 12 frames per second (FPS). During the process of generating the labeled dataset for the metabolic states and strains, attached mother and daughter cells, as a result of the natural reproduction during fermentation, were often observed and labeled as a single object, while overlapping cells were given separate labels. We were able to differentiate attached mother and daughter yeast cells from overlapping cells based on the differences in the appearance of their diffraction patterns. The training set for model training consisted of both recorded and synthetic holograms. The recorded holograms from experiments only contained yeast cells from one metabolic group (or one strain), while synthetic holograms were generated by blending two or more recorded holograms randomly selected from the four metabolic groups (or two strains). This resulted in a mixture of yeast cells at various metabolic states (or from different strains) being present in the synthetic holograms. A total of 500 synthetic holograms were generated for the training set, which was then augmented by adjusting contrast, brightness, and adding random Gaussian noise, resulting in a 3-fold increase in the training size. The numbers of captured single cells contained in the training and test sets are listed in Table 2. For model performance evaluation, an additional 100 synthetic holograms were generated for the test set. For yeast strain classification, samples of lager yeast were prepared in a similar manner to the ale yeast. Lager strains are bottom fermenting and work at lower temperatures (8-15 \({}^{\mathrm{o}}\)C), in contrast to ale yeast which prefers higher temperatures (18-22 \({}^{\mathrm{o}}\)C). As shown in Fig. 2, lager yeast does not grow when cultured at 30 \({}^{\mathrm{o}}\)C, as indicated by the unchanged OD 600 over time. To assess the performance of the models for classification in each test case, we consider three criteria including precision, recall, and overall extraction rate. Precision, \(P_{n}\), is the proportion of true positives (\(TP_{n}\)) to the total predictions made for a class \(n\) and is calculated as \[P_{n}=\frac{TP_{n}}{TP_{n}+FP_{n}} \tag{1}\] where \(FP_{n}\) is the number of cells which are mislabeled as class \(n\), or false positives. Instances of background false positives occur when a model detects debris and abiotic particles or a small region of image background as yeast cells, but these occurrences are rare. For example, in the plankton and metabolic yeast test cases, background false positives account for only 3.9% and 1.7% of the total predictions, respectively. 
Therefore, background false positives are excluded from the calculation of precision when evaluating the classification performance for the convenience of direct comparison to other classifier models which do not encounter background false positives. Recall is the percentage of the correctly classified objects in a class \(n\) compared to the ground truth and is calculated as \[R_{n}=\frac{TP_{n}}{TP_{n}+FN_{n}} \tag{2}\] where \(FN_{n}\) is the number of false negatives, including cells that are either mislabeled as other classes or undetected (missed cells with no bounding boxes). Since the YOLO model performs detection and classification simultaneously (one-stage), cells that do not reach the selected confidence level to be classified into any of the object classes will appear as undetected (see Fig. 6b). This is different from the conventional classification models which always assign each cell to one class. We also use extraction rate (EA) to monitor the portion of undetected yeast cells (\(N_{miss}\)) as part of the comprehensive evaluation of our model. Extraction rate is calculated as \[EA=1-\frac{N_{miss}}{N_{total}} \tag{3}\] where \(N_{total}\) is the total number of cells in all classes. \begin{table} \begin{tabular}{c c c c c c|c} & \multicolumn{5}{c|}{Ale} & Lager \\ \cline{2-7} & 0h & 1h & 4h & Dead & Inactive & Inactive \\ \cline{2-7} Train & 7290 & 5572 & 9094 & 14947 & 540 & 362 \\ Test & 2163 & 1610 & 2641 & 1157 & 64 & 99 \\ \end{tabular} \end{table} Table 2: Summary of the number of yeast cells used for training and testing. It is common for the extraction rate to decrease if the model's classification performance declines. The confidence threshold for model prediction is selected based on the F1 score, a harmonic mean of precision and recall. Fig. 4 shows an example of how the F1 score varies with different confidence thresholds. The confidence threshold corresponding to the peak of F1 score is commonly selected to ensure highest possible precision and recall rates for the model, without emphasizing one over the other. 
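To make the bookkeeping in Eqs. (1)-(3) explicit, a small helper that computes per-class precision and recall, the overall extraction rate, and an F1-based choice of confidence threshold might look as follows; the numbers in the usage example are purely illustrative and are not taken from the reported results.

```python
import numpy as np

def per_class_metrics(tp, fp, fn):
    """Eq. (1) and Eq. (2): precision and recall for each class."""
    tp, fp, fn = map(np.asarray, (tp, fp, fn))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

def extraction_rate(n_miss, n_total):
    """Eq. (3): fraction of cells that received any bounding box."""
    return 1.0 - n_miss / n_total

def pick_confidence_threshold(thresholds, precisions, recalls):
    """Select the threshold whose F1 score (harmonic mean of P and R) peaks."""
    p, r = np.asarray(precisions), np.asarray(recalls)
    f1 = 2 * p * r / (p + r)
    return thresholds[int(np.argmax(f1))]

# Illustrative numbers for a four-class problem (0-h, 1-h, 4-h, dead):
p, r = per_class_metrics(tp=[880, 870, 900, 894], fp=[12, 18, 33, 28], fn=[120, 130, 100, 106])
ea = extraction_rate(n_miss=350, n_total=3800)

thr_grid = np.linspace(0.1, 0.9, 5)
prec_curve = np.array([0.70, 0.88, 0.96, 0.985, 0.99])   # precision rises with threshold
rec_curve = np.array([0.99, 0.97, 0.93, 0.80, 0.55])     # recall falls with threshold
best_thr = pick_confidence_threshold(thr_grid, prec_curve, rec_curve)  # ~0.5 here
```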
Figure 4: The change of F1 score with confidence threshold for each class for the YOLO model in the yeast metabolic state case. The peak of the curve represents the highest possible precision and recall rates for the model, without emphasizing one over the other. ## III Results Three cases were presented to demonstrate that the proposed image-based _in situ_ particle analyzer is applicable to a variety of tasks in scientific research and industrial applications. The first case involved classifying plankton from 10 different species, and demonstrated an improvement in processing speed compared to the previous method (Guo et al., 2021) while maintaining a similar level of accuracy. The second case demonstrated the capability of the proposed method to differentiate yeast cells under four metabolic states from holograms without reconstruction. The third case showcased the ability of the proposed method to detect subtle differences in the subcellular structures such as the biochemical compositions of two different strains of yeast cells under the same fermentation conditions. ### Analysis of 10 different plankton species Plankton are incredibly important to aquatic ecosystems and play a significant role in various research areas such as aquatic ecology, ocean optics, and climate change. Sudden uncontrolled growth of specific types of plankton species can lead to HABs, which can be detrimental to both aquatic ecosystems and human health. The timing of the initiation of HABs and its dynamics are often unpredictable due to varying environmental conditions from year to year and a lack of understanding of the specific triggers and growth factors involved. Accurate detection of different plankton species is essential for long-term _in situ_ monitoring of dynamic changes of their concentration which help improve our fundamental understanding of HABs. While DIH has been successfully applied for aquatic particle and HAB monitoring (Nayak et al., 2019) and automatic plankton classification (Guo et al., 2021), the computational intensity of image preprocessing has largely restricted its use in real-time analysis. Currently, it is unclear whether any neural network model is able to directly detect and distinguish different plankton species from unreconstructed holograms without any costly preprocessing. As shown in Fig. 5a, some species do not have distinctive holographic signatures (e.g., _Copepod Nauplii vs. Copepod_, and _Tripos cf. furca vs. Tripos cf. muelleri_), which may pose a challenge for accurate identification. Our trained YOLO model was tested at a confidence threshold of 0.3 with an overall extraction rate of 99.8%. The confusion matrix in Fig. 5b summarizes the accuracy and prediction errors of the model for each plankton species. Most species are classified correctly with a precision greater than 90% (dark blue boxes). The average precision is 95.3% with a 91.7% recall. The slightly lower prediction accuracy for _Tripos_ cf. _furca_ (88%) is likely due to its diffraction patterns being similar to some other plankton species such as _Tripos_ cf. _muelleri_ (9% of the predicted _Tripos_ cf. _furca_ cells are actually _Tripos_ cf. _muelleri_). Our method achieved a similar average precision to that of the method proposed by Guo et al. (2021) (95.3% vs. 96.8%) with slightly lower recall (91.7% vs. 95.0%). The method proposed by Guo et al. (2021) involves intensive image preprocessing even without reconstruction which takes 1.6 - 2.5 seconds to process a single image. By contrast, our proposed method uses the pre-trained plankton detection model which is able to perform real-time, _in situ_ analysis of captured holograms after minimal processing (enhancement), processing over 40 frames per second (a single image in 0.025 seconds). Additionally, our method was applied to holograms selected from an _in situ_ dataset recorded by the HOLOCAM on 21 September 2015 at East Sound (Guo et al., 2021), which contain multiple different plankton species (Fig. 5c). Most plankton were detected successfully in these holograms, demonstrating the potential of our method as an _in situ_ monitoring tool for plankton species in water. Figure 5: (a) Sample holograms from each of the 10 plankton species (left) and their corresponding reconstructed in-focus images (right). (b) The confusion matrix summarizing the accuracy and prediction errors made by the model. (c) Prediction of plankton species distributed in a sample hologram with different types of planktons presented. Bounding boxes with different colors indicate different plankton species. **B. Analysis of yeast cells under different metabolic states** We used our proposed method to detect yeast cells under four different metabolic states. Based on the growth curve (Fig. 2), the holograms of ale yeast were captured at the start of fermentation (0h), during the lag phase (1-h), during the log phase (4-h), and when the cells were dead (54-h). Fig. 
6a shows examples of the enhanced holograms of yeast cells from these four groups. Yeast cells from the same metabolic group may appear in different sizes and with different fringe patterns (spacing and width of fringes) depending on their distance to the focal plane during recording (small zoomed in figures below each enhanced hologram in Fig. 6a). Our dataset includes cells that could appear anywhere from 0 to 250 \(\upmu\)m from the focal plane (Fig. 3). Despite the lack of visible differences in the holograms of single cells between groups, our trained machine learning model was able to accurately classify individual yeast cells from the four groups (colored bounding boxes in Fig. 6a). To further demonstrate the model's classification capability, synthetic holograms were generated by blending randomly selected two or more enhanced holograms from different groups (Fig. 6b). These synthetic holograms contain a mixture of yeast cells from different metabolic states. When tested on the synthetic holograms, the machine learning model was also able to accurately classify the yeast cells from the four groups. Figure 6: (a) YOLO detection on the enhanced holograms of ale yeast at hour 0 (0-h), 1 (1-h), 4 (4-h) and 54 (dead) during fermentation. Samples of holograms of individual ale yeast cells are shown below. (b) YOLO detection on the synthetic holograms. Yeast cells under four different metabolic states (0-h, 1-h, 4h and dead) are predicted by bounding boxes with orange, green, purple, and blue color respectively. The numbers on the bound boxes represent the prediction confidence score. We used synthetic holograms (Fig. 6b) to evaluate the model's performance in classification. A confidence threshold of 0.65 was selected according to the F1 score (Fig. 4). The accuracy and prediction errors for each group were summarized in a confusion matrix shown in Fig. 7. Each column represents the true metabolic group of each cell and each row lists the predicted metabolic groups. The diagonal elements of the matrix show the percentage of correctly classified cells from each metabolic group, or precision. For each group, the precision is greater than 96% (0-h: 98.6%, 1-h: 98%, 4-h: 96.5%, and dead: 97%). The average precision across all conditions is 97.5% with an overall extraction rate of 90.8%. The average recall is 88.5% and remains relatively constant across each group (0-h: 88%, 1-h: 87%, 4-h: 90%, and dead: 89.4%). Since we utilized a one-stage YOLO model which performs detection and classification simultaneously, the recall rate is affected by both misclassification and missed detection. This is different from the reported recall of the conventional classifier models applied after the detection, which only considers misclassification as the false negative. Our average recall is 97.3% if the missed detections are excluded, to be compared directly to the conventional classifier models. Overall, our results demonstrate the potential of _in situ_ monitoring of the metabolic states of yeast cells during industrial production. **C. Analysis of yeast cells of two different strains** Different strains of yeast cells can have different fermentation characteristics and contribute unique flavor to beer (Mochaba et al.,1998; Heggart et al., 2000). It is important in the brewing industry to maintain the purity of yeast strains in order to ensure product quality and it is therefore crucial to detect any contamination from wild yeast or undesired strains. 
_Saccharomyces cerevisiae_ (ale yeast) and _Saccharomyces pastorianus_ (lager yeast) are two common yeast strains used in beer fermentation (Bonatto 2021). Lager strains are bottom fermenting and thrive at lower temperatures (8-15 \({}^{\circ}\)C), while ale yeast is top fermenting and works best at room temperature (18-22 \({}^{\circ}\)C). When both yeast strains were cultured under the same conditions (as described in the Materials and Methods) at 30 \({}^{\circ}\)C, lager yeasts remained inactive, as indicated by the unchanged cell density (Fig. 2), in contrast to the rapid growth of ale yeast. Figure 7: The confusion matrix summarizing the accuracy and prediction errors made by proposed YOLO based method for classifying yeast cells under four metabolic states. The diagonal elements (dark blue boxes) indicate the precision. Holograms were captured of cell samples 1 hour after the initiation of fermentation for both ale and lager groups (Fig. 8a). The image resolution and variation in diffraction pattern from varying imaging depths are the same as the metabolic state experiments. Synthetic holograms containing a mixture of cells from both groups were also generated using blending (Fig. 8b). The trained YOLO model was able to accurately distinguish ale yeast from lager yeast in both recorded (Fig. 8a) and synthesized (Fig. 8b) holograms, despite their similar appearances (zoomed in images in Fig. 8a). The precision is 99.5% for ale yeast and 97% for lager yeast, with an average of 98.2% (Fig. 9). The recall for the ale yeast is 97.8% and 83.1% for the lager, resulting in an average of 90.5%. The overall extraction rate is 96.4%. The slightly lower recall for the lager yeast may be due to the presence of a higher number of abiotic particles in the recorded holograms, which could interfere with the yeast cells and hinder the model's ability to extract features. As a control experiment, we also captured holograms of inactive ale yeast cells, which were prepared by directly dissolving the dry ale yeast in distilled water without culturing. The average precision and recall were only 62% and 55.6%, respectively, indicating that the YOLO model is not able to distinguish between ale and lager yeast when both are inactive. The overall extraction rate is 87.2%. The lower extraction rate is likely due to the model's poor performance in classification. Because the YOLO model performs detection and classification simultaneously, more cells will be undetected if the model fails to classify them correctly. If the model is only trained on a detection task for any yeast cells, the extraction rate is 93%, comparable to the other yeast test cases. These results suggest that our method can distinguish different strains based on their unique metabolic characteristics during fermentation. Figure 8: (a) YOLO detection on the enhanced holograms of ale yeast and lager yeast at hour 1 during fermentation. Sample holograms of individual ale and lager yeast cells are shown below. (b) Detection result showcases for the synthetic holograms. Ale yeast cells are predicted by the purple bounding boxes and the lager yeast cells are predicted by the beige bounding boxes. The numbers on the bounding boxes represent the confidence score. ## IV. Conclusions and Discussions In this study, we introduce a novel approach for real-time _in situ_ analysis of biological particles using machine learning assisted digital inline holography (DIH). 
Our machine learning model, which uses a modified YOLO v5 architecture customized for the detection and classification of holograms of small biological particles, is optimized using TensorRT for real-time processing. Unlike previous methods used to classify particles captured in holograms (Kim et al., 2018; Guo et al., 2021), our approach integrates particle localization and classification into a single step, significantly reducing processing time while maintaining prediction accuracy. We have demonstrated the capability of our novel approach for _in situ_ biological particle analysis using three test cases: classifying 10 different species of plankton, detecting yeast cells under four different metabolic states, and differentiating two yeast strains during fermentation. Our approach does not require any additional preprocessing (e.g., hologram reconstruction and particle segmentation) used in other studies (Kim et al., 2018; Guo et al., 2021), significantly reducing processing time and computational resources. Our DIH sensors, with onboard processing capability, are ideal for real-time, _in situ_ monitoring of the onset and development of harmful algal blooms, or the viability and vitality of yeast cells during various industrial processes. Our method is sensitive enough to detect subtle biochemical composition changes in single cells (Raschke and Knorr, 2009; Chan et al., 2012) and can also be used to distinguish different strains of yeast cells based on their unique fermentation characteristics. Overall, this work showcases the potential of machine learning assisted DIH as a novel and versatile tool for real-time, _in situ_ analysis of various biological particles, including their morphology, viability, vitality, and other important biophysical properties that are correlated with changes in optical density (e.g., membrane structure, protein). Compared to commonly used rapid particle analysis tools such as laser diffraction, acoustic scattering, and Coulter counter, our approach offers more information beyond size and concentration. It can be used for viability and vitality tests similar to conventional fluorescent microscopy and flow cytometry methods, but without the need for sample preparation (label-free) and with significantly higher throughput. Our optical setup is highly compact and cost-effective compared to other label-free technologies such as QPI and hyperspectral imaging, and the data processing is not as computationally intensive, with much higher throughput. Figure 9: The confusion matrix summarizing the accuracy and prediction errors made by proposed YOLO based method for yeast strain detection during fermentation. The diagonal elements (dark blue boxes) indicate precision of the model prediction. With these features, our proposed method can be easily extended to the analysis of other types of particles (both biotic and abiotic) and can be deployed in a distributive manner for scientific research and manufacturing on an industrial scale. To apply the method to a specific application, holograms can be captured using a similar hardware setup with customized objectives and camera sensors based on particle size and field of view. The biotic and abiotic particles in the holograms can then be manually labeled by a domain expert to form a training set and a machine learning model using our proposed architecture can be trained. Once the model is trained and loaded onboard, the entire system should be able to perform effectively for the application. 
However, it is important to note that the precision of our machine learning model may decrease when analyzing holograms with high particle concentrations due to the overlap of diffraction patterns, but this will not significantly influence the model's performance as long as the overlap is less than 50%. This conclusion is based on test results that showed high precision and recall in most cases. Our method is particularly valuable for analyzing particles in the applications that require high sensitivity, such as biocontaminants in sterile liquids (e.g., spring water, sterile liquid used in pharmaceutical industries and clinical applications). In these applications, particle concentrations are usually low and our model performance will not be significantly affected by overlapping fringes. In addition, despite the generalizability of our overall approach including the hardware setup and ML method, the trained ML model is specific to the application it is trained for, and its performance may be compromised when used on data outside the scope of its training. This is the limitation for common supervised learning approaches. To ensure the accuracy and robustness of the model, it may be necessary to retrain it and gather new labeled data when there are changes in particle properties (e.g., size, shape), medium properties (e.g., refractive index), or image acquisition settings (e.g., magnification, laser wavelength). In the future, it may be possible to mitigate this limitation by using unsupervised or semi-supervised machine learning model architectures in DIH data processing.
2308.12075
Stabilizing RNN Gradients through Pre-training
Numerous theories of learning propose to prevent the gradient from exponential growth with depth or time, to stabilize and improve training. Typically, these analyses are conducted on feed-forward fully-connected neural networks or simple single-layer recurrent neural networks, given their mathematical tractability. In contrast, this study demonstrates that pre-training the network to local stability can be effective whenever the architectures are too complex for an analytical initialization. Furthermore, we extend known stability theories to encompass a broader family of deep recurrent networks, requiring minimal assumptions on data and parameter distribution, a theory we call the Local Stability Condition (LSC). Our investigation reveals that the classical Glorot, He, and Orthogonal initialization schemes satisfy the LSC when applied to feed-forward fully-connected neural networks. However, analysing deep recurrent networks, we identify a new additive source of exponential explosion that emerges from counting gradient paths in a rectangular grid in depth and time. We propose a new approach to mitigate this issue, that consists on giving a weight of a half to the time and depth contributions to the gradient, instead of the classical weight of one. Our empirical results confirm that pre-training both feed-forward and recurrent networks, for differentiable, neuromorphic and state-space models to fulfill the LSC, often results in improved final performance. This study contributes to the field by providing a means to stabilize networks of any complexity. Our approach can be implemented as an additional step before pre-training on large augmented datasets, and as an alternative to finding stable initializations analytically.
Luca Herranz-Celotti, Jean Rouat
2023-08-23T11:48:35Z
http://arxiv.org/abs/2308.12075v2
# Stabilizing RNN Gradients through Pre-training ###### Abstract Numerous theories of learning suggest to prevent the gradient variance from exponential growth with depth or time, to stabilize and improve training. Typically, these analyses are conducted on feed-forward fully-connected neural networks or single-layer recurrent neural networks, given their mathematical tractability. In contrast, this study demonstrates that pre-training the network to local stability can be effective whenever the architectures are too complex for an analytical initialization. Furthermore, we extend known stability theories to encompass a broader family of deep recurrent networks, requiring minimal assumptions on data and parameter distribution, a theory that we refer to as the Local Stability Condition (LSC). Our investigation reveals that the classical Glorot, He, and Orthogonal initialization schemes satisfy the LSC when applied to feed-forward fully-connected neural networks. However, analysing deep recurrent networks, we identify a new additive source of exponential explosion that emerges from counting gradient paths in a rectangular grid in depth and time. We propose a new approach to mitigate this issue, that consists on giving a weight of a half to the time and depth contributions to the gradient, instead of the classical weight of one. Our empirical results confirm that pre-training both feed-forward and recurrent networks to fulfill the LSC often results in improved final performance across models. This study contributes to the field by providing a means to stabilize networks of any complexity. Our approach can be implemented as an additional step before pre-training on large augmented datasets, and as an alternative to finding stable initializations analytically. ## I Introduction Despite all the efforts to mitigate the negative effect that gradient explosion has on learning [1, 2, 3, 4, 5, 6, 7], how to properly initialize deep recurrent networks (\(d\)-RNN) remains an open problem. In fact, the literature has focused on simple architectures that are either deep but shallow in time (FFN for feed-forward networks) [5, 6, 7, 8, 9], or shallow in layers and deep in time (1-RNN) [2, 3, 4, 10, 11, 12], unintentionally missing the effect that the combined depth in layers and in time have on learning. Meanwhile, in the era of Transformer-based architectures [13, 14], the RNN hidden state has been introduced back from oblivion to allow the attention networks to observe longer sequences [15, 16, 17, 18], renewing the interest in such a fundamental piece of the learning tool-kit. Moreover, a good understanding of RNN dynamics is impactful in many scientific communities that make use of parametrized dynamical systems. This is the case for example when modeling biological neurons in computational neuroscience, or in neuromorphic computing, where recurrency is necessary to take advantage of highly energy efficient devices [19, 20, 21, 22, 23]. However, these neuron definitions are often complex, making standard parameter initialization strategies from deep learning sub-optimal, and designing a specific initialization strategy for each, cumbersome. Hence, there is a need for a tool to stabilize a wide variety of network definitions before training, and therefore, for broad theories of \(d\)-RNN stability to justify such a tool. 
To address such concerns, here we propose a pre-training to stability method that is simple to apply to a wide variety of architectures and neuron definitions, including among others differentiable and neuromorphic neurons, such as the LSTM, GRU and ALIF neurons [24, 25, 3, 22]. Additionally, we extend existing initialization theories, such as Glorot, Orthogonal and He [5, 6, 7], to be able to describe the stability of a wider family of \(d\)-RNN, and we quantify how the variance of the parameter update depends on time and depth. In contrast with classical descriptions [1, 2, 12], we find two sources of gradient exponential explosion. One is multiplicative, often described in the FFN and \(1\)-RNN literature, the other is additive, and is consequence of adding all the gradient paths in the time and depth grid. To reduce the variance from exponential to linear growth, we propose one sufficient condition, the Local Stability Condition (LSC), and we propose to weight the depth and time components of the gradient as a half with \(d\)-RNNs, given that the classical weight of one leads to the additive exponential explosion. Finally, we show experimentally that our pre-training to stability to attain the LSC, leads to better final performance for a broad range of \(d\)-RNNs, for both, differentiable and neuromorphic models [24, 25, 3, 22]. Our main contributions are summarized below: * We propose a pre-training to local stability method, that removes the need for architecture specific initializations, Sec. II-D; * We propose the Local Stability Condition (LSC) as an extension of existing stability theories, to describe the gradient stability of a wide family of multi-layer recurrent networks, with minimal assumptions on parameter and data distribution, Sec. II-B; * We show that a new form of exponential explosion arises, due to the addition of gradient paths in the time and depth grid, and we propose a method to suppress it, Sec. II-B; * We prove that the Glorot [5], He [7], and Orthogonal [6] initializations are special cases of the LSC, Sec. II-C; * We show on feed-forward and recurrent architectures, and on differentiable and neuromorphic neuron models, that pre-training to satisfy the LSC can improve final performance experimentally, Sec. III. ## II What makes deep RNNs different from shallow RNNs and deep FFNs? ### _General deep RNN and Notation_ Given that neither Glorot, Orthogonal and He, nor neuron specific 1-RNN theories describe \(d\)-RNN, we want to generalize those established theories of gradient stability to a wider family of \(d\)-RNN. Let us consider a stack of general recurrent layers, that will make the mathematical analysis simpler: \[\begin{split}\mathbf{h}_{t,l}&=g_{h}(\mathbf{h}_{t-1,l},\mathbf{h }_{t-1,l-1},\mathbf{\theta}_{l})\\ \hat{\mathbf{o}}_{t}&=g_{o}(\mathbf{h}_{t,L},\mathbf{\theta}_{o}) \end{split} \tag{1}\] where we index time and layer with \(t\in\{0,\cdots,T\},l\in\{0,\cdots,L\}\), and the hidden state \(\mathbf{h}_{t,l}\) depends on the previous time-step and layer in a general form \(g_{h}\). Notice that whenever the neuron of interest is defined with several hidden states, such as the LSTM or the ALIF, in our notation \(\mathbf{h}_{t,l}\) will represent the concatenation of all of them. We represent as \(\mathbf{h}_{t,0}\), the task input data fed to the first layer, \(l=1\). 
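Before analysing \(g_{h},g_{o}\) in their most general form, the indexing of Eq. (1) can be made concrete with a minimal rollout sketch. The tanh cell and all sizes below are illustrative assumptions, not the neuron models studied later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T, L, n = 5, 3, 8                          # time steps, layers, hidden width

# One pair of weight matrices per layer: time (recurrent) and depth (input) paths.
W_rec = [rng.normal(0.0, 1.0 / np.sqrt(n), (n, n)) for _ in range(L)]
W_in = [rng.normal(0.0, 1.0 / np.sqrt(n), (n, n)) for _ in range(L)]

def g_h(h_prev_time, h_prev_layer, l):
    """Generic transition g_h of Eq. (1): combines the state of the same layer
    at t-1 and the state of the layer below at t-1."""
    return np.tanh(W_rec[l] @ h_prev_time + W_in[l] @ h_prev_layer)

h = np.zeros((T + 1, L + 1, n))
h[:, 0] = rng.normal(size=(T + 1, n))      # h_{t,0}: the task input sequence

for t in range(1, T + 1):
    for l in range(1, L + 1):
        h[t, l] = g_h(h[t - 1, l], h[t - 1, l - 1], l - 1)

print(h[T, L])                              # last state, fed to the readout g_o
```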
We perform the analysis for the most general form of \(g_{h},g_{o}\), with the only assumption that they are first order differentiable, or augmented with a surrogate gradient wherever they were not [25, 26, 27]. We denote vectors as lower case bold \(\mathbf{a}\), matrices as upper case \(A\), and their elements as lower case \(a\). The variable \(\hat{\mathbf{o}}_{t}\) represents the network output, where the hat means that it is the network approximation of the true task output \(\mathbf{o}_{t}\). If \(\mathbf{h}_{t,l}\in\mathbb{R}^{n_{l}}\), we call \(\mathbf{\theta}_{l}\in\mathbb{R}^{m_{l}}\) all the learnable parameters in layer \(l\), where the number of parameter elements \(m_{l}\) does not need to coincide with the layer width \(n_{l}\), and similarly for \(\mathbf{\theta}_{o}\in\mathbb{R}^{m_{o}}\). We call the matrices \(M_{k}\in\{\partial\mathbf{h}_{t,l}/\partial\mathbf{h}_{t-1,l},\partial\mathbf{h}_{t,l}/ \partial\mathbf{h}_{t-1,l-1}\}_{t,l}\) the _transition derivatives_, from one state to the following in time or in depth. We call \(\rho(M_{k})\) the transition derivative radius [28, 29], which is the largest modulus of its eigenvalues and is often used to describe a matrix magnitude. ### _The Local Stability Condition (LSC)_ We say that the network satisfies the Local Stability Condition (LSC) when the expectation of each transition derivative radius is one or a half. We prove in App. A the following **Theorem 1** (Local Stability Condition, with radii \(\mathbf{E}\rho\in\{0.5,1\}\)).: _Consider the multi-layer RNN in eq. 1. Setting the radii of every transition derivative \(M_{k}\) to \(\mathbf{E}\rho=1\) gives an upper bound to the parameter update variance that increases with time and depth as the binomial coefficient \(\frac{1}{T}\big{(}^{T+L+2}\big{)}\). Instead, setting all radii to \(\mathbf{E}\rho=0.5\) gives an upper bound that increases linearly as \(T\)._ First of all, we see that for \(\mathbf{E}\rho=1\), the bound to the variance of the parameter update grows with depth \(L\), while it does not for \(\mathbf{E}\rho=0.5\). Moreover, this result is particularly intriguing because \(\mathbf{E}\rho=1\) is known to annihilate the multiplicative sources of exponential explosions in FFN and \(1\)-RNN [2, 3, 4, 5, 6, 7, 10, 11], so the variance is sub-exponential whenever \(L\) or \(T\) is fixed and the other tends to infinity. However, we observe that a \(d\)-RNN brings a new source of exponential explosion that is additive, and comes from an exponentially increasing number of gradient paths when we take \(T,L\rightarrow\infty\) simultaneously, as shown in Fig. 1 and proven in Lemma 2 App. C. Instead, setting \(\mathbf{E}\rho=0.5\) results in a variance that still increases with time and depth, but linearly, i.e. sub-exponentially, even when \(T,L\rightarrow\infty\) together. This choice of radius effectively results in a gradient that is locally averaged through time and depth with the most uninformative and entropic prior [30]. Note that Thm. 1 provides an upper bound. The proof relied on using matrix norms that are sub-additive and sub-multiplicative [28, 29]. However, we prove that for general matrices there is no matrix function that is super-additive and super-multiplicative (see Thm. 6), and hence, a lower bound cannot be found in general using a similar methodology. Nevertheless, if we restrict ourselves to positive semi-definite matrices, the determinant is super-additive and multiplicative. 
Therefore, a lower bound on the variance can be obtained for that restricted family of matrices (see Thm. 5). Notice that the positive semi-definite requirement could in practice be encouraged through a loss term during pre-training. Satisfying only the lower bound would therefore require all the transition derivatives to have determinant of one, whereas satisfying both the upper and lower bounds would require all their eigenvalues to be one. However, experimentally we found it difficult to stabilize through pre-training the lower bound or both bounds simultaneously, and despite being an interesting direction for future work, we chose to focus on stabilizing the upper bound. ### _Conventional initialization methods are LSC_ It is remarkable that our proposed LSC is not at odds with existing theories. In fact, we prove in App. B that the three major initialization schemes in the Deep Learning literature are particular cases of the LSC when applied to FFN: **Corollary 1**.: _Glorot [5] and Orthogonal [6] initializations for linear FFN, and He [7] initialization for ReLU FFN, are special cases of the \(LSC\) for \(\mathbf{E}\rho=1\)._ For the previous Corollary, we needed to prove the following Lemma on random matrices **Lemma 1** (Expected Spectral Norm).: _Given a \(n\times n\) real random matrix, with elements drawn from a radially symmetric point process with mean zero and variance one, its \(k\)-th eigenvalue in expectation has modulus \(\mathbf{E}|\lambda_{k}|=\sqrt{2}\Gamma((k+1)/2)/\Gamma(k/2)\) for any \(k\), with vanishing variance as \(k\) increases._ where \(\Gamma(\cdot)\) is the gamma function. Lemma 1 is a version of the strong circular law [31] for random matrix eigenvalues that is valid in expectation for any matrix size. ### _Pre-train to stability_ **Gradient Descent Pre-training.** A limitation of current initialization methods is that every time a new architecture is introduced, a new theoretically justified initialization should be calculated to ensure network stability, and avoid the gradient exponential explosion. To address this, we suggest a pre-training approach where the network is trained to meet the LSC using the desired task inputs with randomized output classes, as a means of preparing the network for training. In the preparation phase, we minimize only the mean squared error loss between the radii \(\rho\) of all the transition derivatives \(M_{k}\), and a target radius \(\rho_{t}\), at each time step in the task: \[\mathcal{L}_{preparation}=\sum_{\forall M_{k}}\Big{(}\rho(M_{k})-\rho_{t} \Big{)}^{2} \tag{2}\] Similarly, [32] suggests pre-training the weight matrices to be orthogonal but not the transition derivatives themselves, and [33] suggests a pre-training to orthogonality only for the output linear layer. **Weight Multiplier and Shuffled Pre-training.** However, gradient descent pre-training results in \(M_{k}\) satisfying the LSC on average, with a wide variance. In order to reduce such variance, after applying the gradient update, we multiply the neuron weights by \(\kappa=clip(\rho_{t}/\rho(M_{k}))\), which increases the scale of the weights if the radius is smaller than the target, and vice versa. We clip the multiplicative factor between 0.85 and 1.15, to reach the target value gradually and to reduce oscillations. At each pre-training step, we multiply the input matrices of layer \(l\) by the layer transition derivative \(\kappa\) of layer \(l\), and we multiply the recurrent matrices by the time transition derivative \(\kappa\) of layer \(l\). 
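Putting the preparation loss of Eq. (2) and the weight multiplier together, a minimal sketch of a single pre-training step on one transition could look as follows. The tanh cell, the layer width, the optimizer settings and the use of autograd through `torch.linalg.eigvals` to obtain a differentiable radius are illustrative assumptions; any other radius estimator could be substituted:

```python
import torch

torch.manual_seed(0)
n, rho_target = 8, 0.5
W_rec = torch.nn.Parameter(torch.randn(n, n) / n ** 0.5)   # recurrent (time) weights
W_in = torch.nn.Parameter(torch.randn(n, n) / n ** 0.5)    # input (depth) weights
opt = torch.optim.Adam([W_rec, W_in], lr=3e-3)

def cell(h_time, h_layer):
    return torch.tanh(h_time @ W_rec.T + h_layer @ W_in.T)

def radius(M):
    # rho(M_k): largest modulus among the eigenvalues of a transition derivative.
    return torch.linalg.eigvals(M).abs().max()

h_time, h_layer = torch.randn(n), torch.randn(n)

# The two transition derivatives of this cell, kept differentiable so the loss
# can reach the weights.
M_time = torch.autograd.functional.jacobian(
    lambda h: cell(h, h_layer), h_time, create_graph=True)
M_depth = torch.autograd.functional.jacobian(
    lambda h: cell(h_time, h), h_layer, create_graph=True)

rho_time, rho_depth = radius(M_time), radius(M_depth)

# One gradient step on the preparation loss of Eq. (2).
loss = (rho_time - rho_target) ** 2 + (rho_depth - rho_target) ** 2
opt.zero_grad()
loss.backward()
opt.step()

# Weight multiplier kappa, clipped to [0.85, 1.15]: the time kappa rescales the
# recurrent matrix, the depth kappa the input matrix (radii would normally be
# re-measured after the gradient update).
with torch.no_grad():
    W_rec.mul_(torch.clamp(rho_target / rho_time.detach(), 0.85, 1.15))
    W_in.mul_(torch.clamp(rho_target / rho_depth.detach(), 0.85, 1.15))
```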
To improve the randomness of the pre-trained model in an attempt to capture the statistics of the task and not to memorize the pre-training set, we shuffle each learnable tensor after applying \(\kappa\). We consider pre-training complete, if the following three criteria are met: (i) \(|\rho(M_{k})-\rho_{t}|\leq\epsilon\) with \(\epsilon=0.02\), (ii) the standard deviation of \(\rho(M_{k})\) across \(k\) in the current pre-training step is below \(0.2\), and (iii) a 10 steps exponential moving average of that standard deviation is below \(0.2\) also. These thresholds are chosen to make sure that the distance between \(\rho\to 0.5\) and \(\rho\to 1\) is statistically significant. Fig. 1: **Stabilizing to \(\rho=1\) results in additive explosion while \(\rho=0.5\) does not.** a) In a \(d\)-RNN gradients need to traverse both the time and the depth dimension when updating the learnable parameters. A transition derivative \(M_{k}\) represents only one arrow in the time and depth grid, and there are several multiplicative chains \(j_{i}\) to be considered, since the parameter update is going to use \(J_{t,l}^{T,L}\), the sum of all the multiplicative chains from \(T,L\) down to \(t,l\). b) However, the number of paths \(j_{i}\) is described by the binomial coefficient \(\binom{\Delta l+\Delta t}{\Delta t}\), and therefore increases exponentially when time and depth tend to infinity simultaneously, as in iii) and proven in App. C. In fact, an exponential growth looks like a straight line in a semi-log plot, as in iii). Instead, the aforementioned binomial coefficient grows only polynomially when either time or depth are kept fixed, as in i) and ii). c) We confirm experimentally our theoretical analysis, on a toy network and on the LSC pre-trained GRU: \(\rho=1\) reveals an explosion of additive origin (right panels), while \(\rho=0.5\) is able to stabilize gradients through time (left panels). The upper panels show network output (blue), derivative (orange), and our derivative bounds (green), for the toy network that we define as the PascalRNN, \(\mathbf{h}_{t,l}=\rho\mathbf{h}_{t-1,l}+\rho\mathbf{h}_{t-1,l-1}\), of depth 10 and gaussian input of mean zero and standard deviation 2, and lower panels show a LSC pre-trained GRU network of depth 7 and the SHD task as input. Both upper bounds to the derivative under \(\rho\in\{0.5,1\}\), are part of the proof for Thm. 1, and \(c_{1},c_{2}\), defined in Thm. 1 proof, are task and network dependent constants, that do not depend on time nor depth. Notice that the growth of the derivative and of the bound is backwards in time since backpropagation accumulates gradients backwards in operations, from \(T,L\) to \(0,0\). This confirms that standard FFN theories (\(\rho=1\)) cannot be directly applied to \(d\)-RNN, since they result in an unexpected additive gradient exponential explosion that is not accounted for. ### _Neural Networks_ **FFNs.** To prove that our pre-training to stability method is applicable to a wide variety of neural networks we apply it first to simple 30 layers FFNs. They are defined as \(\mathbf{h}_{l}=a(W_{l}\mathbf{h}_{l-1}+\mathbf{b}_{l})\), with \(W_{l},\mathbf{b}_{l}\) learnable matrices and biases, with three different activations \(a\): ReLU, sine and cosine. 
While applications of the sine and cosine activations do exist [13, 17, 34], here our interest is on using unusual activations as a proxy for a real world scenario, where we do not have a theoretically analyzed architecture, and we therefore lack of a so called correct initialization. When we apply the LSC to FFN the target is always \(\rho_{t}=1\). Notice that we use such a deep network to test the ability of each approach to stabilize it. Being only a simple FFN, it will not achieve state-of-the-art results. Our intention is also to proceed similarly as it was done to introduce He initialization [7], where a 30-layers ReLU network initialized with He was compared with one initialized with Glorot. In contrast, we show in Fig. 2 results for 7 learning rates, 4 seeds, 3 datasets and 3 activations, therefore 252 times more data than in Fig. 3 in [7]. **RNNs.** Then, we study our LSC preparation method on six different RNNs: four are fully differentiable, and two are non-differentiable but upgraded with a surrogate gradient to be able to train them through back-propagation. The fully differentiable networks are: the LSTM [3], the GRU [24], and two fully-connected simple RNN, defined as \(\mathbf{h}_{t,l}=a(W_{rec,l}\mathbf{h}_{t-1,l}+W_{in,l}\mathbf{h}_{t-1,l-1}+\mathbf{b}_{l})\) either with the activation \(a\) being the sigmoid \(\sigma\), or \(ReLU\), where \(W_{rec,l},W_{in,l},\mathbf{b}_{l}\) are learnable matrices and biases. As non-differentiable architectures we used two variants of the ALIF neuron [22, 25, 35, 36], a simplification of a biologically plausible neuron that is often used in Computational Neuroscience. The exact equations and initializations used are reported in App. H. We call ALIF\({}_{+}\) the configuration that is initialized with \(\beta\) positive, and ALIF\({}_{\pm}\) the same initialization with \(\beta\) both positive and negative. The variable \(\beta\) is also known as _spike frequency adaptation_, and when negative, firing encourages further firing, while if positive firing discourages further firing [37]. The RNN \(\sigma\), RNN \(ReLU\) and GRU have only one hidden state, while LSTM and ALIF have two, so we concatenate them into one state \(\mathbf{h}_{t,l}\) to calculate \(M_{k}\). The RNN we train have depth \(L=2\) and \(5\). The layer width is task and neuron model specific, and is chosen to have a comparable amount of parameters across neuron types (App. G). We pre-train the differentiable architectures to satisfy \(\rho_{t}=1\) and \(\rho_{t}=0.5\). Instead, in the case of non-differentiable architectures we only set \(\rho_{t}=1\), given that it was difficult to converge to \(\rho_{t}=0.5\), not only with two surrogate gradient shapes, the derivative of the fast sigmoid [27] and a multi-gaussian [35], but even when learnable dampening and sharpness were used. ### _The Datasets_ We train the FFN on MNIST [38], CIFAR10 and CIFAR100 datasets [39]. We describe in the following the datasets used for the RNN. More details can be found in App. G. **Spike Latency MNIST (sl-MNIST):** the MNIST digits [38] pixels (10 classes) are rescaled between zero and one, presented as a flat vector, and each vector value \(x\) is transformed into a spike timing using the transformation \(T(x)=\tau_{eff}\log(\frac{x}{x-\vartheta})\) for \(x>\vartheta\) and \(T(x)=\infty\) otherwise, with \(\vartheta=0.2,\tau_{eff}=50\)ms [40]. The input data is a sequence of \(50\)ms, \(784\) channels (\(28\times 28\)), with one spike per row. 
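For reference, a minimal sketch of the spike-latency transform described above; the toy pixel intensities are illustrative, and a full sl-MNIST sample would apply the same map to all 784 pixels of a digit:

```python
import numpy as np

def spike_latency(x, theta=0.2, tau_eff=50.0):
    """Map pixel intensities in [0, 1] to spike times (ms): brighter pixels
    spike earlier, and pixels at or below the threshold theta never spike."""
    x = np.asarray(x, dtype=float)
    times = np.full(x.shape, np.inf)
    fires = x > theta
    times[fires] = tau_eff * np.log(x[fires] / (x[fires] - theta))
    return times

pixels = np.array([0.05, 0.25, 0.5, 1.0])   # toy intensities instead of a 28x28 digit
print(spike_latency(pixels))                # [inf, ~80.5, ~25.5, ~11.2] ms
```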
**Spiking Heidelberg Digits (SHD):** is a dataset developed to benchmark spiking neural networks [41]. It is based on the Heidelberg Digits (HD) audio dataset, which comprises 20 classes of ten spoken digits in English and German, by 12 individuals. These audio signals are encoded into spikes through an artificial model of the inner ear and parts of the ascending auditory pathway. **PennTreeBank (PTB):** is a language modelling task. The PennTreeBank dataset [42] is a large corpus of American English texts. We perform next time-step prediction at the word level. The vocabulary consists of 10K words. ### _Experimental set-up_ We pre-train on the train set, with the AdaBelief optimizer [43], with a learning rate of \(3.14\cdot 10^{-3}\) and a weight decay of \(1\cdot 10^{-4}\), until convergence to the target, for both the FFN and RNN. We train all our FFN and RNN networks with cross-entropy loss and the AdaBelief optimizer [43]. AdaBelief hyper-parameters are set to the default, as in [40, 44]. We always use the optimizer as a Lookahead Optimizer [45], with a synchronization period of 6 and a slow step size of 0.5. We early stop on validation, with a patience of 10 epochs, and report the mean and standard deviation over 4 random seeds on the test set. On PTB we use perplexity as a metric [46], the lower the better, while on sl-MNIST and SHD, we use what we call the mode accuracy, the higher the better: the network predicts the target at every timestep, and the chosen class is the one that fired the most for the longest. For more training details see App. G. ## III Results ### _Pre-training FFN to LSC improves training_ We can see in Fig. 2 the effect on performance of pre-training FFNs to satisfy the LSC, and we compare it to the classical Glorot and He initializations. We repeat the training with different learning rates, and report test accuracy after early stopping on validation loss. We find that pre-training to LSC outperforms the rest in most scenarios. It is especially interesting that even if on ReLU it tends to match in performance the theoretically equivalent He, it does so at a higher learning rate, with a very similar accuracy/learning rate curve but shifted to the right along the \(x\)-axis. The reason that there is no exact match could be a consequence of the stochasticity introduced by the mini-batch gradient-based pre-training. For the cosine activation, it improves over Glorot, but performs worse than He on CIFAR10 and CIFAR100, which could be a consequence of the cosine having a derivative of zero around input zero, which leads to a radius of the transition derivative that has a strong tendency to zero. However, notice that we always initialized the network as Glorot before pre-training to LSC, and the pre-training systematically improved performance with respect to Glorot. This seems to indicate that LSC always improves its starting initialization. ### _Additive source of exponential explosion in \(d\)-RNN_ In contrast with FFNs, when we consider deep RNNs, there is a new source of gradient exponential explosion that is additive. We see in Fig. 1a) three ways the gradient has to descend through layers and time in a \(d\)-RNN, from \(L,t\) down to \(l,t\). Each arrow represents one transition derivative \(M_{k}\). This gradient grid is analogous to the Pascal Triangle grid, and as such, it hides an additive source of exponential explosion, since the number of paths increases exponentially as we move further away from the upper right corner. 
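A tiny numerical sketch makes this concrete. Using the PascalRNN toy cell from Fig. 1, driven here by a single unit input at \((t,l)=(0,0)\) (an illustrative simplification; Fig. 1 uses Gaussian inputs), the \(\rho=1\) state grows like the number of contributing paths, while \(\rho=0.5\) remains bounded:

```python
from math import comb

def pascal_rnn(rho, T, L):
    """Toy PascalRNN cell from Fig. 1: h_{t,l} = rho*h_{t-1,l} + rho*h_{t-1,l-1},
    driven by a single unit input at (t, l) = (0, 0)."""
    h = [[0.0] * (L + 1) for _ in range(T + 1)]
    h[0][0] = 1.0
    for t in range(1, T + 1):
        for l in range(1, L + 1):
            h[t][l] = rho * h[t - 1][l] + rho * h[t - 1][l - 1]
    return h

T, L = 12, 6
for rho in (1.0, 0.5):
    print(f"rho = {rho}: h[T][L] = {pascal_rnn(rho, T, L)[T][L]:.4f}")
print("contributing paths from (0,0) to (T,L):", comb(T - 1, L - 1))
```

With \(T=12\) and \(L=6\) the \(\rho=1\) state equals the 462 contributing paths, while the \(\rho=0.5\) state stays near 0.11, consistent with the behaviour shown in Fig. 1.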
The number of shortest paths from the upper right corner to the lower left corner in the grid can be counted with the binomial coefficient \(\binom{\Delta l+\Delta t}{\Delta t}\), where \(\Delta l=L-l,\Delta t=T-t\). In fact, to prove Thm. 1 for \(\mathbf{E}\rho=1\), we count how many gradient paths descend from \(T,L\) down to \(0,l\), and mathematically prove in Lemma 2 App. C that there are sub-exponentially many when we increase either time or depth leaving the other fixed, and exponentially many when both are increased together. Fig. 1b) confirms experimentally that the binomial coefficient that counts the number of shortest gradient paths in a grid of sides \(\Delta l\) and \(T\) grows sub-exponentially i) with \(T\) when \(\Delta l\) is fixed to \(\Delta l=5\) and ii) with \(\Delta l\) when \(T\) is fixed to \(T=5\), and grows exponentially iii) when both are increased together with a fixed ratio \(T/\Delta l=100\). In fact, an exponential growth results in a straight line in a semi-log plot, as in the rightmost panel. In Fig. 1c) we see how a simple \(d\)-RNN that we call PascalRNN, and a GRU, illustrate the validity of Thm. 1. We define as PascalRNN the toy network \(\mathbf{h}_{t,l}=\rho\mathbf{h}_{t-1,l}+\rho\mathbf{h}_{t-1,l-1}\), and we show the experimental curves for \(L=10\) and \(T=100\), with 6 gaussian samples as input of mean zero and standard deviation of 2. We see that the derivative follows exactly the binomial coefficient for \(\rho=1\), and follows the constant \(1\) for \(\rho=0.5\), also found in the proof of Thm. 1. The shift between the orange and green curves has been introduced manually to ease visualization, but the curves were completely overlapping otherwise. Similarly, we see that the bounds are respected in the panel below, where an \(L=7\) stack of GRU neurons was pretrained on the SHD task to satisfy \(\rho\in\{0.5,1\}\). Therefore, both networks support the claim that \(\mathbf{E}\rho=0.5\) encourages a variance of the gradient that is constant through time, while \(\mathbf{E}\rho=1\) does not. ### _Pre-training \(d\)-RNNs to LSC improves training_ Figure 3 illustrates the impact on final performance of \(d\)-RNN network pre-training to satisfy the LSC, and how depth influences the result. Our results indicate that pre-training for stability improves performance in most scenarios. For differentiable networks, a target radius of \(\rho_{t}=0.5\) consistently produces superior outcomes over \(\rho_{t}=1\) and over the default initialization, and more markedly so for deeper networks. On the other hand, even if the non-differentiable spiking architectures struggle to converge to \(\rho_{t}=0.5\), they still benefit from a pre-training to \(\rho_{t}=1\). ## IV Discussion and Conclusions Our study has demonstrated the efficacy of pre-training to local stability as a useful strategy for preparing neural networks prior to training. This approach is particularly beneficial when dealing with architectures for which the appropriate initialization hyper-parameters are not known a priori from mathematical analysis. In the traditional approach, the practitioner must determine the optimal initialization analytically [5, 6, 7, 8], which can be time-consuming and challenging, especially for complex architectures. Consequently, practitioners often resort to trial and error to identify the best initialization strategy for a given architecture, resulting in repeated training from scratch. 
Fig. 2: **Bounds stabilization through pre-training enhances FFN learning.** We investigate the effect that pre-training to achieve our Local Stability Condition (LSC) has on learning, for 30-layer FFNs. Such depth is not necessary to solve MNIST, and our interest is rather in confirming that we are able to stabilize gradients in very deep networks. Specifically, we stabilize the upper bound and we compare our results to two well-known initialization strategies for FFN, Glorot and He. Interestingly, even when the theoretically justified He initialization is available, such as for ReLU FFNs, the LSC can match or outperform it, even if it was initialized as Glorot before pre-training. In general, no theoretically justified initialization strategies are available for new architectures or unconventional activations, such as sine and cosine. Therefore, stability pre-training becomes a convenient approach to enhance learning. Notably, stabilizing to LSC tends to outperform all other alternatives in most scenarios.

In contrast, our pre-training technique is applicable to any architecture, whether old or new, and requires only a single pre-training session to achieve stability, obviating the need for multiple rounds of trial and error. Secondly, we have extended existing theories [2, 3, 4, 5, 6, 7, 8, 9, 10, 11], such as Glorot, He, and Orthogonal initializations, to propose the Local Stability Condition (LSC). This new theory allows for the description of the gradient stability of a wide range of multi-layer recurrent networks with minimal assumptions on the distributions of parameters and data. Our approach recovers classical results in fully-connected feed-forward architectures [5, 6, 7] and provides a way to work with both upper and lower bounds to the variance of the parameter update. While it is more common to work with upper bounds, we have shown that lower bounds are also possible, although they may not be as easy to stabilize in practice. Finally, we discovered that \(\rho_{t}=0.5\) is a more desirable target local radius for deep RNNs than \(\rho_{t}=1\). This finding contradicts the conventional line of reasoning [1, 2, 12] that focused on shallow RNNs, and used the time dimension as the effective depth, which leads to only finding multiplicative sources of gradient exponential explosion. Instead, in deep RNNs, depth in combination with time introduces another source of gradient exponential explosion that is additive and cannot be effectively mitigated by a local radius of \(\rho_{t}=1\), but can be addressed by targeting \(\rho_{t}=0.5\). Given our computational constraints, we were not able to explore deeper networks, which is therefore left for future work, but our experimental results indicate that the deeper the network the more desirable \(\rho_{t}=0.5\) becomes for \(d\)-RNNs. Interestingly, the \(\rho_{t}=1\) polynomial explosion was observed empirically before [11, 12], but to our knowledge we give the first theoretical account of it, by observing that the binomial coefficient bound grows polynomially when depth is fixed. ## V Acknowledgments We thank Guillaume Bellec for many careful reads and insights on ALIF networks, and Emmanuel Calvet and Matin Azadmanesh for their feedback on the text. We thank Wolfgang Maass for the opportunity to visit their lab. We thank Terence Tao for suggesting to look at Kostlan Theorem to prove Lemma 1 and for the key insights to prove Theorem 6. 
Luca Herranz-Celotti was supported by the Natural Sciences and Engineering Research Council of Canada through the Discovery Grant from professor Jean Rouat, FRQNT and by CHIST-ERA IGLU. We thank Compute Canada for the clusters used to perform the experiments and NVIDIA for the donation of two GPUs.
2302.09368
Natural Language-conditioned Reinforcement Learning with Inside-out Task Language Development and Translation
Natural Language-conditioned reinforcement learning (RL) enables the agents to follow human instructions. Previous approaches generally implemented language-conditioned RL by providing human instructions in natural language (NL) and training a following policy. In this outside-in approach, the policy needs to comprehend the NL and manage the task simultaneously. However, the unbounded NL examples often bring much extra complexity for solving concrete RL tasks, which can distract policy learning from completing the task. To ease the learning burden of the policy, we investigate an inside-out scheme for natural language-conditioned RL by developing a task language (TL) that is task-related and unique. The TL is used in RL to achieve highly efficient and effective policy training. Besides, a translator is trained to translate NL into TL. We implement this scheme as TALAR (TAsk Language with predicAte Representation) that learns multiple predicates to model object relationships as the TL. Experiments indicate that TALAR not only better comprehends NL instructions but also leads to a better instruction-following policy that improves the success rate by 13.4% and adapts to unseen expressions of NL instructions. The TL can also be an effective task abstraction, naturally compatible with hierarchical RL.
Jing-Cheng Pang, Xin-Yu Yang, Si-Hang Yang, Yang Yu
2023-02-18T15:49:09Z
http://arxiv.org/abs/2302.09368v1
Natural Language-conditioned Reinforcement Learning with Inside-out Task Language Development and Translation ###### Abstract Natural Language-conditioned reinforcement learning (RL) enables the agents to follow human instructions. Previous approaches generally implemented language-conditioned RL by providing human instructions in natural language (NL) and training a following policy. In this _outside-in_ approach, the policy needs to comprehend the NL and manage the task simultaneously. However, the unbounded NL examples often bring much extra complexity for solving concrete RL tasks, which can distract policy learning from completing the task. To ease the learning burden of the policy, we investigate an _inside-out_ scheme for natural language-conditioned RL by developing a task language (TL) that is task-related and unique. The TL is used in RL to achieve highly efficient and effective policy training. Besides, a translator is trained to translate NL into TL. We implement this scheme as TALAR (**T**Ask **L**anguage with **p**edic**A**te **R**epresentation) that learns multiple predicates to model object relationships as the TL. Experiments indicate that TALAR not only better comprehends NL instructions but also leads to a better instruction-following policy that improves 13.4% success rate and adapts to unseen expressions of NL instruction. The TL can also be an effective task abstraction, naturally compatible with hierarchical RL. ## 1 Introduction Enabling the robot to work with humans is a hallmark of machine intelligence. Language is a vital connection between humans and machines (Kozierok et al., 2021), and it has been investigated for instructing robot execution, designing rewards, and serving as observation or action in reinforcement learning (RL) (Luketina et al., 2019). We are especially interested in developing agents that follow human instructions in this broad context. Natural language-conditioned reinforcement learning (NLC-RL) is a promising tool in this pursuit, which provides the policy with human instructions in natural language (NL) and trains the policy with RL algorithms. In this _outside-in_ learning (OIL, Figure 1-left) scheme, the policy is directly exposed to the NL instructions. Thus, the policy must comprehend the NL instructions and complete the RL tasks simultaneously. However, natural language is an unbounded representation of human instruction, which imposes an additional burden on the policy when solving concrete RL tasks. For example, to ask a robot to bring a drink, one may say: _Get me a drink_, while another may ask: _Can you take the beverage to me_? Despite having identical meanings, the two NL instructions are expressed in vastly different ways. To complete human instructions, the policy must simultaneously comprehend these diverse, unbounded NL instructions and solve the RL tasks, resulting in inefficient policy learning. In this paper, we investigate an _Inside-Out_ Learning (IOL) scheme to enable efficient and effective policy learning in NLC-RL, as depicted in Figure 1-right. The IOL develops a task language (TL) that is task-related and uniquely represents human instruction. The TL can be utilized in RL to alleviate the burden of policy learning by comprehending diverse NL expressions. In addition to developing TL, IOL trains a translator that translates NL into TL. A crucial aspect of IOL is how the task language is represented. We believe that _expressiveness_ and _conciseness_ are essential properties of language representation. 
Expressiveness ensures that the TL accurately reflects human instruction, while conciseness facilitates policy comprehension. To satisfy these two requirements, we propose representing TL with predicate representation, which is deemed expressive [2] and concise as a discrete representation. We introduce an implementation of IOL, called TALAR for **TA**sk **L**anguage with predic**A**te **R**epresentation. TALAR consists of three components: (1) a TL generator that generates TL with predicate representation, (2) a translator that translates NL into TL and (3) a policy that solves the RL tasks assigned by human instructions. Specifically, the TL generator develops TL through the identification of object relationships. To achieve this, the TL generator learns multiple (anonymous) predicates and their arguments to model the relationships between objects. The translator is trained to translate NL into TL using the variational auto-encoder [13] algorithm. With the optimized translator, TALAR trains an instruction-following policy that completes human instructions. Our contributions include the following: We propose a novel NLC-RL scheme, IOL, that enables highly efficient policy learning. IOL develops TL that serves as a unique representation of human instruction and trains the policy following TL. Besides, we present a specific IOL implementation, including a neural network architecture that automatically discovers relationships between the objects. Through our experiments in the CLEVR-Robot environment [14], we find that TALAR can better translate diverse NL expressions into a unique representation compared to traditional OIL methods. A policy can learn to complete human instructions efficiently and adapt to unseen NL instructions with the resulting TL. Moreover, the resulting TL effectively represents human instruction, providing a solid baseline for abstraction in hierarchical RL [1]. ## 2 Related Work This section begins with a summary of prior research on instruction following with RL, followed by two paragraphs discussing works pertinent to our methodology, i.e., language generation in RL and language translation. **Instruction following with RL.** Instruction-following problems require agents to perform tasks specified by NL instructions. Previous methods employ RL to train an instruction-following policy and expose the NL directly to the policy. Figure 1: An illustration of OIL and IOL schemes in NLC-RL. **Left:** OIL directly exposes the NL instructions to the policy. **Right:** IOL develops a task language, which is task-related and a unique representation of NL instructions. The solid lines represent the instruction-following process, while the dashed lines represent TL development and translation. For example, [Hill et al., 2021] encodes a single-sentence instruction with a pre-trained language model and feeds the policy with the NL encoding. [Misra et al., 2018] learns a policy that maps NL instructions to action sequences by marginalizing implied goal locations. [Chaplot et al., 2018] combines human instructions with agent observations via a multiplication-based mechanism and then pre-trains the instruction-following policy using behaviour cloning [Pomerleau, 1991]. Instead of exposing NL to the policy, [Akakzia et al., 2021] encodes NL to a manually-designed binary vector in which each element has meaning. 
Besides, the instruction-following policy has close ties to Hierarchical RL [Barto and Mahadevan, 2003] because the instructions can be naturally viewed as a task abstraction for a low-level policy [Blukis et al., 2022]. HAL [Jiang et al., 2019] takes advantage of the compositional structure of NL and makes decisions directly at the NL level to solve long-term, complex RL tasks. These previous methods either expose the unbounded NL instructions directly to the policy or encode the NL instructions to a scenario-specific manual vector, both of which have limitations. In contrast to them, we propose developing a task-related task language that is a unique representation of NL instruction and is, therefore, easily understood by the policy. **Language representation in RL.** We are interested in language representation, which is fundamental to developing task language. RL communities often consider language representation when agents learn effective message protocol to communicate with their partners [Eccles et al., 2019, Kang et al., 2020, Simoes et al., 2019, Patel et al., 2021]. Motivated by the discrete nature of human language, discrete representation has been widely used in prior research. For example, [Li et al., 2022] enables agents to communicate using discrete messages and demonstrates that discrete representation has comparable performance to continuous representation with a much smaller vocabulary size. One-hot representation [Patel et al., 2021, Lazaridou et al., 2017] and binary representation [Akakzia et al., 2021, Oliehoek and Amato, 2016] are prevalent forms of discrete language representation. For instance, [Patel et al., 2021] uses one-hot language representation to allow two agents to communicate to differentiate between images. However, these representations only employ discrete symbols and operate on a propositional level, lacking the ontological commitment of the predicate representation that the world consists of objects and their relationships [Russell and Norvig, 1995]. In this paper, we develop the task language following the discrete form of language representation while the predication representation is used. **Language translation.** In this paper, TALAR translates NL into TL, which lies in the domain of machine translation [Bahdanau et al., 2015]. In this field of study, numerous approaches have been developed [Rivera-Trigueros, 2022, Stahlberg, 2020]. Encoder-decoder [Cho et al., 2014] is a promising machine translation tool because of its ability to extract the effective features of the input sentences. For example, [Pagnoni et al., 2018] proposes to utilize a continuous latent variable as an efficient machine translation feature based on a variational auto-encoder [Kingma and Welling, 2014]. In this paper, we adhere to this class of machine translation techniques that employ an encoder-decoder structure, treating the NL as the source language and the TL as the target language. ## 3 Background **RL and NLC-RL.** A typical RL task can be formulated as a Markov Decision Process (MDP) [Puterman, 1994, Sutton and Barto, 1998], which is described as a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},P,r,\gamma,d_{0})\). Here \(\mathcal{S}\) represents the state space. \(\mathcal{A}\) is the finite action space defined by \(\mathcal{A}=\{a_{0},a_{1},\cdots,a_{|\mathcal{A}|-1}\}\). \(P\) represents the probability of transition while \(r\) represents the reward function. 
\(\gamma\) is the discount factor determining the weights of future rewards, whereas \(d_{0}\) is the initial state distribution. A policy \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) is a mapping from state space to the probability space over action space. In NLC-RL, the agent receives an NL instruction (\(L\)) that reflects the human's instruction on the agent. An instruction-following policy \(\pi(\cdot|s_{t},L)\) is trained to make decisions based on the current state \(s_{t}\) and NL instruction \(L\). The overall objective of NLC-RL is to maximize the expected return under different NL instructions: \[J=\mathbb{E}\bigg{[}\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t}|L)\big{|}s_{0} \sim d_{0},a_{t}\sim\pi(\cdot|s_{t},L)\bigg{]}. \tag{1}\] For the sake of accuracy, we use \(L_{\text{N}}\) and \(L_{\text{T}}\) to denote NL and TL, respectively. **Predicate representation** uses discrete binary vectors to represent the relationship between objects or the property of an individual object. For example, the predicate representation vector [1, 1, 0, 0, 0, 1, 0] could represent a predicate expression Pred(a,b). In this instance, Pred is a predicate that represents a relationship, and the symbols a and b are its arguments. The first code [1] in the vector indicates that the value of Pred is True (i.e., the relationship holds), whereas the two subsequent one-hot codes represent the indexes of arguments a and b, respectively. It is demonstrated that predicate representation can be attained by employing networks to output predicate values. This way, the learning system can automatically identify relationships in a block stack task [1]. In TALAR, neural networks learn both predicates and their arguments. Refer to Appendix A for supplementary discussions on predicate representation. **Natural language as input for the neural network.** NL sentences have variable lengths and cannot be fed directly into a fully connected network. A standard solution is to encode each word as an embedding [13] and loop over each embedding using a recurrent model. Besides the recurrent model, Bert [14] tokenizes the words before extracting sentence embedding features based on these tokens. Transformer [21] is the predominant model for natural language processing. Since we only need to convert NL sentences to fixed-length encoding in our experiments, we employ a pre-trained, lightweight Bert model. ## 4 Method This section presents our TALAR method for efficient policy learning in NLC-RL, which is based on the IOL scheme. We begin by introducing the task dataset for task language development and translation. _Definition 1_ (**Task dataset**).: _A task dataset \(\mathcal{D}=\{(s,s^{\prime},L_{\text{N}})_{i}\}\) consists of multiple triplets. Each triplet contains a natural language sentence \(L_{\text{N}}\) and a task state pair \((s,s^{\prime})\), where \(L_{\text{N}}\) describes the state change from \(s\) to \(s^{\prime}\) in natural language._ We use a state pair instead of a single state for the following reasons: (1) NL instruction frequently describes a state change, e.g., turn the wheel to the left; (2) it is not easy to describe a single state concisely in complex task scenarios. We let a person describe each state pair to collect the task dataset. Figure 2 illustrates how TALAR makes use of the task dataset for task language development and translation. 
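As a concrete reference for the predicate representation that the TL generator will output, the following minimal sketch reproduces the encoding described in Section 3, assuming three candidate objects so that the resulting vector matches the example [1, 1, 0, 0, 0, 1, 0]:

```python
import numpy as np

def encode_predicate(value, arg_indices, n_objects):
    """Predicate value (0/1) followed by a one-hot code per argument."""
    parts = [np.array([value])]
    for idx in arg_indices:
        one_hot = np.zeros(n_objects, dtype=int)
        one_hot[idx] = 1
        parts.append(one_hot)
    return np.concatenate(parts)

# Pred(a, b) with Pred = True, a = object 0, b = object 1, three objects in total:
print(encode_predicate(1, [0, 1], n_objects=3))   # -> [1 1 0 0 0 1 0]
```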
Figure 2: Overall training process of task language development and translation. **(a)** The overall training process. **(b)** Network architecture of the TL generator. **(c)** Architecture of one predicate module. **(d)** Network architecture of the translator. The number of predicate modules, arguments and predicate networks can be adjusted according to the task scale. The subsequent subsections will elaborate on three critical components of TALAR: TL generation in predicate representation, NL translation by recovering TL, and policy training with reinforcement learning. Appendix C presents a summary of TALAR's training procedures. ### TL Generation in Predicate Representation TALAR trains a TL generator \(g_{\theta}(s,s^{\prime})\) represented by neural networks and parameterized with \(\theta\), which takes a state pair \((s,s^{\prime})\) as the input and outputs task language \(L_{\text{T}}\). Next, we will introduce how task language is developed by describing the network structure of the TL generator and how to optimize the TL generator. **Network architecture of TL generator.** As depicted in Figure 2(b-c), the TL generator comprises \(N_{\text{pm}}\) instances of predicate modules (PM). Each PM first extracts \(N_{\text{a}}\) arguments \((\arg_{1},\arg_{2},\cdots,\arg_{N_{\text{a}}})\) according to the input state pair, and then determines the Boolean values of \(N_{\text{pn}}\) predicates, given the extracted argument list. The predicate values are concatenated with the argument list and form the task language \(L_{\text{T}}^{i}\) of the \(i\)-th module. Finally, the TL generator concatenates all PMs' output and generates the task language \(L_{\text{T}}\). Note that the number of PMs \(N_{\text{pm}}\), arguments \(N_{\text{a}}\), and predicates \(N_{\text{pn}}\) can be modified based on the RL task scale. Specifically, each PM extracts the arguments through argument networks, denoted by \(\text{ArgNet}_{i}(s,s^{\prime})\). An argument network is implemented as a fully-connected network ending with a Gumbel-Softmax activation layer (Jang et al., 2017). Through the Gumbel-Softmax, the argument network is able to output a discrete one-hot vector \(\arg_{i}\) in form like \((0,1,\cdots,0)\), which represents an anonymous object. TALAR utilizes multiple predicate networks, denoted by \(\text{PredNet}_{i}(s,s^{\prime},\arg_{1},\cdots,\arg_{N_{\text{a}}})\), to determine the Boolean values of a set of anonymous predicates. Each predicate network outputs a 0-1 value, ending with a Gumbel-Softmax layer. All these 0-1 values are concatenated together with the argument list, yielding the task language \(L_{\text{T}}^{i}\) of the \(i\)-th PM. Note that without the argument list contained in \(L_{\text{T}}^{i}\), the resulting language cannot express different objects and therefore loses its expressiveness. All predicate networks within the same PM receive the identical argument list. In the TL generator, there are multiple PMs, each possessing its argument networks, while the parameters of each predicate network PredNet\({}_{i}\) are shared across PMs. The parameter sharing here makes the \(\text{PredNet}_{i}\) in different PMs identical, requiring them to capture consistent relations among the various arguments because they accept different arguments across PMs. Finally, the total task language is represented by \(L_{\text{T}}=[L_{\text{T}}^{1},\cdots,L_{\text{T}}^{N_{\text{pm}}}]\), which is a discrete binary vector. The Gumbel-Softmax activation technique permits the differentiation of the entire TL generation procedure. 
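The following is a minimal sketch of this architecture. The use of a single linear layer per argument and predicate network, the layer sizes, and reading the 0/1 predicate value off a two-way Gumbel-Softmax are illustrative assumptions; the exact networks are those described in Figure 2 and the appendix:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PredicateModule(nn.Module):
    """One PM: argument networks pick one-hot arguments from the state pair,
    predicate networks each output a 0/1 value given those arguments."""
    def __init__(self, state_dim, n_args, n_objects, pred_nets):
        super().__init__()
        self.arg_nets = nn.ModuleList(
            [nn.Linear(2 * state_dim, n_objects) for _ in range(n_args)])
        self.pred_nets = pred_nets          # shared across all modules

    def forward(self, s, s_next, hard=True):
        pair = torch.cat([s, s_next], dim=-1)
        args = [F.gumbel_softmax(net(pair), tau=1.0, hard=hard) for net in self.arg_nets]
        arg_list = torch.cat(args, dim=-1)
        pred_in = torch.cat([pair, arg_list], dim=-1)
        # Two-way Gumbel-Softmax; the first component is kept as the 0/1 value.
        values = [F.gumbel_softmax(net(pred_in), tau=1.0, hard=hard)[..., :1]
                  for net in self.pred_nets]
        return torch.cat(values + args, dim=-1)   # L_T^i = [predicate values | arguments]

state_dim, n_args, n_objects, n_preds, n_modules = 10, 2, 5, 3, 2
shared_preds = nn.ModuleList(
    [nn.Linear(2 * state_dim + n_args * n_objects, 2) for _ in range(n_preds)])
modules = [PredicateModule(state_dim, n_args, n_objects, shared_preds)
           for _ in range(n_modules)]

s, s_next = torch.randn(4, state_dim), torch.randn(4, state_dim)
L_T = torch.cat([pm(s, s_next) for pm in modules], dim=-1)  # batch of TL vectors
print(L_T.shape, L_T[0])
```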
**Training of the TL generator.** Training the TL generator ensures that the generated TL captures the crucial aspects of a given state transition. Based on this idea, TALAR uses the Masked Language Modeling (MLM) technique (Devlin et al., 2019) to train the TL generator. MLM masks \(L_{\text{N}}\) sentences at random. For example, when the original sentence is _It is a happy day_, the masked sentence could be _It Mask a happy Mask_. Then, MLM utilizes \(L_{\text{T}}=g_{\theta}(s,s^{\prime})\) and the masked \(L_{\text{N}}\) to predict the masked words. We implement the above process using a pre-trained Bert language model (LM), which first tokenizes \(L_{\text{N}}\) into tokens \(T\). Then, TALAR selects two random token positions of \(T\) and replaces each with a unique token Mask. The TL generator is trained to optimize an MLM loss, which aims to predict the original tokens with masked tokens and task language: \[\mathcal{L}_{\text{MLM}}(\theta)=\operatorname*{\mathbb{E}}_{(s,s^{\prime},L_ {\text{N}})\sim\mathcal{D}}\left[-\sum_{i\in M}\log f(T_{i}|b(T_{\setminus M}),g_{\theta}(s,s^{\prime}))\right], \tag{2}\] where \(M\) denotes the set of the masked positions, \(T_{\setminus M}\) denotes the masked version of \(L_{\text{N}}\)'s tokens, \(T_{i}\) is the \(i\)-th token, \(b\) is the Bert model, and \(f\) is a fully-connected network. Note that \(f\) is also optimized via gradient backpropagation. We omit the notion of its parameters for simplicity. ### NL Translation by Recovering TL The objective of the translator is to translate the NL to the TL. TALAR trains the translator using variational auto-encoder (VAE) (Kingma and Welling, 2014). Specifically, given a TL \(L_{\text{T}}=g_{\theta}(s,s^{\prime})\) and corresponding NL \(L_{\text{N}}\), we expect the VAE can recover \(L_{\text{T}}\) from \(L_{\text{N}}\). Figure 2(d) presents the structure of the translator. TALAR uses a pre-trained LM to process \(L_{\text{N}}\) and a VAE to recover the task language. We let \(\widetilde{L_{\text{T}}}\) denote the TL generated by translator, \(q_{\phi_{1}}\) the VAE encoder parameterized with \(\phi_{1}\), and \(p_{\phi_{2}}\) the VAE decoder parameterized with \(\phi_{2}\). Then the VAE is trained to minimize the VAE loss: \[L_{\text{VAE}}(\phi_{1},\phi_{2})=\underset{(s,s^{\prime},L_{\text{N}})\sim \mathcal{D}}{\mathbb{E}}\left[D_{\text{KL}}(q_{\phi_{1}}(z|L_{N}),p(z))- \underset{z\sim q_{\phi_{1}}(\cdot|L_{\text{N}})}{\mathbb{E}}\left[\log p_{ \phi_{2}}(L_{\text{T}}|z)\right]\,\right], \tag{3}\] where \(L_{\text{T}}=g_{\theta}(s,s^{\prime})\), \(z\sim q_{\phi_{1}}(\cdot|L_{\text{N}})\) is the encoding generated by VAE encoder, and \(D_{\text{KL}}\) stands for KL-divergence. We choose VAE because of its capacity to learn effective latent features. However, VAE is not essential to implement IOL, as the translator can be trained using alternative supervised learning methods. We demonstrate the effectiveness of VAE empirically in Section 5.4. While some may be concerned about the translator's ability to recover \(L_{\text{T}}\) from \(L_{\text{N}}\) accurately, we emphasize that the translator is primarily responsible for recovering the key positions that reflect the NL instructions and not the entire \(L_{\text{T}}\). With the optimized translator, an instruction-following policy is trained to complete the human instructions, as described below. 
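A minimal sketch of this translator and of the loss in Eq. (3) is given below. Treating the Bert output as a fixed-size vector and using single linear layers for the encoder \(q_{\phi_{1}}\) and decoder \(p_{\phi_{2}}\) are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Translator(nn.Module):
    """VAE over a fixed-size NL encoding (e.g. a Bert sentence embedding):
    the encoder produces a latent z, the decoder predicts the binary TL."""
    def __init__(self, nl_dim, z_dim, tl_dim):
        super().__init__()
        self.enc = nn.Linear(nl_dim, 2 * z_dim)    # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, tl_dim)        # logits over the TL bits

    def forward(self, nl_enc):
        mu, logvar = self.enc(nl_enc).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(tl_logits, tl_target, mu, logvar):
    # -log p_phi2(L_T | z) for a binary TL, plus KL(q_phi1(z|L_N) || N(0, I)), as in Eq. (3).
    recon = F.binary_cross_entropy_with_logits(tl_logits, tl_target, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

nl_dim, z_dim, tl_dim, batch = 32, 8, 26, 4
model = Translator(nl_dim, z_dim, tl_dim)
nl_enc = torch.randn(batch, nl_dim)                        # stand-in for the Bert encoding of L_N
tl_target = torch.randint(0, 2, (batch, tl_dim)).float()   # stand-in for g_theta(s, s')
logits, mu, logvar = model(nl_enc)
print(vae_loss(logits, tl_target, mu, logvar))
```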
### Policy Training With Reinforcement Learning TALAR uses reinforcement learning to train an Instruction-Following Policy (IFP) \(\pi(\cdot|s,\widetilde{L_{\text{T}}})\). When the agent collects samples from the environment, the task generates a random human instruction in NL, which is then translated into the task language \(\widetilde{L_{\text{T}}}\) by the translator. Next, the IFP makes decisions for the entire episode based on the current observation and \(\widetilde{L_{\text{T}}}\) until completing the instruction or reaching the maximum timestep. The IFP can be optimized with an arbitrary RL algorithm using the samples collected from the environments. In our implementation, we use PPO (Schulman et al., 2017) for TALAR and all baselines. Note that during IFP training, the translator's parameters are fixed to prevent the translator from overfitting the current IFP. ## 5 Experiments We conduct multiple experiments to evaluate TALAR and answer the following questions: (1) Can TALAR translate diverse NL instructions into a unique representation? (Section 5.1) (2) How does TALAR compare to traditional NLC-RL approaches in the instruction-following task? (Section 5.2) (3) Can TL serve as an abstraction for hierarchical RL? (Section 5.3) (4) How does every component influence the performance of TALAR? (Section 5.4) We perform experiments in CLEVR-Robot environment (Jiang et al., 2019), as shown in Figure 3. CLEVR-Robot is an environment for object interaction based on the MuJoCo physics engine (Todorov et al., 2012). The environment contains five movable balls and an agent (silverpoint). In each trajectory, the agent aims to complete a human instruction in NL that represents moving a specific ball to one direction (i.e., one of [front, behind, left, right]) of a destination ball. For example, an NL instruction can be _Move the red ball to the left of the blue ball_, or _Can you push the yellow ball to the right of the green ball?_ There are a total of 80 distinct human instructions. We use 18 different NL sentence patterns for each human instruction to describe it, yielding 1440 different NL instructions. To acquire the task dataset, we first train a policy that could move any specified ball to a specified position with PPO algorithms. Then, this policy will collect 100,000 state transitions, each corresponding to one random ball movement. Then, each state transition is assigned an NL description. We use Bert-base-uncased (Devlin et al., 2019) as all pre-trained language models in our experiments. All experiments are performed with different random seeds five times, and the shaded area in the figures represents the standard deviation across all five trials. We refer readers to Appendix D for additional implementation details. ### Task Language Development and Translation We first verify whether the TALAR can translate diverse NL expressions into a unique representation. To answer the question, we randomly sample 10 different human instructions. Each instruction is expressed using nine NL sentence patterns for ninety NL sentences. Then, the optimized translator translates these NL sentences into TL. As depicted in Figure 3(a), we project the resulting TL onto a two-dimensional plane using t-SNE [1]. Based on the projection results, we observe that TALAR learns a unique representation of NL. This conclusion can be obtained because TL can represent different NL expressions for the same human instruction in a close area. 
As a comparison, we also project the NL encoding directly output by a pre-trained Bert model, as shown in Figure 3(b). The points produced by Bert are scattered everywhere on the plane, indicating that a pre-trained Bert model fails to represent diverse human instructions uniquely. Besides, as depicted in Figure 3(c), an OIL baseline cannot distinguish the same human instruction with different NL expressions. This result suggests that the OIL baseline treats distinct NL expressions as distinct task objectives, which could slow policy learning. We refer readers to Appendix E.2 for more experiment results about the t-SNE projections. Figure 4: The t-SNE representations of different types of NL encoding. Points with the same marker stand for the encoding of nine different NL expressions that describe the same human instruction. We add a slight noise to the overlapping points for better presentation. **(a)** The t-SNE representations of the TL output by the translator. **(b)** The encoding output by the Bert model. **(c)** The encoding output by the language encoding layer of the OIL baseline (Bert-continuous in Section 5.2). Figure 3: A visualization of the CLEVR-Robot environment in our experiments. (a) In the beginning, one NL instruction is randomly sampled as _Can you move the cyan ball in front of the blue ball?_ Then the agent executes actions to complete the instruction. (b) The task terminates upon achieving the goal or reaching the maximum timestep. In addition to the above results, we observe that the generated TL is interpretable to some extent. We use the TL generator to output the TL regarding all state transitions in the task dataset, and observe the resulting TL and its corresponding NL descriptions. Consequently, the output of the predicate network is related to the destination ball. Figure 5 presents the frequency of five destination balls when the predicate network outputs 1. PredNet\({}_{1}\) and PredNet\({}_{2}\) clearly target the blue and purple balls, respectively. However, PredNet\({}_{3}\) is more difficult to interpret than the other two networks. There could be relations other than the one with the destination ball. ### Performance of Instruction-Following Policy With the optimized translator, we train an instruction-following policy in the CLEVR-Robot task, following the training process elaborated in Section 4.3. We first introduce three datasets of different NL expressions. (1) **Training set** contains nine NL sentence patterns (i.e., 720 NL instructions) for policy learning. All agents only interact with the training dataset when optimizing the policy. (2) **Testing set** contains nine NL sentence patterns (i.e., 720 NL instructions) that are different from the training set. (3) **Error-added set** contains the same 720 NL instructions as the training set, with errors added to each NL instruction, such as the word [the] being omitted. See Appendix D.1 for information regarding these three datasets and evaluation tasks. **Baselines for comparison**. We consider multiple baselines that are built upon the OIL architecture (i.e., standard NLC-RL): (1) **Bert-binary** processes the NL with a pre-trained Bert LM. The language encoding from Bert is processed into a binary vector by a fully-connected network. This binary vector's size equals that of the TL generated by TALAR. To ensure differentiability, we use a reparameterization trick [Tokui and Sato, 2016] that converts the continuous vector into a binary vector. 
(2) **Bert-continuous** is similar to Bert-binary, except that it replaces the binary vector with a continuous vector of the same size. (3) **One-hot** encodes all possible NL instructions (including training, testing, and error-added) into a three-dimensional tensor, where each instruction has its own position. **Experimental results.** Figure 6 presents the training curves on the instruction-following task with various NL instruction datasets, and Table 1 summarizes the final success rates of all methods. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Method / Dataset & Training & Testing & Error-added \\ \hline TALAR & **99.9\(\pm\) 0.1** & **78.3 \(\pm\) 3.1** & **76.3 \(\pm\) 3.6** \\ \hline One-hot & 86.5 \(\pm\) 3.0 & 47.6 \(\pm\) 2.5 & 62.1 \(\pm\) 4.1 \\ \hline Bert-binary & 64.0 \(\pm\) 2.5 & 64.2 \(\pm\) 5.0 & 67.8 \(\pm\) 2.9 \\ \hline Bert-continuous & 60.7 \(\pm\) 1.9 & 54.0 \(\pm\) 1.3 & 57.5 \(\pm\) 2.1 \\ \hline \hline \end{tabular} \end{table} Table 1: A summary of the final success rate (%) in the instruction-following task with different sets of NL expressions. The results are averaged over 5 seeds, and each entry is evaluated over 40 episodes. Figure 5: Frequency of five destination balls when a predicate network outputs a value of 1. Each bar stands for the frequency of the ball with a certain colour. Overall, TALAR acquires a better instruction-following policy that increases the success rate by 13.4% relative to the OIL baselines and adapts to previously unseen NL expressions of instructions. On the training NL instruction set, TALAR achieves a success rate greater than 99% within 2M timesteps, significantly faster than the two Bert-based baselines. Combined with the results in Section 5.1, this shows that the translator can effectively convert diverse NL expressions to a unique TL representation, which enables efficient policy learning. Besides, TALAR achieves a success rate greater than 76% on both the testing and error-added sets, demonstrating a greater generalization capacity than the baselines. While One-hot performs adequately on the training NL set, its generalization to the testing and error-added NL sets is limited. The two Bert-based baselines improve more slowly than TALAR on the training NL instruction set, which can be attributed to the fact that they simultaneously train a policy while acquiring skills and understanding NL. Bert's encoding of different NL expressions can be highly diverse, which makes the RL task more complex for the OIL baselines; consequently, the Bert-based baselines improve more slowly than TALAR during the training process. ### TL as an Abstraction for Hierarchical RL Previous experiments demonstrate that the resulting TL is a unique representation of the various NL expressions, which assists a policy in efficiently learning to follow NL instructions. In this section, we further explore the applicability of the generated TL by examining whether it can serve as an effective goal abstraction for hierarchical RL. Specifically, we train a high-level policy that outputs a TL, instructing the IFP to complete a low-level task. We consider a baseline **HAL** (Jiang et al., 2019) for comparison. HAL takes advantage of the compositional structure of NL and directly makes decisions on the NL level. Following its original implementation, the high-level policy of HAL outputs the index of an NL instruction. To ensure a fair comparison, HAL uses the IFP trained by TALAR as a low-level policy in our experiment. The high-level policies are trained with the PPO algorithm. 
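As a structural illustration of this setup, the sketch below shows a high-level policy emitting a TL-sized goal vector for a frozen, pre-trained IFP. The network shapes, the `HighLevelPolicy` and `InstructionFollowingPolicy` classes, and all dimensions are illustrative assumptions rather than the paper's actual architectures; in practice both levels would be trained with PPO as described above.

```python
# Minimal PyTorch sketch of the hierarchy: the high-level policy outputs a TL goal,
# and the frozen low-level IFP acts conditioned on (observation, TL goal).
# All modules and sizes are illustrative stand-ins, not the paper's models.
import torch
import torch.nn as nn

OBS_DIM, TL_DIM, ACT_DIM = 10, 16, 4

class HighLevelPolicy(nn.Module):
    """Maps the current observation to a TL-sized goal vector (values in [0, 1],
    mimicking a binary/predicate-style TL before thresholding)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, TL_DIM))

    def forward(self, obs):
        return torch.sigmoid(self.net(obs))

class InstructionFollowingPolicy(nn.Module):
    """Low-level policy conditioned on the observation and a TL vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + TL_DIM, 64), nn.ReLU(), nn.Linear(64, ACT_DIM))

    def forward(self, obs, tl):
        return self.net(torch.cat([obs, tl], dim=-1))

high_level = HighLevelPolicy()
ifp = InstructionFollowingPolicy()
for p in ifp.parameters():      # the pre-trained IFP stays frozen;
    p.requires_grad_(False)     # only the high-level policy would be updated by RL

obs = torch.randn(1, OBS_DIM)
tl_goal = high_level(obs)             # high-level decision: a TL goal
action_logits = ifp(obs, tl_goal)     # low-level step conditioned on that goal
print(tl_goal.shape, action_logits.shape)
```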
We consider a long-term task based on the CLEVR-Robot environment, namely object arrangement, as shown in Figure 7(a). The task objective of object arrangement is to arrange objects to satisfy all ten constraints that are implicitly contained in the task. Figure 7(b) presents the comparison results. The high-level policy that uses TL as a low-level goal abstraction improves significantly faster than the one using NL. This result shows that TL can be a helpful goal abstraction naturally compatible with hierarchical RL. ### Ablation Study We further conduct ablation experiments to verify how each component affects the performance of TALAR. Figure 6: Training curves of different methods on three NL instruction datasets. The x-axis represents the total timesteps the agent interacts with the environment, and the y-axis represents the success rate of completing instructions. The shaded area stands for the standard deviation over five random trials. **Predicate representation.** To evaluate the efficacy of the predicate representation in TALAR, we replace it with binary/continuous vectors and derive two representations of TL, TL-binary and TL-continuous. In TL-binary/TL-continuous, a multi-layer perceptron network outputs the TL vector directly. Figure 8 shows the t-SNE projection of different kinds of TL representations. The results are computed based on the testing NL instruction set. For each human instruction with different NL expressions, the points produced by TL-predicate are more concentrated than those of TL-binary and TL-continuous. These results demonstrate the effectiveness of the predicate representation for developing TL. Besides, Table 2 displays the final success rate of IFPs trained with various TL representations. Overall, all three types of representations are adequate for achieving a high task success rate on the training NL instruction set. However, the predicate representation is more adaptable to unseen NL expressions and has a higher task success rate on the testing/error-added NL instruction sets. **VAE of the translator.** TALAR employs a VAE for translator training. To demonstrate its efficacy, we introduce TALAR-MLP, which replaces the VAE in TALAR with a multi-layer perceptron (MLP) network and trains the translator using a supervised learning loss. Then, we train an IFP using the MLP translator, and the experiment results are depicted in Figure 9. Figure 8: The t-SNE projections of the task language in different kinds of representations, with the points generated from the testing NL instruction set. Figure 7: Experiments in a long-term object arrangement task. (a) A snapshot of successful object arrangement, which aims to arrange objects to satisfy all 10 human instructions at the same time. (b) Training curves of different goal abstractions in object arrangement. Despite having comparable success rates on the training set, TALAR-VAE outperforms TALAR-MLP on the testing NL instruction set by 8.4% in success rate. This result suggests that the VAE is advantageous when training a translator to generalize to unseen NL expressions. **Number of predicate modules and predicate networks.** We conduct experiments to examine how the number of predicate modules/networks (i.e., \(N_{\text{pm}}\) and \(N_{\text{pn}}\)) in TALAR affects the performance. In our experiments, \(N_{\text{pm}}\) is selected from [1, 2, 4], while \(N_{\text{pn}}\) is selected from [2, 4, 6]. Figure 10 shows the experiment results. 
In general, greater \(N_{\text{pm}}\) and \(N_{\text{pn}}\) result in improved performance on the training NL instruction set (see Figure 10(a)). However, the experimental performance on the testing/error-added sets is quite the opposite, as shown in Figure 10(c) when \(N_{\text{pm}}=4\) and \(N_{\text{pn}}=6\). This result could be attributed to the fact that, as the number of predicate modules and networks increases, the representation of TL becomes more complex (i.e., the vector size increases), making it more difficult for the policy to follow the TL. Besides, we also observe that, within a specific range of values (\(N_{\text{pm}}\leq 2\) and \(N_{\text{pn}}\leq 4\)), larger \(N_{\text{pn}}\) and \(N_{\text{pm}}\) bring better performance. These results serve as a guide for selecting appropriate hyper-parameters. Due to space constraints, experiments on TALAR with a varying number of argument networks are presented in Appendix E.4. ## 6 Conclusion This paper focuses on the topic of NLC-RL. We suggest that NL is an unbounded representation of human instruction, thereby imposing a substantial additional burden on the policy when solving RL tasks. To alleviate this burden, we investigate a new IOL scheme for NLC-RL by developing TL, which is task-related and a unique representation of human instruction. Through our experiments, we verify that the resulting TL can uniquely represent human instructions with diverse NL expressions and is interpretable to some extent. Besides, the policy following TL can quickly learn to complete the instructions with a high success rate and adapts to previously unseen NL expressions. Moreover, the resulting TL is an effective goal abstraction for a low-level policy that serves as the basis for hierarchical RL. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline TL rep. / Dataset & Training & Testing & Error-added \\ \hline Predicate & **99.9** & **78.3** & **76.3** \\ \hline Binary & **99.8** & 77.1 & 75.7 \\ \hline Continuous & **99.8** & **78.1** & 66.3 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparisons of different representations of TL. The success rate (%) is averaged over 5 seeds. Figure 9: Training curves of IFP with different structures of the translator. See Appendix E.5 for the training curves on the error-added set. Although TALAR can effectively train a competent instruction-following policy, there are limitations. TALAR develops the task language using a static task dataset and, therefore, cannot be directly applied to an open-environment task. It is possible to mitigate this issue by dynamically extending the task dataset and fine-tuning the TL generator/translator during the policy learning process. Besides, TALAR requires a manual reward function for policy training, which may be inaccessible if the reward design is complex. Fortunately, there are well-validated methods for solving sparse-reward problems (Andrychowicz et al., 2017; Nair et al., 2018; Riedmiller et al., 2018), which are an effective substitute for the manual reward function. Finally, it would be interesting to involve the basic properties of predicate relationships (such as transitivity, reflexivity, and symmetry) when training the TL generator, which would make the resulting TL more meaningful and self-contained. We hope future research will investigate these intriguing questions and make strides toward training agents that interact with humans more effectively. Figure 10: Ablation study on the number of predicate modules/networks. 
The values in the heat map represent the success rate of IFPs trained for 2M timesteps, with different parameter configurations of \(N_{\text{pm}}\) and \(N_{\text{pn}}\).
2301.10369
Exact Fractional Inference via Re-Parametrization & Interpolation between Tree-Re-Weighted- and Belief Propagation- Algorithms
The computational complexity of inference -- required to compute the partition function, $Z$, of an Ising model over a graph of $N$''spins" -- is most likely exponential in $N$. Efficient variational methods, such as Belief Propagation (BP) and Tree Re-Weighted (TRW) algorithms, compute $Z$ approximately by minimizing the respective (BP- or TRW-) free energy. We generalize the variational scheme by building a $\lambda$-fractional interpolation, $Z^{(\lambda)}$, where $\lambda=0$ and $\lambda=1$ correspond to TRW- and BP-approximations, respectively. This fractional scheme -- coined Fractional Belief Propagation (FBP) -- guarantees that in the attractive (ferromagnetic) case $Z^{(TRW)} \geq Z^{(\lambda)} \geq Z^{(BP)}$, and there exists a unique (``exact") $\lambda_*$ such that $Z=Z^{(\lambda_*)}$. Generalizing the re-parametrization approach of \citep{wainwright_tree-based_2002} and the loop series approach of \citep{chertkov_loop_2006}, we show how to express $Z$ as a product, $\forall \lambda:\ Z=Z^{(\lambda)}{\tilde Z}^{(\lambda)}$, where the multiplicative correction, ${\tilde Z}^{(\lambda)}$, is an expectation over a node-independent probability distribution built from node-wise fractional marginals. Our theoretical analysis is complemented by extensive experiments with models from Ising ensembles over planar and random graphs of medium- and large-sizes. The empirical study yields a number of interesting observations, such as the ability to estimate ${\tilde Z}^{(\lambda)}$ with $O(N^{2::4})$ fractional samples and suppression of $\lambda_*$ fluctuations with an increase in $N$ for instances from a particular random Ising ensemble. We also verify and discuss the applicability of this approach to the problem of image de-noising.
Hamidreza Behjoo, Michael Chertkov
2023-01-25T00:50:28Z
http://arxiv.org/abs/2301.10369v3
# Exact Fractional Inference via Re-Parametrization & Interpolation ###### Abstract Inference efforts - required to compute partition function, \(Z\), of an Ising model over a graph of \(N\) "spins" - are most likely exponential in \(N\). Efficient variational methods, such as Belief Propagation (BP) and Tree Re-Weighted (TRW) algorithms, compute \(Z\) approximately minimizing respective (BP- or TRW-) free energy. We generalize the variational scheme building a \(\lambda\)- fractional-homotopy, \(Z^{(\lambda)}\), where \(\lambda=0\) and \(\lambda=1\) correspond to TRW- and BP-approximations, respectively, and \(Z^{(\lambda)}\) decreases with \(\lambda\) monotonically. Moreover, this fractional scheme guarantees that in the attractive (ferromagnetic) case \(Z^{(TRW)}\geq Z^{(\lambda)}\geq Z^{(BP)}\), and there exists a unique ("exact") \(\lambda_{*}\) such that, \(Z=Z^{(\lambda_{*})}\). Generalizing the re-parametrization approach of (Wainwright et al., 2001) and the loop series approach of (Chertkov & Chernyak, 2006a), we show how to express \(Z\) as a product, \(\forall\lambda:\ Z=Z^{(\lambda)}\mathcal{Z}^{(\lambda)}\), where the multiplicative correction, \(\mathcal{Z}^{(\lambda)}\), is an expectation over a node-independent probability distribution built from node-wise fractional marginals. Our theoretical analysis is complemented by extensive experiments with models from Ising ensembles over planar and random graphs of medium- and large- sizes. The empirical study yields a number of interesting observations, such as (a) ability to estimate \(\mathcal{Z}^{(\lambda)}\) with \(O(N^{4})\) fractional samples; (b) suppression of \(\lambda_{*}\) fluctuations with increase in \(N\) for instances from a particular random Ising ensemble.
* We extend the ideas of parameterized homotopy, interpolating between BP (Yedidia et al., 2001, 2005) and TRW (Wainwright, 2002; Wainwright et al., 2003; 2005), in the spirit of the fractional BP (Wiegerinck & Heskes, 2002; Chertkov & Yedidia, 2013), and therefore introducing a broader family of variational approximations. * We utilize and generalize re-parametrization (Wainwright et al., 2001), gauge transformation and loop calculus (Chertkov & Chernyak, 2006a,b; Chertkov et al., 2020) techniques, as well as the combination of the two (Willsky et al., 2007). * Our approach is also related to development of the MCMC techniques with polynomial guarantees, the so-called Fully Randomized Polynomial Schemes (FPRS), developed specifically for Ising models of a specialized, e.g. attractive (Jerrum & Sinclair, 1993) and zero-field, planar (Gomez et al., 2010; Ahn et al., 2016) types. ### This Manuscript Contribution We introduce fractional variational approximation interpolating between, by now classical, Tree Re-Weighted (TRW) and Belief Propagation (BP) cases. The fractional free energy, \(\bar{F}^{(\lambda)}=-\log Z^{(\lambda)}\), defined as minus logarithm of the fractional approximation to the exact partition function, \(Z=\exp(-\bar{F})\), requires solving an optimization problem, which is achieved practically by running a fractional version of one of the standard message-passing algorithm. Basic definitions, including problem formulation for the Ising models and variational formulation in terms of the node and edge beliefs (proxies for the respective exact marginal probabilities), are given in Section 2. Assuming that the fractional message-passing algorithm converges we study dependence of the Bethe Free energy on the fractional parameter, \(\lambda\), and relation between the exact value of the free energy (minus logarithm of the exact partition function) and the fractional free energy. We report the following theoretical results: * We show in Section 3 that \(\bar{F}^{(\lambda)}\) is a continuous and monotone function of \(\lambda\) (Theorem 3.1 proved in Appendix B) which is also convex in \(\lambda\) (Theorem 3.2 proved in Appendix C). * Our main theoretical result, Theorem 4.1, presented in Section 4 and proven in Appendix D, states that the exact partition function can be expressed as a product of the variational free energy and of the multiplicative correction, \(Z=Z^{(\lambda)}\mathcal{Z}^{(\lambda)}\). 
The latter multiplicative correction term, \(\mathcal{Z}^{(\lambda)}\), is stated as an explicit expectation of an expression over a well-defined "mean-field" probability distribution, where both the expression and the "mean-field" probability distribution are stated explicitly in terms of the fractional node and edge beliefs. The theory is extended with experiments reported in Section 5. Here we show, in addition to confirming our theoretical statements (and thus validating our simulation settings), that: * Dependence of \(\bar{F}^{(\lambda)}\) and \(\log\mathcal{Z}^{(\lambda)}\) on \(\lambda\) is of a phase transition type when we move from the TRW regime at \(\lambda=0\) to the BP regime at \(\lambda>\bar{\lambda}\). * Evaluating \(Z^{(\lambda)}\mathcal{Z}^{(\lambda)}\) at different values of \(\lambda\) and confirming that the result is independent of \(\lambda\) suggests a novel approach to a reliable and efficient estimate of the exact \(Z\). * Analyzing ensembles of the attractive Ising Models over graphs of size \(N\), we observe that fluctuations of the value of \(\lambda_{*}\) within the ensemble, where \(Z^{(\lambda_{*})}=Z\), decrease dramatically with increase in \(N\). This observation suggests that estimating \(\lambda_{*}\) for an instance from the ensemble allows efficient approximate evaluation of \(Z\) for any other instance from the ensemble. * Studying the sampling procedure to estimate \(\mathcal{Z}^{(\lambda)}\), we observe that the number of samples required for the estimate is either independent of the system size, \(N\), or possibly grows relatively weakly with \(N\). This observation confirms that our approach to estimation of \(Z\), consisting in evaluation of \(Z^{(\lambda)}\) by message-passing, followed by drawing a small number of samples to estimate the correction, \(\mathcal{Z}^{(\lambda)}\), is sound. * Analysis of the mixed Ising ensembles (where attractive and repulsive edges alternate) suggests that for instances with sufficiently many repulsive edges finding \(\lambda_{*}\in[0,1]\) may not be feasible. We have a brief discussion of conclusions and the path forward in Section 6. ## 2 Technical Preliminaries ### Ising Models: the formulation Graphical Models (GM) are the result of a marriage between probability theory and graph theory designed to express a class of high-dimensional probability distributions which factorize in terms of products of lower dimensional factors. The Ising model is an exemplary GM defined over an undirected graph, \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). The Ising Model is stated in terms of binary variables, \(x_{a}=\pm 1\), and singleton factors, \(h_{a}\in\mathbb{R}\), associated with nodes of the graph, \(a\in\mathcal{V}\), and pair-wise factors, \(J_{ab}\in\mathbb{R}\), associated with edges of the graph, \((a,b)\in\mathcal{E}\). 
The probability distribution of the Ising model observing a state, \(\mathbf{x}=(x_{a}|a\in\mathcal{V})\), is \[p(\mathbf{x}|\mathbf{J},\mathbf{h})=\frac{\exp\left(-E\left(\mathbf{x};\mathbf{J},\mathbf{h}\right)\right)}{Z(\mathbf{J},\mathbf{h})}, \tag{1}\] \[Z(\mathbf{J},\mathbf{h})\doteq\sum_{\mathbf{x}\in\{\pm 1\}^{|\mathcal{V}|}}\exp\left(-E\left(\mathbf{x};\mathbf{J},\mathbf{h}\right)\right), \tag{2}\] \[E\left(\mathbf{x};\mathbf{J},\mathbf{h}\right)\doteq\sum_{(a,b)\in\mathcal{E}}E_{ab}(x_{a},x_{b}), \tag{3}\] \[\forall(a,b)\in\mathcal{E}:\ E_{ab}=-J_{ab}x_{a}x_{b}-(h_{a}x_{a}+h_{b}x_{b})/2, \tag{4}\] where \(\mathbf{J}\doteq(J_{ab}|(a,b)\in\mathcal{E})\), \(\mathbf{h}=(h_{a}|a\in\mathcal{V})\) are the pair-wise and singleton vectors, assumed given, \(E(\mathbf{x};\mathbf{J},\mathbf{h})\) is the energy function and \(Z(\mathbf{J},\mathbf{h})\) is the partition function. Solving the Ising model inference problem means computing \(Z\), which, generally, requires efforts which are exponential in \(N=|\mathcal{V}|\). ### Exact Variational Formulation The exact variational approach to computing \(Z\) consists in restating Eq. (2) in terms of the following Kullback-Leibler distance between \(\exp(-E(\mathbf{x};\mathbf{J},\mathbf{h}))\) and a probe probability distribution, \(\mathcal{B}(\mathbf{x})\in[0,1]^{|\mathcal{V}|}\), \(\sum_{\mathbf{x}}\mathcal{B}(\mathbf{x})=1\), called the belief: \[\bar{F}=-\log Z=\min_{\mathcal{B}(\mathbf{x})}\sum_{\mathbf{x}}\left(E(\mathbf{x})\mathcal{B}(\mathbf{x})+\mathcal{B}(\mathbf{x})\log\mathcal{B}(\mathbf{x})\right), \tag{5}\] where \(\bar{F}\) is also called (following widely accepted physics terminology) the free energy. The exact variational formulation (5) is the starting point for approximate variational formulations, such as BP (Yedidia et al., 2005) and TRW (Wainwright & Jordan, 2007), stated solely in terms of the marginal beliefs associated with nodes and edges, respectively: \[\forall a\in\mathcal{V},\ \forall x_{a}:\ \mathcal{B}_{a}(x_{a})\doteq\sum_{\mathbf{x}\setminus x_{a}}\mathcal{B}(\mathbf{x}), \tag{6}\] \[\forall(a,b)\in\mathcal{E},\ \forall x_{a},x_{b}:\ \mathcal{B}_{ab}(x_{a},x_{b})\doteq\sum_{\mathbf{x}\setminus(x_{a},x_{b})}\mathcal{B}(\mathbf{x}). \tag{7}\] Moreover, the _fractional_ approach developed in this manuscript provides a variational formulation in terms of the marginal probabilities generalizing (and, in fact, interpolating between) the respective BP and TRW approaches. Therefore, we now turn to stating the fractional variational formulation. ### Fractional Variational Formulation Let us introduce a fractional-, or \(\lambda\)-, _reparametrization_ of the belief (a proxy for the probability distribution of \(\mathbf{x}\)) \[\mathcal{B}^{(\lambda)}(\mathbf{x})=\frac{\prod_{\{a,b\}\in\mathcal{E}}\left(\mathcal{B}_{ab}(x_{a},x_{b})\right)^{\rho^{(\lambda)}_{ab}}}{\prod_{a\in\mathcal{V}}\left(\mathcal{B}_{a}(x_{a})\right)^{\sum_{b\sim a}\rho^{(\lambda)}_{ab}-1}}, \tag{8}\] where \(b\sim a\) is a shortcut notation for \(b\) such that, given \(a\in\mathcal{V}\), \((a,b)\in\mathcal{E}\). Here in Eq. (8), \(\rho^{(\lambda)}_{ab}\) is the \(\lambda\)-parameterized edge appearance probability \[\rho^{(\lambda)}_{ab}=\rho_{ab}+\lambda(1-\rho_{ab}),\quad\lambda\in[0,1], \tag{9}\] which is expressed via the \(\lambda=0\) edge appearance probability, \(\rho_{ab}\), dependent on the weighted set of the spanning trees, \(\mathcal{T}\doteq\{T\}\), of the graph according to the following TRW rules (Wainwright & Jordan, 2007): \[\forall(a,b)\in\mathcal{E}:\ \rho_{ab}=\sum_{T\in\mathcal{T}\ \text{s.t.}\ (a,b)\in T}\rho_{T},\qquad\sum_{T\in\mathcal{T}}\rho_{T}=1. \tag{10}\] A number of remarks are in order. First, \(\lambda=1\) corresponds to the case of BP. Then Eq. (8) is exact in the case of a tree graph, but it can also be considered as a (loopy) BP approximation in general. Second, and as mentioned above, \(\lambda=0\) corresponds to the case of TRW. Third, the newly introduced (joint) belief is not globally consistent, i.e. \(\sum_{\mathbf{x}}\mathcal{B}^{(\lambda)}(\mathbf{x})\neq 1\) for any \(\lambda\), including the \(\lambda=0\) (TRW) and \(\lambda=1\) (BP) cases. Substituting Eq. (8) into Eq. (5) we arrive at the following fractional approximation to the exact free energy, stated as an optimization over all the node and edge marginal beliefs, \(\mathcal{B}\doteq(\mathcal{B}_{ab}(x_{a},x_{b})|\forall\{a,b\}\in\mathcal{E},\ x_{a},x_{b}=\pm 1)\cup(\mathcal{B}_{a}(x_{a})|\forall a\in\mathcal{V},\ x_{a}=\pm 1)\): \[\bar{F}^{(\lambda)}\doteq\min_{\mathcal{B}\in\mathcal{D}}F^{(\lambda)}(\mathcal{B}),\quad F^{(\lambda)}(\mathcal{B})\doteq E(\mathcal{B})-H^{(\lambda)}(\mathcal{B}), \tag{11}\] \[E(\mathcal{B})\doteq\sum_{(a,b)\in\mathcal{E}}\sum_{x_{a},x_{b}=\pm 1}E_{ab}(x_{a},x_{b})\mathcal{B}_{ab}(x_{a},x_{b}), \tag{12}\] \[H^{(\lambda)}(\mathcal{B})\doteq-\sum_{(a,b)\in\mathcal{E}}\rho^{(\lambda)}_{ab}\sum_{x_{a},x_{b}=\pm 1}\mathcal{B}_{ab}(x_{a},x_{b})\log\mathcal{B}_{ab}(x_{a},x_{b})+\sum_{a\in\mathcal{V}}\Big{(}\sum_{b\sim a}\rho^{(\lambda)}_{ab}-1\Big{)}\sum_{x_{a}=\pm 1}\mathcal{B}_{a}(x_{a})\log\mathcal{B}_{a}(x_{a}), \tag{13}\] where the optimization domain, \(\mathcal{D}\), consists of the beliefs satisfying the marginalization and normalization constraints, \[\mathcal{D}\doteq\left\{\mathcal{B}\ \Big{|}\ \mathcal{B}_{a}(x_{a})=\sum_{x_{b}=\pm 1}\mathcal{B}_{ab}(x_{a},x_{b}),\ \forall a\in\mathcal{V},\ \forall b\sim a,\ \forall x_{a}=\pm 1;\quad\sum_{x_{a}=\pm 1}\mathcal{B}_{a}(x_{a})=1,\ \forall a\in\mathcal{V}\right\}. \tag{14}\] The optimal beliefs, and thus \(\bar{F}^{(\lambda)}\), are computed in practice by a fractional message-passing algorithm; expressing the result via the messages, \(\mu^{(\lambda)}_{a\to b}(\cdot)\), and utilizing the formulas from the Appendices A.1, A.2, the fractional partition function is given by \[Z^{(\lambda)}=\exp\left(-\bar{F}^{(\lambda)}\right)=\exp\left(-F^{(\lambda)}(\mathcal{B}^{(\lambda)})\right) \tag{15}\] \[=\prod_{\{a,b\}\in\mathcal{E}}\Big{(}\sum_{x_{a},x_{b}}\exp\left(-\frac{E_{ab}(x_{a},x_{b})}{\rho^{(\lambda)}_{ab}}\right)\left(\mu^{(\lambda)}_{b\to a}(x_{a})\right)^{\frac{\sum_{c\sim a}\rho^{(\lambda)}_{ac}}{\rho^{(\lambda)}_{ab}}}\left(\mu^{(\lambda)}_{a\to b}(x_{b})\right)^{\frac{\sum_{c\sim b}\rho^{(\lambda)}_{bc}-1}{\rho^{(\lambda)}_{ab}}}\Big{)}^{\rho^{(\lambda)}_{ab}}\times\prod_{a\in\mathcal{V}}\left(\sum_{x_{a}}\prod_{b\sim a}\mu^{(\lambda)}_{b\to a}(x_{a})\right)^{1-\sum_{c\sim a}\rho^{(\lambda)}_{ac}}.\] ## 3 Properties of the Fractional Free Energy Given the construction of the fractional free energy, described above in Section 2.3 and also detailed in Appendix A, we are ready to make the following statements about the fractional free energy. **Theorem 3.1**.: _[Monotonicity of the Fractional Free Energy] Assuming \(\boldsymbol{\rho}\doteq(\rho_{ab}|(a,b)\in\mathcal{E})\) is fixed, \(\bar{F}^{(\lambda)}\) is a continuous, monotone function of \(\lambda\)._ Proof.: See Appendix B. 
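To make the newly introduced objects concrete, the following small sketch (not from the paper) evaluates \(\rho^{(\lambda)}_{ab}\) of Eq. (9) and the fractional free energy \(F^{(\lambda)}(\mathcal{B})\) of Eqs. (11)-(13) for given beliefs on a toy triangle graph. The beliefs and couplings are illustrative stand-ins; in the paper the beliefs are obtained by minimizing \(F^{(\lambda)}\) over \(\mathcal{D}\).

```python
# Minimal sketch: evaluate the fractional free energy F^(lambda)(B) of Eqs. (11)-(13)
# for given node/edge beliefs on a toy graph. Beliefs below are illustrative stand-ins.
import numpy as np

def rho_lambda(rho0: float, lam: float) -> float:
    """Eq. (9): interpolate the edge appearance probability between TRW (lam=0) and BP (lam=1)."""
    return rho0 + lam * (1.0 - rho0)

def fractional_free_energy(edges, J, h, B_node, B_edge, rho0, lam):
    """F^(lambda)(B) = E(B) - H^(lambda)(B); spin values +1, -1 map to indices 0, 1."""
    spin = np.array([1.0, -1.0])
    E_term, H_term = 0.0, 0.0
    degree_sum = {a: 0.0 for a in B_node}            # sum_{b~a} rho^(lambda)_ab for every node a
    for (a, b) in edges:
        rab = rho_lambda(rho0[(a, b)], lam)
        degree_sum[a] += rab
        degree_sum[b] += rab
        Bab = B_edge[(a, b)]                         # 2x2 belief over (x_a, x_b)
        # pairwise energy E_ab = -J x_a x_b - (h_a x_a + h_b x_b)/2, Eq. (4)
        Eab = (-J[(a, b)] * np.outer(spin, spin)
               - 0.5 * (h[a] * spin[:, None] + h[b] * spin[None, :]))
        E_term += np.sum(Eab * Bab)
        H_term += -rab * np.sum(Bab * np.log(Bab))
    for a, Ba in B_node.items():
        H_term += (degree_sum[a] - 1.0) * np.sum(Ba * np.log(Ba))
    return E_term - H_term

# Toy triangle graph with uniform stand-in beliefs (consistent and normalized).
edges = [(0, 1), (1, 2), (0, 2)]
J = {e: 0.5 for e in edges}
h = {a: 0.1 for a in range(3)}
rho0 = {e: (3 - 1) / 3 for e in edges}               # edge-uniform rule: (|V|-1)/|E| = 2/3
B_node = {a: np.array([0.5, 0.5]) for a in range(3)}
B_edge = {e: np.full((2, 2), 0.25) for e in edges}
for lam in (0.0, 0.5, 1.0):                          # TRW -> fractional -> BP entropy weighting
    print(f"lambda={lam}: F = {fractional_free_energy(edges, J, h, B_node, B_edge, rho0, lam):.4f}")
```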
**Theorem 3.2**.: _[Convexity of the Fractional Free Energy] Assuming \(\boldsymbol{\rho}\doteq(\rho_{ab}|(a,b)\in\mathcal{E})\) is fixed and the model is attractive, \(\bar{F}^{(\lambda)}\) is a convex function of \(\lambda\)._ Proof.: See Appendix C. Notice that all the statements of this manuscript so far are made for arbitrary Ising models, i.e. without any restrictions on the graph, the vector of pair-wise interactions, \(\boldsymbol{J}\), and the vector of singleton biases, \(\boldsymbol{h}\). It appears that if the discussion is limited to attractive (ferromagnetic) Ising models, \(\forall(a,b)\in\mathcal{E}:\;J_{ab}\geq 0\), the following statement becomes a corollary of Theorem 3.1. **Lemma 3.3**.: _[Exact Fractional] In the case of an attractive Ising model and any fixed \(\boldsymbol{\rho}\) there exists \(\lambda_{*}\in[0,1]\) such that \(Z^{(\lambda_{*})}=Z\)._ Proof.: Recall that by construction, \(Z^{(\lambda=1)}\leq Z\), as proven in (Ruozzi, 2012). In words, the partition function computed within the Bethe (BP) approximation results in a lower bound to the exact partition function. On the other hand, we know from (Wainwright & Jordan, 2007), and also by construction, that \(Z^{(\lambda=0)}\geq Z\), i.e. the TRW estimate of the partition function provides an upper bound to the exact partition function. These lower and upper bounds, combined with the monotonicity of \(Z^{(\lambda)}\) stated in Theorem 3.1, result in the desired statement. ## 4 Fractional Re-Parametrization for Exact Inference **Theorem 4.1**.: _[Exact Relation Between \(Z\) and \(Z^{(\lambda)}\)]_ \[Z=Z^{(\lambda)}\mathcal{Z}^{(\lambda)}, \tag{16}\] \[\mathcal{Z}^{(\lambda)}\doteq\sum_{\boldsymbol{x}}\frac{\prod\limits_{\{a,b\}\in\mathcal{E}}\left(\mathcal{B}^{(\lambda)}_{ab}(x_{a},x_{b})\right)^{\rho^{(\lambda)}_{ab}}}{\prod\limits_{a\in\mathcal{V}}\left(\mathcal{B}^{(\lambda)}_{a}(x_{a})\right)^{\sum_{c\sim a}\rho^{(\lambda)}_{ac}-1}}=\mathbb{E}_{\boldsymbol{x}\sim p^{(\lambda)}_{0}(\cdot)}\left[\frac{\prod\limits_{\{a,b\}\in\mathcal{E}}\left(\mathcal{B}^{(\lambda)}_{ab}(x_{a},x_{b})\right)^{\rho^{(\lambda)}_{ab}}}{\prod\limits_{a\in\mathcal{V}}\left(\mathcal{B}^{(\lambda)}_{a}(x_{a})\right)^{\sum_{c\sim a}\rho^{(\lambda)}_{ac}}}\right], \tag{17}\] _where the fractional BP expression for the partition function, \(Z^{(\lambda)}\), is defined in Eq. (15); \(p^{(\lambda)}_{0}(\boldsymbol{x})\doteq\prod_{a}\mathcal{B}^{(\lambda)}_{a}(x_{a})\) is the component-independent distribution devised from the FBP-optimal node-marginal probabilities._ Proof.: See Appendix D. Notice that \(\mathcal{Z}^{(\lambda)}\), defined in Eq. (17), is the exact multiplicative correction term, expressed in terms of the FBP solution, which should be equal to \(1\) at the optimal value of \(\lambda_{*}(\boldsymbol{J},\boldsymbol{h})\), which is achievable, according to Lemma 3.3, in the case of the attractive Ising model. ## 5 Numerical Experiments ### Setting, Use Cases and Methodology In this Section we present results of our numerical experiments, supporting and also further developing the theoretical results of the preceding Sections. Specifically, we describe details of our experiments with the Ising model in the following "use cases": (1) over an exemplary planar graph, an \(N\times N\) square grid, where \(N=[3::25]\); (2) over a fully connected graph, \(K_{N}\), where \(N=[3::8^{2}]\). 
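For concreteness, constructing instances from these two graph families might look as follows. The sketch assumes the networkx library, and the coupling and field distributions restate the ensembles described in the next paragraph (attractive: \(J\sim\mathcal{U}(0,1)\); mixed: \(J\sim\mathcal{U}(-1,1)\); fields: \(h\sim\mathcal{U}(-1,1)\)); it is an illustrative helper, not the paper's code.

```python
# Sketch of generating random Ising instances on the two graph families used in the
# experiments: an N x N square grid and the complete graph K_N.
import numpy as np
import networkx as nx

def random_ising_instance(graph: nx.Graph, attractive: bool, seed: int = 0):
    rng = np.random.default_rng(seed)
    low = 0.0 if attractive else -1.0                    # U(0,1) attractive, U(-1,1) mixed
    J = {tuple(sorted(e)): rng.uniform(low, 1.0) for e in graph.edges()}
    h = {a: rng.uniform(-1.0, 1.0) for a in graph.nodes()}
    return J, h

grid = nx.convert_node_labels_to_integers(nx.grid_2d_graph(5, 5))   # 5 x 5 planar grid
complete = nx.complete_graph(8)                                      # fully connected K_8
J_grid, h_grid = random_ising_instance(grid, attractive=True)
J_k8, h_k8 = random_ising_instance(complete, attractive=False)
print(len(J_grid), "grid edges;", len(J_k8), "K_8 edges")
```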
In both cases we consider attractive models and mixed models, that is, models with some interactions being attractive (ferromagnetic), \(J_{ab}>0\), and some repulsive (antiferromagnetic), \(J_{ab}<0\). We experiment with the zero-field case, \(\boldsymbol{h}=0\), and also with the general (non-zero field) case. All of our models are "disordered" in the sense that we have generated samples of random \(\boldsymbol{J}\) and \(\boldsymbol{h}\). Specifically, in the attractive (mixed) case components of \(\boldsymbol{J}\) are i.i.d. from the uniform distribution, \(\mathcal{U}(0,1)\) (\(\mathcal{U}(-1,1)\)), and components of \(\boldsymbol{h}\) are i.i.d. from \(\mathcal{U}(-1,1)\). In some of our experiments we draw a single instance of \(\boldsymbol{J}\) and \(\boldsymbol{h}\) from the respective ensemble. However, in other experiments - aimed at analysis of the variability within the respective ensemble - we show results for a number of instances. We know that there is considerable freedom in selecting a set of spanning trees and then re-weighting the respective contributions to \(\mathbf{\rho}\doteq(\rho_{ab}|(a,b)\in\mathcal{E})\) according to Eq. (10). (See some discussion of experiments with possible choices of \(\mathbf{\rho}\) in (Wainwright et al., 2005).) However, we decided not to explore this freedom; instead, in all of our experiments \(\mathbf{\rho}\) is chosen unambiguously and uniformly for a given graph. As shown in (Wainwright, 2002), the edge-uniform re-weighting is optimal, i.e. it provides the lowest TRW upper-bound, in the case of highly symmetric graphs, such as the fully connected or the doubly-periodic square grid. It was also assumed in the TRW literature (but to the best of our knowledge never proven) that the edge-uniform re-weighting is (almost always) possible. We clarify this point in the following statement. **Lemma 5.1**.: _([Edge-uniform Weights]) For any graph with all nodes of degree two or higher there exists a subset of spanning trees, such that each edge contributes to at least one spanning tree, and the edge weight is calculated according to the edge-uniform rule: \(\forall(a,b)\in\mathcal{E}:\quad\rho_{ab}=(|\mathcal{V}|-1)/|\mathcal{E}|\), where \(|\mathcal{V}|\) is the number of vertices and \(|\mathcal{E}|\) is the number of edges 1._ Footnote 1: The “degree two or higher” constraint on nodes is not restrictive, because we can either eliminate nodes with degree one (and also tree-like branches associated with them) by direct summation, or alternatively include the tree-like branches in the appropriate number of spanning trees constructed for the graph ignoring the tree-like branches. Proof.: See Appendix E for a constructive proof. To compute the fractional free energy, \(\bar{F}^{(\lambda)}\) (minus the log of the fractional estimate for the partition function), we generalize the approach of (Bixler and Huang, 2018), which allows an efficient, sparse-matrix based implementation. Our code will be made available at github upon acceptance of the paper. To compare the fractional estimate, \(\bar{F}^{(\lambda)}=-\log Z^{(\lambda)}\), with the exact free energy, \(\bar{F}=-\log Z\), we either use direct computations (feasible for the \(8\times 8\) grid or smaller and for the fully connected graph over \(64\) nodes or smaller) or, in the case of the planar grid with zero field, where computation of the partition function reduces to computing a determinant, we use the code from (Likhosherstov et al., 2019) (see also references therein). 
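To illustrate both the "direct computation" route and the sampling estimate of the correction term in Eq. (17), here is a small self-contained sketch (not the paper's code) on a 4-spin chain. Exact marginals are used as stand-in beliefs; on a tree they coincide with the BP (\(\lambda=1\)) fixed point, so the Bethe estimate matches the exact \(Z\) and the sampled correction comes out close to one.

```python
# Tiny sanity check: exact Z by enumeration, the Bethe (lambda = 1) partition function
# built from beliefs, and the Monte-Carlo estimate of the multiplicative correction of
# Eq. (17). Beliefs are the exact marginals of a 4-spin chain (a tree), which coincide
# with the BP fixed point, so Z_bethe ~ Z_exact and the correction ~ 1. Illustrative only.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 4
edges = [(0, 1), (1, 2), (2, 3)]                      # a chain (tree) graph
J = {e: rng.uniform(0.0, 1.0) for e in edges}         # attractive couplings
h = {a: rng.uniform(-1.0, 1.0) for a in range(n)}

def energy(x):                                        # Eqs. (3)-(4)
    return sum(-J[(a, b)] * x[a] * x[b] - 0.5 * (h[a] * x[a] + h[b] * x[b])
               for (a, b) in edges)

states = list(itertools.product([1, -1], repeat=n))
weights = np.array([np.exp(-energy(x)) for x in states])
Z_exact = weights.sum()
p = weights / Z_exact

# Exact node and edge marginals, used here as stand-in beliefs.
idx = {1: 0, -1: 1}
B_node = {a: np.zeros(2) for a in range(n)}
B_edge = {e: np.zeros((2, 2)) for e in edges}
for x, px in zip(states, p):
    for a in range(n):
        B_node[a][idx[x[a]]] += px
    for (a, b) in edges:
        B_edge[(a, b)][idx[x[a]], idx[x[b]]] += px

# Bethe free energy (Eqs. (11)-(13) with lambda = 1, i.e. all rho = 1).
spin = np.array([1.0, -1.0])
deg = {a: 0 for a in range(n)}
F = 0.0
for (a, b) in edges:
    deg[a] += 1; deg[b] += 1
    Eab = (-J[(a, b)] * np.outer(spin, spin)
           - 0.5 * (h[a] * spin[:, None] + h[b] * spin[None, :]))
    Bab = B_edge[(a, b)]
    F += np.sum(Eab * Bab) + np.sum(Bab * np.log(Bab))
for a in range(n):
    F -= (deg[a] - 1) * np.sum(B_node[a] * np.log(B_node[a]))
Z_bethe = np.exp(-F)

# Monte-Carlo estimate of the correction (expectation form of Eq. (17) at lambda = 1):
# sample x_a ~ B_a independently and average prod_ab B_ab / prod_a B_a^{deg_a}.
def correction_sample():
    x = [rng.choice([1, -1], p=B_node[a]) for a in range(n)]
    num = np.prod([B_edge[(a, b)][idx[x[a]], idx[x[b]]] for (a, b) in edges])
    den = np.prod([B_node[a][idx[x[a]]] ** deg[a] for a in range(n)])
    return num / den

corr = np.mean([correction_sample() for _ in range(2000)])
print(f"Z_exact={Z_exact:.4f}  Z_bethe={Z_bethe:.4f}  Z_bethe*corr={Z_bethe * corr:.4f}")
```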
Our computations are done for the values of \(\lambda\) equally spaced with the increment \(0.05\), between \(0\) and \(1\), \(\lambda\in[0::0.05::1]\). We use Eq. (28) to estimate \(d\bar{F}^{(\lambda)}/d\lambda\), and then use a finite-difference approximation of the derivative and Eq. (29) to estimate \(d^{2}\bar{F}^{(\lambda)}/d\lambda^{2}\). The log-correction term, \(\log\mathcal{Z}^{(\lambda)}=\log Z-\log Z^{(\lambda)}\), is estimated by direct sampling according to Eq. (17). (See Fig. (9) and the respective discussion below for an empirical analysis of the number of samples required to guarantee sufficient accuracy.) ### Properties of the Fractional Free Energy Our numerical results for the fractional estimate of the log-partition function (minus fractional free energy), \(\log Z^{(\lambda)}=-\bar{F}^{(\lambda)}\), and the log of the correction term, \(\log\mathcal{Z}^{(\lambda)}=\log Z-\log Z^{(\lambda)}=\bar{F}^{(\lambda)}-\bar{F}\), are shown as functions of \(\lambda\) in Fig. (1) for the use cases described above.2 We draw from this set of Figures the following empirical conclusions: Footnote 2: See also the extended set of Figs. (4,5, 6,7) in the Appendix F, including dependence of \(d\bar{F}^{(\lambda)}/d\lambda,\,d^{2}\bar{F}^{(\lambda)}/d\lambda^{2}\) on \(\lambda\). * The monotonicity and convexity of \(\bar{F}^{(\lambda)}\), proven in Theorem 3.1 and Theorem 3.2, respectively, are confirmed. * Dependence of \(\bar{F}^{(\lambda)}\) on \(\lambda\) is of a phase transition type at some \(\bar{\lambda}\), when we move from the TRW regime at \(\lambda<\bar{\lambda}\) to the BP regime at \(\lambda>\bar{\lambda}\). Notice that the estimate of the threshold, \(\bar{\lambda}\), decreases with the growth in \(N\). ### Relation between Exact and Fractional Figs. (4,5, 6,7), shown in the Appendix F, also give an empirical confirmation of the Theorem 4.1 statement in the part which concerns independence of \(Z^{(\lambda)}\mathcal{Z}^{(\lambda)}\) of \(\lambda\). This observation, combined with the full statement of Theorem 4.1, suggests that if two or more empirical estimates of \(Z^{(\lambda)}\mathcal{Z}^{(\lambda)}\) at different \(\lambda\) are sufficiently close to each other we can use them to bound \(Z\) from above and below. Moreover, the full statement of Theorem 4.1, i.e. equality between the left- and right-hand sides of Eq. (16), is also confirmed in all of our simulations with high accuracy (when we can verify it by computing \(Z\) directly). ### Concentration of \(\lambda_{*}\) in Large Ensembles Fig. (8) in the Appendix F shows dependence of \(\bar{F}^{(\lambda)}\) on \(\lambda\) for a number of instances drawn from two exemplary attractive use-case ensembles. We observe that variability in the value of \(\bar{F}^{(\lambda)}\) is sufficiently large. Variability of \(\lambda_{*}\), where \(Z^{(\lambda_{*})}=Z\), is also observed, even though it is significantly smaller. The last observation suggests that the variability of \(\lambda_{*}\) within an attractive ensemble decreases as \(N\) grows. This guess is confirmed in our experiments with larger attractive ensembles, illustrated in Fig. (2) for different \(N\). For each \(N\), in the case of the \(N\times N\) grid, we generate \(4\) different instances. We observe that, as \(N\) increases, the variability of \(\lambda_{*}\) within the ensemble decreases dramatically. 
This observation is quite remarkable, as it suggests that it is enough to estimate \(\lambda_{*}\) for one instance in a large ensemble and then use it for accurate estimation of \(Z\) for other instances by simply computing \(Z^{(\lambda_{*})}\). Our estimates, based on the data shown in Fig. (2) and other similar experiments (not shown), suggest that the width of the probability distribution of \(\lambda_{*}\) within the ensemble scales as \(\propto 1/\sqrt{N}\) with increase in \(N\). ### Convergence of Sampling for \(\mathcal{Z}^{(\lambda)}\) Fig. (9), shown in the Appendix F, reports dependence of the sample-based estimate of \(\mathcal{Z}^{(\lambda)}\) on the number of samples. Our major observation here is that the result converges with the number of samples. Moreover, comparing the speed of convergence (with the number of samples) across system sizes, \(N\), we estimate that the number of samples required scales with \(N\) as \(\mathcal{O}(N^{[2:4]})\). ### Fractional Approach for Mixed (attractive and repulsive) Cases Fig. (10) of the Appendix F shows two distinct situations which may be observed in the mixed case, where some of the interactions are attractive but others are repulsive, thus allowing \(Z^{(\lambda)}\) to be smaller or larger than \(Z\). The former case is akin to the attractive model and \(\lambda_{*}\in[0,1]\) exists, while in the latter case there exists no \(\lambda_{*}\in[0,1]\) such that \(Z^{(\lambda_{*})}=Z\). ## 6 Conclusions and Path Forward This manuscript suggests a new promising approach to evaluating inference in Ising Models. The approach consists in, first, solving a fractional variational problem via a distributed message-passing algorithm, resulting in fractional estimates of the partition function and marginal beliefs. We then compute the multiplicative correction to the fractional partition function by evaluating a well-defined expectation over a mean-field probability distribution, both constructed explicitly from the marginal beliefs. We showed that the freedom in the fractional parameter is useful, e.g. for finding the optimal value of the parameter, \(\lambda_{*}\), where the multiplicative correction is unity. Our experiments, which validate the theory, result in a number of interesting observations, such as a phase-transition-like dependence of the fractional free energy on \(\lambda\) and a strong suppression of fluctuations of \(\lambda_{*}\) in large ensembles. As a path forward we envision extending this fractional approach along the following directions: * Proving or disproving the concentration conjecture and the small-number-of-samples conjecture, made informally in Section 5.4 and Section 5.5, respectively. * Generalizing the extrapolation technique, e.g. building a scheme interpolating between TRW and Mean-Field. This will be of special interest for the case of the mixed ensembles, which are generally out of reach of the fractional approach (between TRW and BP) presented in the manuscript. * Generalizing the extrapolation technique to a more general class of Graphical Models. * Extending the technique to the setting where \(\lambda_{*}\) is learned directly from the data. We also anticipate that all of these developments, presented in this manuscript and others to follow, will help to make variational GM techniques competitive with other, and admittedly more popular, methods of Machine Learning, such as Deep Learning (DL). We envision seeing in the future more examples where variational GM techniques are reinforced with automatic differentiation, e.g. 
in the spirit of (Lucibello et al., 2022), and also integrated into modern Deep Learning protocols, e.g., as discussed in (Garcia Satorras & Welling, 2021). This hybrid GM-DL approach is expected to be especially needed and powerful in physics problems where we are interested in learning from data reduced models whose graphical structure is prescribed by the underlying physics (or other quantitative disciplines).

Figure 1: The case of the Ising Model (a) with non-zero field and random interactions, \(h,J\sim\mathcal{U}(0,1)\), over a \(3\times 3\) planar grid; and (b) with non-zero field and random interactions, \(h,J\sim\mathcal{U}(0,1)\), over the \(K_{9}\) complete graph. We show the fractional log-partition function (minus fractional free energy) on the left and the respective correction factor \(\mathcal{Z}^{(\lambda)}\) on the right vs the fractional parameter, \(\lambda\).
2303.09041
A Multimodal Data-driven Framework for Anxiety Screening
Early screening for anxiety and appropriate interventions are essential to reduce the incidence of self-harm and suicide in patients. Due to limited medical resources, traditional methods that overly rely on physician expertise and specialized equipment cannot simultaneously meet the needs for high accuracy and model interpretability. Multimodal data can provide more objective evidence for anxiety screening to improve the accuracy of models. The large amount of noise in multimodal data and the unbalanced nature of the data make the model prone to overfitting. However, it is a non-differentiable problem when high-dimensional and multimodal feature combinations are used as model inputs and incorporated into model training. This causes existing anxiety screening methods based on machine learning and deep learning to be inapplicable. Therefore, we propose a multimodal data-driven anxiety screening framework, namely MMD-AS, and conduct experiments on the collected health data of over 200 seafarers by smartphones. The proposed framework's feature extraction, dimension reduction, feature selection, and anxiety inference are jointly trained to improve the model's performance. In the feature selection step, a feature selection method based on the Improved Fireworks Algorithm is used to solve the non-differentiable problem of feature combination to remove redundant features and search for the ideal feature subset. The experimental results show that our framework outperforms the comparison methods.
Haimiao Mo, Shuai Ding, Siu Cheung Hui
2023-03-16T02:25:05Z
http://arxiv.org/abs/2303.09041v1
# A Multimodal Data-driven Framework for Anxiety Screening

###### Abstract

Early screening for anxiety and appropriate interventions are essential to reduce the incidence of self-harm and suicide in patients. Due to limited medical resources, traditional methods that overly rely on physician expertise and specialized equipment cannot simultaneously meet the needs for high accuracy and model interpretability. Multimodal data can provide more objective evidence for anxiety screening to improve the accuracy of models. The large amount of noise in multimodal data and the unbalanced nature of the data make the model prone to overfitting. However, it is a non-differentiable problem when high-dimensional and multimodal feature combinations are used as model inputs and incorporated into model training. This causes existing anxiety screening methods based on machine learning and deep learning to be inapplicable. Therefore, we propose a multimodal data-driven anxiety screening framework, namely MMD-AS, and conduct experiments on the health data of over 200 seafarers collected by smartphones. The proposed framework's feature extraction, dimension reduction, feature selection, and anxiety inference are jointly trained to improve the model's performance. In the feature selection step, a feature selection method based on the Improved Fireworks Algorithm is used to solve the non-differentiable problem of feature combination, to remove redundant features, and to search for the ideal feature subset. The experimental results show that our framework outperforms the comparison methods.

Anxiety Screening, Mental Health Assessment, Multimodal Features, Feature Selection, Improved Fireworks Algorithm.

## 1 Introduction

In 2019, mental illnesses, particularly anxiety disorders, were not only among the top twenty-five leading causes of excess global health spending, but also among the most disabling mental illnesses [1]. Furthermore, anxiety disorders are accompanied by immune disorders [2], and interfere with cognitive functions through memory and attention [3], thereby affecting normal life and work. Early anxiety assessment and appropriate interventions can greatly reduce the rate of self-harm and suicide in patients [4]. Psychological scales and routine health checks with professional medical equipment are traditional anxiety screening methods. The Self-rating Anxiety Scale (SAS) [5] and the Generalized Anxiety Disorder-7 (GAD-7) [6] are two psychological scales that are currently used for anxiety screening. Anxiety frequently results in a variety of symptoms or behavioral modifications, such as breathlessness [7], variations in blood pressure [8] and heart rate [9], respiration, tense muscles, and dizziness [10]. These objective signs can also be used as an important basis for anxiety screening. However, due to the lack of medical resources in remote areas and the high cost, routine health examinations such as Magnetic Resonance Imaging (MRI) [11], Computed Tomography (CT), electrocardiogram (ECG) [12, 13], and electroencephalogram (EEG) [9, 14] may not be available. Noncontact screening methods are another typical anxiety screening tool. They usually use computer vision or deep learning techniques to extract the behavioral or physiological features for anxiety screening. These methods have the advantages of low cost and convenience. The application of behavioral features [10, 15], speech features, and text features provides more objective evidence for anxiety screening.
Moreover, physiological signals, such as heart rate [10, 9], heart rate variability [12], and respiration rate, can be obtained by imaging photoplethysmography (iPPG) technology [16], and can also be used as important features for anxiety screening. Due to the complicated genesis and protracted nature of mental diseases [17], diagnosing them frequently involves knowledge from a number of fields, including biomedicine, psychology, and social medicine. It is a challenging problem to obtain timely multimodal information about a patient's health using traditional medical screening methods due to the limitation of medical resources [5]. In addition, multimodal data can also provide more objective evidence [18] to improve the accuracy of anxiety screening. Therefore, multimodal data will be the driving force behind the future development of anxiety screening [19]. However, the large amount of noise in the multimodal data and the imbalance of the data make the model prone to overfitting. In other words, the model cannot screen anxious patients with high precision so that intervention measures can be taken in advance, which may have a negative impact on their lives or mental conditions. In addition, due to the poor medical conditions in remote areas, model interpretability [20] and important features are crucial to assist primary care staff in anxiety screening. Traditional machine learning methods [13, 21] have difficulty dealing with the scalability and generalization of multimedia content data in a fast and accurate way. Deep learning methods [18, 22] based on computer vision have higher robustness and accuracy compared with traditional methods, and are therefore increasingly widely used for anxiety screening. Most of the existing technologies for anxiety screening focus on differentiable optimization problems, whereas the combination of high-dimensional and multimodal features is a non-differentiable problem when used as input to the model and incorporated into the model training. These existing methods are therefore unable to meet the requirements of scenarios that demand both high accuracy and model interpretability for anxiety screening. Therefore, we propose a Multimodal Data-driven framework for Anxiety Screening (MMD-AS). The contributions of this paper are as follows.

* We propose a low-cost, noncontact, interpretable, and easy-to-use anxiety screening framework that enables multimodal data capture and remote anxiety screening via smartphones only, which is suitable for scenarios with limited medical resources, such as health protection for seafarers on long voyages and mental health screening in remote areas.
* To improve the performance, the framework's components are jointly trained. In addition, our Improved Fireworks Algorithm (IFA) solves the non-differentiable problem of feature combination by enhancing the local search capability, which filters out redundant features and reduces the noise in the data to find the best feature subset.
* Experimental results of anxiety screening in more than 200 seafarers show that our framework achieves high precision and model interpretability. More importantly, the results point out that multimodal data is essential for anxiety screening, and the important indicators for anxiety detection are identified, which are both beneficial to clinical practice.

The rest of this paper is organized as follows. Section 2 reviews the related work on anxiety representations, anxiety screening, feature extraction, and multimodal data-driven methods.
Section 3 presents our proposed framework for anxiety screening. Section 4 presents the performance evaluation. Section 5 discusses the limitations of the proposed framework. Finally, Section 6 concludes the paper.

## 2 Related Work

In this section, we review the related work on anxiety representations, anxiety screening, feature extraction, and multimodal data-driven methods. Table I summarizes the features and methods for anxiety screening.

### _Anxiety Representation_

Anxiety is a feeling of tension, worry, or restlessness. It occurs frequently in a variety of mental conditions, including phobias, panic disorders, and generalized anxiety disorders [23]. Anxiety is a typical response to risk and mental stress. The amygdala and hippocampus are activated by the feelings of fear and dread brought on by stress, which also affects the autonomic and parasympathetic nervous systems [24]. Patients with anxiety disorders exhibit physical symptoms that are linked to the disease, such as rapid breathing [7], heartbeat, and BP changes [8], as well as additional symptoms [25] such as perspiration, tightness in the muscles, and vertigo. The physiological signals that are most frequently utilized in physiological research to evaluate mental health include ECG [26], heart rate, heart rate variability [8], EEG [27], and electrode signals, as shown in Table I. According to brain imaging studies, patients with anxiety disorders exhibit structural and functional abnormalities in the nervous system that regulates emotion. As shown in Table I, a person's ability to manage their emotions can be plainly noticed in their facial and behavioral characteristics [10] as well as in audio indicators (such as intonation and speech tempo) [28]. For example, the insula, frontal orbital cortex, anterior cingulate cortex, striatum, and amygdala all exhibit a diminished response to unfavorable emotional stimuli [29]. Due to physiological respiratory issues, individuals with anxiety disorders have voices that mirror their circumstances. Related studies have shown that anxious patients exhibit elevated wavelet, jitter, shimmer, and fundamental frequency (F0) mean coefficients [30]. Mel-Frequency Cepstral Coefficients (MFCCs) decline in the presence of anxiety [28]. The main signs of facial anxiety include changes in the eyes, including pupil size variations, blink rates [19], and gaze distribution [31], as well as in the lips, including lip twisting and mouth movement. Other key facial anxiety indicators include changes in the cheeks, head movement, and head speed [32]. Additional facial signs of anxiety include pallor, twitching eyelids, and stiffness in the face. Numerous studies have shown a link between pupil size and emotional or mental activity. Dilated pupils may be a sign of higher anxiety levels [33]. The coherence and direction of the eyes are also impacted by anxiety [31]. Increased gaze volatility during voluntary and stimulus-driven gazing is correlated with high levels of trait anxiety [34]. For instance, those who are nervous typically scan negative content more than those who are not anxious [10].

### _Anxiety Screening Methods_

Traditional and noncontact screening methods, often based on machine learning or deep learning techniques, are the two primary categories of anxiety screening techniques. Psychological scales [6] and assisting technologies (such as MRI [11], CT, ECG [26], EEG [27], and biochemical indicators [44]) are frequently used in traditional screening approaches to evaluate anxiety levels.
Noncontact screening methods mainly use computer vision [25] or deep learning techniques [42, 13] to extract behavioral characteristics and physiological signals related to anxiety.

#### 2.2.1 Traditional Screening Methods

Traditional mental health examinations commonly use psychological scales, such as the SAS [5] and GAD-7 [6], to ascertain whether patients are suffering from anxiety. In real clinical settings, doctors routinely conduct structured interviews with patients to find out more about their mental health. The patient's body language and facial emotions should be closely observed by the doctor at all times. This method is severely constrained by the interactions between the doctor and patient as well as by the expertise and experience of psychiatrists. To make the proper diagnosis, medical professionals may also take into account additional data from tests such as MRI [11], CT, ECG [26], and EEG [27]. To find people who might have psychiatric problems, extensive biological data gathering is also carried out, such as monitoring inflammatory markers and hormone changes [44]. However, traditional screening methods place an undue emphasis on psychiatrists' training and experience. Traditional detection methods fall short in the face of unique situations, such as long-distance voyages with limited medical resources [45].

#### 2.2.2 Noncontact Screening Methods

Changes in behavioral characteristics, such as concentration on things (reflected in eye gaze duration [31], eye movement characteristics [34], pupil size, and changes in head posture [46]), mouth shape, eyebrow shape [10], facial expression [47], and gait [15], can reflect to some extent a person's mental activity. These mental activities can also lead to significant changes in a person's physiological characteristics, such as EEG [27], heart rate [12], and respiration rate. Noncontact anxiety screening methods capture or extract these behavioral or EEG changes mainly through computer vision or signal processing methods. A facial action coding system [25] is often used to characterize a person's facial behavior. To explore the relationship between behavioral and physiological features and anxiety, Canonical Correlation Analysis (CCA) methods [9] and measures such as the Pearson Correlation Coefficient (PCC) [15, 36] are commonly used. Moreover, because of the physiological-behavioral link, machine learning methods can perform even better by incorporating EEG and eye-movement features. However, since correlation analysis methods tend to cause overfitting of classical machine learning methods such as SVM and KNN, sparse representation methods address this problem by introducing constraint terms [19]. To reduce the risk of model overfitting, sequence-based feature selection approaches [14], such as Sequential Backward Selection (SBS) and Sequential Forward Selection (SFS), are used to remove redundant features from the original data. The link between physiological symptoms and mental illness has led researchers to focus on additional identification of mental illness through physiological characteristics. Elevated heart rate, rapid breathing [48], high BP [8], dizziness, sweating, and muscle tension can all be used to objectively screen for anxiety. However, specialized hardware is needed for the traditional gathering of physiological signals. Its relatively high cost frequently prevents it from providing early diagnosis of psychiatric diseases.
Physiological characteristics [49] such as blood volume pulse, heart rate, heart rate variability, and respiratory rate can be captured from imaging photoplethysmography (iPPG) signals. In fact, the iPPG signals are extracted by computer vision technology. Affordability, noncontact operation, safety, the ability to obtain continuous measurements, and ease of use are just a few advantages of iPPG [16]. In the research on noncontact telemedicine monitoring and physical and mental health monitoring, iPPG offers a novel perspective.

### _Feature Extraction Methods_

As shown in Table I, the feature extraction methods for anxiety screening are mainly classified into three categories: neural network-based, correlation analysis, and signal processing methods. High data dimensionality and data redundancy are properties of the time series data (such as ECG [19] and EEG [27]) and image data (such as CT and MRI [11]) utilized for anxiety inference. The performance of anxiety screening methods may thus be adversely affected. To improve the performance of the traditional screening methods, the feature extraction methods in Table I, such as Neural Networks (NN) [9, 22, 39] and correlation analysis [15, 36], are used to extract features useful for anxiety inference. Nonlinear information useful for anxiety inference is frequently extracted using neural network-based feature extraction techniques, including CNN, LSTM, and Radial Basis Function (RBF) networks. Canonical Correlation Analysis (CCA), Sparse Canonical Correlation Analysis (SCCA) [19], and Principal Component Analysis (PCA) [35] are correlation analysis techniques for the extraction of anxiety-related features, which are helpful for inferring anxiety.
In [35], PCA uses orthogonal transformations to project observations of a group of possibly correlated variables (such as the time- and frequency-domain statistics of respiratory signals) into the principal components of a group of linearly uncorrelated variables. The number of features in a dataset can be controlled by imposing a threshold using PCA, which is frequently used to reduce the dataset's dimensions. Due to acquisition equipment or environmental factors, the quality of physiological signal extraction or analysis is easily affected. For example, the iPPG signals used for anxiety inference are particularly susceptible to ambient lighting and motion artifacts [42]. As a result, the physiological features from the iPPG signals contain a lot of noise. The signal processing techniques in Table I, such as the Kalman filter [13] and the fast Fourier transform [42], are typically used to reduce noise and eliminate its detrimental effects on the anxiety screening model. In [13], an enhanced Kalman filter processes the heart rate and accelerometry signals to follow the user's heart rate information in various contexts and choose the best model for anxiety detection based on the user's exercise conditions.

### _Multimodal Data-Driven Methods_

Due to the complex etiology and long development cycle of mental illness, its diagnosis usually requires a multidisciplinary approach that combines biomedicine, psychology, and social medicine [17]. Moreover, multimodal data can provide more objective evidence for anxiety screening [18]. Multimodal data-driven approaches are therefore promising for anxiety screening. By using the correlation analysis approach to examine the structural information between EEG and eye movement data, it is possible to combine these two types of variables and detect anxiety more precisely [19]. Biophysical signals in virtual reality applications, such as heart rate, skin conductance, and EEG, are extracted as features of different dimensions [9]. The features from the time domain and frequency domain are then fused to achieve high-precision anxiety detection. Several biosignals, including EEG, iPPG, Electrodermal Activity (EDA), and pupil size [18], are used to measure anxiety in various driving scenarios. However, the use of multimodal data will necessarily result in a rapid increase in data dimensions and feature redundancy. The increase in data dimensions leads to the curse of dimensionality, and the model's accuracy may be affected since redundant features may carry a lot of noise. Decision Tree-based (DT) approaches, such as Random Forest (RF), Adaptive Boosting (AdaBoost) [10], Naive Bayes (NB) [38], and eXtreme Gradient Boosting (XGBoost) [40], can select features useful for anxiety detection. They use information attributes (such as information gain, information entropy, or the Gini index [22]) to lessen the dimensionality and the number of redundant features in the data. Besides, unbalanced datasets are very common, particularly in the medical domain, and also pose a significant obstacle to anxiety screening. The existing machine learning techniques [10, 13, 39, 38, 21] and neural network techniques [10, 39, 36, 22, 18] used for anxiety screening are therefore challenging to apply to this unique situation.

## 3 Proposed Framework

Figure 1 shows the proposed Multimodal Data-driven framework for Anxiety Screening (MMD-AS), which consists of four main components: feature extraction, dimension reduction, feature selection, and anxiety inference for seafarers. The steps of the framework are as follows.
First, feature extraction extracts the heart rate and respiratory rate features (iPPG), Behavioral Features (BF), and Audio Features (AF) from facial videos. The Text Features (TF) are extracted from the audio, and the Questionnaire Features (QF) are extracted by processing the questionnaire. Next, the extracted iPPG, BF, AF, and TF features are processed by the designed "1DCNN+GRU" and "CNN\({}_{Text}\)" networks for dimension reduction, and then combined with the QF features to create the feature vector \(F=[F_{iPPG},F_{BF},F_{AF},F_{TF},F_{QF}]\). Then, feature selection selects feature subsets from the vector \(F\) based on the Improved Fireworks Algorithm (IFA). Finally, the data from the selected features are used to train the AdaBoost classifier, and the generalization ability of the selected features is evaluated with the classification evaluation metrics. The trained model and selected features are finally used for anxiety inference.

### _Feature Extraction_

Table II shows the different features extracted in the feature extraction step. The heart rate and respiration rate features from the iPPG signals, together with the BF, AF, TF, and QF features from the facial videos and questionnaires, are used for anxiety inference.

#### 3.1.1 Heart Rate and Respiration Rate Features

Patients with anxiety disorders experience overt clinical symptoms such as muscle tension, a racing heartbeat, rapid breathing [7], elevated Blood Pressure (BP) [8], and dizziness. One of the key sources of information for identifying anxiety is the iPPG signals, which include Heart Rate (HR) and Respiratory Rate (RR). The face's motion-insensitive regions [42], such as the forehead and nose, can be used for extracting iPPG characteristics that contain information on HR and RR [49]. Therefore, the time-domain and frequency-domain features of the iPPG signals in the HR and RR ranges are used as one of the feature groups for anxiety inference.

Figure 2 depicts the process of extracting HR and RR features from the iPPG signals. Each facial video is used to extract pictures frame by frame. The key feature points of the face are extracted from each frame using a face detection algorithm, such as Google MediaPipe. The Regions of Interest (ROI) of the face in each frame, such as the forehead and nose, can be precisely tracked based on the key feature points. The ROI (such as \(ROI_{Fore}\) and \(ROI_{Nose}\)) in each frame is resized to a fixed length and width. The per-channel pixel averages of the ROI in each frame are used to create the iPPG signals. Equation (1) is used to calculate the ROI's Mean Pixel (MP) value for the red channel in the \(t\)-th frame. In Equation (2), the MP values of all frames constitute the initial iPPG signal.

\[MP_{R}(t)=\frac{1}{H_{ROI}\times W_{ROI}}\sum_{x=1}^{H_{ROI}}\sum_{y=1}^{W_{ROI}}P_{R}(x,t,y) \tag{1}\]

\[iPPG=\left[\begin{array}{c}MP_{R}(1),MP_{R}(2),\ldots,MP_{R}(t)\\ MP_{G}(1),MP_{G}(2),\ldots,MP_{G}(t)\\ MP_{B}(1),MP_{B}(2),\ldots,MP_{B}(t)\end{array}\right]_{C\times T^{\prime}} \tag{2}\]

where the red channel pixel value at position \((x,y)\) in the \(t\)-th frame is represented by \(P_{R}(x,t,y)\). Similarly, the green and blue channel pixel values at position \((x,y)\) in the \(t\)-th frame are denoted as \(P_{G}(x,t,y)\) and \(P_{B}(x,t,y)\), respectively. The number of channels in each picture frame and the total number of frames in the video are denoted by \(C\) and \(T^{\prime}\), respectively.
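As a rough illustration of Equations (1) and (2), the sketch below computes the per-channel mean pixel values of a tracked ROI for every frame and stacks them into an initial iPPG signal. This is a minimal Python/OpenCV sketch, not the authors' implementation: the per-frame ROI boxes in `roi_boxes` are assumed to come from an external face-landmark tracker (e.g., MediaPipe), and OpenCV's BGR channel ordering is used.

```python
import cv2
import numpy as np

def extract_ippg(video_path, roi_boxes):
    """Per-frame mean pixel values of an ROI, one row per colour channel.

    roi_boxes: list of (x, y, w, h) bounding boxes, one per frame,
    assumed to be produced by a landmark tracker such as MediaPipe.
    Returns an array of shape (3, T'), i.e. the C x T' matrix of Eq. (2).
    """
    cap = cv2.VideoCapture(video_path)
    means = []
    for (x, y, w, h) in roi_boxes:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]                   # H_ROI x W_ROI x 3 (BGR)
        means.append(roi.reshape(-1, 3).mean(axis=0))   # Eq. (1), one MP per channel
    cap.release()
    return np.array(means).T                            # shape (C, T')
```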
The signals from the nose and forehead are denoted as \(iPPG_{Nose}\) and \(iPPG_{Fore}\), respectively. In addition, the initial iPPG signals are processed by a Butterworth filter [42] and the Fast Fourier Transform (FFT) to obtain time-domain and frequency-domain signals carrying heartbeat and respiration information. The normal heartbeat and breathing of humans are in the ranges of [0.75, 3.33] Hz and [0.15, 0.40] Hz, respectively. The time-domain signals within the normal human HR and RR ranges are separated from the initial iPPG signals by filtering and are called \(iPPG_{Fore}^{TD}\) and \(iPPG_{Nose}^{TD}\), respectively. The frequency-domain signals with HR or RR information are extracted from the iPPG signals using the FFT. The FFT is able to extract frequency-domain features (such as \(iPPG_{Fore}^{FD}\) and \(iPPG_{Nose}^{FD}\)) in the HR and RR ranges, while also efficiently reducing the noise of the iPPG signals. The iPPG features \(iPPG=[iPPG^{TD},iPPG^{FD}]\) from the time and frequency domains are denoted as \(iPPG^{TD}=[iPPG_{Fore}^{TD},iPPG_{Nose}^{TD}]_{2\times C\times T}\) and \(iPPG^{FD}=[iPPG_{Fore}^{FD},iPPG_{Nose}^{FD}]_{2\times C\times T^{\prime}}\), respectively.

\begin{table} \begin{tabular}{p{142.3pt} p{142.3pt}} \hline Feature types & Description \\ \hline **1) Heart Rate and Respiration Rate Features:** & The iPPG signals extracted from the nose and forehead regions containing time and frequency domain features of heart rate (HR) and respiration rate (RR). \\ \hline **2) Behavioral Features (BF):** & 1) HP-(\(x_{HP},y_{HP},z_{HP}\)) describes the positional information of the head rotation when the center of the head is taken as the origin. \\ \(\quad\) Eye Gaze (EG) & 2) EG-(\(x_{EG},y_{EG}\)) describes the eye gaze direction. \\ \(\quad\) Action Unit (AU): AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU20, AU23, AU25, AU26, AU44, AU45 \\ \hline **3) Audio Features (AF):** & The audio in the video uses the vocal separation method to obtain audio from human voices. Then, AF are extracted by audio analysis technology [51]. Phonation features include F0's first and second derivatives (\(F0^{1}\) and \(F0^{2}\)), jitter, shimmer, Amplitude Perturbation Quotient (APQ), Pitch Perturbation Quotient (PPQ), and Logarithmic Energy (LE). \\ \hline **4) Text Features (TF):** & Process the text in the audio through the iFlytek toolkit [52], and then extract the text features using the pre-trained BERT model [53]. \\ \hline **5) Questionnaire Features (QF):** & 1) AT: It includes the stages of before boarding, sailing, and after disembarking. \\ \(\quad\) Assessment Time (AT) & 2) PF: It includes marital status, family size, income, place of household registration, position, working hours, smoking and alcohol use [38]. \\ \(\quad\) Personal Information (PF) & 3) BFPT: It includes extraversion, agreeableness, openness, conscientiousness, and neuroticism. \\ \(\quad\) Sleep Quality (SQ) [55] & 4) SQ: It is evaluated by the Pittsburgh Sleep Quality Index (PSQI) [45]. \\ \(\quad\) Emotional State (ES) & 5) Lifestyle: It is evaluated by the Health-Promoting Lifestyle Profile (HPLP) [56]. \\ \(\quad\) Work Environment (WE) & 6) ES: It is evaluated by the multidimensional fatigue inventory (MFI) [57], GAD-7 items, the patient health questionnaire 9 (PHQ) [6], and the Depression Anxiety and Stress Scale 21 (DASS) [5]. \\ \(\quad\) Social Support (SS) & 7) WE: It includes company culture, equipment management and maintenance, office environment, safety. \\ \(\quad\) Family Relationships (FR) & 8) En: It includes type of entertainment, frequency of participation in activities. \\ \(\quad\) & 9) AL: It is evaluated by the suicide behaviors questionnaire-revised (SBQ-R) [58]. \\ \(\quad\) & 10) SS: It is evaluated by the social support rating scale (SSRS) [45]. \\ \(\quad\) & 11) FR: It is evaluated by the family assessment device-general functioning (FAD-GF) [45]. \\ \hline \end{tabular} \end{table}
TABLE II: Features extracted from facial video and questionnaires.

Fig. 1: The framework for anxiety screening.

Fig. 2: Heart rate and respiration rate feature extraction.
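The Butterworth filtering and FFT step described above can be sketched as follows. This is a hedged illustration: the filter order and the sampling rate are assumptions (a 4th-order filter and the 25 FPS video rate), while the HR and RR bands are those stated in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

HR_BAND = (0.75, 3.33)   # Hz, normal heart-rate range
RR_BAND = (0.15, 0.40)   # Hz, normal respiration-rate range

def bandpass(signal, band, fs=25.0, order=4):
    """Time-domain iPPG component within the given band (e.g., iPPG^TD)."""
    b, a = butter(order, band, btype="bandpass", fs=fs)
    return filtfilt(b, a, signal, axis=-1)

def spectrum(signal, fs=25.0):
    """Frequency-domain iPPG features (e.g., iPPG^FD) via the FFT."""
    freqs = np.fft.rfftfreq(signal.shape[-1], d=1.0 / fs)
    mags = np.abs(np.fft.rfft(signal, axis=-1))
    return freqs, mags

def heart_rate_bpm(signal, fs=25.0):
    """Example: the dominant frequency in the HR band gives a heart-rate estimate."""
    hr = bandpass(signal, HR_BAND, fs)
    freqs, mags = spectrum(hr, fs)
    mask = (freqs >= HR_BAND[0]) & (freqs <= HR_BAND[1])
    return 60.0 * freqs[mask][np.argmax(mags[..., mask], axis=-1)]
```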
#### 3.1.2 Behavioral Features

Key signs of anxiety in the behavioral context include changes in pupil size, blink rate [19], gaze distribution [31], lip twisting, mouth shape movements, cheek changes, head movement, and head speed [32]. These behavioral features can be described by facial Action Units (AUs) and used as characteristics for anxiety inference. The different AUs of the face are defined by the Facial Action Coding System (FACS) [50], which describes muscle movements in particular locations. They are used to describe a person's facial activity, including that of the mouth, chin, lips, eyes, eyelids, and eyebrows. Different combinations of AUs can be used to describe various facial behaviors or emotional expressions. A facial behavior analysis toolkit [46] is used for analyzing the behavioral features of each frame of the facial video. \(EG(t)=(x_{EG}(t),y_{EG}(t))\), \(AUs(t)\), and \(HP(t)=(x_{HP}(t),y_{HP}(t),z_{HP}(t))\) are the Eye Gaze (EG), AU, and Head Posture (HP) features extracted from the \(t\)-th frame, respectively. Each frame in the video is used to extract behavioral features, and they are combined to create a sequence of behavioral features \(BF=[EG,AUs,HP]\).

\[EG=[EG(1),EG(2),\ldots,EG(t)] \tag{3}\]

\[AUs=[AUs(1),AUs(2),\ldots,AUs(t)] \tag{4}\]

\[HP=[HP(1),HP(2),\ldots,HP(t)] \tag{5}\]

#### 3.1.3 Audio Features

Negative emotions, such as anxiety and depression, may cause changes in the somatic and autonomic nervous systems that are reflected in muscle tension and the respiratory system [59]. These changes can have an impact on prosody and speech quality. In [59], the mean, median, standard deviation, maximum, and minimum of the time-domain and frequency-domain signals of the Zero-Crossing Rate (ZCR) in each sliding time window are extracted. Previous research has demonstrated a correlation between a few auditory factors and anxiety, including the fundamental frequency F0 [30], the first and second formant frequencies (F1 and F2), phonation, MFCCs, and wavelet coefficients. Phonation consists of F0's first and second derivatives (\(F0^{1}\) and \(F0^{2}\)), as well as jitter, shimmer, the Amplitude Perturbation Quotient (APQ), the Pitch Perturbation Quotient (PPQ), and the Logarithmic Energy (LE). Some acoustic signatures vary in direction and intensity with anxiety. Anxious seafarers have higher F0 mean, F1 mean, jitter, shimmer, and wavelet coefficients, while MFCCs decrease with anxiety [28]. As a result, the audio features shown in Table II are also used as one of the feature groups for anxiety inference. There may be background noise in the original audio due to the recording environment. It is therefore necessary to eliminate the background noise from the audio and extract the seafarer's voice from each video. Only the seafarers' voices are present in the audio data after vocal separation. Initial audio features, such as MFCCs, F0, ZCR, prosody, and phonation, are obtained from the audio data after they have been processed by audio analysis technology [51]. Phonation features include \(F0^{1}\), \(F0^{2}\), jitter, shimmer, the amplitude perturbation quotient, the pitch perturbation quotient, and the logarithmic energy. Finally, MFCCs, F0, Phonation, Prosody, and ZCR form the audio features \(AF=[MFCCs,F0,Phonation,Prosody,ZCR]\).
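To make the acoustic descriptors above concrete, here is a minimal sketch using the librosa library. This is an assumption for illustration only; the paper relies on the audio analysis technology of [51], and jitter, shimmer, and the perturbation quotients are omitted here because they are typically computed with dedicated phonation tools.

```python
import numpy as np
import librosa

def audio_features(wav_path, sr=22050):
    """MFCCs, ZCR statistics, and F0 (with derivatives) for one vocal-separated recording."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # MFCCs per frame
    zcr = librosa.feature.zero_crossing_rate(y)               # ZCR per frame
    f0 = librosa.yin(y, fmin=50, fmax=500, sr=sr)             # fundamental frequency track
    f0_d1 = np.gradient(f0)                                   # F0^1 (first derivative)
    f0_d2 = np.gradient(f0_d1)                                 # F0^2 (second derivative)
    # Simple sliding-window statistics, as in the ZCR description above.
    stats = lambda a: np.array([a.mean(), np.median(a), a.std(), a.max(), a.min()])
    return {"mfcc": mfcc, "zcr_stats": stats(zcr),
            "f0": f0, "f0_d1": f0_d1, "f0_d2": f0_d2}
```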
#### 3.1.4 Text Features

When a person works in a closed environment for a long time, their relationships with relatives and friends and their work status are closely related to anxiety [45]. Seafarers are therefore asked the questions shown in Figure 4, and their phone cameras record audio and facial video of the responses to learn more about their health. Text features extracted from this audio are thus also used as one of the main feature groups for anxiety inference. The main steps of text feature extraction from audio are as follows. First, the human voice is obtained after the audio has undergone vocal separation processing. Next, the iFlytek toolkit [52], with 97.5% accuracy, is used for Chinese voice analysis, and the Chinese text is extracted from the audio. Finally, the pre-trained BERT model [53] is used to process each sentence of the text to create the text feature vector. Figure 4 shows a sample of a seafarer's responses to the two questions. Seafarers answer the questions based on their current situation.

Fig. 3: Behavioral feature extraction.

Fig. 4: Extracted text features based on the answers about seafarers' relationships with relatives and friends, and work status.

#### 3.1.5 Questionnaire Features

As a result of turbulence, an airtight environment, vibration noise, variations in circadian rhythm, a monotonous diet, and social isolation, long-haul seafarers are exposed to serious health risks [45]. This can easily trigger a variety of physical and mental health problems for seafarers, including those related to anxiety, diet, illness, fatigue, depression, and cognition. Many studies have found that a variety of factors, such as personality traits [54], poor sleep quality (leading to fatigue) [45], a bad emotional state, attitude to life [58], and a lack of family and social support [60], can contribute to anxiety. In the questionnaire, most of the questions in Table II provide answer options that represent different severity or grade levels. Table III shows some example questions in the questionnaire. Therefore, each question in the questionnaire can be assigned a score. The questionnaire features are processed and then denoted as QF.

### _Dimension Reduction_

The majority of the features are time series data. The data dimensions of the original HR and RR features from the time and frequency domains, the behavioral features, and the text features from audio are often rather high. For example, a total of 18 AUs can be extracted from each frame of a facial picture, as shown in Figure 3, so the total number of dimensions of the AUs obtained from a one-second video with a sampling rate of 25 Frames Per Second (FPS) is 18 \(\times\) 25. The dimensions of the other time-series features are similarly high. To reduce the data dimensions and the cost of feature selection in the next step, the original features are processed by deep learning networks [9, 39] for dimension reduction, as shown in Figure 5.
The Convolutional Neural Network (CNN) can effectively extract spatial features of high-dimensional data [41], while the Gated Recurrent Unit (GRU) network is a variant of the Long Short-Term Memory (LSTM) network that can extract temporal features from time series data [61]. Compared with the time series data, the dimensions of the questionnaire features are not high, so they do not need dimension reduction. Therefore, as depicted in Figure 5, the feature vectors \([F_{iPPG},F_{BF},F_{AF}]\) and \(F_{TF}\) are produced when the iPPG, behavioral, and audio features and the text features are processed by the "1DCNN+GRU" and "CNN\({}_{Text}\)" networks, respectively. Then, the feature vector \(F=[F_{iPPG},F_{BF},F_{AF},F_{TF},F_{QF}]\) is composed of \([F_{iPPG},F_{BF},F_{AF}]\), \(F_{TF}\), and \(F_{QF}\). Several "1DCNN+GRU" networks with the same structure are employed to extract spatiotemporal features from the iPPG, behavioral, and audio data, respectively, to reduce the data dimensions. The "1DCNN+GRU" network consists of three 1D-convolutional layers, three 1D-MaxPool layers, and two GRU layers. A Rectified Linear Unit (ReLU) layer and a dropout layer are added after each convolutional layer and MaxPool layer, respectively. _Conv-k(p1, p2)_ denotes the _k_-th convolutional layer, where _p1_ and _p2_ represent the output channels and kernel size, respectively. Additionally, the \(TF\) features are processed by the "CNN\({}_{Text}\)" network to obtain the feature vector \(F_{TF}\). The "CNN\({}_{Text}\)" network shown in Figure 5 consists of one embedding layer, two convolutional layers, two ReLU layers, one pooling layer, one flatten layer, two dropout layers, and two dense layers. In Figure 5, the convolutional layers are used to extract different features of the input. The first convolutional layer may only be able to extract some low-level features, and deeper layers can iteratively extract more complex features from the low-level ones. The max-pooling layer not only reduces the feature dimension but also retains more texture information. The ReLU layer increases the nonlinear fitting ability of the neural network. The dropout layer enhances the generalization ability of the model. The embedding layer encodes sentences or words. The flatten layer makes the multi-dimensional input one-dimensional and is often used in the transition from the convolutional layers to the fully connected layers. The dense layer transforms the previously extracted features through nonlinear mappings, captures the associations between these features, and finally maps them to the output space.

Fig. 5: Dimension reduction by the "1DCNN+GRU+CNN\({}_{Text}\)" networks.
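A minimal PyTorch sketch of the "1DCNN+GRU" branch described above is given below. The channel counts, kernel sizes, and hidden sizes are illustrative placeholders, since the exact Conv-k(p1, p2) values are only given in Figure 5; only the overall layer pattern (three Conv1d+ReLU / MaxPool1d+Dropout blocks followed by two GRU layers) follows the text.

```python
import torch
import torch.nn as nn

class CnnGruBranch(nn.Module):
    """Three Conv1d blocks (each followed by ReLU, MaxPool1d, and Dropout)
    and a two-layer GRU, mapping a (batch, channels, time) sequence to a
    fixed-size feature vector.  Layer sizes are illustrative only."""
    def __init__(self, in_channels, hidden=64, out_dim=32, p_drop=0.2):
        super().__init__()
        convs, c = [], in_channels
        for out_c, k in [(16, 5), (32, 3), (64, 3)]:      # placeholder Conv-k(p1, p2) values
            convs += [nn.Conv1d(c, out_c, kernel_size=k, padding=k // 2),
                      nn.ReLU(),
                      nn.MaxPool1d(2),
                      nn.Dropout(p_drop)]
            c = out_c
        self.conv = nn.Sequential(*convs)
        self.gru = nn.GRU(input_size=c, hidden_size=hidden,
                          num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, x):                  # x: (batch, in_channels, time)
        h = self.conv(x)                   # (batch, 64, time / 8)
        h = h.transpose(1, 2)              # (batch, time / 8, 64) for the GRU
        _, h_n = self.gru(h)               # h_n: (num_layers, batch, hidden)
        return self.proj(h_n[-1])          # (batch, out_dim)
```

In this sketch, separate branches of this form would process the iPPG, behavioral, and audio sequences, and their outputs would be concatenated with \(F_{TF}\) and \(F_{QF}\) to form the feature vector \(F\).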
TABLE III: Example questions in the questionnaire with their grade levels (low to high); e.g., "I feel energized.", graded 0-3.

### _Feature Selection_

There are still a lot of redundant features in \(F=[F_{iPPG},F_{BF},F_{AF},F_{TF},F_{QF}]\) extracted from the original data. In addition, due to the data imbalance problem, the parameter learning of the model is biased toward the majority classes during the training process. As such, the performance of the model will be adversely affected. Feature selection [22] approaches, such as the filter and wrapper methods, are used in anxiety screening to remove redundant features from the original data and enhance the effectiveness of models. Embedding methods, which combine the advantages of the filter and wrapper techniques, can also be used for anxiety screening. Moreover, bio-inspired methods, which introduce randomness into the search process to avoid local optima when learning the model parameters, are more conducive to predicting the minority class. Therefore, a feature selection method based on our Improved Fireworks Algorithm (IFA) is used to search for feature subsets and solve the feature combination optimization problem, which is non-differentiable.

#### 3.3.1 Improved Fireworks Algorithm

The Fireworks Algorithm (FA) [62] is a swarm intelligence algorithm proposed in recent years, whose idea comes from the fireworks explosion process in real life. FA automatically balances local and global search capabilities by regulating the number of offspring generated by fireworks through the explosion intensity. The former can hasten population convergence, whilst the latter can guarantee population diversity. The original FA uses the explosion, mutation, selection, and mapping rules as its four major operators. Based on the original FA [62], our Improved Fireworks Algorithm (IFA) enhances the explosion radius (also called the explosion amplitude) of FA in the explosion operator to improve the local search capability via Equations (6) and (7), while leaving the other components of the algorithm unaltered.

\[R_{i}^{\text{new}}=\left\{\begin{array}{ll}x_{CF}\times(1+N(0,1))-x_{i}, & S_{i}=S_{\max}\\ R_{\max}\,\dfrac{f(x_{i}^{\text{pbest}})-Y_{\min}^{\text{pbest}}+\varepsilon}{\sum_{i=1}^{N}\left(f(x_{i}^{\text{pbest}})-Y_{\min}^{\text{pbest}}\right)+\varepsilon}, & S_{i}\neq S_{\max}\end{array}\right. \tag{6}\]
\[x_{i}^{\text{pbest}}=\left\{\begin{array}{ll}x_{i}, & f\left(x_{i}\right)<f\left(x_{i}^{\text{pbest}}\right)\\ x_{i}^{\text{pbest}}, & \text{otherwise}\end{array}\right. \tag{7}\]

where the Core Firework (CF) \(x_{CF}\) is the individual with the best fitness value in the fireworks population. The Gaussian distribution \(N(0,1)\) has a mean of zero and a variance of one. \(S_{i}\) is the number of explosion sparks produced by the \(i\)-th firework individual \(x_{i}\), and \(S_{\max}\) is the maximum number of explosion sparks. \(R_{\max}\) is the maximum explosion radius by which the firework individuals are allowed to be displaced. The \(i\)-th firework individual's fitness value is \(f(x_{i})\), \(x_{i}^{\text{pbest}}\) is the best position found so far by the \(i\)-th firework, and \(Y_{\min}^{\text{pbest}}=\min\{f(x_{1}^{\text{pbest}}),f(x_{2}^{\text{pbest}}),\ldots,f(x_{N}^{\text{pbest}})\}\) is the smallest (best) of these historical fitness values. The FA chooses the next generation of fireworks from candidate individuals, and the others are discarded. With this strategy, the best historical information in the candidate set is not fully utilized by the FA. Equations (6) and (7) illustrate how our IFA generates an adaptive explosion radius by utilizing the historical information \(x_{i}^{\text{pbest}}\) of the \(i\)-th firework individual \(x_{i}\). If the fitness value of \(x_{i}\) is smaller than that of \(x_{i}^{\text{pbest}}\), then \(x_{i}^{\text{pbest}}\) is updated by Equation (7).

#### 3.3.2 IFA-based Feature Selection

The process of feature selection based on the Improved Fireworks Algorithm is shown in Figure 6. The purpose of the IFA iterations is to search for the feature subset's locations. In other words, each individual \(x_{i}\) of the IFA represents a candidate feature subset, which determines the corresponding dimensions of the selected features from the feature vector \(F=[F_{iPPG},F_{BF},F_{AF},F_{TF},F_{QF}]\). The value of each dimension of an individual \(x_{i}\) generated in each iteration of the IFA lies in the range \([0,1]\). After \(x_{i}\) is discretized by Equation (8), it represents a set of binary decision variables.

\[x_{ij}^{B}=\left\{\begin{array}{ll}0, & x_{ij}<0.5\\ 1, & \text{otherwise}\end{array}\right. \tag{8}\]

where \(x_{ij}^{B}\) represents the binary value of the \(j\)-th dimension of the \(i\)-th individual \(x_{i}\) after discretization. That is to say, when \(x_{ij}<0.5\) and \(x_{ij}^{B}=0\), the feature corresponding to the \(j\)-th dimension of the feature vector \(F\) is not selected; otherwise, it is selected. The green squares in Figure 6 represent the indices of the selected features, while the red squares represent the indices of the unselected features at the discretized positions of the \(i\)-th individual \(x_{i}^{B}\). Finally, the Selected Features \(SF=[SF_{1},SF_{2},SF_{4},\ldots,SF_{j-2},SF_{j}]\) are determined from the feature vector \(F\) by \(x_{i}^{B}=[x_{i1}^{B},x_{i2}^{B},\ldots,x_{id}^{B}]\), with \(j=1,2,\ldots,d\), where \(d\) is the total number of dimensions of \(F\).
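The discretization of Equation (8) and the resulting feature masking can be illustrated with a short numpy sketch. This is a hedged illustration, not the authors' code: the IFA individual \(x_{i}\) is assumed to be a real vector in \([0,1]^{d}\), as stated above, and the generation of the individual itself is left to the optimizer.

```python
import numpy as np

def binarize(x_i, threshold=0.5):
    """Eq. (8): map a continuous firework individual x_i in [0,1]^d
    to a binary selection mask x_i^B."""
    return (np.asarray(x_i) >= threshold).astype(int)

def select_features(F, x_i, min_selected):
    """Keep only the columns of the feature matrix F (samples x d) whose
    mask entry equals 1; return None for masks violating the
    dimensionality constraint sum_j x_ij^B >= lambda of Eq. (9)."""
    mask = binarize(x_i).astype(bool)
    if mask.sum() < min_selected:
        return None, mask
    return F[:, mask], mask

# Example with a random individual over a 20-dimensional feature vector.
rng = np.random.default_rng(0)
F = rng.normal(size=(227, 20))                  # 227 samples, d = 20 (illustrative)
x_i = rng.uniform(size=20)
SF, mask = select_features(F, x_i, min_selected=int(0.2 * 20))
```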
### _Anxiety Inference_

The parameter learning for anxiety inference models can be viewed as a 0-1 programming problem. If the feature vector \(F\) in Figure 6 has \(d\)-dimensional features, there are \(2^{d}\) possible feature combinations. The feature subset is searched by the IFA in the solution space of \(2^{d}\) feature combinations to find the one that achieves the best overall classification evaluation metrics for the model.

Fig. 6: Feature selection based on the Improved Fireworks Algorithm.

Fig. 7: Anxiety inference.

In previous studies [19], the model's performance is usually measured by one or more classification evaluation metrics, such as accuracy, precision, sensitivity, specificity, or F1-score. However, when the samples in the data set are unbalanced, these metrics can hardly distinguish the models' performance [63]. In addition, too low a sensitivity or specificity may cause adverse consequences: a test with low sensitivity may fail to detect true cases, while low specificity may lead to many false positive results, which is quite stressful for patients. Therefore, it is necessary to measure the performance of the model by multiple classification evaluation metrics, such as the Area Under the Curve (AUC), accuracy (Acc), precision (Pre), sensitivity (Sen), specificity (Spe), and F1-score (F1), and to use Equation (9) as the loss function for model optimization. A penalty factor \(\lambda=0.2\times d\) is introduced to limit the dimensions of the selected features.

\[\min f(x_{i}^{B})=-(AUC+Acc+Pre+Sen+F1+Spe),\quad\text{s.t.}\;\sum_{j=1}^{d}x_{ij}^{B}\geq\lambda \tag{9}\]

\[Acc=\frac{TP+TN}{TP+TN+FP+FN} \tag{10}\]

\[Pre=\frac{TP}{TP+FP} \tag{11}\]

\[Sen=\frac{TP}{TP+FN} \tag{12}\]

\[F1=\frac{2TP}{2TP+FP+FN} \tag{13}\]

\[Spe=\frac{TN}{TN+FP} \tag{14}\]

where True Positive (TP) and True Negative (TN) denote the correct classification of the anxiety and anxiety-free samples, respectively, and False Negative (FN) and False Positive (FP) indicate that the anxiety and anxiety-free samples are wrongly categorized, respectively. In addition, AdaBoost [64] is an ensemble learning technique that uses an iterative process to improve weak classifiers by learning from their mistakes [10, 13]. Due to its effective classification performance and the interpretability of its inference process, AdaBoost is utilized as the classifier for anxiety screening.

Figure 7 depicts the anxiety inference process. The feature subset produced by the feature selection method determines the selected features. The data corresponding to these features are divided into training and test sets. The training set is used to train the classifier, and the trained classifier assesses the feature subset based on its predictions on the test set. Each time a feature subset is evaluated, the number of evaluations is increased accordingly, i.e., \(Eve=Eve+1\). If the total number of evaluations reaches the predefined maximum number of evaluations, i.e., \(Eve>MaxEve\), the selected features and the trained classifier are used for anxiety inference. Otherwise, the feature selection process searches for a new feature subset.
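A minimal scikit-learn sketch of the fitness evaluation in Equations (9)-(14) is given below. The train/test protocol and the AdaBoost hyperparameters are assumptions, not the authors' exact settings; the dimensionality constraint of Eq. (9) is assumed to be checked separately, as in the feature-masking sketch above.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

def fitness(y_true, y_pred, y_score):
    """Objective of Eq. (9): the negative sum of AUC and the five
    confusion-matrix metrics of Eqs. (10)-(14)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    acc = (tp + tn) / (tp + tn + fp + fn)
    pre = tp / (tp + fp) if tp + fp else 0.0
    sen = tp / (tp + fn) if tp + fn else 0.0
    spe = tn / (tn + fp) if tn + fp else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else 0.0
    auc = roc_auc_score(y_true, y_score)
    return -(auc + acc + pre + sen + f1 + spe)

def evaluate_subset(X_train, y_train, X_test, y_test):
    """Train AdaBoost on a candidate feature subset and return its fitness."""
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    y_score = clf.predict_proba(X_test)[:, 1]
    return fitness(y_test, y_pred, y_score), clf
```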
First, seafarers need to fill in a questionnaire. As shown in Table II, the content of the questionnaires includes personality traits [54], poor sleep quality (leading to fatigue) [45], bad emotional state, attitude to life [58], family relationship, social support [60], etc. After completing the questionnaires, the seafarers are asked additional questions on their work status and their relationships with family and friends. The seafarers' responses are captured on camera by their phones. Each video capture lasts for 30 seconds. The smartphone concurrently records the audio data while recording the video. The video frame rate is 25 FPS and the audio sampling rate is 22.05 kHz. The video has a 480 \(\times\) 480 pixel resolution. The average, maximum, and minimum audio durations are 29.31 seconds, 30.44 seconds, and 28.96 seconds, respectively. When responding to the inquiries, the seafarer's face is kept as visible as possible on the recorded screen, as shown on the right side of the interface in Figure 8. One frame per second is taken to assess the acquired video's quality before the video is transferred to the server.

Fig. 8: The interface of seafarers' physical and mental health assessment.

The video is uploaded to the server if the rate of faces being detected in it exceeds 90%. Otherwise, it needs to be captured again. In the experiments, all seafarers taking part in the physical and mental health evaluation were male. The age range of the seafarers was 19 to 58. There were 2 seafarers aged between 18 and 20, 167 aged between 20 and 40, and 20 aged between 40 and 60; the average age was 31. The age information of the remaining seafarers was not recorded due to errors. Data were collected from June 2020 to June 2021. Data from a total of 227 seafarers were recorded: 189 of them had no anxiety, 33 had mild anxiety, and 5 had moderate anxiety. In other words, 189 people were labeled "anxiety-free" and the remaining 38 people were labeled "anxiety". The GAD-7 [6] was used to measure seafarers' anxiety levels. Additionally, psychiatrists were invited to validate the outcomes of the GAD-7 scale that the seafarers had completed.

### _Performance Results_

Our proposed MMD-AS framework consists of the dimension reduction component "1DCNN+GRU+CNN\({}_{Text}\)", the feature selection component "IFA", and the anxiety inference component "AdaBoost". Table IV shows the performance results of different methods for the different components. The performance is computed using Average (Avg) = (Acc + AUC + Pre + Sen + F1 + Spe)/6. \(\Delta\)Avg shows the average performance difference of a method when compared to the proposed MMD-AS framework. Overall, the proposed MMD-AS has achieved the best performance with Avg = 97.55%.

#### 4.2.1 Performance on Dimension Reduction Methods

The dimension reduction method of the proposed MMD-AS framework is "1DCNN+GRU+CNN\({}_{Text}\)". The methods used for comparison with the framework's dimension reduction method include "1DCNN+LSTM+CNN\({}_{Text}\)" (M1), "LSTM+CNN\({}_{Text}\)" (M2) [36], "1DCNN+CNN\({}_{Text}\)" (M3), and Principal Component Analysis (PCA) [35] (M4). The experiments are conducted by using the same feature selection component "IFA" and anxiety inference component "AdaBoost" [10, 13] of the proposed MMD-AS framework with the different dimension reduction methods.
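To make the time-series branch of this comparison concrete, a minimal sketch of a "1DCNN+GRU"-style encoder is given below; the layer sizes, the single convolution block, and the use of PyTorch are illustrative assumptions and not the exact architecture used in the framework.

```python
import torch
import torch.nn as nn

class Conv1dGRUEncoder(nn.Module):
    """Sketch of a 1DCNN+GRU time-series encoder used for dimension reduction."""
    def __init__(self, in_channels, conv_channels=32, hidden_size=64, out_dim=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(conv_channels, hidden_size, batch_first=True)
        self.proj = nn.Linear(hidden_size, out_dim)

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.conv(x)                     # (batch, conv_channels, time/2)
        h = h.transpose(1, 2)                # (batch, time/2, conv_channels)
        _, last = self.gru(h)                # last hidden state: (1, batch, hidden_size)
        return self.proj(last.squeeze(0))    # low-dimensional feature: (batch, out_dim)

# Hypothetical usage: encode a batch of 3-channel signals of length 256.
feats = Conv1dGRUEncoder(in_channels=3)(torch.randn(8, 3, 256))
print(feats.shape)   # torch.Size([8, 16])
```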
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Different components in the methods} & \multirow{2}{*}{Avg(\%)} & \multirow{2}{*}{\(\Delta\)Avg(\%)} \\ \cline{2-4} & Dimension Reduction & Feature Selection & Anxiety Inference & & \\ \hline \hline **MMD-AS (ours)** & 1DCNN+GRU+CNN\({}_{Text}\) & IFA & AdaBoost & **97.55** & \(\cdots\) \\ \hline M1 & 1DCNN+LSTM+CNN\({}_{Text}\) & IFA & AdaBoost & 96.71 & -0.84 \\ M2 & LSTM+CNN\({}_{Text}\) & IFA & AdaBoost & 96.27 & -1.28 \\ M3 & 1DCNN+CNN\({}_{Text}\) & IFA & AdaBoost & 97.01 & -0.54 \\ M4 & PCA & IFA & AdaBoost & 58.27 & -39.28 \\ \hline M5 & 1DCNN+GRU+CNN\({}_{Text}\) & FA & AdaBoost & 97.10 & -0.45 \\ M6 & 1DCNN+GRU+CNN\({}_{Text}\) & BA & AdaBoost & 97.36 & -0.19 \\ M7 & 1DCNN+GRU+CNN\({}_{Text}\) & PSO & AdaBoost & 92.23 & -5.32 \\ M8 & 1DCNN+GRU+CNN\({}_{Text}\) & SKB & AdaBoost & 95.30 & -2.25 \\ \hline M9 & 1DCNN+GRU+CNN\({}_{Text}\) & IFA & DT & 94.83 & -2.72 \\ M10 & 1DCNN+GRU+CNN\({}_{Text}\) & IFA & RF & 96.75 & -0.80 \\ M11 & 1DCNN+GRU+CNN\({}_{Text}\) & IFA & LR & 94.08 & -3.47 \\ M12 & 1DCNN+GRU+CNN\({}_{Text}\) & IFA & KNN & 80.03 & -17.52 \\ M13 & 1DCNN+GRU+CNN\({}_{Text}\) & IFA & SVM & 92.43 & -5.12 \\ M14 & 1DCNN+GRU+CNN\({}_{Text}\) & IFA & MLP & 95.05 & -2.50 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Performance comparison (%) of different methods.

MMD-AS with the dimension reduction method "1DCNN+GRU+CNN\({}_{Text}\)" has achieved the best performance, with performance improvements of 0.84%, 1.28%, 0.54%, and 39.28%, respectively, when compared with M1 to M4. In particular, M4, which uses PCA for dimension reduction, performs much worse than the methods with deep learning-based dimension reduction, such as "1DCNN+GRU+CNN\({}_{Text}\)" (MMD-AS), "LSTM+CNN\({}_{Text}\)" (M2), and "1DCNN+CNN\({}_{Text}\)" (M3). The reason for these performance differences is that the dimension reduction methods in MMD-AS and M1 to M3 are deep learning networks, which can extract time-series features from high-dimensional data and effectively reduce the dimensionality of the original data, thereby improving the performance.

#### 4.2.2 Performance on Feature Selection Methods

The feature selection method of the proposed MMD-AS framework is IFA. The methods used for comparison with the framework's feature selection method include the Fireworks Algorithm (FA) [62] (M5), the Bat Algorithm (BA) [65] (M6), Particle Swarm Optimization (PSO) [66] (M7), and Selecting \(K\)-Best (SKB) [67] (M8). The experiments are conducted by using the same dimension reduction component "1DCNN+GRU+CNN\({}_{Text}\)" and anxiety inference component "AdaBoost" of the proposed MMD-AS framework with the different feature selection methods. The proposed MMD-AS framework with the feature selection method "IFA" has achieved the best performance, with performance improvements of 0.45%, 0.19%, 5.32%, and 2.25%, respectively, when compared with M5 to M8. The IFA in the proposed MMD-AS framework has the improved explosion radius, which offers better local search capability to guide the fireworks population towards a better feature subset and to reduce the noise in the features. Therefore, the MMD-AS's IFA algorithm outperforms the Fireworks Algorithm (M5) by 0.44%.
However, PSO (M7) and SKB (M8) perform quite poorly when compared with the other feature selection methods such as IFA, FA, and BA. The main reason for the poor performance of PSO (M7) is that PSO's search capability is not as good as that of the other swarm intelligence algorithms, such as IFA, FA, and BA. In addition, as features with small variance may contain important information that distinguishes samples, SKB's filtering criterion may cause these features to be filtered out, which may be the reason for the poor performance of SKB (M8).

#### 4.2.3 Performance on Anxiety Inference Methods

The anxiety inference component of the proposed MMD-AS framework is AdaBoost. The methods used for comparison with the framework's anxiety inference method include Decision Tree (DT) [12] (M9), Random Forest (RF) [39] (M10), Logistic Regression (LR) [10] (M11), K-Nearest Neighbors (KNN) [27] (M12), Support Vector Machines (SVM) [39] (M13), and Multilayer Perceptron (MLP) [68] (M14). The experiments are conducted by using the same dimension reduction component "1DCNN+GRU+CNN\({}_{Text}\)" and feature selection component IFA of the proposed MMD-AS framework with the different anxiety inference methods. The proposed MMD-AS framework with the anxiety inference method "AdaBoost" has achieved the best performance, with performance improvements of 2.72%, 0.80%, 3.47%, 17.52%, 5.12%, and 2.50%, respectively, when compared with M9 to M14. Since AdaBoost is an ensemble learning technique that uses an iterative process to improve weak classifiers by learning from their mistakes [10, 13], it can learn features that are conducive to anxiety inference.

### _Ablation Study_

We have conducted ablation experiments to evaluate the effectiveness of each component in our proposed MMD-AS framework: (1) M15 is MMD-AS without feature selection. (2) M16 is MMD-AS without feature selection and anxiety inference. As M16 only has the dimension reduction component "1DCNN+GRU+CNN\({}_{Text}\)", it cannot perform classification, so we add a fully connected layer after this component to give M16 a classification function. (3) M17 is MMD-AS without dimension reduction and feature selection. Table V shows the results of the ablation experiments. We have the following observations. First, MMD-AS outperforms M15 by 5.65%, which demonstrates the capabilities of the feature selection method IFA in terms of feature selection and feature denoising. Second, MMD-AS outperforms M16 by 3.56%, which shows that the components IFA and AdaBoost improve the model's performance. Third, MMD-AS outperforms M17 by 8.25%, which shows that the components "1DCNN+GRU+CNN\({}_{Text}\)" and IFA can improve the model's performance. Overall, the different components of the proposed MMD-AS framework are all important for the framework to achieve the best performance.

### _Analysis on Feature Selection_

Feature importance scores are used to indicate which features are useful for anxiety screening. When the feature selection algorithm selects a feature of a certain dimension in the feature vector \(F\), the importance score of that feature is increased by one. Figures 9(a) to 9(d) show the ranking produced by our proposed MMD-AS framework of the importance scores of the different types of features, such as iPPG features (including HR and RR features), BF, AF, TF, and QF, for anxiety screening.
In Figure 9(a), the importance scores of the QF and BF features are ranked the highest and the second highest among the five categories of features, respectively. Figure 9(b) shows the feature importance scores of physiological representations in anxiety screening, such as audio features and iPPG features containing heart rate and respiration rate. Audio features, such as Prosody, fundamental frequency features (including F0 and F01), Pitch Perturbation Quotient (PPQ), Jitter, and Amplitude Perturbation Quotient (APQ), play a more important role in anxiety screening; their feature importance scores are all greater than 150. The iPPG features from frequency domain signals, including iPPG\({}_{Fore}^{FP}\) and iPPG\({}_{Nose}^{FP}\), are more important for anxiety screening. Compared with the iPPG features from the nose area, the iPPG features from the forehead are more important for anxiety screening, since the forehead is dense with blood vessels [42]. Figure 9(c) demonstrates that anxiety screening is more heavily influenced by characteristics in the chin (e.g. AU17), eye (e.g. AU02, AU05, AU01, AU45, AU04), and lip (e.g. AU10, AU23, AU20) areas. The scores of these features are mostly distributed in the range of [160, 206]. Figure 9(d) shows the contribution of text and questionnaire features to anxiety screening. The feature importance scores of the SSRS, MFI, PF, HPLP, and PSQI features rank among the top in anxiety screening; the importance scores of these features are all greater than 1400.

\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Different components in the framework} & \multirow{2}{*}{Avg(\%)} & \multirow{2}{*}{\(\Delta\)Avg(\%)} \\ \cline{2-4} & Dimension Reduction & Feature Selection & Anxiety Inference & & \\ \hline **MMD-AS (ours)** & 1DCNN+GRU+CNN\({}_{Text}\) & IFA & AdaBoost & **97.55** & - \\ M15 & 1DCNN+GRU+CNN\({}_{Text}\) & - & AdaBoost & 91.90 & -5.65 \\ M16 & 1DCNN+GRU+CNN\({}_{Text}\) & - & - & 93.99 & -3.56 \\ M17 & - & - & AdaBoost & 89.30 & -8.25 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Experimental results of the ablation study of the MMD-AS framework.

Due to the complex etiology and long development cycle of anxiety, its diagnosis usually requires a combination of multidisciplinary knowledge, such as biomedicine, psychology, and social medicine [17]. In clinical practice, multimodal data are essential for anxiety screening [19, 18]. Therefore, based on the results of the feature analysis, we suggest that screening patients for anxiety should be combined with multimodal information. In addition, behavioral features (such as those in the chin, eye, and lip areas), physiological signals from the frequency domain (such as heart rate and respiration rate), audio characteristics (such as Prosody, fundamental frequency features, PPQ, Jitter, and APQ), social support, fatigue status, sleep quality [45], lifestyle, and other important modal characteristics can be used as indicators for clinical practice.

## 5 Limitations

There are still some shortcomings with our proposed framework. First, our dataset has some limitations, such as imbalanced data and the small sample size. Due to the inherent characteristics of the seafarers' profession, our dataset lacks health data related to female seafarers. Based on the results of health examinations, shipping companies have restricted seafarers with severe mental illness from boarding the ships for work, which may lead to the lack of data samples of severe anxiety in our dataset.
Our framework can only learn from the existing samples, and it cannot capture patterns that are present only in the missing samples. Our framework is driven by multimodal data, which may result in our model not being able to effectively screen people suffering from anxiety when faced with certain new samples, which in turn limits its generalization and application. Second, in the standard clinical diagnostic process, physicians are required to conduct a structured interview with the patient to further determine the patient's mental health status. Although the GAD-7 is used to label seafarers' anxiety levels, the lack of structured interviews with seafarers may lead to misdiagnosis or underdiagnosis of a small number of seafarers. However, multimodal data can provide physicians with more objective evidence when screening for anxiety in seafarers. Psychologists re-examine the psychological scales completed by seafarers, which reduces the probability of misdiagnosis or underdiagnosis of seafarers. Third, the important features can serve as objective evidence for anxiety reasoning, which needs more cohort experiments to verify. The factors leading to the formation of anxiety are multifaceted and involve biological, medical, and sociological aspects, so it is necessary to further validate this conclusion by increasing the data sample size and conducting cohort experiments. Nonetheless, these important characteristics are helpful for designing cohort research experiments to investigate the mechanisms of anxiety development.

Fig. 9: The importance scores ranking of the proposed MMD-AS on different features.

Because of the above issues, we will focus on the following three areas in our future research. First, knowledge from a range of fields, such as biomedicine, psychology, and social medicine, needs to be integrated into the proposed framework to enhance interpretability. By utilizing wearable and noncontact technologies, we will focus on extracting features from different dimensions, such as physiological features (heart rate variability, changes in facial temperature, the distribution of facial temperature, and audio features), behavioral features, family and social support, sleep quality, and fatigue status. By combining these characteristics with clinical expert knowledge, more scientific evidence useful for anxiety inference can be provided. In addition, more objective evidence and expert domain knowledge will be integrated into our framework to assist primary care physicians in anxiety screening. Second, an increased data sample size allows for a more comprehensive data distribution, which can enhance the robustness of the anxiety screening framework. The results of the important-feature analysis used for anxiety inference can be used to design cohort research experiments, which in turn assist physicians in studying how anxiety develops. Finally, our framework offers the advantages of low cost, ease of use, noncontact operation, interpretability, and high accuracy. In addition, our framework enables anxiety screening of seafarers by simply analyzing multimodal health data via smartphones, which is invaluable in future telemedicine scenarios. Therefore, our framework will be extended and applied to anxiety detection for large populations in scenarios where medical resources are limited, such as health coverage for seafarers on long voyages, or in remote areas.
## 6 Conclusion

Existing methods for anxiety screening have some drawbacks, such as the inability to handle the non-differentiable problem of searching over feature combinations and the inability to meet the requirements of scenarios with limited medical resources. To overcome these drawbacks, we have proposed a multimodal data-driven framework called MMD-AS for seafarers' anxiety screening in this paper. The results of the comparative experiments and the ablation studies of the different components in the framework show that our proposed framework achieves the best performance among the compared methods, and that each component of the proposed framework is important for the performance improvement. In addition, owing to its low cost, noncontact and convenient operation, the proposed framework and the suggested indicators for anxiety screening have practical guiding value for scenarios with limited medical resources, such as the health protection of seafarers on long-distance voyages and anxiety screening in remote areas. In future work, we will collect relevant health data from more people and apply the proposed framework to anxiety screening in clinical practice, which can provide more detailed and scientific evidence for anxiety screening and help study the process of anxiety development.

## Acknowledgments

The authors would like to thank Professors Wei Zhang and Yuchen Li from West China Hospital of Sichuan University for their guidance on experimental design and data collection. This work was supported in part by the 2020 Science and Technology Project of the Maritime Safety Administration of the Ministry of Transport of China (No. 0745-2041CCIEC016) and the National Natural Science Foundation of China (No. 91846107, 72293581, 72293580).
2310.11736
Kernel Learning in Ridge Regression "Automatically" Yields Exact Low Rank Solution
We consider kernels of the form $(x,x') \mapsto \phi(\|x-x'\|^2_\Sigma)$ parametrized by $\Sigma$. For such kernels, we study a variant of the kernel ridge regression problem which simultaneously optimizes the prediction function and the parameter $\Sigma$ of the reproducing kernel Hilbert space. The eigenspace of the $\Sigma$ learned from this kernel ridge regression problem can inform us which directions in covariate space are important for prediction. Assuming that the covariates have nonzero explanatory power for the response only through a low dimensional subspace (central mean subspace), we find that the global minimizer of the finite sample kernel learning objective is also low rank with high probability. More precisely, the rank of the minimizing $\Sigma$ is with high probability bounded by the dimension of the central mean subspace. This phenomenon is interesting because the low rankness property is achieved without using any explicit regularization of $\Sigma$, e.g., nuclear norm penalization. Our theory makes correspondence between the observed phenomenon and the notion of low rank set identifiability from the optimization literature. The low rankness property of the finite sample solutions exists because the population kernel learning objective grows "sharply" when moving away from its minimizers in any direction perpendicular to the central mean subspace.
Yunlu Chen, Yang Li, Keli Liu, Feng Ruan
2023-10-18T06:15:35Z
http://arxiv.org/abs/2310.11736v2
# Kernel Learning in Ridge Regression "Automatically" Yields Exact Low Rank Solution ###### Abstract We consider kernels of the form \((x,x^{\prime})\mapsto\phi(\left\|x-x^{\prime}\right\|_{\Sigma}^{2})\) parametrized by \(\Sigma\). For such kernels, we study a variant of the kernel ridge regression problem which simultaneously optimizes the prediction function and the parameter \(\Sigma\) of the reproducing kernel Hilbert space. The eigenspace of the \(\Sigma\) learned from this kernel ridge regression problem can inform us which directions in covariate space are important for prediction. Assuming that the covariates have nonzero explanatory power for the response only through a low dimensional subspace (central mean subspace), we find that the global minimizer of the finite sample kernel learning objective is also low rank with high probability. More precisely, the rank of the minimizing \(\Sigma\) is with high probability bounded by the dimension of the central mean subspace. This phenomenon is interesting because the low rankness property is achieved without using any explicit regularization of \(\Sigma\), e.g., nuclear norm penalization. Our theory makes correspondence between the observed phenomenon and the notion of low rank set identifiability from the optimization literature. The low rankness property of the finite sample solutions exists because the population kernel learning objective grows "sharply" when moving away from its minimizers in any direction perpendicular to the central mean subspace. ## 1 Introduction In statistics and machine learning, an evolving body of literature increasingly advocates for the use of multiple or parameterized kernels in kernel methods. This approach of optimizing over a range of kernels frequently results in superior statistical performance when compared to methods that utilize only a single kernel. Despite its growing popularity among practitioners, the optimized kernel's statistical properties have not been explored in depth. We consider kernels parameterized by \(\Sigma\) \[(x,x^{\prime})\mapsto k_{\Sigma}(x,x^{\prime})=\phi(\left\|x-x^{\prime}\right\| _{\Sigma}^{2}) \tag{1.1}\] where \(\left\|x-x^{\prime}\right\|_{\Sigma}=\sqrt{(x-x^{\prime})^{T}\Sigma(x-x^{ \prime})}\), and \(\phi\) is a real-valued function so that \(k_{\Sigma}\) is a kernel for every positive semidefinite \(\Sigma\). An example is the Gaussian kernel where \(\phi(z)=\exp(-z)\). Rewriting \(\Sigma=UU^{T}\), we see that \(\left\|x-x^{\prime}\right\|_{\Sigma}=\left\|U^{T}(x-x^{\prime})\right\|_{2}\) so that optimizing the kernel over \(\Sigma\) is equivalent to finding an optimal linear transformation, which maps \(x\) to \(U^{T}x\), before utilizing a fixed kernel \((x,x^{\prime})\mapsto\phi(\left\|x-x^{\prime}\right\|_{2}^{2})\) on the transformed input. In this paper, we identify a previously unnoticed phenomenon in the literature that occurs when optimizing such kernels over \(\Sigma\) in the context of nonparametric regression. 
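As a quick numerical illustration of this reparametrization, the following sketch uses the Gaussian choice \(\phi(z)=\exp(-z)\) (only one admissible example of \(\phi\)) and checks that evaluating \(k_{\Sigma}\) with \(\Sigma=UU^{T}\) agrees with applying the fixed kernel to the transformed inputs \(U^{T}x\).

```python
import numpy as np

rng = np.random.default_rng(0)
p, r = 5, 2
U = rng.standard_normal((p, r))
Sigma = U @ U.T                      # Sigma = U U^T, positive semidefinite of rank r

x, xp = rng.standard_normal(p), rng.standard_normal(p)

# phi(z) = exp(-z): the Gaussian kernel, one admissible choice of phi.
k_sigma = np.exp(-(x - xp) @ Sigma @ (x - xp))      # phi(||x - x'||_Sigma^2)
k_fixed = np.exp(-np.sum((U.T @ (x - xp)) ** 2))    # phi(||U^T(x - x')||_2^2)

assert np.isclose(k_sigma, k_fixed)
```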
Traditionally, a nonparametric relationship between input \(X\) and output \(Y\) can be learned by solving the following kernel ridge regression (KRR) problem [14]: \[\underset{f,\gamma}{\text{minimize}}\,\frac{1}{2}\mathbb{E}_{n}\left[(Y-f(X)- \gamma)^{2}\right]+\frac{\lambda}{2}\,\|f\|_{\mathcal{H}}^{2}\] where \(\mathbb{P}_{n}\) is the empirical distribution of the data \((X,Y)\) from a population distribution \(\mathbb{P}\), \(\lambda>0\) is a ridge regularization parameter, \(f\) is the prediction function, \(\mathcal{H}\) is a reproducing kernel Hilbert space (RKHS) predefined by the practitioner, and \(\gamma\) is an intercept. We study a variant of KRR where we also optimize the reproducing kernel Hilbert space \(\mathcal{H}_{\Sigma}\) whose kernel is given by \(k_{\Sigma}\): \[\underset{f,\gamma,\Sigma}{\text{minimize}}\quad\frac{1}{2}\mathbb{E}_{n} \left[(Y-f(X)-\gamma)^{2}\right]+\frac{\lambda}{2}\,\|f\|_{\mathcal{H}_{ \Sigma}}^{2}\quad\text{ subject to }\quad\Sigma\succeq 0 \tag{1.2}\] where \(\Sigma\succeq 0\) requires the matrix \(\Sigma\) to be positive semidefinite. Incorporating such a parameterized kernel into KRR allows adaptive learning of a distance metric, enhancing the model's performance by emphasizing relevant data features. This not only leads to improved generalization and accuracy but also enhances the interpretability of the model (see Section 1.4 for more background details). ### A Puzzling Numerical Experiment An interesting phenomenon occurs when solving the KRR problem (1.2). Below we illustrate it via a simulation experiment though we have observed the phenomenon on real data as well. In the simulation experiment we collect \(n=300\) samples, each having a \(50\)-dimensional feature vector \(X\) from an isotropic normal distribution in \(\mathbb{R}^{50}\). The response variable, \(Y\), follows a simple relationship with \(X\): \(Y=0.1(X_{1}+X_{2}+X_{3})^{3}+\tanh(X_{1}+X_{3}+X_{5})+\epsilon\) for some independent noise term \(\epsilon\sim\mathsf{N}(0,\sigma^{2})\) with \(\sigma=0.1\). Here \((X_{1},X_{2},\ldots,X_{50})\) is the coordinate representation of \(X\). Apart from the noise \(\epsilon\), \(Y\) given \(X\) is a combination of two simple functions of linear projections of \(X\): cube and hyperbolic tangent. We apply gradient descent to minimize the kernel learning objective (1.2) and obtain as output an optimal \(50\times 50\) matrix \(\Sigma_{n}^{*}\) from the solution tuple \((f_{n}^{*},\gamma_{n}^{*},\Sigma_{n}^{*})\) (further algorithmic details in Section 7). Surprisingly, across repeated trials, this \(50\times 50\) matrix \(\Sigma_{n}^{*}\) consistently demonstrates a low rank property--\(\text{rank}(\Sigma_{n}^{*})\leq 2\)--for a wide range of the ridge parameter \(\lambda\) (Figure 1). This is intriguing because our experimental design does not include any explicit regularization that promotes low-rankness on the parameter \(\Sigma\). For instance, our objective in (1.2) lacks nuclear norm penalties, and our gradient descent algorithm is carefully tuned towards convergence (so no early stopping is used). Yet, the emerging matrix \(\Sigma_{n}^{*}\), although noisy, hints at a remarkable low-rank inclination: \(\text{rank}(\Sigma_{n}^{*})\leq 2\), a phenomenon observed across a wide spectrum of \(\lambda\) in over 100 repeated experiments. 
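For readers who want to reproduce the setup, the following minimal sketch generates the synthetic data described above and evaluates the objective (1.2) at a fixed \(\Sigma\) for the Gaussian choice \(\phi(z)=\exp(-z)\), using the representer theorem to reduce the inner minimization over \((f,\gamma)\) to a linear system. The gradient-descent loop over \(\Sigma\), its initialization, and the stopping rule are described in Section 7 and are not reproduced here; the \(\lambda\) value and the initial \(\Sigma\) in the snippet are arbitrary. Note that for this design the central mean subspace is spanned by the two directions \(e_{1}+e_{2}+e_{3}\) and \(e_{1}+e_{3}+e_{5}\), so the observed bound \(\text{rank}(\Sigma_{n}^{*})\leq 2\) matches its dimension.

```python
import numpy as np

def gaussian_kernel(X1, X2, Sigma):
    """k_Sigma(x, x') = exp(-(x - x')^T Sigma (x - x')), i.e. phi(z) = exp(-z)."""
    D1 = np.einsum("ij,jk,ik->i", X1, Sigma, X1)
    D2 = np.einsum("ij,jk,ik->i", X2, Sigma, X2)
    sq = D1[:, None] + D2[None, :] - 2 * (X1 @ Sigma @ X2.T)
    return np.exp(-np.maximum(sq, 0.0))

def objective(Sigma, X, y, lam):
    """Evaluate the kernel ridge objective (1.2) at its optimal (f, gamma) for fixed Sigma,
    writing f(.) = sum_j alpha_j k_Sigma(., x_j) and solving the stationarity equations."""
    n = len(y)
    K = gaussian_kernel(X, X, Sigma)
    ones = np.ones(n)
    A = np.block([[K + n * lam * np.eye(n), ones[:, None]],
                  [(ones @ K)[None, :], np.array([[float(n)]])]])
    b = np.concatenate([y, [ones @ y]])
    sol = np.linalg.solve(A, b)
    alpha, gamma = sol[:n], sol[n]
    resid = y - K @ alpha - gamma
    return 0.5 * np.mean(resid ** 2) + 0.5 * lam * alpha @ K @ alpha

# Synthetic data from Section 1.1: n = 300, p = 50, two-dimensional central mean subspace.
rng = np.random.default_rng(0)
n, p, sigma_noise = 300, 50, 0.1
X = rng.standard_normal((n, p))
y = 0.1 * (X[:, 0] + X[:, 1] + X[:, 2]) ** 3 \
    + np.tanh(X[:, 0] + X[:, 2] + X[:, 4]) \
    + sigma_noise * rng.standard_normal(n)

print(objective(np.eye(p) / p, X, y, lam=0.1))   # value at an arbitrary full-rank Sigma
```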
This low-rank property of the \(\Sigma_{n}^{*}\) matrix is neither confined to a specific choice of \(X\) distribution nor to a specific choice of functional form for the conditional mean of \(Y\) given \(X\). We performed a series of experiments of this type and we consistently found this low-rank property in \(\Sigma_{n}^{*}\) across a broad spectrum of scenarios. The collection of our experiments and findings will be documented in Section 7. ### Main Results This paper seeks a quantitative understanding of why the kernel learning objective produces exactly low-rank solutions even in a finite sample setting. We explore * For what data distribution is the solution, \(\Sigma\) of (1.2), expected to be low rank? * Is this phenomenon observed with different kernels, such as \(k_{\Sigma}(x,x^{\prime})=x^{T}\Sigma x^{\prime}\)? To answer these questions, we need to develop a mathematical understanding of the kernel learning objective. For technical reasons, our theoretical findings only apply to a variant of the objective given by \[\underset{f,\gamma,\Sigma}{\text{minimize}}\quad\frac{1}{2}\mathbb{E}_{n} \left[(Y-f(X)-\gamma)^{2}\right]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H }_{\Sigma}}^{2}\quad\text{ subject to }\quad\Sigma\succeq 0,\;\;\left|\!\left| \Sigma\right|\!\right|\leq M. \tag{1.3}\] Here, we introduce a constraint, \(\left|\!\left|\Sigma\right|\!\right|\leq M\), where \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|\) represents a unitarily invariant norm on \(p\)-by-\(p\) matrices (e.g., operator norm, Frobenius norm, Schatten norm). The addition of this constraint ensures the existence of a minimizer. Nonetheless, our theoretical findings do not depend on the specific choice of \(M\) or the specific norm \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|\) being added. #### 1.2.1 Data Distribution and Regularity Assumptions Our main results rely on a few assumptions on the data distributions as well as a mild regularity assumption on the real-valued function \(\phi\) in (1.1). Throughout this paper, we assume that the data are independently and identically drawn from \((X,Y)\sim\mathbb{P}\) with \(\mathbb{E}[\left\|X\right\|_{2}^{4}]<\infty\) and \(\mathbb{E}[Y^{2}]<\infty\). Role of Dimension ReductionOur first assumption is that a low-dimension linear projection of \(X\) captures all information of \(Y\) from the conditional mean \(\mathbb{E}[Y|X]\). Our hope is, by optimizing \(\Sigma\) in the kernel, it learns the projection. Hence, the belief in such a low-dimensional model holds (or approximately holds) is fundamental, and is the main reason why we may hope \(\Sigma\) to be low rank. To formalize this idea, we use dimension reduction concepts from statistics [11]. Specifically, a subspace \(S\) is called a _mean dimension reduction_ subspace if \(\mathbb{E}[Y|X]=\mathbb{E}[Y|\Pi_{S}X]\), where \(\Pi_{S}\) denotes the Euclidean projection onto a subspace \(S\) of the ambient space \(\mathbb{R}^{p}\). We then introduce the smallest such subspace, termed the _central mean subspace_ in the literature [10]. 
**Definition 1.1** (Central Mean Subspace [10, Definition 2]).: _Let \(S_{*}\) denote the intersection over all mean dimension-reduction subspaces:_ \[S_{*}=\cap\{S:\mathbb{E}[Y|X]=\mathbb{E}[Y|\Pi_{S}X]\}.\] _If \(S_{*}\) itself is a mean dimension-reduction subspace, it is then called the central mean subspace._ **Assumption 1** (Existence of Central Mean Subspace).: _The central mean subspace \(S_{*}\) exists._ Note that the central mean subspace does not always exist, because the intersection of dimension reduction subspaces is not necessarily a dimension reduction subspace. Nonetheless, the central mean subspace does exist under very mild distributional conditions on \(X\), e.g., when \(X\) is a continuous random variable with convex, open support or has Lebesgue density on \(\mathbb{R}^{p}\)[10]. Predictive PowerOur second assumption is mild: \(X\) has predictive power for \(Y\). Interestingly, this seemingly mild assumption is necessary for the phenomenon to occur. **Assumption 2** (\(X\) has predictive power of \(Y\)).: \(\operatorname{Var}(\mathbb{E}[Y|X])\neq 0\)_._ Covariate's Dependence StructureOur third assumption is about the dependence structure of the covariate \(X\). We'll denote the orthonormal complement of the subspace \(S_{*}\) as \(S_{*}^{\perp}\). **Assumption 3** (Dependence Structure of \(X\)).: _(a) \(\operatorname{Cov}(X)\) has full rank, and (b) \(\Pi_{S_{*}}X\) is independent of \(\Pi_{S_{*}^{\perp}}X\)._ Part (a) is mild. Part (b) is stylized. In fact, our extensive numerical experiments suggest the phenomenon holds in many situations where Part (b) is violated. The independence assumption only reflects the limit of what we are able to prove regarding the observed phenomenon. We shall make further comments on this assumption, and make comparisons to existent results in the literature after we state our main theorems (Section 1.5). Function \(\phi\)'s RegularityFinally, our result requires one more regularity condition pertaining to the real-valued function \(\phi\) that defines the kernel \(k_{\Sigma}\). **Assumption 4** (Regularity of \(\phi\)).: \(\phi(z)=\int e^{-tz}\mu(dt)\) _holds for every \(z\) where \(\operatorname{supp}(\mu)\subseteq[m_{\mu},\infty)\) and \(m_{\mu}\in(0,\infty)\). Moreover, \(\phi^{\prime}_{+}(0)=\lim_{z\to 0^{+}}(\phi(z)-\phi(0))/z\) exists._ The condition draws inspiration from Schoenberg's fundamental work [12], which characterizes all real-valued functions \(\phi\) ensuring \(k_{\Sigma}\) is a _positive definite_ kernel for every positive definite matrix \(\Sigma\) in every dimension \(p\). Such a function \(\phi\) must conform to \(\phi(z)=\int e^{-tz}\mu(dt)\) with \(\mu\) being a finite, nonnegative measure supported on \([0,\infty)\) but not concentrated at zero. The first part of the condition, thereby, can be viewed as a robust requirement of \(\mu\) not being a point mass at zero. The second part of the condition implies that \(\phi\) is differentiable on \((0,\infty)\), and right differentiable at \(0\). So this condition requires \(k_{\Sigma}\) to be differentiable. A common kernel that meets these two criteria is the Gaussian kernel. #### 1.2.2 Formal Statement Our first result (Theorem 1.1) says that at every one of its population minimizers, the kernel learning objective returns a \(\Sigma^{*}\) whose column space is contained in or equal to the central mean subspace \(S_{*}\). 
For clarity, the population objective is given by: \[\underset{f,\gamma,\Sigma}{\text{minimize}}\quad\frac{1}{2}\mathbb{E}\left[(Y-f(X)-\gamma)^{2}\right]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}\quad\text{ subject to }\quad\Sigma\succeq 0,\;\;\left|\!\left|\Sigma\right|\!\right|\leq M. \tag{1.4}\]

* (Constraint \(\Sigma:\Sigma\succeq 0\)). Some may contend that the positive semidefinite constraint \(\Sigma\succeq 0\), which forces the solution \(\Sigma_{n}^{*}\) to have non-negative eigenvalues (akin to eigenvalue thresholding), is serving as the explicit regularization that induces a low rank \(\Sigma_{n}^{*}\). Yet our result not only says that \(\Sigma_{n}^{*}\) is low-rank but also that its rank is bounded by \(\dim(S_{*})\). Note the positive semidefinite constraint alone would not lead to this sharper bound: see the next bullet point.
* (Role of \(k_{\Sigma}\)). In Section 6 and 7, we examine an alternative kernel objective. Instead of using the kernel as defined in equation (1.1), we consider kernels of the form \((x,x^{\prime})\mapsto\psi(x^{T}\Sigma x^{\prime})\) where \(\Sigma\) is positive semidefinite. Empirically, this change results in the absence of the low-rankness phenomenon observed in our earlier simulations. Section 1.3 clarifies the role of the kernel \(k_{\Sigma}\) defined in equation (1.1), showing how it interacts with our statistical modeling in a way that leads to the phenomenon.

### Technical Insight

This section elucidates our main idea behind proving Theorem 1.2. For space considerations, here we only discuss the case where \(\lambda\in(0,\lambda_{0}]\) for the \(\lambda_{0}\) defined in Theorem 1.1. Given Theorem 1.1, any population minimizer \(\Sigma^{*}\) must obey \(\operatorname{rank}(\Sigma^{*})=\dim(S_{*})\).
Our goal is to show that this low rankness property is preserved by the empirical minimizer \(\Sigma_{n}^{*}\): \[\lim_{n\to\infty}\mathbb{P}(\operatorname{rank}(\Sigma_{n}^{*})=\dim(S_{*}),\ \ \forall(\Sigma_{n}^{*},f_{n}^{*},\gamma_{n}^{*})\text{ minimizing (1.3)})=1.\]

**Definitions and Notation.** To understand this property of \(\Sigma_{n}^{*}\), we first introduce a function that takes partial minimization over the prediction function and the intercept \(\gamma\): \[J_{n}(\Sigma)=\min_{f,\gamma}\frac{1}{2}\mathbb{E}_{n}\left[(Y-f(X)-\gamma)^{2}\right]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}. \tag{1.5}\] Then the empirical minimizer \(\Sigma_{n}^{*}\) corresponds to a minimizer of \[\operatorname*{minimize}_{\Sigma}J_{n}(\Sigma)\quad\text{subject to}\quad\Sigma\succeq 0,\ \left\|\!\left|\Sigma\right|\!\right\|\leq M. \tag{1.6}\] Similarly, we define the population objective \(J(\Sigma)\) by replacing \(\mathbb{E}_{n}\) in equation (1.5) by \(\mathbb{E}\). Then the population minimizer \(\Sigma^{*}\) corresponds to a minimizer of an analogously defined problem: \[\operatorname*{minimize}_{\Sigma}J(\Sigma)\quad\text{subject to}\quad\Sigma\succeq 0,\ \left\|\!\left|\Sigma\right|\!\right\|\leq M. \tag{1.7}\] Let's define \[\mathcal{C}=\{\Sigma:\Sigma\succeq 0\}\ \text{ and }\ \mathcal{M}=\{\Sigma:\Sigma\succeq 0,\ \operatorname{rank}(\Sigma)=\dim(S_{*})\}. \tag{1.8}\] We use \(\mathcal{C}_{M}=\{\Sigma:\Sigma\succeq 0,\left\|\!\left|\Sigma\right|\!\right\|\leq M\}\) to denote the compact constraint set.

**Main Properties.** Our strategy relies on establishing two properties of \(J\) and \(J_{n}\).

* (Sharpness) For every minimizer \(\Sigma^{*}\) of the population objective \(J\), there is a \(\rho>0\) such that \[\lim_{t\to 0^{+}}\frac{1}{t}(J(\Sigma^{*}+tW)-J(\Sigma^{*}))\geq\rho\left\|\!\left|W\right|\!\right\| \tag{1.9}\] holds for every matrix \(W\) in the tangent cone of \(\mathcal{C}\) at \(\Sigma^{*}\) with \(\operatorname{col}(W)\subseteq\operatorname{col}(\Sigma^{*})^{\perp}\). Additionally, \(\nabla J\) exists and is continuous on \(\mathcal{C}\) (for gradient definition on \(\mathcal{C}\), see Definition 3.1). This property emphasizes the stability of \(\Sigma^{*}\) as a minimizer: small perturbations in certain directions \(W\) lead to an increase in the objective \(J\) locally at least linear in the size of \(W\). Note the considered perturbations \(W\) are those in the tangent cone of \(\mathcal{C}\) perpendicular to the set \(\mathcal{M}\)1. Technically, this linear increase of \(J\) when deviating away from \(\mathcal{M}\) holds not just at \(\Sigma^{*}\) but within a neighborhood of \(\Sigma^{*}\), thanks to the continuity of \(\nabla J\). Statistically, given \(\operatorname{col}(\Sigma^{*})=S_{*}\), this property says that perturbation of \(\Sigma^{*}\) in any direction \(W\) whose columns are orthogonal to the central mean subspace \(S_{*}\) would induce such a linear growth of the population objective \(J\). As a comparison, in classical quadratically mean differentiable models, where the true parameter is statistically identifiable and in the interior of the parameter space, the population objective (expected negative log-likelihood) has a local _quadratic_ growth away from its minimizer (the true model parameter), with the curvature given by the Fisher information [13].
Footnote 1: Note \(\mathcal{M}\) is a submanifold of the ambient space that consists of all symmetric matrices. The normal space at any \(\Sigma^{\prime}\in\mathcal{M}\) to the ambient space is the set of all symmetric \(\Sigma\) with \(\operatorname{col}(\Sigma)\subseteq\operatorname{col}(\Sigma^{\prime})^{\perp}\)[12].

* (Uniform Convergence) The value and gradient of the empirical objective \(J_{n}\) converge uniformly in probability to those of the population objective \(J\) on the constraint set \(\mathcal{C}_{M}\), i.e., \[\sup_{\Sigma\in\mathcal{C}_{M}}|J_{n}(\Sigma)-J(\Sigma)|=o_{P}(1),\qquad\sup_{\Sigma\in\mathcal{C}_{M}}|\!|\!|\nabla J_{n}(\Sigma)-\nabla J(\Sigma)|\!|\!|=o_{P}(1) \tag{1.10}\] as the sample size \(n\to\infty\). The \(o_{P}(1)\) stands for convergence to zero in probability.

**Main Argument.** With these properties in mind, our main argument, which is largely inspired by sensitivity analysis of minimizers' set identifiability in the optimization literature [15], is as follows.

* The uniform convergence property of objective values ensures that the empirical minimizers converge (in probability) to their population counterparts as \(n\to\infty\). Since all population minimizers fall within \(\mathcal{M}\), this implies that the empirical minimizers converge to \(\mathcal{M}\).
* To further show the empirical minimizers fall within \(\mathcal{M}\), the sharpness property becomes crucial. The sharpness property ensures the empirical objective, whose gradient is uniformly close to that of the population objective, is still with high probability sharp, i.e., increasing at least linearly when moving in normal directions away from \(\mathcal{M}\) near the population minimizers.

To summarize, while uniform convergence of objective values implies convergence of minimizers, the sharpness property offers the stability that preserves the minimizers' low rankness property--a special case of set identifiability in optimization--under perturbations.

**The role of kernel \(k_{\Sigma}\).** It's important to note that the sharpness property (1.9) is fundamentally tied to the kernel defined in (1.1). For kernel objectives that use kernels \((x,x^{\prime})\mapsto\psi(x^{T}\Sigma x^{\prime})\), the low rankness phenomenon does not occur and the objectives do not exhibit the sharpness property.

### Background: Kernel Learning

To place the results in context, we describe its genesis. A large body of research has emphasized the need to consider multiple kernels or parameterization of kernels in regression. The overarching consensus is that choosing the "right" kernel is instrumental for both predictive accuracy and data representation [1]. Among many, one prominent method of kernel parameterization is through a linear combination of fixed kernels, represented as \(k_{\eta}(x,x^{\prime})=\sum_{i=1}^{M}\eta_{i}k_{i}(x,x^{\prime})\)[1, 10]. The aim is to learn the nonnegative weights \(\eta\). Several studies show that this approach can enhance predictive performance on real datasets, e.g., [1, 2]. Nonetheless, the method often falls short in its interpretability. Since the original description of individual features, \(x\), is lost during kernel embedding \(k_{i}\), it becomes challenging to discern which specific components of our input data, \(x\), are the most influential in making predictions. On the other hand, there's also a growing interest in learning a scaling parameter \(\Sigma\) for kernels, where \(k_{\Sigma}(x,x^{\prime})=\phi(\|x-x^{\prime}\|_{\Sigma}^{2})\) as described in (1.1).
The primary interests of this approach, as pointed out by several studies, are twofold: the model's interpretability and its demonstrated effectiveness in predictions across a collection of benchmark real-world datasets. Among many work that considers learning such a kernel, early work by Vapnik and colleagues [21, 22], for instance, proposed a method equivalent to learning a diagonal matrix \(\Sigma\) in \(k_{\Sigma}\). This method aids in feature selection by removing redundant coordinates in \(x\). Subsequent research by Fukumizu, Bach, and Jordan [13, 14] pivoted towards learning a projection matrix \(\Sigma\) to assist in subspace dimension reduction. Besides enhancing interpretability by removing data redundancy, these models demonstrate good predictive performance in many fields, e.g., bioinformatics [21], and sensing and imaging [21]. Recent empirical endeavors have consistently explored learning a matrix \(\Sigma\) in such a kernel, with numerical findings suggesting that learning such a \(\Sigma\) in kernel ridge regressions can outperform random forest, multilayer neural networks and transformers, attaining the state-of-the-art predictive performance in a collection of benchmark datasets [14]. However, while empirical studies on learning such kernel \(k_{\Sigma}\) are abundant, theoretical studies on statistical properties of the solution matrix \(\Sigma\) have been relatively scarce. This is the realm where our contribution is mainly focused. A notable work related to us is the research by Fukumizu, Bach, and Jordan [14]. Focusing on the problem of sufficient dimension reduction, they studied the statistical properties of empirical minimizer \(\Sigma\) within their kernel ridge regression objective. Their main results showed that the column space of \(\Sigma\) converges to the target of inference, the central dimension reduction subspace. However, an inherent limitation of their study hinges on a significant assumption: that the dimension \(r\) of the central subspace is known. In fact, their algorithm uses this knowledge to enforce an explicit rank constraint \(\Sigma\in\Lambda_{r}\) in their kernel objective, where \(\Lambda_{r}=\{\Pi_{S}|\dim(S)=r\}\) represents the set of all projection matrices of rank \(r\). In contrast, our research does not require prior knowledge of the dimension of the central mean subspace. We do not place such a rank constraint \(\Sigma\in\Lambda_{r}\) in the objective. Our main result shows, under proper statistical modeling assumptions and choice of \(\lambda\), the empirical minimizer \(\Sigma\) adapts to the problem's low dimensional structure: \(\operatorname{rank}(\Sigma)=\dim(S_{*})\) simply holds with high probability. We then leverage the result \(\operatorname{rank}(\Sigma)=\dim(S_{*})\) to show the consistency of the column space of \(\Sigma\) for the central mean subspace \(S_{*}\). In the most recent development, Jordan and a subset of authors of the current paper delved into a scenario where \(\Sigma\) is diagonal [11]. Their findings revolve around projected gradient descent leading to sparse diagonal matrices. However, our focus diverges. We do not dwell on the algorithmic intricacies nor impose diagonal requirements on the matrix \(\Sigma\) in our objectives. The proof techniques are also very much different. ### Relation to Dimension Reduction Literature Theorem 1.2 and Corollary 5.1 demonstrate that our proposed kernel learning procedure can serve as an inferential tool for the central mean subspace. 
We will now explore its applicability and constraints. In the context of inference, Assumption 1 is essential. Assumption 2 can be checked statistically using goodness-of-fit test, e.g., [11]. Assumption 3, which demands independence between \(\Pi_{S_{*}}X\) and \(\Pi_{S_{*}^{c}}X\), might seem restrictive. Nonetheless, there are scenarios where our findings are directly applicable, including: * When we can intentionally design \(X\) to follow a specific distribution during data collection, as in computer experiments [12]. Notably, Assumption 3 is satisfied when \(X\sim\mathsf{N}(0,I_{p})\). * When equipped with prior knowledge regarding the marginal distribution of \(X\)[13]. If \(X\) is continuous with support \(\mathbb{R}^{p}\), we can then apply the reweighting technique (e.g., [1]) to allocate distinct weights to different data points, reducing the inference problem to the case of \(X\sim\mathsf{N}(0,I_{p})\). In particular, each data point \(X=x\) is given a weight \(w(x)=p_{G}(x)/p_{X}(x)\) where \(p_{G}\) and \(p_{X}\) denote the densities of \(\mathsf{N}(0,I_{p})\) and \(X\), respectively. In more general cases, the Voronoi weighting method can be utilized to mitigate violations of normality [10]. As a comparison, we note that many existent methodologies that target inference of the central mean subspace also hinge on specific distributional assumptions. Notably: * Sliced Inverse Regression (SIR) [14]: The inference method based on sliced inverse regression necessitates a coverage condition, see, e.g., [15, 16]. This condition doesn't hold when \(Y\) is categorical with values from a set of \(k\) elements when \(k\leq\dim(S_{*})\)[15]. * Minimum Average Variance Estimation (MAVE) [17]: This approach (and related approaches that estimate the derivative of \(\mathbb{E}[Y|X=x]\), e.g., [18, 19]) relies on \(X\) being continuous and not discrete, see, e.g., [17, 18, 19]. * Fourier method: This approach assumes Gaussianity of \(X\)[16, 18]. * Contour Regression: This method demands that \(X\) follow an elliptical distribution [16]. In contrast, our assumptions do not confine \(X\) to a specific distribution type or make assumptions about the continuity of \(X\) or \(Y\). So our assumptions are not more restrictive than existing ones. Conversely, our assumptions, though not more restrictive, are also not universally more flexible compared to existing ones. When we try to delineate the scenarios and conditions, our assertions aren't that our assumptions, or procedures, should always be the appropriate ones in practice. Finally, our procedure infers the central mean subspace without prior knowledge of its dimension (Theorem 1.2 and Corollary 5.1), contrasting with existing kernel dimension reduction methods that rely on a rank constraint assuming prior knowledge of the dimension [11]. Since this aspect has been elaborated at the near end of Section 1.4, we thereby do not repeat it here. ### Implications for Sparse Regularization #### 1.6.1 This paper's Perspective In statistics and machine learning, regularization often refers to techniques that enforce "simplicity" in solutions. It's commonly implemented by adding a penalty term to the loss function, thereby deterring solution complexity [10]. To achieve solutions that are "simple" in structure, notable penalty terms such as the \(\ell_{1}\) norm encourage vector sparsity, while the nuclear norm targets low-rank matrices [10]. 
Ideas supporting the importance of penalty terms are based on arguments that sparsity and low rankness of minimizers are inherently unstable under perturbations. This paper, however, shows that assessing solution sparsity or low rankness based merely on the presence or absence of penalty terms does not capture the full picture. The perspective we offer is grounded in the sensitivity analysis techniques of minimizers' set identifiability in the optimization literature, e.g., [12, 13, 14, 15, 16]. This domain tackles a key problem: given that a solution \(\theta^{*}\) to a minimizing objective aligns with the set inclusion \(\theta^{*}\in\mathsf{S}\), under what conditions will a smooth perturbation of the objective ensure that the minimizers of the perturbed problems align with the set inclusion \(\theta^{*}_{n}\in\mathsf{S}\)? The crux of the answer often lies in the original minimization objective obeying a "sharpness" property around its minimizer (the term "sharpness" is borrowed from, e.g., [12, 13] among many in the field), which roughly speaking, means that the original objective grows at least linearly when moving away from the set \(\mathsf{S}\). Specifically, in nonlinear constrained optimization when \(\mathsf{S}\) is a smooth subset of the boundary of a _convex_ constraint set (called an "identifiable surface"), the sharpness property corresponds to the notion of strict complementary slackness at \(\theta^{*}\)[12]. When \(\mathsf{S}\) is a submanifold of an ambient Euclidean space, this requires the directional derivative of the original objective at \(\theta^{*}\) to be bounded away from zero along the normal directions at \(\theta^{*}\) with respect to \(\mathsf{S}\), which is formally described in the definition of _partial smoothness_[12]. Defining a "sharpness" condition for a general set \(\mathsf{S}\) still relies on case-specific studies (we perform an analysis for a specific set of low-rank matrices for our purpose in Section 5), in which case only the notion of _identifiable set_ captures the broad idea [13, 14]. Building on this insight, we argue that an empirical minimizer would maintain the "simple" structure of its population solution if the population objective possesses a form of sharpness property in relation to the set of "simple" solutions, and if the gradient and value of the empirical objective converge uniformly to the population counterparts. This perspective is useful as it illuminates why, despite our kernel learning objective lacking explicit nuclear norm regularization (or other low-rank promoting penalties), we still observe an exact low-rank solution in finite samples. Specifically, for our kernel learning objective, the set \(\mathsf{S}\) corresponds to the set of low-rank matrices. Since the objective displays a "sharpness" with respect to \(\mathsf{S}\), by showing identifiability results with respect to the set of low-rank matrices, we demonstrate the low-rankness of empirical minimizers. According to this perspective, one aspect of the contribution of this work is to delineate conditions under which the kernel learning objective at the population level displays the desirable sharpness property. Our main theoretical results provide a set of sufficient and necessary conditions for when this happens. We show that Assumption 2 is necessary, and Assumptions 1--3 are sufficient for the kernel defined in (1.1). 
The sharpness property disappears when the kernel in the objective is replaced by other kernels of the form \((x,x^{\prime})\mapsto\psi(x^{T}\Sigma x^{\prime})\).

#### 1.6.2 Connections to Prior Work on Implicit Regularization

Recent literature demonstrates a collection of methods that induce exact low-rank solutions under the umbrella of _implicit regularization_. We shall discuss two lines of existing work in which implicit regularization is associated with solutions that are exactly low rank. The first line of research demonstrates that the solution's low rankness is intimately tied to the _algorithm_ used.

### Notation

_Convex Analysis_: For a convex set \(\mathcal{C}\), the tangent cone at \(x\in\mathcal{C}\), \(\mathcal{T}_{\mathcal{C}}(x)\), is the set of limits \(v=\lim_{n\to\infty}t_{n}(x_{n}-x)\) for some sequence \(x_{n}\in\mathcal{C}\) and \(t_{n}\to\infty\). The normal cone at \(x\in\mathcal{C}\), \(\mathcal{N}_{\mathcal{C}}(x)\), is the set of \(w\) such that \(\langle w,v\rangle\leq 0\) for every \(v\in\mathcal{T}_{\mathcal{C}}(x)\).

_Variational Analysis_: Let \(f:\mathbb{R}^{p}\mapsto\mathbb{R}\cup\{+\infty\}\) with a point \(x\in\mathbb{R}^{p}\) such that \(f(x)\) is finite. The subdifferential of \(f\) at \(x\), denoted by \(\partial f(x)\), consists of all vectors \(v\) such that \(f(\tilde{x})\geq f(x)+\langle v,\tilde{x}-x\rangle+o(\|\tilde{x}-x\|_{2})\) as \(\tilde{x}\to x\).

_Function Spaces and Norms_: The Lebesgue space \(\mathcal{L}_{r}(\mathbb{R}^{p})\) consists of \(f:\mathbb{R}^{p}\to\mathbb{C}\) such that \(\left\|f\right\|_{\mathcal{L}_{r}(\mathbb{R}^{p})}<\infty\) where \(r\in[1,\infty]\). The space \(\mathcal{L}_{2}(\mathbb{R}^{p})\) has inner product \(\langle f_{1},f_{2}\rangle_{\mathcal{L}_{2}(\mathbb{R}^{p})}=\int f_{1}(x)\overline{f_{2}(x)}dx\). The space \(\mathcal{C}_{b}(\mathbb{R}^{p})\) consists of all bounded, continuous functions \(f:\mathbb{R}^{p}\to\mathbb{C}\) and is endowed with the \(L_{\infty}\) norm. For ease of notation, we also use \(\|f\|_{\infty}\) to represent \(\|f\|_{L_{\infty}(\mathbb{R}^{p})}\).

_Fourier Transform_: \(\mathcal{F}f\) denotes the Fourier transform of \(f\in\mathcal{L}_{2}(\mathbb{R}^{p})\), where \(\mathcal{F}f(\omega)=(2\pi)^{-p}\int e^{-i\langle x,\omega\rangle}f(x)dx\). \(\mathcal{F}^{-1}f\) denotes the inverse Fourier transform of \(f\in\mathcal{L}_{2}(\mathbb{R}^{p})\): \(\mathcal{F}^{-1}f(x)=\int e^{i\langle x,\omega\rangle}f(\omega)d\omega\). The Fourier inversion theorem then asserts that for any \(f\in\mathcal{L}_{2}(\mathbb{R}^{p})\), we have \(f=\mathcal{F}\mathcal{F}^{-1}f=\mathcal{F}^{-1}\mathcal{F}f\) in \(\mathcal{L}_{2}(\mathbb{R}^{p})\).

_Kernel_: A symmetric function \(k:\mathbb{R}^{p}\times\mathbb{R}^{p}\to\mathbb{R}\) is a positive semidefinite kernel if \(\sum_{i,j=1}^{m}\alpha_{i}\alpha_{j}k(x_{i},x_{j})\geq 0\) holds for all \(m\), \(\alpha_{1},\alpha_{2},\ldots,\alpha_{m}\in\mathbb{R}\), and \(x_{1},x_{2},\ldots,x_{m}\in\mathbb{R}^{p}\). It is called a positive definite kernel if it is first positive semidefinite, and moreover, \(\sum_{i,j=1}^{m}\alpha_{i}\alpha_{j}k(x_{i},x_{j})=0\) holds if and only if \(\alpha_{1}=\alpha_{2}=\ldots=\alpha_{m}=0\) for all mutually distinct \(x_{1},x_{2},\ldots,x_{m}\in\mathbb{R}^{p}\). It is called an integrally positive definite kernel if \(\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{p}}k(x,x^{\prime})d\nu(x)d\nu(x^{\prime})>0\) holds for every nonzero signed measure \(\nu\) on \(\mathbb{R}^{p}\) with finite total variation.
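For concreteness, here is a small worked instance of these convex-analytic objects (our own illustration, not part of the original notation). Take the cone of \(2\times 2\) symmetric positive semidefinite matrices \(\mathcal{K}=\{\Sigma\in\mathbb{R}^{2\times 2}:\Sigma=\Sigma^{T}\succeq 0\}\) and the rank-one point \(\Sigma_{0}=\operatorname{diag}(1,0)\), whose null space is spanned by \(e_{2}\). Then

\[
\mathcal{T}_{\mathcal{K}}(\Sigma_{0})=\{W=W^{T}:\ W_{22}\geq 0\},\qquad
\mathcal{N}_{\mathcal{K}}(\Sigma_{0})=\{V=V^{T}:\ V=\operatorname{diag}(0,v),\ v\leq 0\},
\]

and, for the function \(f(x)=|x|\) on \(\mathbb{R}\), the subdifferential at the kink is \(\partial f(0)=[-1,1]\). Tangent cones of the semidefinite cone at rank-deficient matrices of exactly this kind reappear in the sharpness analysis of Section 3.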
### Paper Organization

The rest of the paper is organized as follows. Section 2 introduces the basic properties of the RKHS \(\mathcal{H}_{\Sigma}\). Section 3 provides the proof of Theorem 1.1, demonstrating that every minimizer of the population objective \(J\) is of low rank. Moreover, it formally describes and proves the sharpness property of the population objective \(J\) around the population minimizers. Section 4 establishes that the value and gradient of the empirical objective \(J_{n}\) uniformly converge to those of the population objective \(J\) on any given compact set. Section 5 provides the proof of Theorem 1.2, demonstrating that every minimizer of the empirical objective \(J_{n}\) is of low rank with high probability. The proof utilizes the results in previous sections. Section 6 explains why the observed phenomenon does not occur if the kernel in Equation (1.1) is replaced by the inner product kernel \((x,x^{\prime})\mapsto\psi(x^{T}\Sigma x^{\prime})\), as the new population objective no longer satisfies the proper sharpness property. Section 7 provides additional experiments investigating the scope of the observed phenomenon. Section 8 concludes the paper with a discussion of future work.

### Reproducibility

The code for all experiments and figures is available online: [https://github.com/tinachentc/kernel-learning-in-ridge-regression](https://github.com/tinachentc/kernel-learning-in-ridge-regression)

## 2 Preliminaries on the RKHS \(\mathcal{H}_{\Sigma}\)

This section summarizes the basic properties we need of the kernel \(k_{\Sigma}\) and the RKHS \(\mathcal{H}_{\Sigma}\). According to Assumption 4, we are interested in kernels \(k_{\Sigma}\) of the following form:
\[k_{\Sigma}(x,x^{\prime})=\phi(\left\|x-x^{\prime}\right\|_{\Sigma}^{2})=\int_{0}^{\infty}e^{-t\|x-x^{\prime}\|_{\Sigma}^{2}}\mu(dt) \tag{2.1}\]
where \(\mu\) is a nonnegative measure whose support is away from zero. This equation expresses the idea that \(k_{\Sigma}\) is a weighted sum of the Gaussian kernels \((x,x^{\prime})\mapsto e^{-t\|x-x^{\prime}\|_{\Sigma}^{2}}\) over different scales \(t>0\).

For each positive semidefinite kernel \(k_{\Sigma}\), there is an associated RKHS \(\mathcal{H}_{\Sigma}\)[11, 10]. We shall use \(\left\|\cdot\right\|_{\mathcal{H}_{\Sigma}}\) and \(\langle\cdot,\cdot\rangle_{\mathcal{H}_{\Sigma}}\) to denote the norm and inner product on \(\mathcal{H}_{\Sigma}\) throughout this paper. For an introduction to RKHS and its definition, see, e.g., [12, Chapter 12]. Our analysis requires four basic properties of the RKHS \(\mathcal{H}_{\Sigma}\) whose proofs are relatively standard. The first concerns embedding of the space \(\mathcal{H}_{\Sigma}\) into \(\mathcal{C}_{b}(\mathbb{R}^{p})\).

**Proposition 1** (Continuous Embedding of \(\mathcal{H}_{\Sigma}\) into \(\mathcal{C}_{b}(\mathbb{R}^{p})\)).: _For every \(\Sigma\succeq 0\) and \(f\in\mathcal{H}_{\Sigma}\):_
\[\left\|f\right\|_{L_{\infty}(\mathbb{R}^{p})}\leq\sqrt{\phi(0)}\left\|f\right\|_{\mathcal{H}_{\Sigma}}.\]

**Proof** Fix \(f\in\mathcal{H}_{\Sigma}\). For any \(x\), \(\left|f(x)\right|=\left|\langle f,k_{\Sigma}(x,\cdot)\rangle_{\mathcal{H}_{\Sigma}}\right|\leq\left\|f\right\|_{\mathcal{H}_{\Sigma}}\left\|k_{\Sigma}(x,\cdot)\right\|_{\mathcal{H}_{\Sigma}}=\sqrt{\phi(0)}\left\|f\right\|_{\mathcal{H}_{\Sigma}}\), where the two identities use the reproducing property of \(k_{\Sigma}\) with respect to \(\mathcal{H}_{\Sigma}\), and the inequality is Cauchy--Schwarz.
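As a quick numerical illustration of (2.1) (a sketch of our own; the two-atom mixture \(\mu=\tfrac12\delta_{t=1}+\tfrac12\delta_{t=2}\), the random \(\Sigma\), and all variable names are illustrative choices, not taken from the paper), the snippet below builds \(k_{\Sigma}\) on a random sample and confirms that the resulting kernel matrix is positive semidefinite, with \(k_{\Sigma}(x,x)=\phi(0)\):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 3, 50
A = rng.standard_normal((p, p))
Sigma = A @ A.T                      # a positive semidefinite matrix parameter
X = rng.standard_normal((n, p))

# mu = 0.5*delta_{t=1} + 0.5*delta_{t=2}: support away from zero, as in Assumption 4
ts, ws = np.array([1.0, 2.0]), np.array([0.5, 0.5])

def k_sigma(x, xp):
    d = x - xp
    q = d @ Sigma @ d                # ||x - x'||_Sigma^2
    return np.sum(ws * np.exp(-ts * q))

K = np.array([[k_sigma(X[i], X[j]) for j in range(n)] for i in range(n)])
eigs = np.linalg.eigvalsh(K)
print("phi(0) =", np.sum(ws))                # equals k_Sigma(x, x) for every x
print("min eigenvalue of K:", eigs.min())    # nonnegative up to round-off
```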
The second is about the relations between different RKHS \(\mathcal{H}_{\Sigma}\) over the matrix parameter \(\Sigma\succeq 0\). Let \(\mathcal{H}_{I}\) denote the RKHS corresponding to \(k_{I}\), which stands for the kernel \(k_{I}(x,x^{\prime})=\phi(\left\|x-x^{\prime}\right\|_{2}^{2})\) for the identity matrix \(I\). We then have the following proposition of \(\mathcal{H}_{\Sigma}\). **Proposition 2** (Connections between \(\mathcal{H}_{\Sigma}\) and \(\mathcal{H}_{I}\)).: _Let \(\Sigma=UU^{T}\succeq 0\) where \(\Sigma,U\in\mathbb{R}^{p\times p}\). Then_ \[\mathcal{H}_{\Sigma}=\{f:\mathbb{R}^{p}\mapsto\mathbb{R}|\text{there exists }g:\mathbb{R}^{p}\mapsto\mathbb{R}\text{ such that }f(x)=g(U^{T}x)\text{ for }g\in\mathcal{H}_{I}\}.\] _Moreover, when \(\Sigma\succ 0\), we can define a unique mapping \(\iota_{U}:\mathcal{H}_{\Sigma}\to\mathcal{H}_{I}\) that maps \(f\in\mathcal{H}_{\Sigma}\) to the unique \(g\in\mathcal{H}_{I}\) such that \(f(x)=g(U^{T}x)\) holds for \(x\in\mathbb{R}^{p}\). This mapping \(\iota_{U}\) is notably an isometry: \(\langle f_{1},f_{2}\rangle_{\mathcal{H}_{\Sigma}}=\langle\iota_{U}f_{1},\iota _{U}f_{2}\rangle_{\mathcal{H}_{I}}\) holds for every \(f_{1},f_{2}\in\mathcal{H}_{\Sigma}\)._ **Proof** This follows from the constructive proof of a general RKHS in [12, Theorem 12.11]. The key observation is that, for \(\Sigma=UU^{T}\), the associated kernel \(k_{\Sigma}\) obeys \(k_{\Sigma}(x,x^{\prime})=k_{I}(U^{T}x,U^{T}x^{\prime})\) for every \(x,x^{\prime}\). Note very similar results have also been noticed in the literature, e.g., [10]. The third is an explicit characterization of the space \(\mathcal{H}_{\Sigma}\). For every positive definite \(\Sigma\succ 0\), let us define \[Q_{\Sigma}(\omega)=\int_{0}^{\infty}\frac{1}{\sqrt{|\det\Sigma|}}\cdot\left( \frac{1}{4\pi t}\right)^{p/2}e^{-\left\|\omega\right\|_{(4t\Sigma)^{-1}}^{2} }\mu(dt). \tag{2.2}\] Note \(Q_{\Sigma}\) is well-defined for every \(\Sigma\succ 0\) since \(\mu\) has support away from \(0\) by Assumption 4. **Proposition 3** (Characterization of the Hilbert Space \(\mathcal{H}_{\Sigma}\)).: _Assume Assumption 4._ _Let \(\Sigma\succ 0\) be positive definite. The space \(\mathcal{H}_{\Sigma}\) consists of functions_ \[\mathcal{H}_{\Sigma}=\left\{f\in\mathsf{C}(\mathbb{R}^{p})\cap\mathcal{L}_{2} (\mathbb{R}^{p}):\mathsf{F}f/\sqrt{Q_{\Sigma}}\in\mathcal{L}_{2}(\mathbb{R}^{p })\right\}.\] _The inner product satisfies \(\langle f_{1},f_{2}\rangle_{\mathcal{H}_{\Sigma}}=\langle\mathsf{F}f_{1}/ \sqrt{Q_{\Sigma}},\mathsf{F}f_{2}/\sqrt{Q_{\Sigma}}\rangle_{\mathcal{L}_{2}( \mathbb{R}^{p})}\) for every \(f_{1},f_{2}\in\mathcal{H}_{\Sigma}\)._ **Proof** Proposition 3 is deduced from the description of a general translation-invariant RKHS, e.g., [20, Theorem 10.12]. The property we use is the following connection between the Fourier transform of \(Q_{\Sigma}\) and the kernel \(k_{\Sigma}\). First, \(k_{\Sigma}(x,x^{\prime})=\Phi_{\Sigma}(x-x^{\prime})\) where \(\Phi_{\Sigma}(z)=\phi(\|z\|_{\Sigma}^{2})\). Second, \(\mathcal{F}\Phi_{\Sigma}=Q_{\Sigma}\), which equivalently states \(\mathcal{F}^{-1}Q_{\Sigma}=\Phi_{\Sigma}\), meaning \(\int e^{i\langle\omega,z\rangle}Q_{\Sigma}(\omega)d\omega=\Phi_{\Sigma}(z)\). The last is about the expressive power of the RKHS \(\mathcal{H}_{\Sigma}\). 
Let us denote
\[\mathcal{H}_{\Sigma}+\mathbb{R}=\{u:\mathbb{R}^{p}\to\mathbb{R}\mid u(x)=f(x)+\gamma\text{ where }f\in\mathcal{H}_{\Sigma}\text{ and }\gamma\in\mathbb{R}\}\,.\]

**Proposition 4** (Denseness of \(\mathcal{H}_{\Sigma}+\mathbb{R}\) in \(\mathcal{L}_{2}(\mathbb{Q})\)).: _Assume Assumption 4._ _Let \(\Sigma\succ 0\) be positive definite. For every probability measure \(\mathbb{Q}\), \(\mathcal{H}_{\Sigma}+\mathbb{R}\) is dense in \(\mathcal{L}_{2}(\mathbb{Q})\): given \(u\in\mathcal{L}_{2}(\mathbb{Q})\), for any \(\epsilon>0\), there exists \(u_{\epsilon}\in\mathcal{H}_{\Sigma}+\mathbb{R}\) such that \(\|u-u_{\epsilon}\|_{\mathcal{L}_{2}(\mathbb{Q})}<\epsilon\)._

**Proof** We leverage a result in [10, Proposition 5], which says that, for an RKHS \(\mathcal{H}\), \(\mathcal{H}+\mathbb{R}\) is dense in \(\mathcal{L}_{2}(\mathbb{Q})\) for every probability measure \(\mathbb{Q}\) if and only if \(\mathcal{H}\) is characteristic, meaning that if two probability measures \(\mathbb{Q}_{1}\) and \(\mathbb{Q}_{2}\) are such that \(\int fd\mathbb{Q}_{1}=\int fd\mathbb{Q}_{2}\) holds for every \(f\in\mathcal{H}\), then \(\mathbb{Q}_{1}=\mathbb{Q}_{2}\). Using this result, it suffices to show that \(\mathcal{H}_{\Sigma}\) is a characteristic RKHS for every \(\Sigma\succ 0\). That \(\mathcal{H}_{I}\) is characteristic is implied by a known result on radial kernels [11, Proposition 5], which states that an RKHS with kernel \((x,x^{\prime})\mapsto\int_{0}^{\infty}e^{-t\|x-x^{\prime}\|_{2}^{2}}\nu(dt)\) is characteristic if the finite measure \(\nu\) is not a point mass at zero. The case of a general \(\mathcal{H}_{\Sigma}\) with \(\Sigma\succ 0\) then follows from the fact that \(\mathcal{H}_{I}\) is characteristic, thanks to Proposition 2.

## 3 Sharpness Property of Population Objective

In this section, we describe a sharpness behavior of the _population_ objective near its minimizer:
\[\text{minimize}\,J(\Sigma)\ \ \text{ subject to }\ \Sigma\succeq 0,\ \ |\!|\!|\Sigma|\!|\!|\leq M. \tag{3.1}\]
Throughout this section, \(\mathcal{C}=\{\Sigma:\Sigma\succeq 0,\ |\!|\!|\Sigma|\!|\!|\leq M\}\) denotes the feasible set of (3.1). The first step is to establish a structural property of the minimizer \(\Sigma^{*}\) itself, as detailed in Theorem 1.1. Specifically, there exists a constant \(\lambda_{0}<\infty\) that depends on \(M,\mathbb{P},|\!|\!|\cdot|\!|\!|\) such that \(\operatorname{col}(\Sigma^{*})\subseteq S_{*}\) holds for every \(\lambda>0\), and \(\operatorname{col}(\Sigma^{*})=S_{*}\) holds for every \(\lambda\in(0,\lambda_{0}]\). The second step is to show that \(J\) is sharp around every minimizer \(\Sigma^{*}\): there exists \(\rho>0\) such that
\[\langle\nabla J(\Sigma^{*}),W\rangle\geq\rho\,|\!|\!|W|\!|\!|\quad\text{for every }W\in\mathcal{T}_{\mathcal{C}}(\Sigma^{*})\text{ with }\operatorname{col}(W)\subseteq S_{*}^{\perp}. \tag{3.2}\]

**Notation** Let us recall that \(J(\Sigma)=\min_{f,\gamma}U_{\Sigma}(f,\gamma)\) where
\[U_{\Sigma}(f,\gamma)=\frac{1}{2}\mathbb{E}[(Y-f(X)-\gamma)^{2}]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}.\]
For any matrix \(\Sigma\), the tuple \((f_{\Sigma},\gamma_{\Sigma})\) denotes the unique minimizer of \(U_{\Sigma}\) (the uniqueness can be proven by standard arguments, see, e.g., [13, Proposition 7]), thus \(J(\Sigma)=U_{\Sigma}(f_{\Sigma},\gamma_{\Sigma})\). The residual term at \(\Sigma\) is denoted as \(r_{\Sigma}(x,y)=y-f_{\Sigma}(x)-\gamma_{\Sigma}\).

### Fidelity

This section proves Theorem 1.1. Let \(U\in\mathbb{R}^{p\times p}\) be a possibly asymmetric matrix. We consider the auxiliary function
\[H(U)=\min_{g,\gamma}\frac{1}{2}\mathbb{E}[(Y-g(U^{T}X)-\gamma)^{2}]+\frac{\lambda}{2}\left\|g\right\|_{\mathcal{H}_{I}}^{2}. \tag{3.3}\]

**Lemma 3.1**.: \(J(\Sigma)=H(U)\) _holds for every \(\Sigma=UU^{T}\succeq 0\)._

**Proof** For \(\Sigma=UU^{T}\succ 0\), that \(J(\Sigma)=H(U)\) follows directly from Proposition 2. For the general case \(\Sigma=UU^{T}\succeq 0\), see Appendix C.1. The main idea is that, due to the representer theorem [10], \(J(\Sigma)\) and \(H(U)\), as minimum values of kernel ridge regressions, will be equal if the values of the kernels at the covariates are the same.
Note \(k_{\Sigma}(x,x^{\prime})=k_{I}(U^{T}x,U^{T}x^{\prime})\) for all \(x,x^{\prime}\).

As a result, any minimizer \(\Sigma^{*}\) has the form \(\Sigma^{*}=U^{*}(U^{*})^{T}\) where \(U^{*}\) minimizes
\[\operatorname*{minimize}_{U}H(U)\quad\text{subject to}\quad|\!|\!|UU^{T}|\!|\!|\leq M. \tag{3.4}\]

**Main Argument** We aim to prove two properties that hold for every minimizer \(U^{*}\) of (3.4):

1. For every \(\lambda>0\), we have \(\operatorname{col}(U^{*})\subseteq S_{*}\).
2. For every \(\lambda\in(0,\lambda_{0}]\), where \(\lambda_{0}<\infty\) depends on \(M\), \(\mathbb{P}\), and \(|\!|\!|\cdot|\!|\!|\), we have \(\operatorname{col}(U^{*})=S_{*}\).

The argument rests on two lemmas, whose proofs are deferred to Sections 3.1.1 and 3.1.2, respectively.

**Lemma 3.2**.: _(i) For every matrix \(U\), \(H(U)\geq H(\Pi_{S_{*}}U)\). (ii) If, in addition, \(H(U)<H(0)\) and \(U\neq\Pi_{S_{*}}U\), then \(H(U)>H(\Pi_{S_{*}}U)\)._

**Lemma 3.3**.: _For every \(\lambda>0\), every minimizer \(U^{*}\) of (3.4) satisfies \(H(U^{*})<H(0)=\frac{1}{2}\mathrm{Var}(Y)\)._

We first prove Part (a). By Lemma 3.3, \(H(U^{*})<H(0)\). We apply Lemma 3.2(ii) to \(U^{*}\) and obtain \(U^{*}=\Pi_{S_{*}}U^{*}\): otherwise \(H(U^{*})>H(\Pi_{S_{*}}U^{*})\), which would contradict the minimality of \(U^{*}\). This implies \(\operatorname{col}(U^{*})\subseteq S_{*}\).

We now prove Part (b). We first prove that there is \(\epsilon_{0}>0\) such that the lower bound holds
\[\inf_{U:\operatorname{rank}(U)<\dim(S_{*})}H(U)\geq\frac{1}{2}\mathbb{E}[\operatorname{Var}(Y|X)]+\epsilon_{0}. \tag{3.6}\]
To see this, we first note that there is a pointwise lower bound on \(H(U)\) given by:
\[H(U)\geq\frac{1}{2}\mathbb{E}[(Y-\mathbb{E}[Y|U^{T}X])^{2}]=\frac{1}{2}\mathbb{E}[\operatorname{Var}(Y|\Pi_{\operatorname{col}(U)}X)].\]
To make this bound uniform over \(U\) whose rank is below \(\dim(S_{*})\), we take the infimum and obtain:
\[\inf_{U:\operatorname{rank}(U)<\dim(S_{*})}H(U)\geq\frac{1}{2}\inf_{S:\dim(S)<\dim(S_{*})}\mathbb{E}[\operatorname{Var}(Y|\Pi_{S}X)].\]
Now we will use the fact that \(S_{*}\) is the _minimal_ dimension reduction subspace. This definition implies that for any subspace \(S\) with \(\dim(S)<\dim(S_{*})\), the projection of \(X\) onto \(S\) does not capture all the information in \(X\) about \(Y\). Therefore, \(\epsilon_{0}(S):=\frac{1}{2}\big\{\mathbb{E}[\operatorname{Var}(Y|\Pi_{S}X)]-\mathbb{E}[\operatorname{Var}(Y|X)]\big\}>0\) holds for any such subspace \(S\). Lemma 3.4 further shows that this bound can be made uniform: if we define \(\epsilon_{0}:=\inf\{\epsilon_{0}(S):\dim(S)<\dim(S_{*})\}\), then \(\epsilon_{0}>0\), and (3.6) follows. The proof is in Appendix C.2.

**Lemma 3.4** (Uniform Gap).: \(\inf\{\mathbb{E}[\operatorname{Var}(Y|\Pi_{S}X)]:\dim(S)<\dim(S_{*})\}>\mathbb{E}[\operatorname{Var}(Y|X)]\)_._

On the other hand, we will prove that, for the constant \(\epsilon_{0}>0\) from (3.6), there exists \(\lambda_{0}<\infty\) depending only on \(M\), \(\mathbb{P}\), and \(|\!|\!|\cdot|\!|\!|\) such that for every \(\lambda\in(0,\lambda_{0}]\):
\[H(U^{*})<\frac{1}{2}\mathbb{E}[\operatorname{Var}(Y|X)]+\epsilon_{0}. \tag{3.7}\]
To prove this, consider an arbitrary full-rank matrix \(U_{0}\) in the feasible set \(\{U:|\!|\!|UU^{T}|\!|\!|\leq M\}\), and let \(\Sigma_{0}=U_{0}U_{0}^{T}\succ 0\). By Proposition 4, the regression function \(x\mapsto\mathbb{E}[Y|X=x]\in\mathcal{L}_{2}(\mathbb{P})\) can be approximated arbitrarily well by \(\mathcal{H}_{\Sigma_{0}}+\mathbb{R}\) under \(\mathcal{L}_{2}(\mathbb{P})\), which then implies \(J(\Sigma_{0})<\frac{1}{2}\mathbb{E}[\operatorname{Var}(Y|X)]+\epsilon_{0}\) for small enough \(\lambda\), say \(\lambda\in(0,\lambda_{0}]\). Since \(H(U_{0})=J(\Sigma_{0})\) by Lemma 3.1, and since \(H(U^{*})\leq H(U_{0})\) as \(U^{*}\) is the minimizer, the proof of (3.7) is complete. Combining (3.6) and (3.7), we infer that for every \(\lambda\in(0,\lambda_{0}]\), any minimizer \(U^{*}\) of (3.4) obeys \(\operatorname{rank}(U^{*})\geq\dim(S_{*})\).
Since \(\Pi_{S_{*}^{\perp}}U^{*}=0\) by Part (a), we get \(\operatorname{rank}(U^{*})=\dim(S_{*})\) and \(\operatorname{col}(U^{*})=S_{*}\). #### 3.1.1 Proof of Lemma 3.2 In our proof, \(\mathbb{E}_{\Pi_{S_{*}^{\perp}}}X\) denotes the marginal expectation with respect to \(\Pi_{S_{*}^{\perp}}X\), and \(\mathbb{E}[\cdot\mid\Pi_{S_{*}^{\perp}}X]\) represents the conditional expectation (conditioning on \(\Pi_{S_{*}^{\perp}}X\)). Note \(X=\Pi_{S_{*}}X+\Pi_{S_{*}^{\perp}}X\). Our proof starts with an identity on \(H\): \[2H(U)=\min_{f,\gamma}\mathbb{E}_{\Pi_{S_{*}^{\perp}}X}\left[\mathbb{E}\big{[} (Y-f(U^{T}\Pi_{S_{*}}X+U^{T}\Pi_{S_{*}^{\perp}}X)-\gamma)^{2}\mid\Pi_{S_{*}^{ \perp}}X\big{]}+\lambda\left\|f\right\|_{\mathcal{H}_{I}}^{2}\right],\] which satisfies \[\begin{split}&\geq\mathbb{E}_{\Pi_{S_{*}^{\perp}}X}\left[\min_{f, \gamma}\left\{\mathbb{E}\big{[}(Y-f(U^{T}\Pi_{S_{*}}X+U^{T}\Pi_{S_{*}^{\perp}} X)-\gamma)^{2}\mid\Pi_{S_{*}^{\perp}}X\big{]}+\lambda\left\|f\right\|_{ \mathcal{H}_{I}}^{2}\right\}\right]\\ &\stackrel{{(*)}}{{=}}\min_{f,\gamma}\mathbb{E}[(Y-f (U^{T}\Pi_{S_{*}}X)-\gamma)^{2}]+\lambda\left\|f\right\|_{\mathcal{H}_{I}}^{2}= 2H(\Pi_{S^{*}}U).\end{split} \tag{3.8}\] To understand \((*)\), let us introduce \[\iota(z)=\min_{f,\gamma}V(z;f,\gamma)\ \text{ where }\ V(z;f,\gamma)=\mathbb{E}[(Y-f(U^{T} \Pi_{S_{*}}X+z)-\gamma)^{2}]+\lambda\left\|f\right\|_{\mathcal{H}_{I}}^{2}.\] A fundamental relation for any function \(f\) and scalar \(\gamma\) is \[V(U^{T}\Pi_{S^{\perp}_{*}}X;f,\gamma)=\mathbb{E}\big{[}(Y-f(U^{T}\Pi_{S_{*}}X+U^{ T}\Pi_{S^{\perp}_{*}}X)-\gamma)^{2}\mid\Pi_{S^{\perp}_{*}}X\big{]}+\lambda\left\|f \right\|_{\mathcal{H}_{I}}^{2}.\] This is because \(\mathbb{E}[Y|X]=\mathbb{E}[Y|\Pi_{S_{*}}X]\) and \(\Pi_{S_{*}}X\) is independent of \(\Pi_{S^{\perp}_{*}}X\) by Assumption 3. By taking infimum over \(f\) and \(\gamma\), this leads us to the identity: \[\iota(U^{T}\Pi_{S^{\perp}_{*}}X)=\min_{f,\gamma}\mathbb{E}\big{[}(Y-f(U^{T}\Pi _{S_{*}}X+U^{T}\Pi_{S^{\perp}_{*}}X)-\gamma)^{2}\mid\Pi_{S^{\perp}_{*}}X\big{]} +\lambda\left\|f\right\|_{\mathcal{H}_{I}}^{2}.\] Furthermore, \(z\mapsto\iota(z)\) is a constant due to the translation invariance property of \(\mathcal{H}_{I}\): \(\left\|f(\cdot+z)\right\|_{\mathcal{H}_{I}}=\left\|f\right\|_{\mathcal{H}_{I}}\) holds for every \(f\in\mathcal{H}_{I}\) and \(z\in\mathbb{R}\). Thus, we can deduce \(\mathbb{E}_{\Pi_{S^{\perp}_{*}}X}[\iota(U^{T}\Pi_{S^{\perp}_{*}}X)]=\iota(0)\), validating \((*)\) in our earlier identity. At this point, we have proven for every matrix \(U\): \[H(U)\geq H(\Pi_{S^{*}}U).\] Now we investigate the equality case \(H(U)=H(\Pi_{S_{*}}U)\). The inequality in (3.8) becomes an equality if and only if \[f_{0}(x+c)=f_{0}(x)\ \ \text{holds for every }c\in\operatorname{supp}(U^{T}\Pi_{S^{\perp}_{*}}X)\] where \((f_{0},\gamma_{0})=\operatorname{argmin}_{f,\gamma}\mathbb{E}[(Y-f(U^{T}\Pi_ {S_{*}}X)-\gamma)^{2}]+\lambda\left\|f\right\|_{\mathcal{H}_{I}}^{2}\). Below we consider the case where \(U\neq\Pi_{S_{*}}U\), or equivalently, \(\Pi_{S^{\perp}_{*}}U\neq 0\). By Assumption 3, \(\operatorname{Cov}(X)\) is full rank, thus implying that \(f_{0}(\cdot+c)=f_{0}(\cdot)\) for some \(c\neq 0\). Taking the Fourier transform yields \(\mathcal{F}f_{0}(\omega)e^{i\langle c,\omega\rangle}=\mathcal{F}f_{0}(\omega)\) for almost everywhere \(\omega\) in \(\mathbb{R}^{p}\), and since \(c\neq 0\), \(\mathcal{F}f_{0}(\omega)=0\) holds almost everywhere \(\omega\) in \(\mathbb{R}^{p}\) under the Lebesgue measure. 
As \(f_{0}\in\mathcal{H}_{I}\) is continuous on \(\mathbb{R}^{p}\), this implies that \(f_{0}(x)=0\) for all \(x\), which only occurs when \(H(U)=H(0)=\frac{1}{2}\mathrm{Var}(Y)\). Thus, if \(H(U)<H(0)\) and \(U\neq\Pi_{S_{*}}U\), then \(H(U)>H(\Pi_{S_{*}}U)\). This completes the proof.

#### 3.1.2 Proof of Lemma 3.3

It suffices to prove that \(J(\Sigma)<J(0)\) for any full rank matrix \(\Sigma\) in the feasible set \(\{\Sigma:|\!|\!|\Sigma|\!|\!|\leq M\}\). For a matrix \(\Sigma\), recall our notation \(J(\Sigma)=\min_{f,\gamma}U_{\Sigma}(f,\gamma)\), where \(U_{\Sigma}(f,\gamma)\) denotes
\[U_{\Sigma}(f,\gamma)=\frac{1}{2}\mathbb{E}[(Y-f(X)-\gamma)^{2}]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}.\]
Also, recall that \((f_{\Sigma},\gamma_{\Sigma})\) is the (unique) minimizer of the functional \((f,\gamma)\mapsto U_{\Sigma}(f,\gamma)\). A basic characterization of \(f_{\Sigma}\) is through the Euler-Lagrange equation associated with \(\min_{f,\gamma}U_{\Sigma}(f,\gamma)\).

**Lemma 3.5** (Euler-Lagrange).: _Fix \(\Sigma\succeq 0\). The identity below holds for every \(h\in\mathcal{H}_{\Sigma}\):_
\[\mathbb{E}[r_{\Sigma}(X,Y)h(X)]=\lambda\langle f_{\Sigma},h\rangle_{\mathcal{H}_{\Sigma}}. \tag{3.9}\]

**Proof** This follows by taking the first variation of the cost function \(U_{\Sigma}(f,\gamma)\) with respect to \(f\).

Let us return to the proof of Lemma 3.3. First, \(J(0)=\frac{1}{2}\mathrm{Var}(Y)\), where the minimizer is \(f_{0}\equiv 0\) and \(\gamma_{0}=\mathbb{E}[Y]\). Next, we show \(J(\Sigma)<\frac{1}{2}\mathrm{Var}(Y)\) for any full rank \(\Sigma\). It suffices to show that \((f_{\Sigma},\gamma_{\Sigma})\neq(f_{0},\gamma_{0})\), which would then imply \(J(\Sigma)=U_{\Sigma}(f_{\Sigma},\gamma_{\Sigma})<U_{\Sigma}(f_{0},\gamma_{0})=\frac{1}{2}\mathrm{Var}(Y)\). Now, suppose on the contrary that \(f_{\Sigma}\equiv 0\) and \(\gamma_{\Sigma}=\mathbb{E}[Y]\). Then (3.9) implies \(\operatorname{Cov}(Y,h(X))=0\) for every \(h\in\mathcal{H}_{\Sigma}\). Equivalently, \(\mathbb{E}[(\mathbb{E}[Y|X]-\mathbb{E}[Y])(h(X)+\gamma)]=0\) for every \(h\in\mathcal{H}_{\Sigma},\gamma\in\mathbb{R}\). By Proposition 4, the function \(x\mapsto\mathbb{E}[Y|X=x]-\mathbb{E}[Y]\) can be approximated arbitrarily well by the set \(\mathcal{H}_{\Sigma}+\mathbb{R}\) under \(\mathcal{L}_{2}(\mathbb{P})\). This then implies that \(\mathbb{E}[Y|X]-\mathbb{E}[Y]=0\) holds almost surely under \(\mathbb{P}\), indicating \(\mathrm{Var}(\mathbb{E}[Y|X])=0\), which contradicts Assumption 2. As a result, \((f_{\Sigma},\gamma_{\Sigma})\neq(f_{0},\gamma_{0})\) and \(J(\Sigma)<J(0)\) for every full rank \(\Sigma\).

### Sharpness

We now prove that at every minimizer, the sharpness property (i.e., (3.2)) holds. We start by analyzing the differentiability of the function \(J:\mathcal{C}\to\mathbb{R}\). To deal with matrices \(\Sigma\) that lie on the boundary of \(\mathcal{C}\), we introduce the following notion of gradient.

**Definition 3.1** (Gradient Notion on \(\mathcal{C}\)).: _Let \(f:\mathcal{C}\to\mathbb{R}\)._
The function \(f\) has a gradient \(\nabla f(\Sigma)\) at \(\Sigma\in\mathcal{C}\) if \(\nabla f(\Sigma)\) is a symmetric matrix, and for every converging sequence \(\Sigma_{n}\in\mathcal{C}\) to \(\Sigma\):_ \[f(\Sigma_{n})=f(\Sigma)+\langle\nabla f(\Sigma),\Sigma_{n}-\Sigma\rangle+o( \|\Sigma_{n}-\Sigma\|_{F}).\] _We say \(f\) is differentiable on \(\mathcal{C}\) if \(\nabla f\) is everywhere well-defined on \(\mathcal{C}\)._ **Remark** The gradient of \(f\) at \(\Sigma\), if exists, is unique due to the tangent cone \(\mathcal{T}_{\mathcal{C}}(\Sigma)\) containing an open ball. It coincides with the standard notion of gradient for every \(\Sigma\) in the interior of \(\mathcal{C}\). The following lemma shows that \(J\) is differentiable on \(\mathcal{C}\). The crux of Lemma 3.6 is the explicit gradient formula provided in equation (3.10). **Lemma 3.6** (Gradient Formula).: _The gradient \(\nabla J(\Sigma)\) exists at every \(\Sigma\in\mathcal{C}\) with_ \[\nabla J(\Sigma)=-\frac{1}{2\lambda}\mathbb{E}[r_{\Sigma}(X,Y)r_{\Sigma}(X^{ \prime},Y^{\prime})\partial_{\Sigma}k_{\Sigma}(X,X^{\prime})]. \tag{3.10}\] _In the above, \((X^{\prime},Y^{\prime})\) stands for an independent copy of \((X,Y)\). Also, \(\nabla J\) is continuous on \(\mathcal{C}\)._ The intuition of the gradient formula (3.10) is discussed in Section 3.3. The proof details can be found in Appendix B. We now establish a form of non-degeneracy condition at every minimizer \(\Sigma^{*}\) in Theorem 3.1. The non-degeneracy condition (3.11) is indeed equivalent to the sharpness property of the objective at \(\Sigma^{*}\) presented in Corollary 3.1, and also, bears a close resemblance to the notion of strict complementary slackness condition at \(\Sigma^{*}\) from the optimization literature (see Remark after Corollary 3.1). **Theorem 3.1** (Non-degeneracy Condition at \(\Sigma^{*}\)).: _Let Assumptions 1-4 hold. Fix \(M\in(0,\infty),|\!|\!|\cdot|\!|\!|\!|\)._ _Let \(\lambda\in(0,\infty)\). For any minimizer \(\Sigma^{*}\) of (3.1), there exists \(\rho>0\) such that the following holds:_ \[\Pi_{S^{\perp}_{*}}\nabla J(\Sigma^{*})\Pi_{S^{\perp}_{*}}\succeq\rho\Pi_{S^ {\perp}_{*}}. \tag{3.11}\] **Proof** We verify the condition (3.11). Recall \(\phi(z)=\int_{0}^{\infty}\exp(-tz)\mu(dt)\). Let us introduce \(\tilde{\phi}(z)=\int_{0}^{\infty}\exp(-tz)\tilde{\mu}(dt)\) where \(\tilde{\mu}(dt)=t\mu(dt)\). Note that under Assumption 4, we can express the gradient \(\partial_{\Sigma}k_{\Sigma}(x,x^{\prime})\) as the following matrix: \[\partial_{\Sigma}k_{\Sigma}(x,x^{\prime})=-\tilde{k}_{\Sigma}(x,x^{\prime}) \cdot(x-x^{\prime})(x-x^{\prime})^{T}.\] where \(\tilde{k}_{\Sigma}(x,x^{\prime})=\tilde{\phi}(\|x-x^{\prime}\|_{\Sigma}^{2})\). Substituting this formula into Eq. (3.10) results in \[\nabla J(\Sigma)=\frac{1}{2\lambda}\mathbb{E}\left[r_{\Sigma}(X,Y)r_{\Sigma} (X^{\prime},Y^{\prime})\tilde{k}_{\Sigma}(X,X^{\prime})\cdot(X-X^{\prime})(X-X ^{\prime})^{T}\right]. \tag{3.12}\] Let \(\Sigma^{*}\) denote a minimizer of \(J\). Recall Theorem 1.1 states that \(\mathrm{col}(\Sigma^{*})\subseteq S_{*}\). 
Consequentially, using equation (3.12), we obtain the following identity at \(\Sigma=\Sigma^{*}\): \[\Pi_{S^{\perp}_{*}}\nabla J(\Sigma^{*})\Pi_{S^{\perp}_{*}} =\frac{1}{2\lambda}\mathbb{E}\left[r_{\Sigma^{*}}(X,Y)r_{\Sigma^ {*}}(X^{\prime},Y^{\prime})\tilde{k}_{\Sigma}(X,X^{\prime})\cdot\Pi_{S^{\perp} _{*}}(X-X^{\prime})(X-X^{\prime})^{T}\Pi_{S^{\perp}_{*}}\right]\] \[=\frac{1}{2\lambda}\mathbb{E}[r_{\Sigma^{*}}(X,Y)r_{\Sigma^{*}}(X ^{\prime},Y^{\prime})\tilde{k}_{\Sigma^{*}}(X,X^{\prime})]\cdot\mathbb{E}[\Pi _{S^{\perp}_{*}}(X-X^{\prime})(X-X^{\prime})^{T}\Pi_{S^{\perp}_{*}}]. \tag{3.13}\] The last line is due to the following. The term \(\Pi_{S^{\perp}_{*}}(X-X^{\prime})(X-X^{\prime})^{T}\Pi_{S^{\perp}_{*}}\) is a function of \(\Pi_{S^{\perp}_{*}}X\) and \(\Pi_{S^{\perp}_{*}}X^{\prime}\). On the other hand, \(\tilde{k}_{\Sigma^{*}}(X,X^{\prime})\) is a function of \(\Pi_{S^{*}}X\) and \(\Pi_{S^{*}}X^{\prime}\) since \(\mathrm{col}(\Sigma^{*})\subseteq S_{*}\). Additionally, \(\mathbb{E}[r_{\Sigma^{*}}(X,Y)|X]=\mathbb{E}[r_{\Sigma^{*}}(X,Y)|\Pi_{S_{*}}X]\) by the definition of \(S_{*}\) and by Proposition 2. Because \(\Pi_{S_{*}}X\) and \(\Pi_{S^{\perp}_{*}}X\) are independent by Assumption 3, it then implies that the two terms \[\mathbb{E}[r_{\Sigma^{*}}(X,Y)r_{\Sigma^{*}}(X^{\prime},Y^{\prime})\tilde{k}_ {\Sigma^{*}}(X,X^{\prime})|X,X^{\prime}]\ \ \text{and}\ \ \Pi_{S^{\perp}_{*}}(X-X^{\prime})(X-X^{\prime})^{T}\Pi_{S^{\perp}_{*}}\] are also independent. This allows us to split the expectation of their product into the product of their expectations, resulting in the last line of (3.13). To complete the proof of Theorem 3.1, we further lower bound the RHS of Eq. (3.13). First, for \(\gamma\) defined to be the minimum eigenvalue of \(\mathrm{Cov}(X)\), with \(\gamma>0\) by Assumption 3, we have the lower bound: \[\mathbb{E}[\Pi_{S^{\perp}_{*}}(X-X^{\prime})(X-X^{\prime})^{T}\Pi_{S^{\perp}_{ *}}]\succeq\gamma\Pi_{S^{\perp}_{*}}. \tag{3.14}\] Next, we show the quadratic form satisfies \[\mathbb{E}[r_{\Sigma^{*}}(X,Y)r_{\Sigma^{*}}(X^{\prime},Y^{\prime})\tilde{k}_ {\Sigma^{*}}(X,X^{\prime})]>0. \tag{3.15}\] To prove it, a key observation is that the mapping \((x,x^{\prime})\mapsto\tilde{k}_{I}(x,x^{\prime})\) is an integrally positive definite kernel, which means that for any nonzero signed measure \(\nu\) with finite total variation, \(\int_{\mathbb{R}^{p}}\int_{\mathbb{R}^{p}}\tilde{k}_{I}(x,x^{\prime})\nu(dx) \nu(dx^{\prime})>0\). This property can be deduced by Proposition 5 of [11], given the fact that \(\tilde{k}_{I}\) is a radial kernel with \(\tilde{\mu}\) not a zero measure. 
For a more transparent argument, we introduce the notation \(\zeta_{t}(\omega)=\frac{1}{(4\pi t)^{p/2}}e^{-\|\omega\|_{2}^{2}/(4t)}\) and note the following calculations: \[\begin{split}\iint\tilde{k}_{I}(x,x^{\prime})\nu(dx)\nu(dx^{ \prime})&\stackrel{{(a)}}{{=}}\iint\left(\int e^{-t \|x-x^{\prime}\|_{2}^{2}}\tilde{\mu}(dt)\right)\nu(dx)\nu(dx^{\prime})\\ &\stackrel{{(b)}}{{=}}\iint\left(\iint e^{-i\langle x -x^{\prime},\omega\rangle}\zeta_{t}(\omega)d\omega\tilde{\mu}(dt)\right)\nu( dx)\nu(dx^{\prime})\\ &\stackrel{{(c)}}{{=}}\iint\left(\iint e^{-i\langle x -x^{\prime},\omega\rangle}\nu(dx)\nu(dx^{\prime})\right)\zeta_{t}(\omega)d \omega\tilde{\mu}(dt)\\ &=\iint\left|\int e^{-i\langle x,\omega\rangle}\nu(dx)\right|^{2 }\zeta_{t}(\omega)d\omega\tilde{\mu}(dt).\end{split} \tag{3.16}\] In the above, step \((a)\) is due to the definition of \(\tilde{k}_{I}\), step \((b)\) is due to the known fact that Fourier transform of Gaussian is Gaussian and step \((c)\) is due to Fubini's theorem. Given \(\zeta_{t}(\omega)>0\) for all \(t,\omega\), the integral in the final expression is nonnegative, and since \(\tilde{\mu}\) is not a zero measure, the integral is zero only if \(\omega\mapsto\int e^{-i\langle x,\omega\rangle}\nu(dx)\) is a zero constant, which happens only when \(\nu\) is a zero measure. This proves that \(\iint\tilde{k}_{I}(x,x^{\prime})\nu(dx)\nu(dx^{\prime})>0\) for all nonzero finite signed measure \(\nu\). Next, write \(\Sigma^{*}=(U^{*})(U^{*})^{T}\). It is important to note that \(\mathbb{E}[r_{\Sigma^{*}}(X,Y)|X]=\mathbb{E}[r_{\Sigma^{*}}(X,Y)|\Pi_{S_{*}}X]\) by the definition of \(S_{*}\) and by Proposition 2, and thus \(\mathbb{E}[r_{\Sigma^{*}}(X,Y)|X]\) is just a function of \((U^{*})^{T}X\). Let's write \(\mathbb{E}[r_{\Sigma^{*}}(X,Y)|X]=\tilde{r}_{\Sigma^{*}}(Z)\) for some function \(\tilde{r}_{\Sigma^{*}}\) where \(Z=(U^{*})^{T}X\). With this notation, we can rewrite (3.15) to be \(\mathbb{E}[\tilde{r}_{\Sigma^{*}}(Z)\tilde{r}_{\Sigma^{*}}(Z^{\prime})\tilde{k }_{I}(Z,Z^{\prime})]>0\). Given we showed that \((x,x^{\prime})\mapsto\tilde{k}_{I}(x,x^{\prime})\) is integrally positive definite, it is sufficient to show that \(\tilde{r}_{\Sigma^{*}}(Z)\) is not almost surely equal to zero, or equivalently, \(\mathbb{E}[r_{\Sigma^{*}}(X,Y)|X]\) is not almost surely equal to zero under the probability measure \(\mathbb{P}\). To see the last part \(\mathbb{E}[r_{\Sigma^{*}}(X,Y)|X]\neq 0\), we use the Euler-Lagrange equation. Note the Euler-Lagrange equation (Lemma 3.5) shows the following holds for every \(h\in\mathcal{H}_{\Sigma^{*}}\): \[\mathbb{E}[r_{\Sigma^{*}}(X,Y)h(X)]=\lambda\langle f_{\Sigma^{*}},h\rangle_{ \mathcal{H}^{*}_{\Sigma}}. \tag{3.17}\] Suppose \(\mathbb{E}[r_{\Sigma^{*}}(X,Y)|X]=0\). Then \(f_{\Sigma^{*}}=0\) must be the zero function. Then \(r_{\Sigma^{*}}(X,Y)=Y-\mathbb{E}[Y]\) which in turn implies \(\mathbb{E}[Y|X]=\mathbb{E}[Y]\) and \(\operatorname{Var}(\mathbb{E}[Y|X])=0\), a contradiction to Assumption 2. This means that \(\mathbb{E}[r_{\Sigma^{*}}(X,Y)|X]\neq 0\) under the \(\mathcal{L}_{2}(\mathbb{P})\), and thereby the inequality (3.15) holds. Finally, based on (3.13), (3.14), and (3.15), we can immediately see the existence of \(\rho>0\) such that the nondegeneracy condition (3.11) holds. **Corollary 3.1** (Sharpness Property).: _Let Assumptions 1-4 hold. Fix \(M\in(0,\infty)\) and \(|\!|\!|\cdot|\!|\!|\)._ _Let \(\lambda\in(0,\infty)\). 
For any minimizer \(\Sigma^{*}\), there exists \(\rho>0\) such that_ \[\langle\nabla J(\Sigma^{*}),W\rangle\geq\rho\,|\!|\!|W|\!|\!| \tag{3.18}\] _holds for every matrix \(W\in\mathcal{T}_{\mathcal{C}}(\Sigma^{*})\) with \(\operatorname{col}(W)\subseteq S^{\perp}_{*}\)._ **Proof** The proof uses a basic characterization of the set \[\mathcal{W}=\{W:W\in\mathcal{T}_{\mathcal{C}}(\Sigma^{*}),\ \operatorname{col}(W) \subseteq S^{\perp}_{*}\}.\] **Lemma 3.7**.: \(\mathcal{W}=\{W:W=\Pi_{S^{\perp}_{*}}W\Pi_{S^{\perp}_{*}}\succeq 0\}\)_._ **Proof** Suppose \(W\in\mathcal{W}\). Now we argue that \(W=\Pi_{S^{\perp}_{*}}W\Pi_{S^{\perp}_{*}}\succeq 0\). To see this, we first note that \(W\in\mathcal{T}_{\mathcal{C}}(\Sigma^{*})\). This implies that \(W=\lim_{n}t_{n}(\Sigma_{n}-\Sigma^{*})\) for some sequence \(\Sigma_{n}\in\mathcal{C}\) and \(t_{n}>0\). As \(\operatorname{col}(\Sigma^{*})\subseteq\operatorname{col}(S_{*})\) by Theorem 1.1, this shows \(\Pi_{S^{\perp}_{*}}(\Sigma_{n}-\Sigma^{*})\Pi_{S^{\perp}_{*}}=\Pi_{S^{\perp}_{ *}}\Sigma_{n}\Pi_{S^{\perp}_{*}}\succeq 0\) for every \(n\). As a result, \(\Pi_{S^{\perp}_{*}}W\Pi_{S^{\perp}_{*}}=\lim_{n}\{t_{n}\Pi_{S^{\perp}_{*}}( \Sigma_{n}-\Sigma^{*})\Pi_{S^{\perp}_{*}}\}\succeq 0\). Because we have further \(W=W^{T}\) and \(\operatorname{col}(W)\subseteq S^{\perp}_{*}\), this yields \(W=\Pi_{S^{\perp}_{*}}W\Pi_{S^{\perp}_{*}}\succeq 0\). On the other hand, it is clear that any matrix \(W\succeq 0\) must belong to \(\mathcal{T}_{\mathcal{C}}(\Sigma^{*})\), and any matrix \(W=\Pi_{S^{\perp}_{*}}W\Pi_{S^{\perp}_{*}}\) must obey \(\operatorname{col}(W)\subseteq S^{\perp}_{*}\). Take \(W\in\mathcal{T}_{\mathcal{C}}(\Sigma^{*})\) with \(\operatorname{col}(W)\subseteq\operatorname{col}(S^{\perp}_{*})\). Then \(W=\Pi_{S^{\perp}_{*}}W\Pi_{S^{\perp}_{*}}\succeq 0\) by Lemma 3.7. Thereby, \[\langle\nabla J(\Sigma^{*}),W\rangle=\langle\Pi_{S^{\perp}_{*}}\nabla J(\Sigma ^{*})\Pi_{S^{\perp}_{*}},W\rangle.\] Note we can lower bound the RHS quantity by \(\rho\operatorname{tr}(W)\) for any constant \(\rho>0\) that satisfies the nondegeneracy condition (3.11) in Theorem 3.1. The proof is complete since the mappings \(W\mapsto\operatorname{tr}(W)\) and \(W\mapsto|\!|\!|W|\!|\) are uniformly bounded by each other up to a positive constant over \(W\in\mathcal{C}\). **Remark** (Equivalence between sharpness property and non-degeneracy condition) From the proof of Corollary 3.1, we can see the non-degeneracy condition (3.11) and the sharpness property (3.18) are indeed equivalents. Specifically, given \(\Sigma\) for which \(\operatorname{col}(\Sigma)\subseteq S\) where \(S\) is a linear subspace: A differentiable function \(f:\mathcal{C}\to\mathbb{R}\) is said to satisfy the non-degeneracy condition at \(\Sigma\in\mathcal{C}\) if \(\Pi_{S^{\perp}}\nabla J(\Sigma)\Pi_{S^{\perp}}\succeq\rho\Pi_{S^{\perp}}\) for some \(\rho>0\). This is equivalent to the sharpness property that requires for some \(\rho>0\), \(\langle\nabla f(\Sigma),W\rangle\geq\rho\,|\!|\!|W|\!|\!|\) holds for all \(W\in\mathcal{T}_{\mathcal{C}}(\Sigma)\) with \(\operatorname{col}(W)\subseteq S^{\perp}\). **Remark** (Connection between non-degeneracy condition and strict complementary slackness) The non-degeneracy condition (3.11) at the minimizer \(\Sigma^{*}\) bears a close resemblance, albeit distinct from, the strict complementary slackness condition in optimization [1, Section 3.3.3]. Below we discuss their connections. Suppose \(|\!|\!|\Sigma^{*}|\!|\!|<M\) holds. 
Then \(\Sigma^{*}\) is a local minimizer of \[\operatorname*{minimize}_{\Sigma}J(\Sigma)\quad\text{ subject to }\quad\Sigma \succeq 0.\] where \(\Sigma\succeq 0\) is the only possibly active constraint for the minimizer \(\Sigma^{*}\). The strict complementary slackness condition at \(\Sigma^{*}\) is then as follows: for some constant \(\rho>0\), \[\Pi_{\mathrm{col}(\Sigma^{*})^{\perp}}\nabla J(\Sigma^{*})\Pi_{\mathrm{col}( \Sigma^{*})^{\perp}}\succeq\rho\Pi_{\mathrm{col}(\Sigma^{*})^{\perp}}. \tag{3.19}\] Clearly, the non-degeneracy condition (3.11) and the complementary slackness condition (3.19) are identical when \(\mathrm{col}(\Sigma^{*})=S_{*}\), which occurs when \(\lambda\in(0,\lambda_{0}]\) by Theorem 1.1. In such situations, we'll demonstrate that the empirical minimizer \(\Sigma_{n}^{*}\) retains the same rank, \(\dim(S_{*})\), as the population minimizer \(\Sigma^{*}\) with high probability. However, for general \(\lambda>0\), the most we can say is that \(\mathrm{col}(\Sigma^{*})\subseteq S_{*}\) as also indicated by Theorem 1.1. Given this, the non-degeneracy condition (3.11) is implied by the strict complementary slackness condition (3.19), but not the other way around. In this more general setting, we'll show the empirical minimizer \(\Sigma_{n}^{*}\) has rank at most \(\dim(S_{*})\) with high probability. ### Discussion of Gradient Formula The gradient formula (3.10) expresses a relation between the gradient of the objective \(J\) and the residual \(r\), both implicitly defined through the variational problem (1.4). In this section, we describe a brief set of heuristics that render this formula plausible. For simplicity, we first restrict our discussion to \(\Sigma\succ 0\). Formal justifications of the gradient formula (3.10) based on the intuitions below for all \(\Sigma\succeq 0\) are given in Appendix B. #### 3.3.1 Invoking the Envelope Theorem Recall that \(J\) is the minimum value of the variational problem. \[J(\Sigma)=\min_{f,\gamma}\frac{1}{2}\mathbb{E}[(Y-f(X)-\gamma)^{2}]+\frac{ \lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}. \tag{3.20}\] Let us use \((f_{\Sigma},\gamma_{\Sigma})\) to denote the minimizer of the variational problem on the RHS. Our derivation starts with an application of the envelope theorem, which states under regularity conditions, the gradient of the value function of a variational problem \(v(x)=\min_{y}c(x,y)\) exists, and satisfies \(\nabla v(x)=\partial_{x}c(x,y(x))\) where \(y(x)\) is the minimizer of \(\min_{y}c(x,y)\) for a given \(x\). We wish to apply the envelope theorem to obtain a formula for the gradient of \(J\). We substitute the analytic form of \(\left\|f\right\|_{\mathcal{H}_{\Sigma}}\) into equation (3.20): \[J(\Sigma)=\min_{f,\gamma}\frac{1}{2}\mathbb{E}[(Y-f(X)-\gamma)^{2}]+\frac{ \lambda}{2}\int\frac{|\mathsf{F}f(\omega)|^{2}}{Q_{\Sigma}(\omega)}d\omega. \tag{3.21}\] Assume regularity holds so the envelope theorem applies. Then we are able to derive \[\nabla J(\Sigma)=-\frac{\lambda}{2}\int\frac{|\mathsf{F}f_{\Sigma}(\omega)|^{ 2}}{Q_{\Sigma}^{2}(\omega)}\partial_{\Sigma}Q_{\Sigma}(\omega)d\omega. \tag{3.22}\] Operationally, we derive equation (3.22) following the envelope theorem. Indeed, we start by taking the partial derivative of the cost function of the variational problem (3.21) with respect to \(\Sigma\), and then replace \(f\) with \(f_{\Sigma}\) since \((f_{\Sigma},\gamma_{\Sigma})\) is the minimizer of the variational problem for a given \(\Sigma\). 
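Before continuing the derivation, here is a small numerical sanity check of the residual-based gradient formula (3.10), in the expanded form (3.12), applied to the empirical objective obtained by replacing \(\mathbb{P}\) with the empirical measure (cf. Section 4). This is a sketch of our own: the single-scale Gaussian kernel \(\phi(z)=e^{-z}\), the data-generating model, and all variable names are illustrative assumptions rather than choices made in the paper. It computes \(J_{n}(\Sigma)\) by kernel ridge regression with an intercept and compares the formula against a symmetric finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 120, 3, 0.3
X = rng.standard_normal((n, p))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)   # Y depends on the first coordinate only

def fit(Sigma):
    """Kernel ridge with intercept for k_Sigma(x, x') = exp(-||x - x'||_Sigma^2).
    Returns J_n(Sigma), the residuals, the kernel matrix, and pairwise differences."""
    D = X[:, None, :] - X[None, :, :]                 # pairwise differences, shape (n, n, p)
    K = np.exp(-np.einsum('ijk,kl,ijl->ij', D, Sigma, D))
    # First-order conditions for alpha (f = sum_i alpha_i k(x_i, .)) and the intercept gamma.
    A = np.block([[K + n * lam * np.eye(n), np.ones((n, 1))],
                  [np.ones((1, n)) @ K,     np.array([[n]])]])
    sol = np.linalg.solve(A, np.concatenate([y, [y.sum()]]))
    alpha, gamma = sol[:n], sol[n]
    r = y - K @ alpha - gamma
    J = 0.5 * np.mean(r**2) + 0.5 * lam * alpha @ K @ alpha
    return J, r, K, D

Sigma = np.eye(p)
J, r, K, D = fit(Sigma)

# Expanded formula (3.12) under the empirical measure:
# grad J_n = (1/(2*lam)) * (1/n^2) * sum_{i,j} r_i r_j k(x_i, x_j) (x_i - x_j)(x_i - x_j)^T
G = np.einsum('i,j,ij,ijk,ijl->kl', r, r, K, D, D) / (2 * lam * n**2)

# Symmetric finite-difference check along a random symmetric direction W.
W = rng.standard_normal((p, p)); W = (W + W.T) / 2
eps = 1e-5
diff = (fit(Sigma + eps * W)[0] - fit(Sigma - eps * W)[0]) / (2 * eps)
print("formula:", np.sum(G * W), " finite difference:", diff)   # the two should agree closely
```

The agreement reflects exactly the envelope-theorem reasoning of this section: differentiating the minimum value only picks up the explicit dependence of the objective on \(\Sigma\) at the fixed minimizer, which the residuals encode.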
#### 3.3.2 Using Fourier Basis as Test Functions Here we further characterize the integrand on the RHS of equation (3.22). By taking a first-order variation of the variational problem (3.21), we derive the Euler-Lagrange equation that the minimizer \(f_{\Sigma}\) must obey. For every test function \(h\in\mathcal{H}_{\Sigma}\), there is \[\mathbb{E}[r_{\Sigma}(X,Y)h(X)]=\lambda\cdot\int\frac{\mathsf{F}f_{\Sigma}( \omega)\overline{\mathsf{F}h(\omega)}}{Q_{\Sigma}(\omega)}d\omega \tag{3.23}\] by Proposition 3 and Lemma 3.5. Recall \(\left\|h\right\|_{\mathcal{H}_{\Sigma}}^{2}=\int\frac{|\mathsf{F}h|^{2}( \omega)}{Q_{\Sigma}(\omega)}d\omega\) for a real-valued function \(h\in\mathcal{H}_{\Sigma}\). By decomposing a complex-valued function into real and imaginary parts, we can see that \[\mathbb{E}[r_{\Sigma}(X,Y)\overline{h(X)}]=\lambda\cdot\int\frac{\mathsf{F}f_{ \Sigma}(\omega)\overline{\mathsf{F}h(\omega)}}{Q_{\Sigma}(\omega)}d\omega \tag{3.24}\] indeed holds for every \(\mathbb{C}\)-valued function \(h\) satisfying \(\int\frac{|\mathsf{F}h|^{2}(\omega)}{Q_{\Sigma}(\omega)}d\omega<\infty\). Let us use \(h_{\omega_{0}}(x)=e^{i\langle x,\omega_{0}\rangle}\) to denote the Fourier basis. Then in Fourier analysis we recognize that \(\mathsf{F}h_{\omega_{0}}(\cdot)\) is, in a distributional sense, equal to \(\delta(\cdot-\omega_{0})\), the Dirac delta "function" centered at \(\omega_{0}\). Given this, we assume temporarily that equation (3.24) also holds for the Fourier basis \(h=h_{\omega_{0}}\) in the following sense (which is what we will prove rigorously in the Appendix B). That is, for all \(\omega_{0}\) except for a Lebesgue measure zero set, there is \[\mathbb{E}\left[r_{\Sigma}(X,Y)e^{-i\langle\omega_{0},X\rangle}\right]= \lambda\cdot\int\frac{\mathsf{F}f_{\Sigma}(\omega)\delta(\omega-\omega_{0})}{Q _{\Sigma}(\omega)}d\omega=\lambda\cdot\frac{\mathsf{F}f_{\Sigma}(\omega_{0})}{ Q_{\Sigma}(\omega_{0})}. \tag{3.25}\] By taking the squared norm of both sides we obtain that \[\mathbb{E}\left[r_{\Sigma}(X,Y)r_{\Sigma}(X^{\prime},Y^{\prime})e^{-i\langle \omega_{0},X-X^{\prime}\rangle}\right]=\lambda^{2}\cdot\frac{|\mathsf{F}f_{ \Sigma}(\omega_{0})|^{2}}{Q_{\Sigma}^{2}(\omega_{0})} \tag{3.26}\] holds for almost everywhere \(\omega_{0}\) under the Lebesgue measure. #### 3.3.3 Finalizing Computations We now substitute equation (3.26) into equation (3.22). This renders the identity \[\nabla J(\Sigma)=-\frac{1}{2\lambda}\int\mathbb{E}\left[r_{\Sigma}(X,Y)r_{ \Sigma}(X^{\prime},Y^{\prime})e^{-i\langle\omega,X-X^{\prime}\rangle}\right] \partial_{\Sigma}Q_{\Sigma}(\omega)d\omega. \tag{3.27}\] As \(k_{\Sigma}(x,x^{\prime})=\int e^{-i\langle\omega,x-x^{\prime}\rangle}Q_{\Sigma }(\omega)d\omega\) holds for all \(\Sigma\), we take partial derivative with respect to \(\Sigma\): \[\partial_{\Sigma}k_{\Sigma}(x,x^{\prime})=\int e^{-i\langle\omega,x-x^{ \prime}\rangle}\partial_{\Sigma}Q_{\Sigma}(\omega)d\omega.\] Substituting this identity into equation (3.27) yields the desired gradient formula (3.10). #### 3.3.4 Caveats Turning the above rough idea into a rigorous proof, of course, takes substantial work! Most notably, for \(\Sigma\succ 0\), we need to justify equations (3.22) and (3.25) with rigor. The former is justified by proving the required regularity of \(f_{\Sigma}\) for the envelope theorem. For the latter, we rely on the mollifier technique in Fourier analysis. Also, there's the additional work to extend the aforementioned arguments to \(\Sigma\succeq 0\). 
Once we obtain the gradient formula (3.10), its continuity in \(\Sigma\) is immediate. For proof details, see Appendix B.

## 4 Uniform Approximation of Empirical Objective

In this section, we show that, on any given compact set, the empirical objective \(J_{n}\) converges uniformly (with probability tending to one) to its population counterpart \(J\). Furthermore, we show a parallel result for the gradient: \(\nabla J_{n}\) converges uniformly to \(\nabla J\).

**Theorem 4.1** (Uniform Convergence of Objectives and Gradients).: _For any compact set \(\boldsymbol{\Sigma}\) within the semidefinite cone \(\{\Sigma:\Sigma\succeq 0\}\), we have the following convergence as the sample size \(n\to\infty\):_
\[\sup_{\Sigma\in\boldsymbol{\Sigma}}|J_{n}(\Sigma)-J(\Sigma)|=o_{P}(1) \tag{4.1}\]
\[\sup_{\Sigma\in\boldsymbol{\Sigma}}\left|\!\left|\!\left|\nabla J_{n}(\Sigma)-\nabla J(\Sigma)\right|\!\right|\!\right|=o_{P}(1) \tag{4.2}\]

The main challenge in establishing uniform convergence comes from the appearance of the prediction function \(f\), which is _infinite-dimensional_ in nature, in the definition of \(J\). Note, however, that the uniform convergence of the objective value (equation (4.1)), for a specific set \(\boldsymbol{\Sigma}\), appeared in the statistics literature [10], where \(\boldsymbol{\Sigma}\) collects all projection matrices of a given rank. Here, we show that uniform convergence holds for a general compact set \(\boldsymbol{\Sigma}\). The main contribution here, therefore, is to show the uniform convergence of the gradient (equation (4.2)).

For our proof, we repackage key elements of Fukumizu et al.'s proof of the convergence of objective values, namely, tools from empirical process theory in the RKHS setting. More precisely, Fukumizu et al.'s arguments construct a suitable family of covariance operators in the RKHS, and by showing their uniform convergence, infer the same for the objective value, as the objective is a continuous functional of the operators. Our arguments show that the uniform convergence of the operators also leads to that of the gradient, utilizing the connections between the gradient and the covariance operators that we build below based on the gradient formula in Lemma 3.6. Section 4.1 elaborates on these ideas, and the organization of the remaining proofs is outlined at the end of Section 4.1.

### Analysis Framework: Representation Using Covariance Operators

In our convergence analysis, we shall study our objective function \(J\) and its gradient \(\nabla J\). Here we introduce notation that emphasizes their dependence on the underlying probability measure \(\mathbb{Q}\):
\[J(\Sigma;\mathbb{Q})=\min_{f,\gamma}\frac{1}{2}\mathbb{E}_{\mathbb{Q}}\left[(Y-f(X)-\gamma)^{2}\right]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}. \tag{4.3}\]
Under this notation, there is a unified expression for the population and empirical objective:
\[J(\Sigma)=J(\Sigma;\mathbb{P})\ \ \text{and}\ \ J_{n}(\Sigma)=J(\Sigma;\mathbb{P}_{n}).\]
Here \(\mathbb{Q}\) can be either the population measure \(\mathbb{P}\) or the empirical measure \(\mathbb{P}_{n}\). We are interested in how much \(J(\Sigma;\mathbb{Q})\) and \(\nabla J(\Sigma;\mathbb{Q})\) change when we replace \(\mathbb{Q}=\mathbb{P}\) by \(\mathbb{Q}=\mathbb{P}_{n}\). Clearly, the analysis would be simpler if we could focus on a single RKHS \(\mathcal{H}_{I}\) rather than the family of RKHSs \(\mathcal{H}_{\Sigma}\) indexed by \(\Sigma\).
Motivated by Lemma 3.1, we introduce for every probability measure \(\mathbb{Q}\):
\[H(U;\mathbb{Q})=\min_{f,\gamma}\frac{1}{2}\mathbb{E}_{\mathbb{Q}}\left[(Y-f(U^{T}X)-\gamma)^{2}\right]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{I}}^{2}. \tag{4.4}\]
By Lemma 3.1 (which, as demonstrated in its proof, also applies to \(\mathbb{Q}=\mathbb{P}_{n}\)), there is an identity that connects \(J\) and \(H\) at every \(\Sigma=UU^{T}\):
\[J(\Sigma;\mathbb{Q})=H(U;\mathbb{Q}). \tag{4.5}\]
Below, we use the tuple \((f_{U;\mathbb{Q}},\gamma_{U;\mathbb{Q}})\) to denote the unique minimizer of the variational objective on the RHS of equation (4.4). It is easy to note that the intercept obeys \(\gamma_{U;\mathbb{Q}}=\mathbb{E}_{\mathbb{Q}}[Y]-\mathbb{E}_{\mathbb{Q}}[f_{U;\mathbb{Q}}(U^{T}X)]\).

A central tool in our convergence analysis is the _cross-covariance operator_, introduced in [1]. Essentially, the cross-covariance operator, as a generalization of the covariance matrix, captures covariance relations between random variables under a collection of test functions. Below, for a given matrix \(U\), we construct covariance operators \(\mathsf{C}_{U;\mathbb{Q}},\mathsf{V}_{U;\mathbb{Q}}\) in the same way as in [10]. Later, we shall see how these operators connect to \(J(\Sigma;\mathbb{Q})\) and \(\nabla J(\Sigma;\mathbb{Q})\).

Given the RKHS \(\mathcal{H}_{I}\), and a matrix \(U\), we use \(\mathsf{C}_{U;\mathbb{Q}}:\mathcal{H}_{I}\to\mathcal{H}_{I}\) to denote the unique linear operator such that for every pair of test functions \(h_{1},h_{2}\in\mathcal{H}_{I}\)
\[\mathrm{Cov}_{\mathbb{Q}}(h_{1}(U^{T}X),h_{2}(U^{T}X))=\langle h_{1},\mathsf{C}_{U;\mathbb{Q}}h_{2}\rangle_{\mathcal{H}_{I}}. \tag{4.6}\]
Similarly, we use \(\mathsf{V}_{U;\mathbb{Q}}:\mathcal{H}_{I}\to\mathbb{R}\) to denote the unique linear operator such that for every \(h\in\mathcal{H}_{I}\)
\[\mathrm{Cov}_{\mathbb{Q}}(h(U^{T}X),Y)=\mathsf{V}_{U;\mathbb{Q}}h. \tag{4.7}\]
The existence of such cross-covariance operators \(\mathsf{C}_{U;\mathbb{Q}}\) and \(\mathsf{V}_{U;\mathbb{Q}}\) is discussed in [10, Section 2]. The operator \(\mathsf{C}_{U;\mathbb{Q}}\) is always a bounded, self-adjoint, non-negative operator, for every \(U\) and measure \(\mathbb{Q}\). The operator \(\mathsf{V}_{U;\mathbb{Q}}\) is bounded whenever \(\mathbb{E}_{\mathbb{Q}}[Y^{2}]<\infty\), in which case the Riesz representation theorem says there is \(v_{U;\mathbb{Q}}\in\mathcal{H}_{I}\) for which \(\mathsf{V}_{U;\mathbb{Q}}h=\langle v_{U;\mathbb{Q}},h\rangle_{\mathcal{H}_{I}}\) holds for every \(h\in\mathcal{H}_{I}\). Furthermore, there is the isometry \(|\!|\!|\mathsf{V}_{U;\mathbb{Q}}|\!|\!|_{\mathrm{op}}=\|v_{U;\mathbb{Q}}\|_{\mathcal{H}_{I}}\).

These operators connect to the objective and its gradient as follows. First, the minimizer of the variational problem (4.4) admits the representation
\[f_{U;\mathbb{Q}}=(\mathsf{C}_{U;\mathbb{Q}}+\lambda I)^{-1}v_{U;\mathbb{Q}}. \tag{4.9}\]
Second, substituting this representation back into (4.4) expresses the objective value through the operators:
\[J(\Sigma;\mathbb{Q})=H(U;\mathbb{Q})=\frac{1}{2}\mathrm{Var}_{\mathbb{Q}}(Y)-\frac{1}{2}\big\langle v_{U;\mathbb{Q}},(\mathsf{C}_{U;\mathbb{Q}}+\lambda I)^{-1}v_{U;\mathbb{Q}}\big\rangle_{\mathcal{H}_{I}}. \tag{4.10}\]
Third, the gradient formula of Lemma 3.6 continues to hold with \(\mathbb{P}\) replaced by \(\mathbb{Q}\):
\[\nabla J(\Sigma;\mathbb{Q})=-\frac{1}{2\lambda}\mathbb{E}_{\mathbb{Q}}\big[r_{\Sigma;\mathbb{Q}}(X,Y)r_{\Sigma;\mathbb{Q}}(X^{\prime},Y^{\prime})\partial_{\Sigma}k_{\Sigma}(X,X^{\prime})\big] \tag{4.11}\]
where \((X,Y),(X^{\prime},Y^{\prime})\) are independent copies from \(\mathbb{Q}\).
Here, the function \(r_{\Sigma;\mathbb{Q}}(x,y)\), which denotes the residual at \(\Sigma\) under \(\mathbb{Q}\), satisfies the following relation for every \(\Sigma=UU^{T}\): \[\begin{split} r_{\Sigma;\mathbb{Q}}(x,y)&=y-f_{U; \mathbb{Q}}(U^{T}x)-\gamma_{U;\mathbb{Q}}\\ &=y-f_{U;\mathbb{Q}}(U^{T}x)-(\mathbb{E}_{\mathbb{Q}}[Y]- \mathbb{E}_{\mathbb{Q}}[f_{U;\mathbb{Q}}(U^{T}X)]).\end{split} \tag{4.12}\] Recall \(f_{U;\mathbb{Q}}\) can be represented by the covariance operators through equation (4.9). Together, equations (4.11), (4.12), and (4.9) form the needed connection between the gradient and cross-covariance operators for our proof. Here's the roadmap for the rest of the proofs. In Section 4.2, we establish the uniform convergence of the empirical covariance operators to the population counterparts. Section 4.3 shows how this convergence of operators translates to that of the objectives \(J\), while Section 4.4 shows how this convergence of operators translates to that of the gradients \(\nabla J\). ### Uniform Convergence of Covariance Operators It is widely known in the literature that there is the pointwise convergence of the empirical cross-covariance operators to the population counterpart under the operator norm, e.g., [11, 12, 13]. Indeed, given a matrix \(U\) and a measure \(\mathbb{P}\), there's the convergence in probability as the sample size \(n\to\infty\): \[\left|\!\left|\!\left|\mathsf{C}_{U;\mathbb{P}_{n}}-\mathsf{C}_{U;\mathbb{P}} \right|\!\right|\!\right|_{\mathrm{op}}=o_{P}(1),\ \ \ \ \ \left|\!\left|\!\left|\mathsf{V}_{U;\mathbb{P}_{n}}-\mathsf{V}_{U;\mathbb{P}} \right|\!\right|\!\right|_{\mathrm{op}}=o_{P}(1). \tag{4.13}\] Here, we need to refine pointwise convergence by showing that such convergence could be made uniform over a collection of matrices \(U\in\mathsf{U}\). While uniform convergence of such operators has been shown for specific sets that comprise projection matrices of a given rank [12], here we show the same result indeed holds for a general compact set \(\mathsf{U}\). The proof is deferred to Appendix A.1. **Lemma 4.1** (Uniform Convergence of Covariance Operators).: _For any compact set \(\mathsf{U}\), we have the convergence as the sample size \(n\to\infty\):_ \[\sup_{U\in\mathsf{U}}\left|\!\left|\!\left|\mathsf{C}_{U;\mathbb{P}_{n}}- \mathsf{C}_{U;\mathbb{P}}\right|\!\right|\!\right|_{\mathrm{op}}=o_{P}(1),\ \ \ \ \ \sup_{U\in\mathsf{U}}\left|\!\left|\!\left|\mathsf{V}_{U;\mathbb{P}_{n}}- \mathsf{V}_{U;\mathbb{P}}\right|\!\right|\!\right|_{\mathrm{op}}=o_{P}(1). \tag{4.14}\] ### Uniform Convergence of Objectives We prove Part I of Theorem 4.1, which asserts the uniform convergence of objective values described in (4.1). The key ingredients are the uniform convergence of the covariance operators, and the relation between the objective value and the covariance operators given in equation (4.10). Recall equation (4.10). For the measure \(\mathbb{Q}\), there is the identity that holds at every \(\Sigma=UU^{T}\): \[J(\Sigma;\mathbb{Q})=\frac{1}{2}\mathrm{Var}_{\mathbb{Q}}(Y)-\frac{1}{2}T( \mathsf{C}_{U;\mathbb{Q}},v_{U;\mathbb{Q}}) \tag{4.15}\] where the notation \(T\) denotes the functional \(\mathsf{P}(\mathcal{H}_{I})\times\mathcal{H}_{I}\to\mathbb{R}\): \[T:(\mathsf{C},v)\mapsto\left\|(\mathsf{C}+\lambda I)^{-1/2}v\right\|_{ \mathcal{H}_{I}}^{2}. 
\tag{4.16}\] In the above, \(\mathsf{B}(\mathcal{H}_{I})\) denote the Banach space consisting of all self-adjoint and bounded operators on the Hilbert space \(\mathcal{H}_{I}\), endowed with the operator norm topology. Let \(\mathsf{P}(\mathcal{H}_{I})\subseteq\mathsf{B}(\mathcal{H}_{I})\) denote the subset of operators from \(\mathsf{B}(\mathcal{H}_{I})\) that are non-negative and equipped with the subspace topology. Let \(\mathsf{P}(\mathcal{H}_{I})\times\mathcal{H}_{I}\) denote the Cartesian product equipped with the product topology. Note then, for \(\lambda>0\), it is easy to show the mapping \(T\) is continuous on its domain \(\mathsf{P}(\mathcal{H}_{I})\times\mathcal{H}_{I}\). The uniform convergence of covariance operators on every compact set \(\mathsf{U}\) (Lemma 4.1) translates to that of the objective values. Indeed, for every compact set \(\mathsf{U}\), we must have as \(n\to\infty\): \[\sup_{U\in\mathsf{U}}|T(\mathsf{C}_{U;\mathbb{P}_{n}},v_{U;\mathbb{P}_{n}})-T (\mathsf{C}_{U;\mathbb{P}},v_{U;\mathbb{P}})|=o_{P}(1).\] Consequentially, using (4.15), this leads to the convergence \[\sup_{\Sigma\in\boldsymbol{\Sigma}}|J(\Sigma;\mathbb{Q}_{n})-J(\Sigma; \mathbb{Q})|=o_{P}(1)\] that holds on every compact set \(\boldsymbol{\Sigma}\) as \(n\to\infty\). This completes the proof. ### Uniform Convergence of Gradients Here we prove Part II of Theorem 4.1, namely, the uniform convergence of gradients in equation (4.2). Recall the formula for the population and empirical gradient: \[\nabla J(\Sigma;\mathbb{P}) =-\frac{1}{2\lambda}\mathbb{E}_{\mathbb{P}}[r_{\Sigma;\mathbb{P} }(X;Y)r_{\Sigma;\mathbb{P}}(X^{\prime};Y^{\prime})\partial_{\Sigma}k_{\Sigma}( X,X^{\prime})] \tag{4.17}\] \[\nabla J(\Sigma;\mathbb{P}_{n}) =-\frac{1}{2\lambda}\mathbb{E}_{\mathbb{P}_{n}}[r_{\Sigma; \mathbb{P}_{n}}(X;Y)r_{\Sigma;\mathbb{P}_{n}}(X^{\prime};Y^{\prime})\partial_{ \Sigma}k_{\Sigma}(X,X^{\prime})]\] In the proof, we introduce an intermediate quantity \(\psi_{n}(\Sigma)\), which serves as a proof device that interpolates between the population gradient \(\nabla J(\Sigma;\mathbb{P})\) and the empirical gradient \(\nabla J(\Sigma;\mathbb{P}_{n})\): \[\psi_{n}(\Sigma)=-\frac{1}{2\lambda}\mathbb{E}_{\mathbb{P}_{n}}[r_{\Sigma; \mathbb{P}}(X;Y)r_{\Sigma;\mathbb{P}}(X^{\prime};Y^{\prime})\partial_{\Sigma}k _{\Sigma}(X,X^{\prime})]. \tag{4.18}\] The convergence of gradients is then reduced to the following two lemma, which say that \(\psi_{n}(\Sigma)\) is uniformly close to the empirical gradient \(\nabla J(\Sigma;\mathbb{P}_{n})\) as well as the population gradient \(\nabla J(\Sigma;\mathbb{P})\) over \(\Sigma\in\boldsymbol{\Sigma}\). With these two results, we conclude the uniform convergence of the gradients. **Lemma 4.2** (Uniform Closeness between \(\psi_{n}(\Sigma)\) and \(\nabla J(\Sigma;\mathbb{P}_{n})\)).: _For any compact set \(\boldsymbol{\Sigma}\), there's the convergence as the sample size \(n\to\infty\):_ \[\sup_{\Sigma\in\boldsymbol{\Sigma}}\left|\!\left|\nabla J(\Sigma;\mathbb{P}_{ n})-\psi_{n}(\Sigma)\right|\!\right|\!\right|=o_{P}(1). \tag{4.19}\] **Lemma 4.3** (Uniform Closeness between \(\psi_{n}(\Sigma)\) and \(\nabla J(\Sigma;\mathbb{P})\)).: _For any compact set \(\boldsymbol{\Sigma}\), there's the convergence as the sample size \(n\to\infty\):_ \[\sup_{\Sigma\in\boldsymbol{\Sigma}}\left|\!\left|\nabla J(\Sigma;\mathbb{P})- \psi_{n}(\Sigma)\right|\!\right|\!\right|=o_{P}(1). 
\tag{4.20}\] #### 4.4.1 Proof of Lemma 4.2 Uniform Convergence of ResidualsThe key to prove Lemma 4.2 is the uniform convergence of the residual function under the \(L_{\infty}\) norm. **Lemma 4.4** (Uniform Convergence of Residuals).: _For any compact set \(\boldsymbol{\Sigma}\), we have the convergence as the sample size \(n\to\infty\):_ \[\sup_{\Sigma\in\boldsymbol{\Sigma}}\left\|r_{\Sigma;\mathbb{P}_{n}}-r_{\Sigma ;\mathbb{P}}\right\|_{\infty}=o_{P}(1). \tag{4.21}\] **Proof** We initiate our proof with a bound on the gap of the residuals that holds for \(\Sigma=UU^{T}\): \[\left\|r_{\Sigma;\mathbb{P}_{n}}-r_{\Sigma;\mathbb{P}}\right\|_{\infty}\leq 2 \left\|f_{U;\mathbb{P}}-f_{U;\mathbb{P}_{n}}\right\|_{\infty}\leq 2\sqrt{\phi(0)} \left\|f_{U;\mathbb{P}}-f_{U;\mathbb{P}_{n}}\right\|_{\mathcal{H}_{I}}. \tag{4.22}\] In the above, the first inequality is due to (4.12) and the triangle inequality. The second inequality is due to the continuous embedding of the RKHS \(\mathcal{H}_{I}\) into the space \(\mathcal{C}_{b}\) (Proposition 1). Thus, it remains to prove that for every compact set \(\mathsf{U}\), there's the convergence as \(n\to\infty\): \[\sup_{U\in\mathsf{U}}\left\|f_{U;\mathbb{P}}-f_{U;\mathbb{P}_{n}}\right\|_{ \mathcal{H}_{I}}=o_{P}(1). \tag{4.23}\] We shall see the uniform convergence of \(f\) is implied by that of the cross-covariance operators. Indeed, by equation (4.9) there's the identity that holds for every matrix \(U\in\mathsf{U}\) \[f_{U;\mathbb{P}}=W(\mathsf{C}_{U;\mathbb{P}},v_{U;\mathbb{P}})\ \ \text{and}\ \ f_{U;\mathbb{P}_{n}}=W(\mathsf{C}_{U;\mathbb{P}_{n}},v_{U;\mathbb{P}_{n}}).\] In the above, \(W:\mathsf{P}(\mathcal{H}_{I})\times\mathcal{H}_{I}\to\mathcal{H}_{I}\) denotes the mapping \[W:(\mathsf{C},v)\mapsto(\mathsf{C}+\lambda I)^{-1}v \tag{4.24}\] which is continuous on its domain whenever \(\lambda>0\) (\(\mathcal{H}_{I}\) is equipped with the norm topology, and the Cartesian product \(\mathsf{P}(\mathcal{H}_{I})\times\mathcal{H}_{I}\) is equipped with the product topology). As a result, the uniform convergence of the covariance operators (Lemma 4.1) indicates that of \(f\) (equation (4.23)). Completing the Proof of Lemma 4.2Let \(e_{1},e_{2},\ldots,e_{p}\) be a standard basis of \(\mathbb{R}^{p}\). To prove Lemma 4.2, it suffices to prove that entrywise there is the uniform convergence \[\sup_{\Sigma\in\mathsf{\Sigma}}|e_{i}^{T}(\nabla J(\Sigma;\mathbb{P}_{n})- \psi_{n}(\Sigma))e_{j}|=o_{P}(1). \tag{4.25}\] for every pair \((i,j)\). Below we establish this. Fix \((i,j)\). We compute and bound the difference \[\begin{split}&|e_{i}^{T}(\nabla J(\Sigma;\mathbb{P}_{n})-\psi_{n }(\Sigma))e_{j}|\\ &\quad=\frac{1}{2\lambda}\times\left\|\mathbb{E}_{P_{n}}\left[(r _{\Sigma;\mathbb{P}_{n}}(X,Y)-r_{\Sigma;\mathbb{P}}(X,Y))(r_{\Sigma;\mathbb{P} _{n}}(X^{\prime},Y^{\prime})+r_{\Sigma;\mathbb{P}}(X^{\prime},Y^{\prime})) \partial_{\Sigma_{ij}}k_{\Sigma}(X,X^{\prime})\right]\right|\\ &\quad\leq\frac{1}{2\lambda}\times\left\|(r_{\Sigma;\mathbb{P}_{n} }-r_{\Sigma;\mathbb{P}})(X,Y)\right\|_{L_{\infty}(\mathbb{P}_{n})}\times \left\|\partial_{\Sigma_{ij}}k_{\Sigma}(X,X^{\prime})\right\|_{L_{2}(\mathbb{P }_{n})}\times\left\|(r_{\Sigma;\mathbb{P}_{n}}+r_{\Sigma;\mathbb{P}})(X,Y) \right\|_{L_{2}(\mathbb{P}_{n})}.\end{split} \tag{4.26}\] The last line above is due to Holder's inequality. Below we bound individual terms on the RHS of equation (4.26) uniformly over \(\Sigma\in\mathsf{\Sigma}\). 
For the first term, it converges uniformly to zero in probability by Lemma 4.4: \[\sup_{\Sigma\in\mathsf{\Sigma}}\left\|(r_{\Sigma;\mathbb{P}_{n}}-r_{\Sigma; \mathbb{P}})(X,Y)\right\|_{L_{\infty}(\mathbb{P}_{n})}=o_{P}(1). \tag{4.27}\] For the second term, note \(\partial_{\Sigma_{ij}}k_{\Sigma}(x,x^{\prime})=\phi^{\prime}(\left\|x-x^{ \prime}\right\|_{\Sigma}^{2})e_{i}^{T}(x-x^{\prime})(x-x^{\prime})^{T}e_{j}\). By Holder's inequality we have the uniform estimate \[\sup_{\Sigma\in\mathsf{\Sigma}}\left\|\partial_{\Sigma_{ij}}k_{\Sigma}(X,X^{ \prime})\right\|_{L_{2}(\mathbb{P}_{n})}\leq 4\left\|\phi^{\prime}\right\|_{ \infty}\cdot\left\|e_{i}^{T}X\right\|_{L_{4}(\mathbb{P}_{n})}\cdot\left\|e_{j }^{T}X\right\|_{L_{4}(\mathbb{P}_{n})}=O_{P}(1). \tag{4.28}\] The last equality is due to \(\left\|\phi^{\prime}\right\|_{\infty}\leq\left|\phi^{\prime}_{+}(0)\right|<\infty\) and the moment assumption \(\mathbb{E}_{P}[\left\|X\right\|_{2}^{4}]<\infty\). For the last term, we use the basic estimate \(\left\|r_{\Sigma;\mathbb{Q}}(X,Y)\right\|_{L_{2}(\mathbb{Q})}\leq\left\|Y \right\|_{L_{2}(\mathbb{Q})}\) that holds for every matrix \(\Sigma\) and every measure \(\mathbb{Q}\) (Lemma 4.5). By triangle inequality, we obtain \[\sup_{\Sigma\in\Sigma}\left\|(r_{\Sigma;\mathbb{P}_{n}}+r_{\Sigma;\mathbb{P}}) (X,Y)\right\|_{L_{2}(\mathbb{P}_{n})}\leq\sup_{\Sigma\in\Sigma}\left\|(r_{ \Sigma;\mathbb{P}_{n}}-r_{\Sigma;\mathbb{P}})(X,Y)\right\|_{L_{\infty}( \mathbb{P}_{n})}+2\left\|Y\right\|_{L_{2}(\mathbb{P}_{n})}=O_{P}(1). \tag{4.29}\] where the last identity is due to Lemma 4.4 and the moment assumption \(\mathbb{E}_{P}[Y^{2}]<\infty\). Substituting all the above three uniform estimates, namely, equations from (4.27) to (4.29), into equation (4.26) yields the desired Lemma 4.2. **Lemma 4.5** (Residual Bound).: \(\left\|r_{\Sigma;\mathbb{Q}}(X,Y)\right\|_{L_{2}(\mathbb{Q})}^{2}\leq\mathrm{ Var}_{\mathbb{Q}}(Y)\) _holds for every \(\Sigma\) and measure \(\mathbb{Q}\)._ **Proof** It follows from the following chain of inequality: \[\frac{1}{2}\left\|r_{\Sigma;\mathbb{Q}}(X,Y)\right\|_{L_{2}( \mathbb{Q})}^{2}\leq J(\Sigma;\mathbb{Q}) =\min_{f,\gamma}\frac{1}{2}\mathbb{E}_{\mathbb{Q}}\left[(Y-f(X)- \gamma)^{2}\right]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}\] \[\leq\frac{1}{2}\mathbb{E}_{\mathbb{Q}}[(Y-\mathbb{E}_{\mathbb{Q} }[Y])^{2}]=\frac{1}{2}\mathrm{Var}_{\mathbb{Q}}(Y).\] The above two inequalities hold from the very definition. #### 4.4.2 Proof of Lemma 4.3 Let \(e_{1},e_{2},\ldots,e_{p}\) be a standard basis of \(\mathbb{R}^{p}\). It suffices to prove that entrywise there is the uniform convergence in probability for every pair of indices \((i,j)\) \[\sup_{\Sigma\in\Sigma}|e_{i}^{T}(\nabla J(\Sigma;\mathbb{P})-\psi_{n}(\Sigma ))e_{j}|=o_{P}(1). \tag{4.30}\] Fix \((i,j)\). To prove it, we apply a fundamental result due to [25, Theorem 2.1]. 
Given any compact set \(\mathsf{\Sigma}\), uniform convergence in probability holds (equation (4.30)) if and only if (i) pointwise convergence holds, i.e., for every given matrix \(\Sigma\), there is the convergence as \(n\to\infty\) \[e_{i}^{T}\psi_{n}(\Sigma)e_{j}-e_{i}^{T}\nabla J(\Sigma;\mathbb{P})e_{j}=o_{P}(1);\] and (ii) stochastic equicontinuity is enjoyed by the random map \(\Sigma\mapsto e_{i}^{T}\psi_{n}(\Sigma)e_{j}\), i.e., for every \(\epsilon,\eta>0\), there exists \(\delta>0\) such that \[\limsup_{n\to\infty}\mathbb{P}(\sup_{\left\|\Sigma-\Sigma^{\prime}\right\|_{\mathrm{op}}<\delta}|e_{i}^{T}\psi_{n}(\Sigma)e_{j}-e_{i}^{T}\psi_{n}(\Sigma^{\prime})e_{j}|>\epsilon)<\eta.\] Below we shall verify these two conditions, which imply Lemma 4.3. **Pointwise Convergence.** The individual convergence in probability follows from the law of large numbers applied to the V-statistics \[e_{i}^{T}\psi_{n}(\Sigma)e_{j}=\mathbb{E}_{P_{n}}[r_{\Sigma;\mathbb{P}}(X,Y)r_{\Sigma;\mathbb{P}}(X^{\prime},Y^{\prime})\partial_{\Sigma_{ij}}k_{\Sigma}(X,X^{\prime})],\] which converges almost surely to \(\mathbb{E}_{\mathbb{P}}[r_{\Sigma;\mathbb{P}}(X,Y)r_{\Sigma;\mathbb{P}}(X^{\prime},Y^{\prime})\partial_{\Sigma_{ij}}k_{\Sigma}(X,X^{\prime})]=e_{i}^{T}\nabla J(\Sigma;\mathbb{P})e_{j}\). **Stochastic Equicontinuity.** The stochastic equicontinuity is implied by the following condition ([11, Corollary 2.2]): there is a sequence of random variables \(B_{n}=O_{P}(1)\), and a function \(\omega:[0,\infty)\mapsto[0,\infty)\) with \(\omega(0)=0\) and \(\omega\) continuous at zero, such that \[|e_{i}^{T}\psi_{n}(\Sigma)e_{j}-e_{i}^{T}\psi_{n}(\Sigma^{\prime})e_{j}|\leq B_{n}\cdot\omega(\left|\!\left|\!\left|\Sigma-\Sigma^{\prime}\right|\!\right|\!\right|) \tag{4.31}\] holds for every \(\Sigma,\Sigma^{\prime}\in\mathsf{\Sigma}\). Below we construct \(B_{n}\) and \(\omega\) that satisfy this desirable property. The key ingredient is the continuity property of the residual function \(r_{\Sigma;\mathbb{P}}\) with respect to \(\Sigma\) (Lemma 4.6). Here we treat \(r_{\Sigma;\mathbb{P}}\) as an element of \(\mathsf{C}_{0}\), the space of continuous functions equipped with the \(\left\|\cdot\right\|_{\infty}\) norm. The proof of Lemma 4.6 is deferred to Appendix A.2. **Lemma 4.6** (Continuity of the Residual Function).: _For every \(\Sigma\in\mathsf{\Sigma}\), there's the limit_ \[\lim_{\Sigma^{\prime}\to\Sigma}\left\|r_{\Sigma^{\prime};\mathbb{P}}-r_{\Sigma;\mathbb{P}}\right\|_{\infty}=0. \tag{4.32}\] Below we define \(\omega\) to be the modulus of continuity \[\omega(\delta)=\sup\left\{\left\|r_{\Sigma;\mathbb{P}}-r_{\Sigma^{\prime};\mathbb{P}}\right\|_{\infty}\,\mid\,\left|\!\left|\!\left|\Sigma-\Sigma^{\prime}\right|\!\right|\!\right|\leq\delta,\ \Sigma,\Sigma^{\prime}\in\mathsf{\Sigma}\right\}.\] Certainly, \(\omega(0)=0\). By Lemma 4.6, \(\omega\) is continuous at zero. We now return to the construction of \(B_{n}=O_{P}(1)\) that yields condition (4.31).
We compute and bound for every \(\Sigma,\Sigma^{\prime}\in\mathsf{\Sigma}\) \[|e_{i}^{T}\psi_{n}(\Sigma)e_{j}-e_{i}^{T}\psi_{n}(\Sigma^{\prime })e_{j}|\] \[\quad=\frac{1}{2\lambda}\times\mathbb{E}_{P_{n}}[(r_{\Sigma; \mathbb{P}}(X,Y)-r_{\Sigma^{\prime};\mathbb{P}}(X,Y))(r_{\Sigma;\mathbb{P}}(X^ {\prime},Y^{\prime})+r_{\Sigma^{\prime};\mathbb{P}}(X^{\prime},Y^{\prime})) \partial_{\Sigma_{ij}}k_{\Sigma}(X,X^{\prime})]\] \[\quad\leq\frac{1}{2\lambda}\times\left\|(r_{\Sigma;\mathbb{P}}-r_ {\Sigma^{\prime};\mathbb{P}})(X,Y)\right\|_{L_{\infty}(\mathbb{P}_{n})}\times \left\|(r_{\Sigma;\mathbb{P}}+r_{\Sigma^{\prime};\mathbb{P}})(X,Y)\right\|_{L _{2}(\mathbb{P}_{n})}\times\left\|\partial_{\Sigma_{ij}}k_{\Sigma}(X,X^{ \prime})\right\|_{L_{2}(\mathbb{P}_{n})}\] where the last inequality follows from Holder's inequality. By Lemma 4.6 and our construction of \(\omega\), this implies that \[|e_{i}^{T}\psi_{n}(\Sigma)e_{j}-e_{i}^{T}\psi_{n}(\Sigma^{\prime})e_{j}|\leq B _{n}\cdot\omega(\left|\!\left|\!\left|\Sigma-\Sigma^{\prime}\right|\!\right|\! \right|)\] holds for every matrices \(\Sigma,\Sigma^{\prime}\in\mathsf{\Sigma}\) where \[B_{n}=\frac{1}{\lambda}\times\sup_{\Sigma\in\mathsf{\Sigma}}\left\|r_{\Sigma ;\mathbb{P}}(X,Y)\right\|_{L_{2}(\mathbb{P}_{n})}\times\sup_{\Sigma\in \mathsf{\Sigma}}\left\|\partial_{\Sigma_{ij}}k_{\Sigma}(X,X^{\prime})\right\| _{L_{2}(\mathbb{P}_{n})}.\] To show \(B_{n}=O_{P}(1)\), we bound each individual term on the RHS. For the first term, we have \[\sup_{\Sigma\in\mathsf{\Sigma}}\left\|r_{\Sigma;\mathbb{P}}(X,Y)\right\|_{L_{2 }(\mathbb{P}_{n})}\leq\sup_{\Sigma\in\mathsf{\Sigma}}\left\|r_{\Sigma;\mathbb{ P}_{n}}(X,Y)\right\|_{L_{2}(\mathbb{P}_{n})}+\sup_{\Sigma\in\mathsf{\Sigma}} \left\|(r_{\Sigma;\mathbb{P}_{n}}-r_{\Sigma;\mathbb{P}_{n}})(X,Y)\right\|_{L_{2} (\mathbb{P}_{n})}=O_{P}(1)\] where the identity is due to Lemma 4.4 and Lemma 4.5. For the second term, equation (4.28) gives \[\sup_{\Sigma\in\mathsf{\Sigma}}\left\|\partial_{\Sigma_{ij}}k_{\Sigma}(X,X^{ \prime})\right\|_{L_{2}(\mathbb{P}_{n})}=O_{P}(1).\] These two estimates yield \(B_{n}=O_{P}(1)\). This establishes the desired stochastic equicontinuity. ## 5 Low Rankness in Finite Samples This section proves Theorem 1.2. As described in the introduction (Section 1.3), our proof technique is largely based on the idea of set identifiability in the optimization literature. Let us denote the set of low-rank matrices of interest to be: \[\mathcal{M}_{0} =\{\Sigma:\Sigma\succeq 0,\ \text{rank}(\Sigma)\leq\dim(S_{*})\} \tag{5.1}\] \[\mathcal{M} =\{\Sigma:\Sigma\succeq 0,\ \text{rank}(\Sigma)=\dim(S_{*})\}\cdot\] The theorem is divided into two cases based on the value of the parameter \(\lambda\). In the first case, where \(\lambda\in(0,\infty)\), the goal is to demonstrate that with high probability every empirical minimizer \(\Sigma_{n}^{*}\in\mathcal{M}_{0}\) holds. The proof strategy for this case comprises two main parts: * (Identifiability) We shall first prove that \(\mathcal{M}_{0}\) is the _identifiable set_ with respect to the population objective (3.1) at every its minimizer \(\Sigma^{*}\) (see Definition 5.1). Roughly speaking, it says that any deterministic converging sequence \(\Sigma_{i}\to\Sigma^{*}\) nearly stationary to the population objective (3.1) must obey \(\Sigma_{i}\in\mathcal{M}_{0}\) for all large enough indices \(i\). This result is completely deterministic and highlights a property of the population objective. Section 5.1 elaborates all the details. 
* (Convergence of Minimizers) We next prove the sequence of empirical minimizer \(\Sigma_{n}^{*}\) converges in probability to the population counterparts as \(n\to\infty\), and \(\Sigma_{n}^{*}\) is nearly stationary with respect to the _population_ objective (3.1) with probability tending to one. This part probabilistic in its nature is elaborated in detail in Section 5.2. With these two results, we conclude \(\Sigma_{n}^{*}\in\mathcal{M}_{0}\) holds with probability tending to one as \(n\to\infty\). This holds for every given \(\lambda\in(0,\infty)\). It is worth mentioning that the same technique works for the second case where \(\lambda\in(0,\lambda_{0}]\), allowing us to show that \(\Sigma_{n}^{*}\in\mathcal{M}\) with a probability approaching one. The only modification required is to replace \(\mathcal{M}_{0}\) by \(\mathcal{M}\). The key element that underpins identifiability is the sharpness property of the population objective near its minimizers (Section 3). The key element that underpins convergence of minimizers is the uniform approximation of empirical objective to the population counterpart (Section 4). In the existing literature, a well-documented relationship exists between sharpness and set identifiability, see, e.g., [1, 1, 2, 10, 11, 12]. Notably, a large body of work in this area delineates this relation in scenarios where the identifiable set is characterized as a smooth submanifold within an ambient Euclidean space (e.g., [1, 12]). In the context of our research, while the set \(\mathcal{M}\) conforms to this smooth structure, the set \(\mathcal{M}_{0}\) is not a smooth submanifold. Consequently, ascertaining the identifiability of \(\mathcal{M}_{0}\) presents a greater challenge compared to the set \(\mathcal{M}\). A notable element of our proof delves into highlighting how the algebraic sharpness property (Corollary 3.1) connects with the identifiability of \(\mathcal{M}_{0}\), as detailed in Theorem 5.1 in Section 5.1. ### Identifiability of Set of Low-rank Matrices Here we formally introduce the notion of identifiable sets, following the thesis work [1, Definition 7.2.1] (or its published version [1, Definition 3.12]). **Definition 5.1** (Identifiable Sets).: _Consider an \(f:\mathbb{R}^{m}\mapsto\mathbb{R}\cup\{+\infty\}\) and a critical point \(x\in\mathbb{R}^{m}\) where \(0\in\partial f(x)\). A set \(\mathsf{S}\) is identifiable with respect to \(f\) at \(x\) if for any sequence \((x_{i},f(x_{i}),v_{i})\to(x,f(x),0)\), with \(v_{i}\in\partial f(x_{i})\), the points \(x_{i}\) must lie in \(\mathsf{S}\) for all sufficiently large indices \(i\) (i.e., \(\exists N_{0}<\infty\) such that \(x_{i}\in\mathsf{S}\) for every \(i\geq N_{0}\))._ Theorem 5.1 below presents the main result of the section. Recall the population objective \[\underset{\Sigma}{\text{minimize}}\,J(\Sigma)\ \ \ \text{subject to}\ \ \ \Sigma\in\mathcal{C}_{M}\] where \(\mathcal{C}_{M}=\{\Sigma:\Sigma\succeq 0,\left|\!\left|\Sigma\right|\!\right|\leq M\}\). Following the standard ideas in optimization, we can convert a constrained minimization into an unconstrained minimization: \[\underset{\Sigma}{\text{minimize}}\ \ \widetilde{J}(\Sigma)\] where \(\widetilde{J}(\Sigma)=J(\Sigma)+\mathbf{1}_{\mathcal{C}_{M}}(\Sigma)\) where \(\mathbf{1}_{\mathcal{C}_{M}}(\Sigma)=0\) if \(\Sigma\in\mathcal{C}_{M}\) and \(\mathbf{1}_{\mathcal{C}_{M}}(\Sigma)=+\infty\) if \(\Sigma\not\in\mathcal{C}_{M}\). 
The subgradient calculus yields the following basic formula that holds for every \(\Sigma\in\mathcal{C}_{M}\): \[\partial\widetilde{J}(\Sigma)=\nabla J(\Sigma)+\mathsf{N}_{\mathcal{C}_{M}}( \Sigma). \tag{5.2}\] In the above, \(\partial\widetilde{J}(\Sigma)\) is the subdifferential of \(\widetilde{J}\) at \(\Sigma\), and \(\nabla J(\Sigma)\) is interpreted as the gradient of \(J\) at \(\Sigma\) in terms of Definition 3.1, and \(\mathsf{N}_{\mathcal{C}_{M}}(\Sigma)\) is the normal cone at \(\Sigma\) relative to the constraint \(\mathcal{C}_{M}\). It is clear that \(0\in\partial\widetilde{J}(\Sigma^{*})\) since \(\Sigma^{*}\) is a global minimizer of \(\widetilde{J}\). By converting to an equivalent unconstrained minimization, we can state our main result on low rank set identifiability with respect to population objective using the terms from Definition 5.1. **Theorem 5.1** (Identifiability of \(\mathcal{M}_{0},\mathcal{M}\) with respect to \(\widetilde{J}\)).: * _For every_ \(\lambda\in(0,\infty)\)_,_ \(\mathcal{M}_{0}\) _is identifiable with respect to_ \(\widetilde{J}\) _at every its minimizer_ \(\Sigma^{*}\)_._ * _For every_ \(\lambda\in(0,\lambda_{0}]\)_,_ \(\mathcal{M}\) _is identifiable with respect to_ \(\widetilde{J}\) _at every its minimizer_ \(\Sigma^{*}\)_._ _In the above, \(\lambda_{0}<\infty\) is identical to \(\lambda_{0}\) in Theorem 1.1._ **Proof** Let \(\Sigma^{*}\) denote a minimizer. Consider the first case where \(\lambda\in(0,\infty)\). Then \(\Sigma^{*}\in\mathcal{M}_{0}\) by Theorem 1.1. Consider a sequence of matrices \(\Sigma_{i}\) for which \((\Sigma_{i},\widetilde{J}(\Sigma_{i}),V_{i})\to(\Sigma^{*},\widetilde{J}( \Sigma^{*}),0)\) where \(V_{i}\in\partial\widetilde{J}(\Sigma_{i})\). Here our goal is to show that \(\Sigma_{i}\in\mathcal{M}_{0}\) for large enough indices \(i\). Clearly, \(\Sigma_{i}\in\mathcal{C}_{M}\) for all large enough \(i\), since \(\widetilde{J}(\Sigma_{i})\to\widetilde{J}(\Sigma^{*})<\infty\). Then this means that for all large indices \(i\), the subgradient \(V_{i}\) takes the form of \(V_{i}=\nabla J(\Sigma_{i})+Z_{i}\) where \(Z_{i}\in\mathsf{N}_{\mathcal{C}_{M}}(\Sigma_{i})\). We shall use the fact that \(V_{i}\to 0\) to conclude that \(\Sigma_{i}\in\mathcal{M}_{0}\) eventually. The key to our analysis is the sharpness property of \(J\) at \(\Sigma^{*}\) (Corollary 3.1). By Corollary 3.1, and Lemma 3.7, there's \(\rho>0\) such that for every vector \(w\in S^{\perp}_{*}\): \[w^{T}\nabla J(\Sigma^{*})w=\left\langle\nabla J(\Sigma^{*}),ww^{T}\right\rangle \geq\rho\left\|w\right\|_{2}^{2}. \tag{5.3}\] Since \(\Sigma_{i}\to\Sigma^{*}\) and thus \(\nabla J(\Sigma_{i})\to\nabla J(\Sigma^{*})\) by Lemma 3.6, we know for some \(N<\infty\), every \(i\geq N\) obeys the property \[w^{T}\nabla J(\Sigma_{i})w\geq\frac{\rho}{2}\left\|w\right\|_{2}^{2}\ \ \ \text{holds for every $w\in S^{\perp}_{*}$}. \tag{5.4}\] Furthermore, there's a basic property that every matrix \(Z\in\mathsf{N}_{\mathcal{C}_{M}}(\Sigma)\) obeys. **Lemma 5.1**.: _Let \(\Sigma\in\mathcal{C}_{M}\) and \(Z\in\mathsf{N}_{\mathcal{C}_{M}}(\Sigma)\). Then \(\Pi_{\mathrm{col}(\Sigma)}Z\Pi_{\mathrm{col}(\Sigma)}\succeq 0\)._ **Proof** Let \(v\in\mathrm{col}(\Sigma)\). Since \(Z\in\mathsf{N}_{\mathcal{C}_{M}}(\Sigma)\), \(\langle\Sigma^{\prime}-\Sigma,Z\rangle\leq 0\) for any \(\Sigma^{\prime}\in\mathcal{C}_{M}\). Since \(v\in\mathrm{col}(\Sigma)\), there is \(\epsilon>0\) such that \(\Sigma-\epsilon vv^{T}\succeq 0\). 
Hence, \(\Sigma-\epsilon vv^{T}\in\mathcal{C}_{M}\) for small \(\epsilon>0\). Taking \(\Sigma^{\prime}=\Sigma-\epsilon vv^{T}\) yields \(\langle Z,vv^{T}\rangle\geq 0\), or \(v^{T}Zv\geq 0\). The result follows as \(v\in\mathrm{col}(\Sigma)\) is arbitrary. Applying Lemma 5.1 to the sequence \(\Sigma_{i}\), we obtain \[\Pi_{\mathrm{col}(\Sigma_{i})}Z_{i}\Pi_{\mathrm{col}(\Sigma_{i})}\succeq 0. \tag{5.5}\] Back to the proof of Theorem 5.1. Now suppose, on the contrary, that there is a subsequence \(i_{k}\) for which \(\Sigma_{i_{k}}\not\in\mathcal{M}_{0}\), meaning that \(\mathrm{rank}(\Sigma_{i_{k}})>\dim(S_{*})\). Then, \(\mathrm{rank}(\Sigma_{i_{k}})+\dim(S_{*}^{\perp})>\dim(S_{*})+\dim(S_{*}^{ \perp})=p\), which is the ambient dimension. Hence \(\mathrm{col}(\Sigma_{i_{k}})\cap S_{*}^{\perp}\neq\emptyset\). This means that we can always find a vector \(w_{i_{k}}\in\mathrm{col}(\Sigma_{i_{k}})\cap S_{*}^{\perp}\) and \(\left\|w_{i_{k}}\right\|_{2}=1\). For this vector, we have for every index \(i_{k}\geq N\): \[w_{i_{k}}^{T}V_{i_{k}}w_{i_{k}}=w_{i_{k}}^{T}\nabla J(\Sigma_{i_{k}})w_{i_{k}} +w_{i_{k}}^{T}Z_{i_{k}}w_{i_{k}}\geq w_{i_{k}}^{T}\nabla J(\Sigma_{i_{k}})w_{i _{k}}\geq\frac{\rho}{2}\] where the inequality follows from equations (5.4) and (5.5), and our choice of \(w_{i_{k}}\in\mathrm{col}(\Sigma_{i_{k}})\cap S_{*}^{\perp}\). Clearly, this contradicts with \(V_{i_{k}}\to 0\) assumed at the beginning. Hence, it must hold that \(\Sigma_{i}\in\mathcal{M}_{0}\) for every large enough index \(i\). In other words, we have shown for every \(\lambda\in(0,\infty)\), \(\mathcal{M}_{0}\) is an identifiable set with respect to \(\widetilde{J}\) at every its minimizer \(\Sigma^{*}\). Consider the second case where \(\lambda\in(0,\lambda_{0}]\). In this case, \(\Sigma^{*}\in\mathcal{M}\), and thus \(\mathrm{rank}(\Sigma^{*})=\dim(S_{*})\). Since \(\Sigma\mapsto\mathrm{rank}(\Sigma)\) is lower-semicontinuous on the semidefinite cone \(\mathcal{C}\), any sequence \(\Sigma_{i}\in\mathcal{C}_{M}\) with \(\Sigma_{i}\to\Sigma^{*}\) must obey \(\liminf_{i}\mathrm{rank}(\Sigma_{i})\geq\mathrm{rank}(\Sigma^{*})=\dim(S_{*})\), indicating that \(\mathrm{rank}(\Sigma_{i})\geq\dim(S_{*})\) holds eventually. Since we have shown that \(\mathcal{M}_{0}\) is an identifiable set with respect to \(\widetilde{J}\), this means that \(\Sigma_{i}\in\mathcal{M}_{0}\), or equivalently, \(\mathrm{rank}(\Sigma_{i})\leq\dim(S_{*})\) must also hold eventually. Combining these two results we obtain that \(\mathrm{rank}(\Sigma_{i})=\dim(S_{*})\), i.e., \(\Sigma_{i}\in\mathcal{M}\) holds for all large indices \(i\). In other words, this shows that \(\mathcal{M}\) is an identifiable set with respect to \(\widetilde{J}\) when \(\lambda\in(0,\lambda_{0}]\). ### Convergence of Minimizers Let \(\mathbf{\Sigma}_{n}^{*}\) and \(\mathbf{\Sigma}^{*}\) denote the sets of minimizers of \(J_{n}\) and \(J\) respectively, on the compact set \(\mathcal{C}_{M}\). **Lemma 5.2** (Convergence of Minimizers).: _As \(n\to\infty\),_ \[\mathrm{dist}(\mathbf{\Sigma}_{n}^{*},\mathbf{\Sigma}^{*})\overset{p}{\to}0 \quad\text{and}\quad\sup\{\mathrm{dist}(0,\partial\widetilde{J}(\Sigma_{n}^{* }))|\Sigma_{n}^{*}\in\mathbf{\Sigma}_{n}^{*}\}\overset{p}{\to}0. \tag{5.6}\] Proof.: By Theorem 4.1, there is uniform convergence of objective values \(J_{n}\) to \(J\) in probability on the compact set \(\mathcal{C}\). This ensures that \(\mathrm{dist}(\mathbf{\Sigma}_{n}^{*},\mathbf{\Sigma}^{*})\overset{p}{\to}0\). 
By Theorem 4.1, there is also the uniform convergence of gradients \(\nabla J_{n}\) to \(\nabla J\) in probability on the constraint set \(\mathcal{C}_{M}\). Note that for any minimizer \(\Sigma_{n}^{*}\) of \(J_{n}\) on the convex constraint set \(\mathcal{C}_{M}\), it must obey the first order condition: \(0\in\nabla J_{n}(\Sigma_{n}^{*})+\mathsf{N}_{C_{M}}(\Sigma_{n}^{*})\). Since there is \(|\nabla J_{n}(\Sigma_{n}^{*})-\nabla J(\Sigma_{n}^{*})|\overset{p}{\to}0\), this indicates \(\sup\{\mathrm{dist}(0,\nabla J(\Sigma_{n}^{*})+\mathsf{N}_{C_{M}}(\Sigma_{n}^ {*}))|\Sigma_{n}^{*}\in\mathbf{\Sigma}_{n}^{*}\}\overset{p}{\to}0\) as desired. ### Proof of Theorem 1.2 To finish the proof of Theorem 1.2, we invoke the convergence result (Lemma 5.2) and the identifiability result (Theorem 5.1). We start by considering the case where \(\lambda\in(0,\infty)\). To see the main idea, assume _temporarily_ that there's the following almost sure convergence: \[\mathrm{dist}(\mathbf{\Sigma}_{n}^{*},\mathbf{\Sigma}^{*})\to 0\quad\text{and} \quad\sup\{\mathrm{dist}(0,\partial\widetilde{J}(\Sigma_{n}^{*}))|\Sigma_{n}^ {*}\in\mathbf{\Sigma}_{n}^{*}\}\to 0. \tag{5.7}\] This assumption strengthens the convergence in probability result in Lemma 5.2. Given this almost sure convergence (5.7), we show that with probability one, \(\mathbf{\Sigma}_{n}^{*}\subseteq\mathcal{M}_{0}\) must hold eventually (i.e., \(\exists N_{0}<\infty\) such that \(\mathbf{\Sigma}_{n}^{*}\subseteq\mathcal{M}_{0}\) for all \(n\geq N_{0}\)). To see this, let us pick \(\Sigma_{n}^{*}\in\mathbf{\Sigma}_{n}^{*}\). For any subsequence \(\Sigma_{n_{i}}^{*}\), since \(\mathcal{C}_{M}\) is compact, a subsubsequence \(\Sigma_{n_{i_{j}}}^{*}\) exists and converges to some \(\Sigma^{*}\in\mathbf{\Sigma}^{*}\). Then this subsubsequence \(\Sigma_{n_{i_{j}}}^{*}\) must fall within \(\mathcal{M}_{0}\) eventually by Theorem 5.1. Since this holds for every subsequence \(\Sigma_{n_{i}}^{*}\), Urysohn's subsequence principle then implies that the original sequence \(\Sigma_{n}^{*}\) must eventually fall within \(\mathcal{M}_{0}\). Since this holds for any choice of sequence \(\Sigma_{n}^{*}\in\mathbf{\Sigma}_{n}^{*}\), it further implies that \(\mathbf{\Sigma}_{n}^{*}\subseteq\mathcal{M}_{0}\) eventually. As a result, assuming the almost sure convergence (5.7), we have shown \[\mathbb{P}(\exists N_{0}<\infty,\text{ such that }\mathbf{\Sigma}_{n}^{*} \subseteq\mathcal{M}_{0}\text{ for all }n\geq N_{0})=1,\] and thus, \[\lim_{n\to\infty}\mathbb{P}(\mathbf{\Sigma}_{n}^{*}\subseteq\mathcal{M}_{0})=1. \tag{5.8}\] For the general case where convergence holds only in probability (cf. Lemma 5.2), \[\operatorname{dist}(\mathbf{\Sigma}_{n}^{*},\mathbf{\Sigma}^{*})\overset{p}{ \to}0\quad\text{and}\quad\sup\{\operatorname{dist}(0,\partial\widetilde{J}( \Sigma_{n}^{*}))|\Sigma_{n}^{*}\in\mathbf{\Sigma}_{n}^{*}\}\overset{p}{\to}0. \tag{5.9}\] we can reduce it to the almost sure case. Let us define \(\delta_{n}=\operatorname{dist}(\mathbf{\Sigma}_{n}^{*},\mathbf{\Sigma}^{*})+ \sup\{\operatorname{dist}(0,\partial\widetilde{J}(\Sigma_{n}^{*}))|\Sigma_{n}^ {*}\in\mathbf{\Sigma}_{n}^{*}\}\). The main observation is if \(\delta_{n}\) converge to zero in probability, then any subsequence \(n_{j}\) has a subsubsequence \(n_{j_{k}}\) where \(\delta_{n_{j_{k}}}\) converge to zero _almost surely_. This implies \(\lim\mathbb{P}(\mathbf{\Sigma}_{n_{j_{k}}}^{*}\subseteq\mathcal{M}_{0})=1\) along \(n_{j_{k}}\). 
Since this holds for every subsequence \(n_{j}\), Urysohn's subsequence principle then yields that the convergence \(\lim\mathbb{P}(\mathbf{\Sigma}_{n}^{*}\subseteq\mathcal{M}_{0})=1\) must hold for the entire sequence. The proof for the case where \(\lambda\in(0,\lambda_{0}]\) is analogous, with \(\mathcal{M}_{0}\) replaced by \(\mathcal{M}\).

### Subspace Convergence

We quickly discuss an implication of Theorem 1.2, which shows the column space of the empirical minimizer converges properly to the central mean subspace \(S_{*}\) for small enough \(\lambda>0\). For any two linear subspaces \(S_{1},S_{2}\), we can measure their distance by evaluating the size of the difference between their corresponding projection matrices, e.g., \(\left|\!\left|\!\left|\Pi_{S_{1}}-\Pi_{S_{2}}\right|\!\right|\!\right|_{\mathrm{op}}\), where \(\Pi_{S}\) denotes the orthogonal projection onto \(S\). Combining Theorem 1.1, Theorem 1.2, and the convergence of minimizers (Lemma 5.2), one can show that for every \(\lambda\in(0,\lambda_{0}]\), \[\sup\{\left|\!\left|\!\left|\Pi_{\mathrm{col}(\Sigma_{n}^{*})}-\Pi_{S_{*}}\right|\!\right|\!\right|_{\mathrm{op}}\mid\Sigma_{n}^{*}\in\mathbf{\Sigma}_{n}^{*}\}\overset{p}{\to}0\quad\text{as }n\to\infty.\]
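As a concrete illustration of this notion of distance, the following minimal sketch (our own, with illustrative names) reads \(\mathrm{col}(\Sigma_{n}^{*})\) off an eigendecomposition and compares it with a reference subspace through the corresponding projection matrices:

```python
import numpy as np

def projection(basis):
    """Orthogonal projection matrix onto the column span of `basis`."""
    q, _ = np.linalg.qr(basis)
    return q @ q.T

def subspace_distance(sigma_hat, basis_star, tol=1e-8):
    """||Pi_col(sigma_hat) - Pi_S*||_op, with col(sigma_hat) read off the
    eigenvectors of sigma_hat whose eigenvalues exceed `tol`."""
    w, v = np.linalg.eigh(sigma_hat)
    cols = v[:, w > tol]
    pi_hat = cols @ cols.T                   # projection onto col(sigma_hat)
    pi_star = projection(basis_star)
    return np.linalg.norm(pi_hat - pi_star, ord=2)

# toy check: a rank-2 Sigma whose column space is span(e1, e2)
p = 6
B = np.zeros((p, 2)); B[0, 0] = B[1, 1] = 1.0
sigma = B @ np.diag([2.0, 0.5]) @ B.T
print(subspace_distance(sigma, B))           # 0.0 up to numerical error
```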
## 6 Inner Product Kernels Fail

Various kernels, apart from the translation-invariant kernel mentioned in (1.1), have been proposed in the kernel learning literature. Given the observed phenomenon for the kernel learning using \(k_{\Sigma}\) in (1.1), it is thereby tempting to ask whether the phenomenon we observe appears with other kernels as well, e.g., kernels of the form \((x,x^{\prime})\mapsto\psi(x^{T}\Sigma x^{\prime})\). More precisely, we consider the following alternative kernel learning objective: \[\min_{f,\gamma,\Sigma}\frac{1}{2}\mathbb{E}_{n}[(Y-f(X)-\gamma)^{2}]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}\quad\text{ subject to }\quad\Sigma\succeq 0. \tag{6.1}\] This objective has the exact same mathematical form as our original objective (1.2), but it replaces the definition of \(\mathcal{H}_{\Sigma}\) with the RKHS associated with the inner-product kernel \(u_{\Sigma}(x,x^{\prime})=\psi(x^{T}\Sigma x^{\prime})\). Here, \(\psi\) is a real-valued function that ensures \(u_{\Sigma}\) is a positive semidefinite kernel for all \(\Sigma\succeq 0\). Motivated by [10], we restrict our attention to analytic \(\psi\) on the real line with \(\psi(t)=\sum_{k}\xi_{k}t^{k}\), where \(\xi_{k}\geq 0\) for all \(k\) and \(\sum_{k}\xi_{k}>0\). These conditions on \(\psi\) guarantee that \((x,x^{\prime})\mapsto u_{\Sigma}(x,x^{\prime})\) is a positive semidefinite kernel for every \(\Sigma\succeq 0\). In Section 7, our extensive numerical experiments suggest that the solution low-rankness phenomenon disappears when we use the kernel \(u_{\Sigma}\) rather than \(k_{\Sigma}\). This controlled experiment suggests that the phenomenon we observe is specifically tied to the kernel \(k_{\Sigma}\). It helps clarify, as stated in the introduction, that the semidefinite constraint cannot take full credit for the phenomenon.

### Discussion

In this subsection, we give some basic ideas and results that explain why the inner product kernel \(u_{\Sigma}\) fails to deliver exact low-rank solutions in finite samples. The perspective we take is to show that the sharpness property, which holds for the learning objective with the kernel \(k_{\Sigma}\), fails to hold when using the inner-product kernel \(u_{\Sigma}\). Our derivation below for the kernel \(u_{\Sigma}\) mimics this paper's analysis of \(k_{\Sigma}\). Note that a full-fledged analysis with all the details exceeds the scope of the paper, and may be developed in the authors' future work. Let us first introduce the function \(J_{u}(\Sigma)\) at the population level--the analogue of \(J(\Sigma)\)--which takes partial minimization over \(f,\gamma\): \[J_{u}(\Sigma)=\min_{f,\gamma}\frac{1}{2}\mathbb{E}[(Y-f(X)-\gamma)^{2}]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}.\] We also use \(f_{\Sigma},\gamma_{\Sigma}\) to denote the minimizer of the RHS.
Note the definition of \(\mathcal{H}_{\Sigma}\) is now associated with the inner-product kernel \(u_{\Sigma}(x,x^{\prime})=\psi(x^{T}\Sigma x^{\prime})\). We are now interested in whether the sharpness property holds for \(J_{u}(\Sigma)\). To do so, let us first use \(\Sigma^{*}\) to denote a minimizer of the population objective \[\text{minimize}\,J_{u}(\Sigma)\quad\text{ subject to }\quad\Sigma\succeq 0,\; \;\;\left\|\Sigma\right\|\leq M.\] We apply the same idea in Lemma 3.10 to derive the gradient of \(J_{u}\) (and due to the space constraints, we omit the details of the derivation). Suppose \(\text{supp}(X)\) is compact. Then, for every \(\Sigma\in\mathcal{C}\): \[\nabla J_{u}(\Sigma)=\mathbb{E}[r_{\Sigma}(X;Y)r_{\Sigma}(X^{\prime};Y^{ \prime})\partial_{\Sigma}u_{\Sigma}(X,X^{\prime})].\] Here the gradient \(\nabla J_{u}\) is interpreted in terms of Definition 3.1, and \(r_{\Sigma}\) is interpreted as the residual \(r_{\Sigma}(x,y)=y-f_{\Sigma}(x)-\gamma_{\Sigma}\). Note the similarity of the gradient formula between \(\nabla J_{u}\) and \(\nabla J\). Let us now suppose that \(\operatorname{col}(\Sigma^{*})\subseteq S_{*}\) (i.e., the analogue of Theorem 1.1 holds for the kernel \(u_{\Sigma}\)). Assume \(\Pi_{S_{*}}X\) is independent of \(\Pi_{S_{*}^{\perp}}X\). Then similar to the proof of Theorem 3.1, we derive \[\Pi_{S_{*}^{\perp}}\nabla J_{u}(\Sigma^{*})\Pi_{S_{*}^{\perp}} =\mathbb{E}[r_{\Sigma^{*}}(X;Y)r_{\Sigma^{*}}(X^{\prime};Y^{ \prime})\psi^{\prime}(X^{T}\Sigma^{*}X^{\prime})\Pi_{S_{*}^{\perp}}X(X^{ \prime})^{T}\Pi_{S_{*}^{\perp}}]\] \[=\mathbb{E}[r_{\Sigma^{*}}(X;Y)r_{\Sigma^{*}}(X^{\prime};Y^{ \prime})\psi^{\prime}(X^{T}\Sigma^{*}X^{\prime})]\cdot\Pi_{S_{*}^{\perp}} \mathbb{E}[X]\cdot(\Pi_{S_{*}^{\perp}}\mathbb{E}[X])^{T}.\] As a result, this shows \(\Pi_{S_{*}^{\perp}}\nabla J_{u}(\Sigma^{*})\Pi_{S_{*}^{\perp}}=0\) if \(\mathbb{E}[X]\in S_{*}^{\perp}\). Note this implies \(\langle\nabla J_{u}(\Sigma^{*}),vv^{T}\rangle=0\) holds for any \(v\in S_{*}^{\perp}\). This further implies that if \(\mathbb{E}[X]\in S_{*}^{\perp}\), then we can't have the analog of sharpness property holds for \(J_{u}\). Indeed, \(\langle\nabla J_{u}(\Sigma^{*}),W\rangle=0\) holds for every \(W\in\mathcal{T}_{\mathcal{C}}(\Sigma^{*})\), with \(\operatorname{col}(W)\subseteq S_{*}^{\perp}\) if \(\mathbb{E}[X]\in S_{*}^{\perp}\). In particular, the sharpness property does not hold when \(\mathbb{E}[X]=0\). In short, for inner-product kernels to possess the sharpness property, our findings indicate that we must make extra distributional assumptions on \((X,Y)\). ## 7 Numerical Experiments Importantly, the low rankness phenomenon described in the paper was not initially discovered by mathematical analysis, but rather through computational experiments. In this section, we document some observations from our experiments that further characterize the scope of when the phenomenon occurs. In other words, we aim to describe situations when the finite sample solution \(\Sigma_{n}^{*}\) of the kernel learning objective (1.2) has rank bounded by the minimum size of projection directions of \(X\) required to achieve full predictive power for the target \(Y\). ### Setting and Methodology Our experiments are of the following form. 1. We generate \(n\) i.i.d. samples \((X,Y)\sim\mathbb{P}\) according to \[Y=f(X)+\epsilon.\] In the above, the noise term \(\epsilon\sim\mathsf{N}(0,\sigma^{2})\) is independent from \(X\sim\mathbb{P}_{X}\). 2. 
We then solve the finite sample kernel learning objective \[\underset{f,\gamma,\Sigma}{\text{minimize}}\ \ \frac{1}{2}\mathbb{E}_{n}[(Y-f(X)-\gamma)^{2}]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}\ \ \ \text{ subject to}\ \ \Sigma\succeq 0.\] (P) 3. We document the column space of our solution \(\Sigma_{n}^{*}\) across a grid of choices of \(\lambda\). We then compare it with \(S_{*}\), where \(S_{*}\) is the central mean subspace. The main purpose of the experiments is to vary the signal form \(f\), the distribution \(\mathbb{P}_{X}\), and the RKHS \(\mathcal{H}_{\Sigma}\) to investigate when \(\operatorname{rank}(\Sigma_{n}^{*})\leq\dim(S_{*})\) and \(\operatorname{rank}(\Sigma_{n}^{*})=\dim(S_{*})\) hold with high probability.

**Function \(f\).** The function \(f\) takes one of the following forms: (a) \(f(x)=x_{1}+x_{2}+x_{3}\), (b) \(f(x)=x_{1}x_{2}\), (c) \(f(x)=0.1(x_{1}+x_{2}+x_{3})^{3}+\tanh(x_{1}+x_{3}+x_{5})\), (d) \(f(x)=2(x_{1}+x_{2})+(x_{2}+x_{3})^{2}+(x_{4}-0.5)^{3}\), (e) \(f(x)=0\).

**Algorithm for solving (P).** The objective (P) is non-convex in \(\Sigma\). However, for a given \(\Sigma\), partial minimization over \(f\) and \(\gamma\) can be done based on kernel ridge regression. For our experiments, we first derive an explicit formula for \(J_{n}(\Sigma)=\min_{f,\gamma}\frac{1}{2}\mathbb{E}_{n}[(Y-f(X)-\gamma)^{2}]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}\) and evaluate the gradient \(\nabla J_{n}(\Sigma)\). To then minimize \(J_{n}(\Sigma)\) subject to \(\Sigma\succeq 0\), we perform a simple gradient descent with projection onto the semidefinite cone \(\mathcal{C}=\{\Sigma:\Sigma\succeq 0\}\) at each iteration, using the Armijo rule to search each iteration's stepsize (a minimal sketch of this loop is given after Claim (i) below). We terminate gradient descent when the ratio between the difference of consecutive iterates, measured by the Frobenius norm, and the stepsize is below a threshold \(\Delta>0\). The algorithm is always initialized at a diagonal matrix with diagonal entry \(1/p\).

**Size \(n,p,\sigma,\Delta\).** For all our simulations, \(n=300\), \(p=50\), \(\sigma=0.1\), \(\Delta=10^{-3}\).

### Results

We first briefly list our claims and then provide the evidence that supports each claim.

1. The phenomenon of \(\operatorname{rank}(\Sigma_{n}^{*})\leq\dim(S_{*})\) occurs with high probability under Assumptions 1-4. While this is just the main Theorem 1.2, we further corroborate it with empirical evidence.
2. The phenomenon \(\operatorname{rank}(\Sigma_{n}^{*})\leq\dim(S_{*})\) occurs in the solution with high probability for the kernel \((x,x^{\prime})\mapsto\phi(\|x-x^{\prime}\|_{\Sigma}^{2})\) regardless of the independence Assumption 3.
3. The phenomenon \(\operatorname{rank}(\Sigma_{n}^{*})\leq\dim(S_{*})\) occurs in the solution with high probability for the kernel \((x,x^{\prime})\mapsto\phi(\|x-x^{\prime}\|_{\Sigma}^{2})\) regardless of whether \(X\) is discrete or continuous.
4. The phenomenon disappears when \(X\) has no predictive power of \(Y\), i.e., \(f(x)\equiv 0\).
5. The phenomenon disappears when using the inner product kernel \((x,x^{\prime})\mapsto\psi(x^{T}\Sigma x^{\prime})\).

Below we make each claim concrete with evidence.

**Claim (i).** Using \(X\sim\mathsf{N}(0,I)\), the function \(f\) from Section 7.1 options \((a)-(d)\), and the Gaussian kernel \(k_{\Sigma}(x,x^{\prime})=\exp(-\left\|x-x^{\prime}\right\|_{\Sigma}^{2})\), all Assumptions 1-4 hold. The results are shown in Figure 2.
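All of the claims in this subsection rely on the optimization routine described under "Algorithm for solving (P)" above. A minimal sketch of that projected gradient loop follows (our own simplification: a fixed step size replaces the Armijo search, and the gradient oracle `grad_J` is supplied by the caller):

```python
import numpy as np

def project_psd(S):
    """Euclidean projection onto the semidefinite cone C = {S : S >= 0}:
    symmetrize, then clip negative eigenvalues at zero."""
    S = (S + S.T) / 2
    w, v = np.linalg.eigh(S)
    return (v * np.clip(w, 0.0, None)) @ v.T

def projected_gradient_descent(grad_J, p, step=0.1, tol=1e-3, max_iter=500):
    """Gradient descent on J_n with projection onto the PSD cone at each step.

    `grad_J` maps a p x p matrix Sigma to the gradient of the objective at Sigma.
    The fixed `step` stands in for the Armijo line search used in the paper;
    termination mirrors the stopping rule of Section 7.1.
    """
    sigma = np.eye(p) / p                    # initialization used in Section 7.1
    for _ in range(max_iter):
        new = project_psd(sigma - step * grad_J(sigma))
        if np.linalg.norm(new - sigma, "fro") / step < tol:
            return new
        sigma = new
    return sigma

# toy check with the quadratic objective J(S) = 0.5 * ||S - I||_F^2
print(projected_gradient_descent(lambda S: S - np.eye(3), p=3).round(2))
```

In the experiments, `grad_J` would evaluate \(\nabla J_{n}(\Sigma)\) from the kernel ridge solution at the current \(\Sigma\).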
**Claim (ii).** Here \(X\sim\mathsf{N}(0,K)\) where the covariance matrix \(K\) has its \((i,j)\)-th entry \(K_{i,j}=0.5^{|i-j|}\). Thus, \(X\) has correlated covariates. For every regression function \(f\) from Section 7.1 options \((a)-(d)\), \(\Pi_{S_{*}}X\) is dependent on \(\Pi_{S_{*}^{\perp}}X\), and thus Assumption 3 is violated. The results displayed in Figure 3 (Section D) show that the phenomenon occurs regardless of the independence assumption.

**Claim (iii).** Here we experiment with two distributions of \(X\): either \(X\) is discrete with independent coordinates \(X_{1},X_{2},\ldots,X_{d}\) with \(\mathbb{P}(X_{i}=1)=\mathbb{P}(X_{i}=0)=0.5\), or \(X\) is continuous with independent coordinates \(X_{1},X_{2},\ldots,X_{d}\) with each \(X_{i}\) uniformly distributed on \([0,1]\). For the function \(f\) from Section 7.1 options \((a)-(d)\), and the Gaussian kernel \(k_{\Sigma}(x,x^{\prime})=\exp(-\left\|x-x^{\prime}\right\|_{\Sigma}^{2})\), the phenomenon continues to appear in our experiments. See Figure 4, Section D.

**Claim (iv).** For this experiment, \(X\sim\mathsf{N}(0,I)\), \(f(x)\equiv 0\), and \(k_{\Sigma}(x,x^{\prime})=\exp(-\left\|x-x^{\prime}\right\|_{\Sigma}^{2})\) is the Gaussian kernel. The resulting solution \(\Sigma_{n}^{*}\) is full rank with high probability (and in fact, in our experiments, \(\Sigma_{n}^{*}\) is with probability one full rank over a range of \(\lambda\) values; see Figure 5, Section D).

**Claim (v).** For this experiment, we set \(X\sim\mathsf{N}(0,I)\) and consider two inner product kernels: the linear kernel \(k_{\Sigma}(x,x^{\prime})=x^{T}\Sigma x^{\prime}\) and the cubic kernel \(k_{\Sigma}(x,x^{\prime})=(x^{T}\Sigma x^{\prime})^{3}\). Given occasional divergence of \(\Sigma_{n}^{*}\) in the experiments, we consider \(\min_{f,\gamma,\Sigma}\frac{1}{2}\mathbb{E}_{n}[(Y-f(X)-\gamma)^{2}]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}\) subject to \(\Sigma\succeq 0,\ \left|\!\left|\!\left|\Sigma\right|\!\right|\!\right|_{\mathrm{op}}\leq M\), where \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\mathrm{op}}\) denotes the operator norm and \(M=100000\). We solve the minimization problem in a similar way: we first evaluate \(J_{n}(\Sigma)=\min_{f,\gamma}\frac{1}{2}\mathbb{E}_{n}[(Y-f(X)-\gamma)^{2}]+\frac{\lambda}{2}\left\|f\right\|_{\mathcal{H}_{\Sigma}}^{2}\) and its gradient \(\nabla J_{n}(\Sigma)\), and then perform projected gradient descent onto the bounded constraint set \(\mathcal{C}_{M}=\{\Sigma:\Sigma\succeq 0,\ \left|\!\left|\!\left|\Sigma\right|\!\right|\!\right|_{\mathrm{op}}\leq M\}\), using the Armijo rule to perform stepsize selection. We document the resulting solution \(\Sigma_{n}^{*}\). Our findings show that the resulting solution \(\Sigma_{n}^{*}\) is with high probability full rank regardless of the choice of the function \(f\) from Section 7.1 options \((a)\)-\((d)\). In fact, in all our experiments, \(\Sigma_{n}^{*}\) is with probability one full rank over a range of \(\lambda\) values; see Figure 6 in Section D.

## 8 Discussions

We would like to close by offering a few comments on the results obtained in this paper and by discussing possible generalizations and extensions.

### Convergence Rates

The primary theoretical results in this paper concerning the empirical kernel learning objective are presented in an asymptotic fashion. For instance, in Theorem 1.2, we set \(\lambda\in(0,\infty)\), and then consider \(n\to\infty\).
For a deeper insight into the convergence rates of empirical minimizers, we require a nuanced nonasymptotic analysis. A key discovery in this paper is the sharpness property near optimal solutions of the population kernel learning objective (see Section 1.6.1). We plan to carry out a more refined nonasymptotic analysis that leverages this sharpness property to demonstrate that convergence rates mainly hinge on the intrinsic dimension, i.e., \(\dim(S_{*})\), rather than the ambient dimension \(p\). ### Computation An important question pertains to the computational aspects of minimizing the kernel learning objective (1.2). A promising idea is to use the random feature model to facilitate computations for kernel ridge regressions [10]. Our preliminary experiments suggest that the phenomenon persists in the relevant random feature model, although certain computational and statistical tradeoffs seem unavoidable. We hope to be able to report on these early findings in a follow-up paper. ### Statistical Analysis of Local Minimum While the primary objective of this paper is to characterize the low-rank nature of the global minimizer pertaining to the empirical objective \(J_{n}\), comprehensive numerical experiments suggest for a wide range of distributions for \(X\), the application of gradient descent on \(J_{n}\) frequently results in matrices possessing exactly low rank. Since gradient descent's convergence to global minimizers is not always guaranteed, our findings suggest that such low-rank properties might extend to local minimizers as well. That being said, we hope to refine our analysis to gain a deeper understanding of the local minimizers for the kernel learning objective. Achieving this understanding may provide rigorous statistical guarantees for gradient-based algorithms targeting kernel learning. Figure 2: Plots for Claim (i). Here, we use Gaussian kernel \(k_{\Sigma}(x,x^{\prime})=\exp\{-\|x-x^{\prime}\|_{\Sigma}^{2}\}\). Experimental setting: the covariate \(X\sim\mathsf{N}(0,I)\) and the response \(Y=f(X)+\mathsf{N}(0,\sigma^{2})\) where \(\sigma=0.1\). Here we choose \(n=300\) and \(p=50\). For each row, the left panel shows the empirical probability of \(\operatorname{rank}(\Sigma_{n}^{*})\leq\dim(S_{*})\) over \(100\) repeated experiments for different \(\lambda\) values. The right panel displays how the rank of the solution \(\Sigma_{n}^{*}\) changes with different \(\lambda\) values, using \(5\) example pairs of \((X,y)\). Acknowledgements The authors would like to thank X.Y. Han for enriching discussions on the experiments. Feng Ruan would like to thank Lijun Ding for his insightful comments on the initial draft, as well as to Basil Saeed, Lanqi Yao, and Yuan Yao for their helpful discussions on the presentation.
2307.12828
Stochastic Degree Sequence Model with Edge Constraints (SDSM-EC) for Backbone Extraction
It is common to use the projection of a bipartite network to measure a unipartite network of interest. For example, scientific collaboration networks are often measured using a co-authorship network, which is the projection of a bipartite author-paper network. Caution is required when interpreting the edge weights that appear in such projections. However, backbone models offer a solution by providing a formal statistical method for evaluating when an edge in a projection is statistically significantly strong. In this paper, we propose an extension to the existing Stochastic Degree Sequence Model (SDSM) that allows the null model to include edge constraints (EC) such as prohibited edges. We demonstrate the new SDSM-EC in toy data and empirical data on young children's play interactions, illustrating how it correctly omits noisy edges from the backbone.
Zachary P. Neal, Jennifer Watling Neal
2023-07-24T14:25:06Z
http://arxiv.org/abs/2307.12828v2
# Stochastic Degree Sequence Model with Edge Constraints (SDSM-EC) for Backbone Extraction ###### Abstract It is common to use the projection of a bipartite network to measure a unipartite network of interest. For example, scientific collaboration networks are often measured using a co-authorship network, which is the projection of a bipartite author-paper network. Caution is required when interpreting the edge weights that appear in such projections. However, backbone models offer a solution by providing a formal statistical method for evaluating when an edge in a projection is statistically significantly strong. In this paper, we propose an extension to the existing Stochastic Degree Sequence Model (SDSM) that allows the null model to include edge constraints (EC) such as prohibited edges. We demonstrate the new SDSM-EC in toy data and empirical data on young children's' play interactions, illustrating how it correctly omits noisy edges from the backbone. Keywords:backbone, bipartite, null model, projection, social network ## 1 Introduction It is common to use the projection of a bipartite network to measure a unipartite network of interest. For example, scientific collaboration networks are often measured using a co-authorship network, which is the projection of a bipartite author-paper network [12]. Similarly, corporate networks are often measured using a board co-membership or 'interlocking directorate' network, which is the projection of a bipartite executive-board network [1]. The edges in a bipartite projection are weighted (e.g., number of co-authored papers, number of shared boards), but these weights do not provide an unbiased indicator the strength of the connection between vertices [5, 9]. To overcome this bias, backbone extraction identifies the edges that are stronger than expected under a relevant null model, retaining only these edges to yield a simpler unweighted network (i.e., the backbone) that is more suitable for visualization and analysis. Many null models exist for extracting the backbone of bipartite networks, with each model specifying different constraints on the random networks against which an observed network is compared. However, none of the existing models permit constraints on specific edges. In this paper, we extend the fastest and most robust existing backbone model - the stochastic degree sequence model (SDSM) [10] - to accommodate one type of edge constraint: prohibited edges. Prohibited edges are edges that in principle cannot occur in the network, and can arise in many contexts. For example, in a bipartite author-paper network, an author cannot write a paper before their birth, and in a bipartite executive-board network, anti-trust laws prevent executives from serving on the boards of competitors. We illustrate the new stochastic degree sequence model with edge constraints (SDSM-EC) first in toy data, then in empirical data recording young children' membership in play groups. ### Preliminaries A bipartite network's vertices can be partitioned into two sets such that edges exist between, but not within, sets. In this work, we focus on a special case of a bipartite network - a two-mode network - where the two sets of vertices represent distinctly different entities that we call 'agents' and 'artifacts' (e.g. authors and papers, or executives and corporate boards). To facilitate notation, we represent networks as matrices. 
First, we represent a bipartite network containing \(r\) 'agents' and \(c\) 'artifacts' as an \(r\times c\) binary incidence matrix \(\mathbf{B}\), where \(B_{ik}=1\) if agent \(i\) is connected to artifact \(k\) (e.g., author \(i\) wrote paper \(k\)), and otherwise is \(0\). The row sums \(R=r_{1}...r_{r}\) of \(\mathbf{B}\) contain the degree sequence of the agents (e.g., the number of papers written by each author), while the column sums \(C=c_{1}...c_{c}\) of \(\mathbf{B}\) contain the degree sequence of the artifacts (e.g., the number of authors on each paper). A prohibited edge in a bipartite network is represented by constraining a cell to equal zero, and therefore is sometimes called a 'structural zero' [13]. Second, we represent the projection of a bipartite network as an \(r\times r\) weighted adjacency matrix \(\mathbf{P}=\mathbf{B}\mathbf{B}^{T}\), where \(\mathbf{B}^{T}\) represents the transpose of \(\mathbf{B}\). In \(\mathbf{P}\), \(P_{ij}\) equals the number of artifacts \(k\) that are adjacent to both agent \(i\) and agent \(j\) (e.g., the number of papers co-authored by authors \(i\) and \(j\)). Finally, we represent the backbone of a projection, \(\mathbf{P}^{\prime}\), as an \(r\times r\) binary adjacency matrix, where \(P^{\prime}_{ij}=1\) if agent \(i\) is connected to agent \(j\) in the backbone, and otherwise is \(0\). Let \(\mathcal{B}\) be an ensemble of \(r\times c\) binary incidence matrices, which can be constrained to have certain features present in \(\mathbf{B}\). Let \(P^{*}_{ij}\) be a random variable equal to \((\mathbf{B}^{*}\mathbf{B}^{*T})_{ij}\) for \(\mathbf{B}^{*}\in\mathcal{B}\). Decisions about which edges appear in a backbone extracted at the statistical significance level \(\alpha\) are made by comparing \(P_{ij}\) to \(P^{*}_{ij}\): \[P^{\prime}_{ij}=\begin{cases}1&\text{ if }\Pr(P^{*}_{ij}\geq P_{ij})<\frac{\alpha}{2},\\ 0&\text{otherwise.}\end{cases}\] This test includes edge \(P^{\prime}_{ij}\) in the backbone if its weight in the observed projection \(P_{ij}\) is uncommonly large compared to its weight in projections of members of the ensemble \(P^{*}_{ij}\). ## 2 Backbone models ### The stochastic degree sequence model (SDSM) Models for extracting the backbone of bipartite projections differ in the constraints they impose on \(\mathcal{B}\). The most stringent model - the Fixed Degree Sequence Model (FDSM) [17] - relies on a microcanonical ensemble that constrains each member of \(\mathcal{B}\) to have exactly the same row and column sums as \(\mathbf{B}\). Computing \(P_{ij}^{*}\) under the FDSM is slow because it requires approximation via computationally intensive Monte Carlo simulation. Despite recent advances in the efficiency of these simulations [2], it is often more practical to use the less stringent Stochastic Degree Sequence Model (SDSM) [9]. The SDSM relies on a canonical ensemble that constrains each member of \(\mathcal{B}\) to have the same row and column sums as \(\mathbf{B}\)_on average_. SDSM is fast and exact, and comparisons with FDSM reveal that it yields similar backbones [10]. Under the SDSM, \(P_{ij}^{*}\) follows a Poisson-binomial distribution whose parameters can be computed from the entries of probability matrix \(\mathbf{Q}\), where \(Q_{ik}=\Pr(B_{ik}^{*}=1)\) for \(\mathbf{B}^{*}\in\) a microcanonical \(\mathcal{B}\). That is, \(Q_{ik}\) is the probability that \(B_{ik}^{*}\) contains a 1 in the space of all matrices with given row and column sums.
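To make this test concrete, the following minimal Python sketch (the function names are illustrative and not taken from any published implementation) computes the projection \(\mathbf{P}=\mathbf{B}\mathbf{B}^{T}\) and applies the inclusion rule above, using the fact that under the SDSM each \(P^{*}_{ij}\) is Poisson-binomial with parameters \(Q_{ik}Q_{jk}\):

```python
import numpy as np

def poisson_binomial_pmf(probs):
    """Exact PMF of a sum of independent Bernoulli(p_k) variables,
    built by sequential convolution."""
    pmf = np.array([1.0])
    for p in probs:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

def sdsm_backbone(B, Q, alpha=0.05):
    """Keep edge (i, j) of the projection P = B B^T when
    Pr(P*_ij >= P_ij) < alpha / 2 under the null defined by Q."""
    B, Q = np.asarray(B, float), np.asarray(Q, float)
    P = B @ B.T                       # observed co-occurrence counts
    r = B.shape[0]
    backbone = np.zeros((r, r), dtype=int)
    for i in range(r):
        for j in range(i + 1, r):
            # Under SDSM, P*_ij = sum_k Bernoulli(Q_ik * Q_jk)
            pmf = poisson_binomial_pmf(Q[i] * Q[j])
            if pmf[int(P[i, j]):].sum() < alpha / 2:
                backbone[i, j] = backbone[j, i] = 1
    return backbone
```

The backbone package for R mentioned in the conclusion provides an optimized implementation of this test; the sketch is only meant to fix the logic.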
Most implementations of SDSM approximate \(\mathbf{Q}\) using the fast and precise Bipartite Configuration Model (BiCM) [14, 15]. However, it can also be computed with minimal loss of speed and precision [10] using a logistic regression [9], which offers more flexibility. This method estimates the \(\beta\) coefficients in \[B_{ik}=\beta_{0}+\beta_{1}r_{i}+\beta_{2}c_{k}+\epsilon\] using maximum likelihood, then defines \(Q_{ik}\) as the predicted probability that \(B_{ik}=1\). ### The stochastic degree sequence model with edge constraints (SDSM-EC) The constraints that SDSM imposes on \(\mathcal{B}\) are determined by the way that \(\mathbf{Q}\) is defined. In the conventional SDSM, \(\mathbf{Q}\) is defined such that \(Q_{ik}\) is the probability that \(B_{ik}^{*}\) contains a 1 in the space of all matrices with given row and column sums, which only imposes constraints on the row and column sums of members of \(\mathcal{B}\). To accommodate edge constraints, we define \(\mathbf{Q}^{\prime}\) such that \(Q_{ik}^{\prime}\) is the probability that \(B_{ik}^{*}\) contains a 1 in the space of all matrices with given row and column sums _and no 1s in prohibited cells_. The BiCM method cannot be used to approximate \(\mathbf{Q}^{\prime}\), however the logistic regression method can be adapted to approximate it. If \(B_{ik}\) is a prohibited edge, then \(Q_{ik}=0\) by definition. If \(B_{ik}\) is not a prohibited edge, then \(Q_{ik}\) is the predicted probability that \(B_{ik}=1\) based on a fitted logistic regression. Importantly, however, whereas the logistic regression used to estimate \(\mathbf{Q}\) is fitted over all \(B_{ik}\), the logistic regression used to estimate \(\mathbf{Q}^{\prime}\) is fitted only over \(B_{ik}\) that are not prohibited edges. ## 3 Results ### Estimating \(\mathbf{Q^{\prime}}\) In general the true values of \(Q_{ik}\) are unknown. However, for small matrices they can be computed from a complete enumeration of the space. To evaluate the precision of \(Q_{ik}\) estimated using the SDSM-EC method described above, we first enumerated all \(4\times 4\) incidence matrices with row sums {1,1,2,2} and column sums {1,1,2,2}; there are 211. Next, we constrained this space to matrices in which a randomly selected one or two cells always contain a zero (i.e. bipartite networks with one or two prohibited edges). Finally, we computed the true value of each \(Q_{ik}\) for all cells and all spaces, estimated each \(Q_{ik}\) using the logistic regression method, and computed the absolute deviation between the two. Figure 1A illustrates that, compared to the cardinality of the unconstrained space (\(|\mathcal{B}|=211\)), the cardinalities of the spaces constrained by one or two prohibited edges are much lower (\(|\mathcal{B}|=2-29\), gray bars). That is, while the SDSM evaluates whether a given edge's weight is significant by comparing its value to a large number of possible worlds, the SDSM-EC compares its value to a much smaller number of possible worlds. Figure 1B illustrates the deviations between the true value of \(Q_{ik}\) and the value estimated using the logistic regression method. It demonstrates that although SDSM-EC requires approximating \(Q_{ik}\), these approximations tend to be close to the true values. ### Toy illustration We offer a toy example to illustrate the impact of imposing edge constraints in backbone extraction. 
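Before working through the toy example, here is a minimal sketch of the logistic-regression approximation of \(\mathbf{Q}\) described above (scikit-learn is an assumed dependency, and details such as the handling of regularization may differ from the published implementation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_Q(B):
    """Approximate Q_ik = Pr(B*_ik = 1) by regressing each cell of B
    on its row sum r_i and column sum c_k."""
    B = np.asarray(B, dtype=int)
    r_sums, c_sums = B.sum(axis=1), B.sum(axis=0)
    # One observation per cell: features (r_i, c_k), outcome B_ik
    X = np.array([[r_sums[i], c_sums[k]]
                  for i in range(B.shape[0]) for k in range(B.shape[1])])
    y = B.ravel()
    # A large C approximates unpenalized maximum likelihood
    model = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)
    return model.predict_proba(X)[:, 1].reshape(B.shape)
```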
Figure 1: (A) The cardinality of the space of matrices with row sums {1,1,2,2} and column sums {1,1,2,2} and one or two cells constrained to zero is small compared to the cardinality of the space without constrained cells. (B) The deviation between the true and estimated \(Q_{ik}\) for all such constrained spaces tends to be small.

Figure 2: (A) A bipartite network containing two groups of agents and two groups of artifacts, such that agents are connected only to their own group's artifacts. (B) The SDSM backbone of a projection of this bipartite graph, which assumes that an agent _could_ be connected to another group's artifact, suggests within-group cohesion among agents. (C) The SDSM-EC projection, which assumes that an agent _could not_ be connected to another group's artifact, suggests none of the edges in the projection are significant.

Figure 2A illustrates a bipartite network that contains two types of agents (open and filled circles) and two types of artifacts (open and filled squares), such that agents are only connected to artifacts of the same type. Such a network might arise in the context of university students joining clubs. For example, suppose Harvard students (open circles) only join Harvard clubs (open squares), while Yale students (filled circles) only join Yale clubs (filled squares). Figure 2B illustrates the backbone extracted from a projection of this bipartite network using the SDSM. Using the SDSM implies that there are no constraints on edges in the null model. In the context of student clubs, this means that in the null model it is possible for a Harvard student to join a Yale club, and vice versa, and that the pattern of segregation that appears in the bipartite network is chosen (i.e. homophily). The SDSM backbone displays a high level of within-group cohesion (i.e. homophily). This occurs for two reasons. First, agents from the same group share many artifacts (e.g., two Harvard students belong to many of the same clubs). Second, if agents were connected to artifacts randomly (e.g., Harvard students joined both Harvard and Yale clubs), as the SDSM null model assumes, then agents from the same group would have shared fewer artifacts. The presence of within-group connections in the SDSM backbone reflects the fact that it is noteworthy that pairs of Harvard students, or pairs of Yale students, are members of many of the same clubs because they could have chosen otherwise. Figure 2C illustrates the backbone extracted using the SDSM-EC, where we specify that edges are prohibited between an agent and artifact of a different type. In the context of student clubs, this means that in the null model it is _not_ possible for a Harvard student to join a Yale club, and vice versa, and that the pattern of segregation is enforced by university regulations. The SDSM-EC backbone is empty. This occurs because although agents from the same group share many artifacts, they also share many artifacts under the null model. The absence of connections in the SDSM-EC backbone reflects the fact that it is uninteresting that pairs of Harvard students, or pairs of Yale students, are members of many of the same clubs because they could not have chosen otherwise. ### Empirical illustration We offer an empirical example of the application of SDSM-EC to illustrate its practicality and impact. It can be difficult to directly measure social networks among very young children. One alternative is to infer these networks from observations of their play groups using bipartite backbones [8].
However, considering edge constraints can be important because the organization of the school can mean that it may be impossible to observe certain children playing together. These data were collected in Spring 2013 by observing the behaviors of 53 children in a preschool in the Midwestern United States [3, 6, 7, 8]. A scan observation method was employed whereby a randomly selected child was observed for a period of 10 seconds. After the 10-second period had elapsed, the trained observer coded the child's predominant behavior and, if applicable, the peers with whom they were interacting [4]. Here, we focus only on social play behaviors because they were the most common form of social behavior, and the most likely to involve direct interaction with peers. A total of 1829 social play events were observed during data collection. These data are organized as a bipartite network \(\mathbf{B}\), where \(B_{ik}=1\) if child \(i\) was observed participating in a play group during observation \(k\). The projection \(\mathbf{P}=\mathbf{B}\mathbf{B}^{T}\), where \(P_{ij}\) indicates the number of times children \(i\) and \(j\) were observed playing together, provides an indirect indicator of the children's social network, particularly when refined using backbone extraction [8]. In this context, two types of prohibited edges exist in the bipartite network. First, the school was organized into two age-based classrooms, a classroom of 3-year-olds and a classroom of 4-year-olds. Because these classrooms used different spaces, it was not possible to observe a 3-year-old and a 4-year-old together. Therefore, edges from 3-year-olds to observations of 4-year-olds are prohibited, and likewise edges from 4-year-olds to observations of 3-year-olds are prohibited. Second, the children varied in their attendance status: some attended for the full day, some attended only in the morning, and some attended only in the afternoon. Because attendance status determines which children were present and able to play together, it was not possible to observe an AM child and a PM child together. Therefore, edges from AM children to observations conducted in the afternoon are prohibited, and likewise edges from PM children to observations conducted in the morning are prohibited. Figure 3 illustrates two backbones extracted from these data, using shape to represent classroom (circles = 3-year-olds, squares = 4-year-olds) and color to represent attendance status (black = full day, gray = AM only, white = PM only). Figure 3A was extracted using the SDSM and therefore does not consider these edge constraints, while Figure 3B was extracted using the SDSM-EC and does consider these edge constraints. There are some similarities between the SDSM and SDSM-EC backbones that reflect characteristics of the setting: 3-year-olds (circles) are never connected to 4-year-olds (squares), and AM children (gray) are never connected to PM children (white), because it was not possible to observe such children together. However, there are also differences that highlight the impact of incorporating edge constraints using SDSM-EC. The SDSM-EC backbone contains many fewer edges (\(E=85\)) than the SDSM backbone (\(E=153\)). This occurs for similar reasons to the loss of edges in the toy example above, although it is less extreme. A hypothetical example serves to illustrate why the SDSM-EC backbone contains fewer edges in this context.
Figure 3: (A) Backbone extracted using SDSM and (B) SDSM-EC from 1829 observations of 53 preschool children's play groups. Vertex shape represents age-based classrooms: circles = 3-year-old classroom, squares = 4-year-old classroom. Vertex color represents attendance status: black = full day, gray = AM only, white = PM only.

Consider the case of an AM child and a Full Day child in the 3-year-old classroom who were observed to play together a few times. The SDSM compares this observed co-occurrence to the expected number of co-occurrences if these two children had played with other AM or Full Day children and with others in the 3-year-old classroom (which is possible), but also if they had played with PM children and children in the 4-year-old classroom (which is not possible). Under such a broad null model that includes some impossible play configurations, observing these two children playing together even just a few times seems noteworthy, and therefore an edge between them is included in the backbone. In contrast, the SDSM-EC compares this observed co-occurrence to the expected number of co-occurrences if these two children had played with other AM or Full Day children and with others in the 3-year-old classroom only, recognizing that it was not possible for the AM child to play with PM children or for either to play with children in the 4-year-old classroom. Under this more constrained null model that excludes impossible play configurations, observing these two children playing together just a few times is not particularly noteworthy, and therefore an edge between them is omitted from the backbone. As this example illustrates, the SDSM-EC contains fewer edges because it correctly omits edges that might seem significantly strong when evaluated against a null model that includes impossible configurations, but that are not significantly strong when evaluated against a properly constrained null model that excludes impossible configurations. ## 4 Conclusion Although bipartite projections offer a promising way to indirectly measure unipartite networks of interest, caution is required when interpreting the edge weights that appear in such projections. Backbone models offer a solution by providing a formal statistical method for evaluating when an edge in a projection is statistically significantly strong by comparison to a bipartite null model. However, extracting an accurate backbone using these methods requires that the null model is properly constrained. In many cases the FDSM (slower) and SDSM (faster) are appropriate and yield similar results [10], however these null models only constrain the degree sequences, but cannot impose edge constraints such as prohibited edges. In this work, we have introduced the SDSM-EC, an extension of SDSM that allows the user to specify edge constraints in the form of prohibited edges. Prohibited edges arise in bipartite networks when a given agent cannot be connected to a given artifact, for example, because the agent is not present or because such a connection is legally prohibited. We have demonstrated in both a toy example and an empirical example that the SDSM-EC performs as expected, correctly omitting weaker edges in the backbone that are not significant when these constraints are considered, but that might have erroneously appeared significant under the SDSM. Therefore, we recommend that SDSM-EC be used to extract the backbones of bipartite projections when the bipartite network contains prohibited edges.
The SDSM-EC is implemented in the sdsm() function of the backbone package for R [11]. We have focused on one common type of edge constraint: prohibited edges. However, a second type of edge constraint is also possible: required edges. Required edges arise in bipartite networks when a given agent must always be connected to a given artifact, for example, because the agent is the initiator of the artifact (e.g. a paper's lead author, a club's founder). It is trivial to extend the SDSM-EC to also accommodate such constraints. When \(\mathbf{Q}\) is estimated, \(Q_{ik}=0\) for prohibited edges and \(Q_{ik}=1\) for required edges, then the remaining \(Q_{ik}\) values are computed using the same logistic regression method described above. This work highlights the importance of using a properly constrained null model when extracting the backbone of bipartite projections, and identifies several avenues for future research. First, while \(\mathbf{Q}\) under the SDSM can be estimated quickly and precisely using the BiCM [14, 15], \(\mathbf{Q}\) under the SDSM-EC must be estimated using logistic regression, which is slower and less precise [10]. Future work should investigate improved methods for estimating \(\mathbf{Q}\), which has the potential to benefit not only the SDSM-EC, but all variants of the SDSM. Second, while a broad class of bipartite null models exist [16] and now include edge constraints, future work should investigate the importance and feasibility of incorporating other types of constraints. **Data availability statement.** The data and code necessary to reproduce the results reported above are available at [https://osf.io/7z4gu](https://osf.io/7z4gu).
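As a sketch of how both kinds of edge constraint can enter the estimation step just described (again assuming scikit-learn; all names are illustrative), the regression is fitted only over unconstrained cells, and constrained cells are pinned afterwards:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_Q_ec(B, prohibited=None, required=None):
    """SDSM-EC style estimate of Q': fit the logistic regression over the
    unconstrained cells only, then set Q'_ik = 0 (prohibited) or 1 (required)."""
    B = np.asarray(B, dtype=int)
    shape = B.shape
    prohibited = np.zeros(shape, bool) if prohibited is None else np.asarray(prohibited, bool)
    required = np.zeros(shape, bool) if required is None else np.asarray(required, bool)
    r_sums, c_sums = B.sum(axis=1), B.sum(axis=0)
    rows, cols = np.indices(shape)
    X = np.column_stack([r_sums[rows.ravel()], c_sums[cols.ravel()]])
    y = B.ravel()
    free = ~(prohibited | required).ravel()       # cells that enter the fit
    model = LogisticRegression(C=1e6, max_iter=1000).fit(X[free], y[free])
    Q = model.predict_proba(X)[:, 1].reshape(shape)
    Q[prohibited] = 0.0
    Q[required] = 1.0
    return Q
```

Passing the resulting Q' to a significance test of the kind sketched earlier then yields the SDSM-EC backbone.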
2307.04092
Coupled-channel $D^\ast K^\ast -D_s^\ast ρ$ interactions and the origin of $T_{c\bar{s}0}(2900)$
Motivated by the recent observation of $T_{c\bar{s}0}(2900)^0$ and $T_{c\bar{s}0}(2900)^{++}$ in the $D_s \pi$ invariant mass distributions, we investigate $D^{\ast}K^{\ast}$ interactions in a coupled-channel approach. We show that the relativistic corrections could be significant for the energy far away from the threshold. Within the hidden local symmetry formalism, a sizable attraction interaction is found in the $J=0$ isospin triplet sector that can form a bound or a virtual state, which is consistent with the experimentally observed $T_{c\bar{s}0}(2900)$. By reproducing a $D_s^*\rho$-$D^*K^*$ bound/virtual state with the pole mass equal to that of the $T_{c\bar{s}0}(2900)$ measured by LHCb in the sector $(I,J)=(1,0)$, we determine the unknown parameter in the loop function, and then search for possible poles in the sectors of $I=1$, $J=1,$ 2 and $I=0$, $J=0$, 1, 2. The predicted resonances provide a useful reference for the future experimental studies of the $(C,S)=(1,1)$ systems and can also be helpful to unravel the nature of the $T_{c\bar{s}0}(2900)$.
Man-Yu Duan, Meng-Lin Du, Zhi-Hui Guo, En Wang, Dian-Yong Chen
2023-07-09T04:26:03Z
http://arxiv.org/abs/2307.04092v1
# Coupled-channel \(D^{*}K^{-}-D^{*}_{s}\rho\) interactions and the origin of \(T_{c30}(2900)\) ###### Abstract Motivated by the recent observation of \(T_{c30}(2900)^{0}\) and \(T_{c30}(2900)^{++}\) in the \(D_{s}\pi\) invariant mass distributions, we investigate \(D^{*}K^{*}\) interactions in a coupled-channel approach. We show that the relativistic corrections could be significant for the energy far away from the threshold. Within the hidden local symmetry formalism, a sizable attraction interaction is found in the \(J=0\) isospin triplet sector that can form a bound or a virtual state, which is consistent with the experimentally observed \(T_{c30}(2900)\). By reproducing a \(D^{*}_{s}\rho^{+}K^{*}\) bound/virtual state with the pole mass equal to that of the \(T_{c30}(2900)\) measured by LHCb in the sector \((I,J)=(1,0)\), we determine the unknown parameter in the loop function, and then search for possible poles in the sectors of \(I=1\), \(J=1\), \(2\) and \(I=0\), \(J=0\), \(1\), \(2\). The predicted resonances provide a useful reference for the future experimental studies of the \((C,S)=(1,1)\) systems and can be also helpful to unravel the nature of the \(T_{c30}(2900)\). ## I Introduction In 2022, the LHCb Collaboration reported two new states, \(T_{c30}(2900)^{0}\) and \(T_{c30}(2900)^{++}\), in the decays \(B^{0}\to\bar{D}^{0}D^{*}_{s}\pi^{-}\) and \(B^{+}\to D^{-}D^{*}_{s}\pi^{+}\), respectively [1; 2]. The two states decay to \(D^{*}_{s}\pi^{-}\) and \(D^{*}_{s}\pi^{+}\), respectively, which implies that their minimal quark contents are \([c\bar{s}u\bar{d}]\) and \([c\bar{s}u\bar{d}]\). Both states are found to have spin-parity \(J^{P}=0^{+}\) and their resonance parameters extracted from the relativistic Breit-Wigner fits by LHCb are [1; 2], \[m_{T_{c30}(2900)^{0}} = (2892\pm 14\pm 15)\;\text{MeV}\;,\] \[\Gamma_{T_{c30}(2900)^{0}} = (119\pm 26\pm 13)\;\text{MeV}\;,\] \[m_{T_{c30}(2900)^{++}} = (2921\pm 17\pm 20)\;\text{MeV}\;,\] \[\Gamma_{T_{c30}(2900)^{++}} = (137\pm 32\pm 17)\;\text{MeV}\;, \tag{1}\] which are compatible with each other within uncertainties. By assuming the two resonances belong to the same isospin triplet, the common mass and width of \(T_{c30}(2900)\) are fitted to be [1; 2], \[m_{T_{c30}(2900)} = (2908\pm 11\pm 20)\;\text{MeV}\;,\] \[\Gamma_{T_{c30}(2900)} = (136\pm 23\pm 13)\;\text{MeV}\;. \tag{2}\] The discovery of \(T_{c30}(2900)^{++}\) and \(T_{c30}(2900)^{0}\) quickly spurred a number of theoretical studies as the former state is the first observation of a doubly charged open-charm tetraquark state. Unraveling their origin is important to understanding the strong interaction. The proximity of the \(D^{*}K^{*}\) and the \(D^{*}_{s}\rho\) thresholds to the mass of \(T_{c30}(2900)\) suggests that these two-hadron channels could play important roles in the dynamics of the \(T_{c30}(2900)\) states, hinting to a hadronic molecular interpretation of the two states [3; 4; 5; 6; 7]. The alternative interpretation as compact tetraquark states with quark contents \([c\bar{s}u\bar{d}]\) and \([c\bar{s}u\bar{d}]\) is studied in Refs. [8; 9; 10; 11; 12; 13]. In addition to being a genuine state, the \(T_{c30}(2900)\) structure is also proposed to be merely a threshold cusp effect from the interaction between the \(D^{*}K^{*}\) and \(D^{*}_{s}\rho\) channels [14] or the kinetic effect from a triangle singularity [15]. In Ref. 
[14], the \(D^{*}K^{*}\) (and \(D^{*}\bar{K}^{*}\)) system was investigated within the framework of the extended hidden local symmetry approach to SU(4) to incorporate charmed mesons. In that work, the nonrelativistic approximation, i.e. \(\vec{p}/M_{V}\to 0\) with \(\vec{p}\) the three-momentum of the involved states [16], was taken. It was found that, in the isovector sector with \((C,S)=(1,1)\), being \(C\) and \(S\) the charmness and strangeness numbers in order, i.e. the \(D^{*}K^{*}\)-\(D^{*}_{s}\rho\) coupled-system, while a bound state can be found for \(J=2\) sector, only cusp effects are observed for \(J=0\) and \(1\) sectors through attractive potentials appear in these two cases [14]. Due to the relatively strong attractive potentials, three deep bound states are found for \(J=0,1\), and \(2\) in the isoscalar sector of \((C,S)=(1,1)\). In this sector also sits the well-known \(D^{*}_{s0}(2317)\) discovered in the inclusive \(D^{*}_{s}\pi^{0}\) invariant mass distribution from \(e^{+}e^{-}\) annihilation data by the BaBar Collaboration in 2003 [17]. The \(D^{*}_{s0}(2317)\) is suggested as dominantly a \(DK\) hadronic molecule [18; 19; 20; 21; 22; 23; 24; 25] due to that it is located far below the conventional quark model expectation [26] and just below the \(DK\) threshold. The heavy-quark symmetry implies that the \(D^{*}K\) interaction is identical to the \(DK\) interaction up to \(\mathcal{O}(\Lambda_{\text{QCD}}/m_{c})\) and thus there should exist a \(D^{*}K\) molecule which is identified to the \(D_{s1}(2460)\) observed in the \(D^{*}_{s}\pi^{0}\) mass distribution [27]. The molecular interpretations of the \(D^{*}_{s0}(2317)\) and \(D_{s1}(2460)\) as \(DK\) and \(D^{*}K\) molecules are supported by the observation that \(M_{D_{s1}(2460)}-M_{D^{*}}\simeq M_{D^{*}_{s0}(2317)}-M_{D}\). Though both the \(D^{(*)}K\) and the \(D^{*}K^{*}\) systems are attractive and generate poles in the isoscalar sector [14], it is worth stressing that their origins are different. The \(D^{*}_{s0}(2317)\) and \(D_{s1}(2460)\) emerge as a consequence of the spontaneous chiral symmetry breaking of QCD, which constrains the \(D^{(*)}K\) interaction since the \(K\) is the cor responding Goldstone boson. However, the deep bound states found in the \(I=0\)\(D^{*}K^{*}\) systems are obtained with the potentials derived from the hidden gauge formalism in the nonrelativistic approximation [14]. It is remarked that for the two-body scattering processes with pure light-flavor vectors, such as \(\rho\rho\to\rho\rho\), the deep bound states generated with the nonrelativistic approximation in Ref. [16; 28] is untenable due to the neglect of the relativistic effects [29; 30]. The same issue could also exist in the \(D^{*}_{(s)}V\) (with \(V\) the light-flavor vector) scattering. Therefore one of the key motivations of this work is to study to which level the \(D^{*}_{s}\rho\)-\(D^{*}K^{*}\) system could be affected by including relativistic effects. For energy regions far away from the two-body threshold, the nonrelativistic approximation is questioned as the relativistic corrections could be significant. Therefore, a relativistically covariant formalism is employed in Refs. [29; 30], which leads to an "unphysical" left-hand cut with the on-shell factorization. It is worth emphasizing that this issue is caused by the on-shell factorization and can be overcome by the Lippmann-Schwinger equation or (the first iterated solution of) the \(N/D\) dispersion relation [29; 30]. 
It is found that, while the poles generated in the very vicinity of the threshold are consistent between the relativistic and nonrelativistic formalism, those found far away from the threshold in the nonrelativistic approximation are unreliable. Moreover, the corrections from the higher-order effective Lagrangians to the derived potentials at the energy regions far away from the threshold could be sizable. Hence, as a conservative estimate, in this work, we will try to reinvestigate the \(D^{*}K^{*}\) interactions with a relativistically covariant formalism and restrict ourselves to the energy region above the corresponding left-hand cuts developed by the vector-exchanging diagrams. The quantum numbers of the \(T_{c30}(2900)\) are determined to be \(I=1\) and \(J^{P}=0^{+}\)[1; 2]. As indicated in Ref. [31], in the isovector sector of the \((C,S)=(1,1)\), the potential for \(D^{*}_{s}\rho\to D^{*}_{s}\rho\) vanishes and that for \(D^{*}K^{*}\to D^{*}K^{*}\) is negligible. A sizable potential for \(D^{*}K^{*}\to D^{*}_{s}\rho\) leads to an attractive effect in this coupled-channel. It is easy to see the conclusion from a combination of the two-body states, i.e. \(|\Psi\rangle_{\pm}\simeq\frac{1}{\sqrt{2}}|D^{*}K^{*}\rangle\pm|D^{*}_{s}\rho\rangle\). In particular, a negative potential for \(D^{*}K^{*}\to D^{*}_{s}\rho\) suggests that while the potential for \(|\Psi\rangle_{+}\) to \(|\Psi\rangle_{+}\) is attractive, that for \(|\Psi\rangle_{-}\) to \(|\Psi\rangle_{-}\) is repulsive. The transition between \(|\Psi\rangle_{+}\) and \(|\Psi\rangle_{-}\) vanishes, which corresponds to diagonalization of the potentials matrix. The attraction could generate a bound state if the strength is sufficiently strong. In particular, at the \(D^{*}K^{*}\) threshold, the potential for \(D^{*}K^{*}\to D^{*}_{s}\rho\) is \(-6.8g^{2}\), where \(g=M_{\rho}/2f_{\pi}=4.17\) with \(f_{\pi}=93\) MeV. It is easy to see that the attraction effect is sizable. However, to determine whether a bound state can be formed, an estimate of the subtraction constant, \(\alpha(\mu)\), is required for the two-point loop function evaluated using dimensional regularization. In Ref. [31], by setting the renormalization scale \(\mu=1500\) MeV and \(\alpha=-1.6\), no pole was obtained for \(J=0\), but only a cusp was observed in the \(D^{*}_{s}\rho\) threshold. It is worth noticing that the cusp in the threshold may indicate a relatively strong interaction, and whether a bound state can be formed is sensitive to the choice of the subtraction \(\alpha\)[32; 33]. We will see below that a bound state can be found by slightly changing the value of \(\alpha\). For instance, by choosing \(\alpha=-1.65\), a pole can be found at around 2886 MeV in the physical Riemann sheet (RS), which is identified as a bound state. As a matter of fact, with \(\alpha=-1.6\), a pole below the \(D^{*}_{s}\rho\) threshold at around 2886 MeV is found in the unphysical RS, which corresponds to a virtual state and shows up as a cusp at the threshold in the amplitude of the physical RS. Without prior knowledge of the subtraction \(\alpha(\mu)\) (although its natural size is discussed in Ref. [34]), the loop function can be estimated by the hard-cutoff regularization with a natural value of the cutoff \(q_{\rm max}\sim M_{V}\). The value of \(\alpha(\mu)\) can be estimated by matching the loop functions evaluated with the two methods at a certain point, e.g., the threshold. 
We will show that in a reasonable range of the cutoff \(q_{\rm max}\), the determined \(\alpha\) could lead to a bound state or a virtual state. On the other hand, the coupling \(g=M_{\rho}/2f_{\pi}=4.17\) is used in Ref. [31]. When an SU(3) average mass of the vectors is employed, i.e. \(g=4.60\)[30], a bound state at around 2873 MeV can be found even with \(\alpha=-1.6\). As a consequence, the observed \(T_{c30}(2900)\) is consistent with a \(D^{*}K^{*}\)-\(D^{*}_{s}\rho\) bound state/virtual state with \(J=0\). In this work, we will take advantage of the LHCb measurement to determine the \(\alpha\) by assuming the \(T_{c30}(2900)\) as a \(D^{*}K^{*}\)-\(D^{*}_{s}\rho\) bound state/virtual state with its pole mass equal to the value of Eq. (2). Since we only focus on the origin of possible dynamically generated states of \(D^{*}K^{*}\) interactions, we will not consider its width due to the transitions to inelastic two-body channels, e.g. \(DK\) and \(D_{s}\pi\), three-body and four-body channels, which are supposed to affect its pole mass insignificantly [16; 29]. This paper is organized as follows. In Section II we derive the relativistically covariant partial-wave potentials, and demonstrate the formalism to calculate the unitarized scattering amplitudes. The numerical results and discussions are pre Figure 1: The sketch diagrams for \(D^{*}K^{*}\to VV\). Diagrams (a), (b), (c), and (d) correspond to contact interaction, \(s\)-channel, \(t\)-channel, and \(u\)-channel vector meson exchanged interactions, respectively. sented in Section III. Section IV is devoted to a short summary. ## II Formalism In order to make a close comparison with the nonrelativistic treatment of the \(D^{*}K^{*}\) system as done in Ref. [31], it is convenient to take the same theoretical model employed in the former reference, i.e. a straightforward extension of the hidden local symmetry formalism to SU(4) to include charmed vectors, to investigate the \(D^{*}K^{*}\) interactions, although this model could be somewhat oversimplified to deal with the interaction vertices of the charmed and light-flavor mesons. Nevertheless, as stressed in Section I, one aim here is to study the relativistic effects in the \(D^{*}_{i}\rho\)-\(D^{*}K^{*}\) system in a covariant manner within the coupled-channel approach. The Lagrangian describing the interactions among vector mesons reads [31; 35; 36], \[\mathcal{L}=-\frac{1}{4}\langle V_{\mu\nu}V^{\mu\nu}\rangle\;, \tag{3}\] where the symbol \(\langle\dots\rangle\) stands for the trace over SU(4) flavor space, and the tensor \(V_{\mu\nu}\) is defined as \[V_{\mu\nu}=\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu}-ig[V_{\mu},V_{\nu}]\;, \tag{4}\] with the coupling constant \(g=4.17\) as in Ref. [31]. The vector meson matrix \(V_{\mu}\) is \[V_{\mu}=\left(\begin{array}{cccc}\frac{\omega}{\sqrt{2}}+\frac{\rho^{0}}{ \sqrt{2}}&\rho^{+}&K^{*+}&\bar{D^{*0}}\\ \rho^{-}&\frac{\omega}{\sqrt{2}}-\frac{\rho^{0}}{\sqrt{2}}&K^{*0}&D^{*-}\\ K^{*-}&\bar{K}^{*0}&\phi&D^{*-}_{s}\\ D^{*0}&D^{*+}&D^{*+}_{s}&J/\psi\\ \end{array}\right)_{\mu}\;. \tag{5}\] By expanding the effective Lagrangian in Eq. (3), one obtains two types of vector interaction vertices, which are the four-vector contact term [Fig. 1-(a)] and the three-vector vertices responsible for the vector-exchange interactions [Fig. 1-(b-d)], respectively. As for the four-vector contact interaction, the corresponding Lagrangian is \[\mathcal{L}^{(c)}=\frac{g^{2}}{2}(V_{\mu}V_{\nu}V^{\mu}V^{\nu}-V_{\nu}V_{\mu} V^{\mu}V^{\nu})\;. 
\tag{6}\] One can obtain the corresponding amplitude, which is given by, \[\mathcal{A}^{(c)}=C_{1}\mathcal{A}_{1}^{(c)}+C_{2}\mathcal{A}_{2}^{(c)}, \tag{7}\] with channel-dependent coefficients \(C_{1}\) and \(C_{2}\), and \[\mathcal{A}_{1}^{(c)} = 2g^{2}(\epsilon_{1}\cdot\epsilon_{2}\;\epsilon_{3}^{*}\cdot \epsilon_{4}^{*}+\epsilon_{1}\cdot\epsilon_{3}^{*}\;\epsilon_{2}\cdot\epsilon _{4}^{*}-2\epsilon_{1}\cdot\epsilon_{4}^{*}\;\epsilon_{2}\cdot\epsilon_{3}^{* })\;,\] \[\mathcal{A}_{2}^{(c)} = 2g^{2}(\epsilon_{1}\cdot\epsilon_{3}^{*}\;\epsilon_{2}\cdot \epsilon_{4}^{*}+\epsilon_{1}\cdot\epsilon_{4}^{*}\;\epsilon_{2}\cdot \epsilon_{3}^{*}-2\epsilon_{1}\cdot\epsilon_{2}\;\epsilon_{3}^{*}\cdot \epsilon_{4}^{*})\;,\] where the indices 1, 2, 3, and 4 correspond to the particles with the momenta \(p_{1}\), \(p_{2}\), \(p_{3}\), and \(p_{4}\) in Fig. 1-(a), respectively, the \(\epsilon_{i}^{(*)}\) is the polarization vector of the \(i\)th particle1, the dot indicates the scalar product, and the superscript \((c)\) stands for the contact term. Footnote 1: The concrete expression of polarization vector can be found in the Appendix A of Ref. [29]. The vector-exchange diagrams are described by the Lagrangian \[\mathcal{L}^{(3V)} =ig\langle V^{\alpha}\partial_{\nu}V_{\nu}V^{\mu}-\partial_{\nu} V_{\mu}V^{\mu}V^{\nu}\rangle\] \[=ig\langle(V^{\mu}\partial_{\nu}V_{\mu}-\partial_{\nu}V_{\mu}V^{ \mu})V^{\nu}\rangle\;. \tag{9}\] The \(t\)-channel amplitude exchanging the vector \(V_{ex}\) with mass \(M_{ex}\) corresponding to Fig. 1-(c) has the form \[C_{t}^{V_{ex}}\mathcal{A}_{V_{ex}}^{(t)} = C_{t}^{V_{ex}}\frac{g^{2}}{t-M_{ex}^{2}}\Bigg{[}\epsilon_{1} \cdot\epsilon_{3}^{*}\;\epsilon_{2}\cdot\epsilon_{4}^{*}\] \[\times\left(s-u+\frac{(M_{1}^{2}-M_{3}^{2})(M_{2}^{2}-M_{4}^{2})} {M_{ex}^{2}}\right)\] \[-4\;\epsilon_{1}\cdot\epsilon_{3}^{*}\;(p_{1}\cdot\epsilon_{2}\;p _{2}\cdot\epsilon_{4}^{*}+p_{1}\cdot\epsilon_{4}^{*}\;p_{4}\cdot\epsilon_{2})\] \[+4\;p_{1}\cdot\epsilon_{3}^{*}\;(\epsilon_{1}\cdot\epsilon_{2}\;p _{2}\cdot\epsilon_{4}^{*}+\epsilon_{1}\cdot\epsilon_{4}^{*}\;p_{4}\cdot\epsilon _{2})\] \[-4\;\epsilon_{2}\cdot\epsilon_{4}^{*}\;(p_{1}\cdot\epsilon_{3}^{* }\;p_{2}\cdot\epsilon_{1}+p_{2}\cdot\epsilon_{3}^{*}\;p_{3}\cdot\epsilon_{1})\] \[+4\;p_{3}\cdot\epsilon_{1}\;(\epsilon_{2}\cdot\epsilon_{3}^{*}\;p _{2}\cdot\epsilon_{4}^{*}+\epsilon_{3}^{*}\cdot\epsilon_{4}^{*}\;p_{4}\cdot \epsilon_{2})\Bigg{]}\;,\] where \(C_{t}^{V_{ex}}\) is a channel-dependent coefficient, \(M_{1}\), \(M_{2}\), \(M_{3}\), and \(M_{4}\) stand for the masses of the particles with the momenta \(p_{1}\), \(p_{2}\), \(p_{3}\), and \(p_{4}\) in Fig. 1-(c), respectively, and the Mandelstam variables are defined as \(s=(p_{1}+p_{2})^{2}\), \(t=(p_{1}-p_{3})^{2}\) and \(u=(p_{1}-p_{4})^{2}\), which satisfy the constraint \(s+t+u=\sum_{i=1,4}M_{i}^{2}\). The \(u\)-channel exchanging amplitude \(\mathcal{A}_{V_{ex}}^{(u)}\) can be obtained from the expression of \(\mathcal{A}_{V_{ex}}^{(t)}\) by exchanging \(p_{3}\leftrightarrow p_{4}\) and \(\epsilon_{3}^{*}\leftrightarrow\epsilon_{4}^{*}\). Similarly, the \(s\)-channel exchanging amplitude \(\mathcal{A}_{V_{ex}}^{(s)}\) can also be obtained from the expression of \(\mathcal{A}_{V_{ex}}^{(t)}\) by performing the exchange \(p_{2}\leftrightarrow-p_{3}\) and \(\epsilon_{2}\leftrightarrow\epsilon_{3}^{*}\). 
Then the tree-level scattering amplitude for a certain process is given by \[\mathcal{A} = C_{1}\;\mathcal{A}_{1}^{(c)}+C_{2}\;\mathcal{A}_{2}^{(c)} \tag{11}\] \[+C_{s}^{V_{ex}}\;\mathcal{A}_{V_{ex}}^{(t)}+C_{t}^{V_{ex}}\; \mathcal{A}_{V_{ex}}^{(t)}+C_{u}^{V_{ex}}\;\mathcal{A}_{V_{ex}}^{(a)}\;,\] where the \(V_{ex}\) runs over all possible exchanging vectors. In the present work, we focus on the channels with charmless \(C=1\) and strangeness \(S=1\). In the isoscalar sector, i.e. \(I=0\), three channels, namely \(D^{*}K^{*}\), \(D_{s}^{*}\omega\) and \(D_{s}^{*}\phi\), are involved. In the isovector sector, we take two channels into account, \(D^{*}K^{*}\) and \(D_{s}^{*}\rho\). The tree-level amplitudes for the transitions among those channels with certain isospin are given by Eq. (11) with the coefficients collected in Table 1. The transitions of \(D_{s}^{*}\omega\to D_{s}^{*}\omega\), \(D_{s}^{*}\omega\to D_{s}^{*}\phi\), and \(D_{s}^{*}\rho\to D_{s}^{*}\rho\) vanish in the present model, thus are not shown. By means of the above amplitudes with definite isospin, one can calculate the partial-wave amplitudes in the \(IJ\ell S\) basis (states with definite isospin \(I\), total angular momentum \(J\), orbital angular momentum \(\ell\) and total spin \(S\)), denoted as \(V^{(IJ)}_{GS;IS}(s)\) for the transition \((IJ\bar{\ell}S)\rightarrow(IJ\bar{\ell}S)\). The expression for partial-wave decomposition reads [29] \[V^{(IJ)}_{GS;IS}(s) = \frac{Y^{0}_{\bar{\ell}}(\hat{\mathbf{z}})}{2J+1}\sum_{\sigma_{1} \sigma_{2},\sigma_{1},\sigma_{2},\sigma_{2}}\int d\hat{\mathbf{p}}^{\prime \prime}Y^{m}_{\ell}(\mathbf{p}^{\prime\prime})^{*}(\sigma_{1}\sigma_{2}M|s_{1} s_{2}S) \tag{12}\] \[\times(mM\bar{M}|\ell S)(\bar{\sigma_{1}}\sigma_{2}\bar{M}|\bar{ s}_{1}\bar{s}_{2}\bar{S})(0\bar{M}\bar{M}|\bar{\ell}SJ)\] \[\times\mathcal{A}^{(I)}(p_{1},p_{2},p_{3},p_{4};\epsilon_{1}, \epsilon_{2},\epsilon_{3}^{*},\epsilon_{4}^{*})\;,\] where \(M=\sigma_{1}+\sigma_{2}\) and \(\bar{M}=\bar{\sigma_{1}}+\bar{\sigma_{2}}\) with \(\sigma_{i}\) is the third component of spin \(s_{i}\) in the center-of-mass frame, \(m\) is the third component of orbital angular momentum \(\ell\). The Clebsch-Gordan coefficient \((a_{1}a_{2}A|b_{1}b_{2}B)\) is the composition for \(b_{1}+b_{2}=B\), with \(a_{i}\) and \(A\) referring to the third components of the \(b_{i}\) and \(B\). The expressions of three-momentum are \[\mathbf{p}_{1}=|\mathbf{p}|\hat{\mathbf{z}},\quad\mathbf{p}_{2}=-|\mathbf{p}| \hat{\mathbf{z}},\quad\mathbf{p}_{3}=\mathbf{p}^{\prime\prime},\quad\mathbf{p }_{4}=-\mathbf{p}^{\prime\prime}. \tag{13}\] The left-hand cut will appear when the exchanging particles in the crossed channels as indicated in Eq. (10) become on-shell in the partial-wave integration of Eq. (12). For a \(t\)-channel exchanging term, its typical partial-wave integral gives \[\frac{1}{2}\int_{-1}^{+1}d\cos\theta\frac{1}{t-M_{ex}^{2}+i \epsilon}=-\frac{s}{\sqrt{\lambda(s,M_{1}^{2},M_{2}^{2})\lambda(s,M_{3}^{2},M _{4}^{2})}}\] \[\times\log\left[\frac{M_{1}^{2}+M_{3}^{2}-(s+M_{1}^{2}-M_{2}^{2} )(s+M_{3}^{2}-M_{4}^{2})/(2s)-\sqrt{\lambda(s,M_{1}^{2},M_{2}^{2})\lambda(s,M_ {3}^{2},M_{4}^{2})/(2s)-M_{ex}^{2}+i\epsilon}}{M_{1}^{2}+M_{3}^{2}-(s+M_{1}^{2 }-M_{2}^{2})(s+M_{3}^{2}-M_{4}^{2})/(2s)+\sqrt{\lambda(s,M_{1}^{2},M_{2}^{2}) \lambda(s,M_{3}^{2},M_{4}^{2})/(2s)-M_{ex}^{2}+i\epsilon}}\right], \tag{14}\] where the Kallen function \(\lambda(x,y,z)=x^{2}+y^{2}+z^{2}-2xy-2yz-2xz\). The branch point of the left-hand cut can be easily obtained from Eq. 
(14) by requiring the argument of log vanishes. With the partial-wave projected potentials \(V^{(IJ)}\), one can obtain the unitarized amplitude \(T\) by on-shell factorization \[T^{(L)}(s)=\left[1-V^{(IJ)}(s)G(s)\right]^{-1}V^{(IJ)}(s)\;, \tag{15}\] where the two-meson loop function \(G(s)\) is \[G(s)=i\int\frac{d^{4}q}{(2\pi)^{4}}\frac{1}{q^{2}-m_{1}^{2}+i \epsilon}\frac{1}{(q-P)^{2}-m_{2}^{2}+i\epsilon}\;, \tag{16}\] with \(m_{1}\) and \(m_{2}\) the masses of the two mesons involved in the loop, and \(P\) is the total four-momentum of the meson-meson system. The two-point loop function \(G(s)\) is logarithmically divergent and can be calculated with a once-subtracted dispersion relation whose explicit expression is [34; 37] \[G(s) = \frac{1}{16\pi^{2}}\left[\alpha(\mu)+\log\frac{m_{1}^{2}}{\mu^{2} }+\frac{m_{2}^{2}-m_{1}^{2}+s}{2s}\log\frac{m_{2}^{2}}{m_{1}^{2}}\right. \tag{17}\] \[+\frac{q_{\rm cm}}{\sqrt{s}}\left(\log\frac{s-m_{2}^{2}+m_{1}^{2} +2q_{\rm cm}\sqrt{s}}{-s+m_{2}^{2}-m_{1}^{2}+2q_{\rm cm}\sqrt{s}}\right.\] \[+\left.\log\frac{s+m_{2}^{2}-m_{1}^{2}+2q_{\rm cm}\sqrt{s}}{-s-m_ {2}^{2}+m_{1}^{2}+2q_{\rm cm}\sqrt{s}}\right)\right],\] where \(\alpha(\mu)\) is the subtraction constant depending on the renormalization scale \(\mu\), \(q_{\rm cm}\) is the magnitude of the three-momentum of the meson in the center of mass frame \[q_{\rm cm}=\frac{\sqrt{[s-(m_{1}+m_{2})^{2}]\left[s-(m_{1}-m_{2})^{2}\right]} }{2\sqrt{s}}. \tag{18}\] Alternatively, one can also calculate the loop function by us \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c} \hline \hline Isospin & Channel & \(C_{1}\) & \(C_{2}\) & \(C^{\prime}_{s}\) & \(C^{*}_{s}\) & \(C^{*^{*}}_{s}\) & \(C^{D^{\prime}}_{s}\) & \(C^{\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime\prime}_{s}\) & \(C^{\prime\prime}_{s}\) \\ \hline I = 0 & \(D^{\prime}K^{*}\to D^{\prime}K^{*}\) & 1 & 0 & 0 & 0 & 0 & 0 & 2 & \(\frac{3}{2}\) & \(\frac{1}{2}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & \(D^{*}K^{*}\to D^{*}_{s}\omega\) & 0 & \(-\frac{1}{2}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ & \(D^{*}K^{*}\to D^{*}_{s}\phi\) & \(\frac{\sqrt{2}}{2}\) & 0 & 0 & 0 & 0 & 0 & 0 & \(\sqrt{2}\) & 0 & 0 & \(\sqrt{2}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & \(D^{*}_{s}\phi\to D^{*}_{s}\phi\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ I = 1 & \(D^{*}K^{*}\to D^{*}K^{*}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\(\frac{1}{2}\) & \(\frac{1}{2}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & \(D^{*}K^{*}\to D^{*}_{s}\rho\) & 0 & \(\frac{1}{2}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \hline \hline \end{tabular} \end{table} Table 1: The coefficients of amplitudes for the contact term and vector-exchange term in Eq. (11). ing a hard cutoff [38] \[G_{c}(s)=\int_{0}^{q_{\rm max}}\frac{q^{2}dq}{(2\pi)^{2}}\frac{\omega_{1}+\omega_ {2}}{\omega_{1}\omega_{2}[s-(\omega_{1}+\omega_{2})^{2}+i\epsilon]}\, \tag{19}\] where \(q_{\rm max}\) is the cutoff of the three-momentum, \(\omega_{i}=\sqrt{\bar{q}^{2}+m_{i}^{2}}\). The natural value of the cutoff is the scope of the low-energy theorem, or the scale of chiral symmetry breaking, e.g., \(q_{\rm max}\simeq\Lambda_{x}\simeq M_{V}\). 
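The subtraction constant in Eq. (17) can be fixed by matching it to the cutoff loop function of Eq. (19) at threshold, as is done next in the text. A minimal numerical sketch of this matching for the \(D^{*}K^{*}\) loop (Python; the rounded, isospin-averaged masses are an assumption):

```python
import numpy as np
from scipy.integrate import quad

M_DSTAR, M_KSTAR, MU = 2008.6, 893.6, 1500.0    # MeV, illustrative values

def G_cutoff_at_threshold(m1, m2, qmax):
    """Hard-cutoff loop function, Eq. (19), at s = (m1 + m2)^2.
    The integrand has a finite q -> 0 limit, so the quadrature is well behaved;
    the lower limit is kept slightly above zero to avoid a 0/0 in floating point."""
    s = (m1 + m2) ** 2
    def integrand(q):
        w1, w2 = np.hypot(q, m1), np.hypot(q, m2)
        return q**2 * (w1 + w2) / (w1 * w2 * (s - (w1 + w2) ** 2)) / (4 * np.pi**2)
    return quad(integrand, 1e-3, qmax, limit=200)[0]

def alpha_from_matching(m1, m2, qmax, mu=MU):
    """Solve Eq. (17) for alpha at threshold (q_cm = 0), requiring the
    dispersive G(s_th) to equal the hard-cutoff G_c(s_th)."""
    s = (m1 + m2) ** 2
    const = np.log(m1**2 / mu**2) + (m2**2 - m1**2 + s) / (2 * s) * np.log(m2**2 / m1**2)
    return 16 * np.pi**2 * G_cutoff_at_threshold(m1, m2, qmax) - const

for qmax in (500.0, 1300.0):
    print(qmax, round(alpha_from_matching(M_DSTAR, M_KSTAR, qmax), 2))
# roughly -0.75 for q_max = 500 MeV and -1.65 for q_max = 1300 MeV,
# consistent with the range quoted in the text for mu = 1500 MeV
```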
Then we can get a natural value for the subtraction constant \(\alpha\) in Eq. (17) by matching \(G(s)\) and \(G_{c}(s)\) at the threshold [29]. By setting \(\mu=1500\) MeV as in Ref. [14]2, the corresponding range of the value of \(\alpha\) is determined to be \((-0.75,-1.65)\) by matching the \(G_{c}(s)\) with \(500~{}{\rm MeV}<q_{\rm max}<1300\) MeV for the \(D^{*}K^{*}\) loop. Unitarity leads to a cut along the real axis above the corresponding two-body threshold for each loop function, which divides the energy plane into two RSs. The expressions in Eqs. (17) and (19) are for the first (physical) RS, denoted by \(G^{I}(s)\), and the analytic continuation to the second RS can be obtained Figure 2: \(S\)-wave potentials \(V^{(IJ)}\) defined by Eq. (12) for \(VV\to VV\) with \(I=0\). The red solid and blue dashed curves are the real and imaginary parts of the potentials, respectively, while the black dot-dashed curves are the nonrelativistic potentials in Ref. [31]. The black dotted lines correspond to the \(VV\) thresholds. via [38] \[G^{II}(s)=G^{I}(s)+i\frac{q_{\rm cm}}{4\pi\sqrt{s}}. \tag{20}\] For a \(n\)-channel system, there are \(2^{n}\) RSs in total. Various sheets can be accessed by different choices of the loop functions \(G^{I/II}(s)\) for each channel. In particular, for the \(D^{*}_{s}\rho\)-\(D^{*}K^{*}\) system, there exist 4 RSs, which are labeled as \(\{1,1\}\), \(\{2,1\}\), \(\{2,2\}\), and \(\{1,2\}\). The first one is the physical sheet, while the last three are unphysical ones. The RS=\(\{2,1\}\) connects to the physical one through the interval between the \(D^{*}_{s}\rho\) and \(D^{*}K^{*}\) thresholds, and the RS=\(\{2,2\}\) is connected to the physical region above the \(D^{*}K^{*}\) threshold along the real axis. Although the RS=\(\{1,2\}\) is not directly connected to the physical region, a pole in it can still leave an impact on the physical observables due to the proximity of the \(D^{*}_{s}\rho\) and \(D^{*}K^{*}\) thresholds. The poles of the unitarizied amplitude could be identified as possible states. Poles located on the real axis below the lowest threshold in the physical RS correspond to possible bound states, and those on the unphysical RSs correspond to resonances. In particular, the poles on the real axis in unphysical RSs below the lowest threshold are called virtual states. A virtual state (as well as a resonance) does not correspond to a spatially localized state. However, such a pole can leave a significant imprint on the line shapes at the threshold if located near the threshold. The poles of the \(T\)-matrix correspond to the zeros of the determinant \[{\rm Det}(s)=\det\left[1-V(s)G(s)\right]. \tag{21}\] In addition, we can define an effective coupling of the channel \(i\) (\(j\)) to a given state at the pole \(s_{0}\) by the residues of the transition amplitude \(T_{ij}\) via \[g_{i}g_{j}=\lim_{s\to z_{0}}\left(s-s_{0}\right)T_{ij}(s). \tag{22}\] ## III Numerical results and discussions It is noticed that either a bound state or a virtual state can be formed in the aforementioned range of \(\alpha\), i.e. \(-1.65<\alpha<-0.75\), for the \(J=0\) isovector with both \(g=4.17\) and \(4.60\). It implies that the \(T_{c30}(2900)\) is consistent with a \(D^{*}_{s}\rho\)-\(D^{*}K^{*}\) bound state/virtual state. In particular, for \(\alpha=-1.60\) and \(g=4.17\), we find a bound state using the relativistic potentials in Eq. (11) with the binding energy \(E_{B}=0.03\) MeV with respect to the \(D^{*}_{s}\rho\) threshold. 
However, by employing the nonrelativistic potentials in Ref. [31], a virtual state very close to the \(D^{*}_{s}\rho\) is found in the RS=\(\{2,1\}\) with masses \(1.86\) MeV below the \(D^{*}_{s}\rho\) threshold. This virtual state is located in the vicinity of the \(D^{*}_{s}\rho\) threshold thus produces a cusp in the threshold. By employing \(g=4.60\), the virtual state resulting from the nonrelativistic potentials turns out to be a bound state in the physical RS with a binding energy of \(13.70\) MeV. We stress the distinction between the two scenarios of the relativistic and the nonrelativistic potentials beyond the accuracy of the frameworks used. However, it is quite certain that the \(D^{*}_{s}\rho\)-\(D^{*}K^{*}\) interaction is attractive, at least near the thresholds, which hints at the existence of a pole near the thresholds. In the present work, we assume that the \(T_{c30}(2900)\) corresponds to the \(D^{*}_{s}\rho\)-\(D^{*}K^{*}\) bound/virtual state discussed above. Based on that we determine the unknown constant \(\alpha\) from the Breit-Wigner mass of the \(T_{c30}(2900)\). In what follows, we employ \(g=4.17\) as in Ref. [31]. The results for the \(g=4.60\) is similar since the effect from the change of the \(g\) is largely compensated by the adjustment of the subtraction constant \(\alpha\). In addition, we employ the relativistic potentials which have left-hand cuts originating from the vector-exchanging diagrams. In Figs. 2 and 3, we present the potentials with \(I=0\) and \(I=1\), respectively. From these figures, one can find that our results and the ones of Ref. [31] are similar typ Figure 3: The same as in Fig. 2, but for \(I=1\). ically near the \(D^{*}K^{*}\) threshold, however for lower values of \(\sqrt{s}\), they depart quickly due to relativistic corrections and the onset of the left-hand cuts. As shown in Figs. 2-(a1), (a2), and (a3), the peculiar structures around the left-hand cuts of the \(D^{*}K^{*}\to D^{*}K^{*}\) channel with \(I=0\) are derived from the effect of exchanging \(\rho\) and \(\omega\) particles in the \(t\) channel amplitude, which also appear in Figs. 3-(f1), (f2), and (f3). The left-hand cuts of Figs. 2-(a), (b), (d), (e) and 3-(f), (h) appear at 2772 MeV, 2722 MeV, 2836 MeV, 2558 MeV, 2772 MeV, and 2718 MeV, respectively. Notice that the presence of the left-hand cuts invalidates the on-shell factorization employed in Eq. (15). As a result, we restrict ourselves to the energy region above the left-hand cuts. It is reasonable since the corrections from the relativistic kinematics and the higher-order effective Lagrangian could be significant for the regions far below the threshold. To be concrete, we only consider the poles above \(\sqrt{s}=2840\) MeV for \(I=0\) and above \(\sqrt{s}=2780\) MeV for \(I=1\). Starting from \(\alpha=-1.65\), corresponding to the \(G_{s}(s)\) with \(q_{\rm max}=1300\) MeV, one finds a pole at 2885 MeV for the \((I,J)=(1,0)\) sector in the physical sheet, as shown in the left panel of Fig. 4, which is accidentally on the top of the edge of the \(1\sigma\) uncertainty band of the \(T_{c\bar{s}0}(2900)\) mass \(m_{T_{c\bar{s}0}(2900)}=(2908\pm 11\pm 20)\) MeV. By increasing the \(\alpha\), the pole corresponding to a bound state moves towards the \(D^{*}_{s}\rho\) threshold and hits the threshold at \(\alpha=-1.60\). 
Then it turns into a virtual state in RS\(=\)\(\{2,1\}\) and the pole position moves away from the threshold towards the left-hand cuts with increasing \(\alpha\) and arrives at 2885 MeV again with \(\alpha=-1.55\), see e.g. the right panel of Fig. 4. Keep increasing \(\alpha\), a pole in the RS\(=\)\(\{1,2\}\) can be found in the real axis below the \(D^{*}_{s}\rho\) threshold, and it moves towards the threshold. In particular, at \(\alpha=-1.39\), the pole in RS\(=\)\(\{1,2\}\) is located at 2885 MeV. Meanwhile, the pole in RS\(=\)\(\{2,1\}\) moves to 2813 MeV. Increasing \(\alpha\) to \(-1.35\), the pole in RS\(=\)\(\{1,2\}\) hits the \(D^{*}_{s}\rho\) threshold and turns into a pole in RS\(=\)\(\{2,2\}\), where the pole position moves from the threshold to the left-hand cut and arrives at 2885 MeV for \(\alpha=-1.28\), see e.g. in Fig. 5. As mentioned above, a pole in the vicinity of the threshold in an unphysical sheet could also leave an impact on the physical observables. To see that, \(1/|\)Det\(|\) evaluated in the physical RS are shown in Fig. 6 with four different values of \(\alpha\), which produce a pole at 2885 MeV in RS\(=\)\(\{1,1\}\), \(\{2,1\}\), \(\{1,2\}\) and \(\{2,2\}\), respectively. Therefore, by identifying the \(T_{c\bar{s}0}(2900)\) as a \((I,J)=(1,0)\)\(D^{*}_{s}\rho\)-\(D^{*}K^{*}\) bound/virtual state, we obtain a range of the parameter \(\alpha\) from the Breit-Wigner mass of the \(T_{c\bar{s}0}(2900)\) under the uncertainty given by Eq. (2), i.e., \(-1.65<\alpha<-1.55\) and \(-1.39<\alpha<-1.28\). The corresponding pole positions and the effective couplings are collected in Table 2. Two disconnected intervals of \(\alpha\) are caused by the mass splitting of \(D^{*}_{s}\rho\) and \(D^{*}K^{*}\). If one approaches the SU(3) symmetry and decreases the mass splitting of the two channels, the difference between the RS\(=\)\(\{1,1\}\) and \(\{1,2\}\) diminishes. And under the exact SU(3) symmetry, there are only two RSs surviving, i.e., \begin{table} \begin{tabular}{c c c c c} \hline RS & \(\alpha\) & \(\sqrt{s}_{\rm pole}\) [MeV] & \(|g_{D^{*}K^{*}}|\) [MeV] & \(|g_{D^{*}\rho}|\) [MeV] \\ \hline \(\{1,1\}\) & -1.65\(\sim\)-1.60 & 2885\(\sim\)-2887 & 5531\(\sim\)-2198 & 5379\(\sim\)-2082 \\ \(\{2,1\}\) & -1.60\(\sim\)-1.55 & 2887\(\sim\)-2885 & 1755\(\sim\)-8202 & 1650\(\sim\)7348 \\ \(\{1,2\}\) & -1.39\(\sim\)-1.35 & 2885\(\sim\)-2887 & 6587\(\sim\)1625 & 7886\(\sim\)1865 \\ \(\{2,2\}\) & -1.35\(\sim\)-1.28 & 2887\(\sim\)-2885 & 1415\(\sim\)-4202 & 1613\(\sim\)-4672 \\ \hline \end{tabular} \end{table} Table 2: The pole positions and effective couplings evaluated for \(I=1\), \(J=0\) on different RSs with \(\mu=1500\) MeV. The threshold of \(D^{*}_{s}\rho\) is 2887 MeV. Figure 4: The determinant defined by Eq. (21) for \(I=1\), \(J=0\) with \(\alpha=-1.65\) (left) and \(\alpha=-1.55\) (right). The red solid line: the real part of the determinant on the Riemann sheet RS \(=\)\(\{1,1\}\), and the magenta dotted line: the imaginary part of the determinant on RS \(=\)\(\{1,1\}\), the blue dashed line: real part of determinant on RS \(=\)\(\{2,1\}\), the cyan dot-dashed line: imaginary part of determinant on RS \(=\)\(\{2,1\}\), the lower black dotted line: the \(D^{*}_{s}\rho\) threshold, and the upper black dotted line: the \(D^{*}K^{*}\) threshold. The arrows refer to the position of the bound state (left) and virtual state (right). \(\{1,1\}\) and \(\{2,2\}\), in which case, the two intervals of \(\alpha\) coincide. 
With the parameter \(\alpha\) in hand, we are equipped to investigate the sectors \(I=1\), \(J=1\), \(2\) and \(I=0\), \(J=0\), \(1\), \(2\). The poles found in these sectors are collected in Table 3. As for \(I=1\) and \(J=1\), a pole is found located at 2886 MeV on RS\(=\{1,1\}\) with \(\alpha=-1.65\). Its pole mass increases to the \(D_{s}^{*}\rho\) threshold and then decreases once it arrives at the threshold with the \(\alpha\) variation from \(-1.65\) to \(-1.55\). For \(-1.39<\alpha<-1.36\), it should be noted that two virtual states are found in RS\(=\)\(\{1,2\}\). However, only the one with higher mass which is closer to the physical region can leave significant imprints on the observable, and thus are kept in Table 3. Similarly, we do not show the poles which are far from the physical region and do not impact the line shapes. For the sector of \(I=1\) and \(J=2\), we find poles in the physical RS with the pole mass \(2780\sim 2806\) MeV for the \(\alpha\) in the interval \((-1.31,-1.28)\). The mass is consistent with that predicted in Ref. [31] with the pole position 2786 MeV in the sector of \(C=1\), \(S=1\), \(I=1\) and \(J=2\). We should mention, however, that we do not predict a bound state with mass 2780 to 2806 MeV, which is only found with a certain range of determined \(\alpha\) from the \(T_{c30}(2900)\). For the rest of the \(\alpha\) values, we do not find a pole above 2780 MeV. Similarly, we do not find poles above the left-hand cuts for sectors \(I=0\), \(J=0\), \(1\) and \(2\). For the region below the left-hand cuts, it beyonds the capability of the current effective Lagrangian and the on-shell factorization as mentioned above [29; 30]. So far we have neglected the widths of the vector mesons and the inelastic channels, which will generate widths for those bound states and virtual states mentioned above and turns them into resonances. The significant decay widths of \(\rho\to\pi\pi\), \(K^{*}\to K\pi\) imply that the contributions from the \(D_{s}^{*}\pi\pi\), \(D^{*}K\pi\) three-body (and even \(DK\pi\) four-body) intermediate states to the widths of the generated states should be the order of the width of \(\rho/K^{*}\). In addition, the pseudoscalar intermediate states, e.g. the \(D_{s}\pi\) and \(DK\), contribute to the widths as well, corresponding to the decays into these two mesons. In order to take such contributions into account, one has to introduce model-dependent form factors. While the width of the generated resonance is sensitive to the form factors, the real part of the pole position is merely affected [16; 29]. In the present work, we focus on the origin of the poles and their masses, and do not consider the convolution of loop functions accounting for the widths of \(\rho\) and \(K^{*}\) in their propagators and box diagrams assessing the \(D_{s}\pi\) and \(DK\) inelastic contributions [39; 40]. ## IV Summary The recently observed spin-party \(J^{P}=0^{+}\) states \(T_{c30}(2900)^{0}\) and \(T_{c30}(2900)^{++}\) by the LHCb Collaboration in the \(D_{s}^{+}\pi^{-}\) mass distribution of the process \(B^{0}\to\bar{D}^{0}D_{s}^{*}\pi^{-}\) and the \(D_{s}^{+}\pi^{+}\) distribution of the \(B^{+}\to D^{-}D_{s}^{*}\pi^{+}\), respectively, are in good agreement and belong to an isospin triplet [1; 2]. 
By investigating the \(D^{*}K^{*}\) coupled-channel system within the framework of the local hidden gauge approach extended to SU(4), we found that the \(D_{s}^{*}\rho\)-\(D^{*}K^{*}\) system in the \((I,J)=(1,0)\) sector manifests a sizable attractive interaction, which can form a bound or a virtual state within a reasonable parameter range. We have derived the scattering potentials including the relativistic corrections in a covariant formalism, which develops left-hand cuts. The existence of the left-hand cuts invalidates the on-shell factorization in the vicinity of the left-hand cuts and below; therefore we only focus on the region above the left-hand cuts. Notice that for energies below the left-hand cuts, the contributions from the relativistic corrections and from the higher-order effective Lagrangian could be significant, which is beyond the capability of the framework. By assuming the \(T_{c\bar{s}0}(2900)\) to be a \(D_{s}^{*}\rho\)-\(D^{*}K^{*}\) bound/virtual state in the sector \((I,J)=(1,0)\) and reproducing the pole mass within the \(1\sigma\) uncertainty, we have determined the subtraction constant \(\alpha\) to lie in two intervals, i.e., \(-1.65<\alpha<-1.55\) and \(-1.39<\alpha<-1.28\), for the renormalization scale \(\mu=1500\) MeV. With the subtraction constant, we have searched for possible poles in the sectors \(I=1\), \(J=1\), \(2\) and \(I=0\), \(J=0\), \(1\), \(2\). The sector \((I,J)=(1,1)\) has an attraction comparable to that of \((I,J)=(1,0)\), forming a bound/virtual state in the range \((2883,2887)\) MeV. For the sector \((I,J)=(1,2)\), a stronger attractive interaction could generate a deeper bound
Figure 5: The same as in Fig. 4, except that the determinant is evaluated with \(\alpha=-1.39\) (left) and \(\alpha=-1.28\) (right) on RS \(=\{1,2\}\) and RS \(=\{2,2\}\). The arrows refer to the position of the virtual states.
2305.12089
Approximation theorem for the Kawahara operator and its application in control theory
Control properties of the Kawahara equation are considered when the equation is posed on an unbounded domain. Precisely, the paper's main results are related to an approximation theorem that ensures the exact (internal) controllability in $(0,+\infty)$. Following Rosier (SIAM, 2000), the problem is reduced to proving an approximation theorem, which is achieved thanks to a global Carleman estimate for the Kawahara operator.
Roberto de A. Capistrano Filho, Luan S. de Sousa, Fernando A. Gallego
2023-05-20T04:17:53Z
http://arxiv.org/abs/2305.12089v2
# Approximation theorem for the Kawahara operator and its application in control theory ###### Abstract. Control properties of the Kawahara equation are considered when the equation is posed on an unbounded domain. Precisely, the paper's main results are related to an approximation theorem that ensures the exact (internal) controllability in \((0,+\infty)\). The approximation theorem is achieved thanks to a global Carleman estimate for the Kawahara operator. Key words and phrases: Carleman estimate, approximation theorem, exact controllability, Kawahara equation, unbounded domain 2020 Mathematics Subject Classification: Primary: 35Q53, 93B07, 93B05 Secondary: 37K10 *Corresponding author Capistrano-Filho was supported by CNPq grants numbers 307808/2021-1, 401003/2022-1 and 200386/2022-0, CAPES grants numbers 88881.311964/2018-01 and 88881.520205/2020-01, and MATHAM-SUD 21-MATH-03. Gallego was supported by MATHAMSUD 21-MATH-03 and Hermes Unal project nro 55249. ice sheet [14], gravity waves on the surface of a heavy liquid [10], etc. In the literature, this equation is also referred to as the fifth-order KdV equation [4], or singularly perturbed KdV equation [22]. Some valuable efforts in recent years have focused on analytical and numerical methods for solving (1.1). These methods include the tanh-function method [2], the extended tanh-function method [3], the sine-cosine method [23], the Jacobi elliptic functions method [13], the direct algebraic method [21], decomposition methods [18], as well as the variational iteration and homotopy perturbation methods [15]. Due to these recent advances, other issues in the study of the Kawahara equation arise, for example control problems, which are our motivation. Precisely, we are interested in proving control results for the Kawahara operator in an unbounded domain. It is well known that the first result with a "kind" of controllability for the Kawahara equation \[u_{t}+u_{x}+u_{xxx}-u_{xxxxx}=f(t,x),\quad(t,x)\in\mathbb{R}^{+}\times(0, \infty), \tag{1.2}\] was proposed recently by the authors in [7]. It is important to point out that in [7] the authors were not able to prove that solutions of (1.2) satisfy the exact controllability property \[u(T,x)=u_{T}\quad x\in(0,\infty). \tag{1.3}\] Instead, they showed that solutions of the Kawahara equation satisfy an integral condition. To fill this gap by providing a study of the exact boundary controllability of (1.2) in an unbounded domain, this paper presents what may be seen as a first step in the control theory for the system (1.2) on unbounded domains, since the results proved in [7] cannot recover (1.3). So, our aim in this manuscript is to present an answer to the following question: **Problem \(\mathcal{A}:\)**_Is there a solution to the system (1.2) satisfying (1.3)? Or, equivalently, is the system (1.2) exactly controllable in the unbounded domain \((0,+\infty)\)?_ ### Historical background Stabilization and control problems on bounded domains have been studied in recent years for the Kawahara equation. The first work concerning the stabilization property for the Kawahara equation in a bounded domain \((0,T)\times(0,L)\) is due to Capistrano-Filho _et al._ in [1].
In this article, the authors introduced an internal feedback law and, considering the general nonlinearity \(u^{p}u_{x}\), \(p\in[1,4)\), instead of \(uu_{x}\), showed that under the effect of the damping mechanism the energy associated with the solutions of the system decays exponentially. Concerning internal control problems, we can cite as pioneering works the articles of Zhang and Zhao [24, 25]. In both works the authors considered the Kawahara equation in a periodic domain \(\mathbb{T}\) with a distributed control of the form \[f(t,x)=(Gh)(t,x):=g(x)(h(t,x)-\int_{\mathbb{T}}g(y)h(t,y)dy),\] where \(g\in C^{\infty}(\mathbb{T})\) is supported in \(\omega\subset\mathbb{T}\) and \(h\) is a control input. Still related to internal control issues, Chen [9] presented results for the Kawahara equation posed on a bounded interval with a distributed control \(f(t,x)\) and homogeneous boundary conditions. She showed the result by taking advantage of a Carleman estimate associated with the linear operator of the Kawahara equation with an internal observation. With this in hand, she was able to obtain a null controllability result when \(f\) is effective in a subdomain \(\omega\subset(0,L)\). As the results obtained by Chen in [9] do not answer all the issues of internal controllability, in a recent article [5] the authors closed some gaps left in [9]. Precisely, considering the Kawahara model with an internal control \(f(t,x)\) and homogeneous boundary conditions, the authors showed that the equation under consideration is exactly controllable in \(L^{2}\)-weighted Sobolev spaces and, additionally, that the Kawahara equation is controllable by regions on the \(L^{2}\)-Sobolev space; for details see [5]. Recently, a new tool to obtain control properties for the Kawahara operator was proposed in [6, 7]. First, in [6], the authors showed a new type of controllability for the Kawahara equation, which they called the overdetermination control problem. Precisely, they found a control acting at the boundary that guarantees that the solution of the problem under consideration satisfies an integral condition. In addition, when the control acts internally in the system, instead of at the boundary, the authors proved that this condition is also satisfied. These results answer questions left open in [5] and present a new way to prove boundary and internal controllability results for the Kawahara operator. After that, in [7], the authors extended this idea to the internal control problem for the Kawahara equation on unbounded domains. Precisely, under certain hypotheses on the initial and boundary data, they proved that an internal control input exists such that solutions of the Kawahara equation satisfy an integral overdetermination condition, considering the Kawahara equation posed on the real line, the left half-line, and the right half-line. ### Main results With this background in hand, as mentioned before, our main goal is to answer Problem \(\mathcal{A}\). To do that, we first prove two main results which are the key to establishing the controllability properties of the Kawahara operator on an unbounded domain. Let us introduce some notation. For \(L>0\) and \(T>0\) let \(Q_{T}=\{(x,t)\in(-L,L)\times(0,T)\subset\mathbb{R}^{2}\}\) be a bounded rectangle. From now on, for the sake of brevity, we shall write \(P\) for the operator \[P=\partial_{t}+\partial_{x}+\partial_{x}^{3}-\partial_{x}^{5} \tag{1.4}\] with domain \[\mathcal{D}(P)=L^{2}(0,T;H^{5}(-L,L)\cap H_{0}^{2}(-L,L))\cap H^{1}(0,T;L^{2}( -L,L)).
\tag{1.5}\] Our first result is related to a Carleman estimate for the Kawahara operator. To be precise, for \(f\in L^{2}(0,T;L^{2}(-L,L))\) and \(q_{0}\in L^{2}(-L,L)\), we consider the equation \(Pq=f\), where \(P\) is defined by (1.4) with domain (1.5). So, the first result is devoted to proving a global Carleman estimate. **Theorem 1.1**.: _There exist constants \(s_{0}=s_{0}(L,T)>0\) and \(C=C(L,T)>0\) such that for any \(q\in\mathcal{D}(P)\) and all \(s\geqslant s_{0}\), one has_ \[\begin{split}\int_{0}^{T}\int_{-L}^{L}\left\{(s\varphi)^{9}|q|^{2}+(s\varphi)^{7}|q_{x}|^{2}+(s\varphi)^{5}|q_{xx}|^{2}+(s\varphi)^{3}|q_{xxx}|^{2}+s\varphi|q_{xxxx}|^{2}\right\}e^{-2s\varphi}dxdt\\ \leqslant C\int_{0}^{T}\int_{-L}^{L}|f|^{2}e^{-2s\varphi}dxdt. \end{split} \tag{1.6}\] As a consequence of the previous Carleman estimate, the second main result of the manuscript gives us an approximation theorem, which is the key point to prove the exact controllability for the operator \(P\) posed on an unbounded domain and, in this way, to answer Problem \(\mathcal{A}\). **Theorem 1.2**.: _Let \(n\in\mathbb{N}\backslash\{0,1\}\), and let \(t_{1},t_{2}\) and \(T\) be real numbers such that \(0<t_{1}<t_{2}<T\). Let us consider \(u\in L^{2}((0,T)\times(-n,n))\) such that_ \[Pu=0\quad\text{in}\quad(0,T)\times(-n,n),\] _with \(\operatorname{supp}\ u\subset[t_{1},t_{2}]\times(-n,n)\). Let \(0<\epsilon<\min(t_{1},T-t_{2})\). Then there exists \(v\in L^{2}((0,T)\times(-n-1,n+1))\) satisfying_ \[Pv=0\text{ in }(0,T)\times(-n-1,n+1), \tag{1.7}\] \[\operatorname{supp}\ v\subset[t_{1}-\epsilon,t_{2}+\epsilon]\times(-n-1,n+1), \tag{1.8}\] _and_ \[\|v-u\|_{L^{2}((0,T)\times(-n+1,n-1))}<\epsilon. \tag{1.9}\] Finally, the previous result helps to show the third main result of the manuscript, giving a positive answer to the exact controllability problem. **Theorem 1.3**.: _Let \(T,\epsilon\) and \(s\) be real numbers with \(0<\epsilon<\frac{T}{2}\) and \(s\in\left(-\frac{7}{4},\frac{5}{2}\right)\setminus\left\{\frac{1}{2},\frac{3}{2}\right\}\), and let \(u_{0},u_{T}\in H^{s}(0,+\infty)\). Then there exists a function_ \[u\in L^{2}_{\text{loc}}([0,T]\times(0,+\infty))\cap C([0,\epsilon];H^{s}(0,+\infty))\cap C([T-\epsilon,T];H^{s}(0,+\infty)) \tag{1.10}\] _solution of_ \[\begin{cases}u_{t}+u_{x}+u_{xxx}-u_{xxxxx}=0&\text{ in }\mathcal{D}^{\prime}((0,T)\times(0,+\infty)),\\ u(0,x)=u_{0}&\text{ in }(0,+\infty),\end{cases} \tag{1.11}\] _satisfying \(u(T,x)=u_{T}\) in \((0,+\infty)\)._ ### Final comments and paper's outline The results in this manuscript give a necessary first step toward improving the control theory for the Kawahara operator. Let us comment on this in the following remarks. _Remarks_.: The following remarks are worth mentioning: 1. To our knowledge, our results are the first ones for the Kawahara operator posed on an unbounded domain. 2. Note that the Carleman estimate proved in [9] is local, which differs from the global Carleman estimate shown in Theorem 1.1. 3. This work is the first one to prove an approximation theorem, that is, Theorem 1.2, for the Kawahara operator (1.4). 4. In the context of the Kawahara operator, there is one work [7] which is limited from a control point of view, since its solutions satisfy an integral condition instead of (1.3). Thus, Theorem 1.3 provides progress in the control theory for this operator in an unbounded domain, thanks to the fact that solutions of (1.11) satisfy the exact controllability condition (1.3). 5.
Summarizing, our results are new for the Kawahara operator in the following sense: (1) global Carleman estimates; (2) an approximation theorem; (3) exact controllability in an unbounded domain. The remainder of the paper is organized as follows. In Section 2, we present auxiliary results which are paramount to show the main results of the article. In Section 3, we present the global Carleman estimate, that is, we prove Theorem 1.1. Section 4 is devoted to giving applications of the Carleman estimate; precisely, we prove the approximation theorem, Theorem 1.2. Finally, in Section 5, we answer Problem \(\mathcal{A}\) using the approximation theorem, i.e., we present the proof of Theorem 1.3. ## 2. Preliminaries ### Auxiliary lemma In this subsection, we prove an auxiliary result that will put us in a position to prove the main results of the article. **Lemma 2.1**.: _Let \(l_{1},l_{2},L,t_{1},t_{2}\) and \(T\) be numbers such that \(0<l_{1}<l_{2}<L\) and \(0<t_{1}<t_{2}<T\). Let \(u\in L^{2}((0,T)\times(-l_{2},l_{2}))\) be such that_ \[Pu=0\text{ in }(0,T)\times(-l_{2},l_{2})\quad\text{and}\quad\operatorname{supp} \ u\subset[t_{1},t_{2}]\times(-l_{2},l_{2}). \tag{2.1}\] _Let \(\eta>0\) and \(\delta>0\), with \(2\delta<\min(t_{1},T-t_{2})\), be given. Then there exist \(v_{1},v_{2}\in L^{2}(-L,L)\) and \(v\in L^{2}((0,T)\times(-L,L))\) such that_ \[Pv=0\text{ in }(0,T)\times(-L,L), \tag{2.2}\] \[v(t,\cdot)=S_{L}(t-t_{1}+2\delta)v_{1},\text{ for }t_{1}-2\delta<t<t_{1}-\delta, \tag{2.3}\] \[v(t,\cdot)=S_{L}(t-t_{2}+\delta)v_{2},\text{ for }t_{2}+\delta<t<t_{2}+2\delta \tag{2.4}\] _and_ \[\|v-u\|_{L^{2}((t_{1}-2\delta,t_{2}+2\delta)\times(-l_{1},l_{1}))}<\eta. \tag{2.5}\] Proof.: Recall that \(Q_{T}=(0,T)\times(-L,L)\), that \(P\) is defined by (1.4)-(1.5), and set \(Q_{\delta}=(t_{1}-2\delta,t_{2}+2\delta)\times(-l_{1},l_{1}).\) By a smoothing process via convolution and multiplying the regularized function by a cut-off function of \(x\), we obtain a function \(u^{\prime}\in\mathcal{D}(\mathbb{R}^{2})\) such that \[\begin{cases}\operatorname{supp}\ u^{\prime}\subset[t_{1}-\delta,t_{2}+\delta]\times[-l_{2},l_{2}],\\ Pu^{\prime}=0\text{ in }(0,T)\times(-l_{1},l_{1}),\quad\text{and}\\ \|u^{\prime}-u\|_{L^{2}((0,T)\times(-l_{1},l_{1}))}<\frac{\eta}{2}.\end{cases} \tag{2.6}\] Consider the following set \[\mathcal{E}=\{v\in L^{2}(Q_{T});\exists\ v_{1},v_{2}\in L^{2}(-L,L)\text{ such that }(2.2),(2.3)\text{ and }(2.4)\text{ hold true}\}.\] Note that the lemma is proved if we find \(v\in\mathcal{E}\) such that \[\|v-u^{\prime}\|_{L^{2}(Q_{\delta})}<\frac{\eta}{2}.\] This follows from the trivial inequality \[\|v-u\|_{L^{2}(Q_{\delta})}\leqslant\|v-u^{\prime}\|_{L^{2}(Q_{\delta})}+\|u^{\prime}-u\|_{L^{2}(Q_{\delta})}<\|v-u^{\prime}\|_{L^{2}(Q_{\delta})}+\frac{\eta}{2}.\] So we achieve the proof if we prove that \(u^{\prime}\in\overline{\mathcal{E}}=(\mathcal{E}^{\perp})^{\perp}\), where the closure and the orthogonal complement are taken in the space \(L^{2}(Q_{\delta}).\) For a fixed function \(g\in\mathcal{E}^{\perp}\subset L^{2}(Q_{\delta})\) we should prove that \[(u^{\prime},g)_{L^{2}(Q_{\delta})}=0. \tag{2.7}\] Before presenting the proof of (2.7), we claim the following.
**Claim 1**.: Let \(\mathcal{T}=\{\varphi\in C^{\infty}(\mathbb{R}^{2});\operatorname{supp}\ \varphi\subset[t_{1}-\delta,t_{2}+\delta]\times\mathbb{R}\}.\) So, there exists \(C>0\) such that \[|(\varphi,g)_{L^{2}(Q_{\delta})}|\leqslant C\|P\varphi\|_{L^{2}(Q_{T})}, \tag{2.8}\] for all \(\varphi\in\mathcal{T}\). In fact, pick \(\varphi\in\mathcal{T}\) and define \[\psi(t)=\int_{0}^{t}S_{L}(t-s)P\varphi(s)ds,\] for \(0\leq t\leq T\), that is, \(\psi\) is strong solution of the boundary initial-value problem \[\begin{cases}P\psi=0,&\text{in }Q_{T},\\ \psi(t,-L)=\psi(t,L),\quad\psi_{x}(t,-L)=\psi_{x}(t,L),\quad\psi_{xx}(t,-L)= \psi_{xx}(t,L),&t\in[0,T],\\ \psi_{xxx}(t,-L)=\psi_{xxx}(t,L),\quad\psi_{xxxx}(t,-L)=\psi_{xxxx}(t,L),&t\in[ 0,T],\\ \psi(0,\cdot)=0,&\text{in }[-L,L].\end{cases}\] Thanks to this fact, \(v=\psi-\varphi\in\mathcal{E}\), observe that (2.3) and (2.4) is verified with \(v_{1}=0\) and \(v_{2}=\psi(t_{2}+\delta)\), hence \[(v,g)_{L^{2}(Q_{\delta})}=(\psi-\varphi,g)_{L^{2}(Q_{\delta})}=0.\] On the other hand, we have \[\|\psi(t)\|_{L^{2}(-L,L)}\leq\|P\varphi\|_{L^{1}(0,t;L^{2}(-L,L))}\leq\sqrt{T} \|P\varphi\|_{L^{2}(Q_{T}))},\] for all \(t\in[0,T]\), and therefore \[|(\varphi,g)_{L^{2}(Q_{\delta})}|=|(\psi,g)_{L^{2}(Q_{\delta})}|\leq T\|g\|_{ L^{2}(Q_{\delta})}\|P\varphi\|_{L^{2}(Q_{T})},\] showing Claim 1. We also need the following claim. **Claim 2.** There exists a function \(\omega\in L^{2}(Q_{T})\) such that \[(\varphi,g)_{L^{2}(Q_{\delta})}=(P\varphi,\omega)_{L^{2}(Q_{T})}, \tag{2.9}\] for all \(\varphi\in\mathcal{T}\). Indeed, let \(\mathcal{Z}=\{(P\varphi)\big{|}_{Q};\varphi\in\mathcal{T}\}\) and define the map \(\Lambda:\mathcal{Z}\longrightarrow\mathbb{R}\) by \[\Lambda(\zeta)=(\varphi,g)_{L^{2}(Q_{\delta})}.\] First, note that for any \(\zeta\in\mathcal{Z}\), if \(\zeta=(P\varphi_{1})\big{|}_{Q_{T}}=(P\varphi_{2})\big{|}_{Q_{T}}\), for two functions \(\varphi_{1},\varphi_{2}\in\mathcal{T}\), we have using claim 1 that \(\varphi_{1}-\varphi_{2}\in\mathcal{E}\), hence \((\varphi_{1}-\varphi_{2},g)_{L^{2}(Q_{\delta})}=0\). Thus, \(\Lambda\) is well defined. Consider \(H\) the closure of \(\mathcal{Z}\) in \(L^{2}(Q)\). Due to (2.8), using the Hahn-Banach theorem, we may extend \(\Lambda\) to \(H\) in such way that \(\Lambda\) is a continuous linear form on \(H\). Thus, it follows from Riesz representation theorem that there exists \(\omega\in H\) such that \[\Lambda(\zeta)=(\zeta,\omega)_{L^{2}(Q_{T})},\ \forall\zeta\in H,\] and so (2.9) follows, and the proof of Claim 2 is finished. Finally, let us prove (2.7). To do it, consider the extensions of \(g\) and \(\omega\) in \(\mathbb{R}^{2}\) given by \[\tilde{g}(t,x)=0,\text{ for }(t,x)\in\mathbb{R}^{2}\backslash Q_{\delta}\] and \[\tilde{\omega}(t,x)=0,\text{ for }(t,x)\in\mathbb{R}^{2}\backslash Q_{T},\] receptively. Taking \(\Omega=(t_{1}-\delta,t_{2}-\delta)\times\mathbb{R}\), let \(\varphi\in\mathcal{D}(\Omega)\subset\mathcal{T}\). 
So, we have that \[(\varphi,g)_{L^{2}(Q_{\delta})}=(\varphi,\tilde{g})_{L^{2}(\Omega)}\quad\text{and}\quad(P\varphi,\omega)_{L^{2}(Q_{T})}=(P\varphi,\tilde{\omega})_{L^{2}(\Omega)},\] therefore, using (2.9), we get \[\langle P^{*}(\tilde{\omega}),\varphi\rangle_{\mathcal{D}^{\prime}(\Omega),\mathcal{D}(\Omega)}=\langle\tilde{g},\varphi\rangle_{\mathcal{D}^{\prime}(\Omega),\mathcal{D}(\Omega)},\] so \(P^{*}(\tilde{\omega})=\tilde{g}\) in \(\mathcal{D}^{\prime}(\Omega)\) and \[P^{*}(\tilde{\omega})=0,\text{ for }t_{1}-\delta<t<t_{2}+\delta\text{ and }|x|>l_{1}.\] Since \[\tilde{\omega}(t,x)=0,\text{ for }t_{1}-\delta<t<t_{2}-\delta\text{ and }|x|>L,\] Holmgren's uniqueness theorem (see e.g. [12, Theorem 8.6.8]) ensures that \[\tilde{\omega}(t,x)=0,\text{ for }t_{1}-\delta<t<t_{2}+\delta\text{ and }|x|>l_{1}.\] Lastly, due to (2.9) and (2.6), we conclude that \[(u^{\prime},g)_{L^{2}(Q_{\delta})}=(Pu^{\prime},\omega)_{L^{2}(Q_{T})}=(Pu^{\prime},\omega)_{L^{2}((t_{1}-\delta,t_{2}+\delta)\times(-l_{1},l_{1}))}=0,\] finishing the proof. ### Observability inequality _via_ Ingham inequality Given a family \(\Omega=(\omega_{k})_{k\in K}:=\{\omega_{k}:k\in K\}\) of real numbers, we consider functions of the form \(\sum_{k\in K}c_{k}e^{i\omega_{k}t}\) with square-summable complex coefficients \((c_{k})_{k\in K}:=\{c_{k}:k\in K\}\), and we investigate the relationship between the quantities \[\int_{I}\left|\sum_{k\in K}c_{k}e^{i\omega_{k}t}\right|^{2}\ dt\quad\text{ and}\quad\sum_{k\in K}\left|c_{k}\right|^{2},\] where \(I\) is some given bounded interval. In this work, the following version of the Ingham-type theorem will be used. **Theorem 2.2**.: _Let \(\{\lambda_{k}\}\) be a family of real numbers, satisfying the uniform gap condition_ \[\gamma=\inf_{k\neq n}|\lambda_{k}-\lambda_{n}|>0\] _and set_ \[\gamma^{\prime}=\sup_{A\subset K}\inf_{k,n\in K\setminus A,\,k\neq n}|\lambda_{k}-\lambda_{n}|>0,\] _where \(A\) runs over the finite subsets of \(K\). If \(I\) is a bounded interval of length \(|I|\geqslant\frac{2\pi}{\gamma^{\prime}}\), then there exist positive constants \(A\) and \(B\) such that_ \[A\sum_{k\in K}|c_{k}|^{2}\leqslant\int_{I}|f(t)|^{2}dt\leqslant B\sum_{k\in K}|c_{k}|^{2} \tag{2.10}\] _for all functions given by the sum \(f(t)=\sum_{k\in K}c_{k}e^{i\lambda_{k}t}\) with square-summable complex coefficients \(c_{k}\)._ Proof.: See Theorem 4.6 in [20], page 67. From now on, consider the operator \(A:D(A)\subset L^{2}(-L,L)\longrightarrow L^{2}(-L,L)\), defined by \(A(u)=-u_{x}-u_{xxx}+u_{xxxxx}\), with \[D(A)=\{v\in H^{5}(-L,L);v(-L)=v(L),v_{x}(-L)=v_{x}(L),...,v_{xxxx}(-L)=v_{xxxx}(L)\}.\] In what follows \(S_{L}\) will denote the unitary group in \(L^{2}(-L,L)\) generated by the operator \(A\), by Stone's theorem. With this in hand, pick \(e_{n}=\frac{1}{\sqrt{2L}}e^{in\frac{\pi}{L}x}\) for \(n\in\mathbb{Z}\). So, \(e_{n}\) is an eigenvector for \(A\) associated with the eigenvalue \(\omega_{n}=i\lambda_{n}\), with \[\lambda_{n}=\left(\frac{n\pi}{L}\right)^{5}+\left(\frac{n\pi}{L}\right)^{3}-\frac{n\pi}{L}. \tag{2.11}\] If \(u_{0}\in L^{2}(-L,L)\) is any complex function, we decompose it as \(u_{0}=\sum_{n\in\mathbb{Z}}c_{n}e_{n}\), so that for every \(t\in\mathbb{R}\) \[S_{L}(t)u_{0}=\sum_{n\in\mathbb{Z}}e^{i\lambda_{n}t}c_{n}e_{n}.\] We are now in a position to prove an observability result.
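Before turning to the observability result, the spectral quantities just introduced can be examined numerically. The short sketch below is an illustration added here (it is not part of the original argument, and the value of \(L\) and the truncation range are arbitrary choices): it evaluates the eigenvalues \(\lambda_{n}\) of (2.11) and the gaps \(\lambda_{n+1}-\lambda_{n}\), which grow like \(n^{4}\), so that after discarding finitely many modes the quantity \(\gamma^{\prime}\) of Theorem 2.2 can be taken arbitrarily large.

```python
import numpy as np

L = np.pi                      # half-length of (-L, L); an arbitrary illustrative choice
n = np.arange(1, 21)
k = n * np.pi / L
lam = k**5 + k**3 - k          # eigenvalues of Eq. (2.11); lambda_{-n} = -lambda_n

gaps = np.diff(lam)
print("lambda_n, n = 1..5               :", np.round(lam[:5], 3))
print("gaps lambda_{n+1}-lambda_n, n=1..5:", np.round(gaps[:5], 3))
print("minimal gap over n = 1..19       :", round(gaps.min(), 3))

# The gaps behave like 5 (pi/L)^5 n^4, hence they tend to infinity; this is what
# makes the Ingham-type Theorem 2.2 applicable in the observability proof below.
ratio = gaps[-5:] / (5.0 * (np.pi / L) ** 5 * n[-6:-1] ** 4)
print("gap / (5 (pi/L)^5 n^4), n = 15..19:", np.round(ratio, 3))
```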
**Proposition 2.3**.: _Let \(l,L\) and \(T\) be positive numbers such that \(l<L\). Then there exists a positive constant \(C\) such that for every \(u_{0}\in L^{2}(-L,L)\), denoting \(u=S_{L}(.)u_{0},\) we get_ \[\|u_{0}\|_{L^{2}(-L,L)}\leqslant C\|u\|_{L^{2}((0,T)\times(-l,l))}. \tag{2.12}\] _Therefore,_ \[\|u\|_{L^{2}((0,T)\times(-L,L))}\leqslant\sqrt{T}C\|u\|_{L^{2}((0,T)\times(-l,l))}. \tag{2.13}\] Proof.: Pick \(T^{\prime}\in(0,\frac{T}{2})\) and \(\gamma>\frac{\pi}{T^{\prime}}\). Let \(N\in\mathbb{N}\) be such that \[\lambda_{N}-\lambda_{-N}=2\lambda_{N}\geqslant\gamma\text{ and }(n\in\mathbb{Z},|n|\geqslant N)\Rightarrow\lambda_{n+1}-\lambda_{n}\geqslant\gamma.\] By Ingham's inequality, see Theorem 2.2, there exists a constant \(C_{T^{\prime}}>0\) such that for every sequence \((a_{n})_{n\in\mathbb{Z}}\) of complex numbers with \(a_{n}=0\) for all \(|n|<N\), the following inequality holds \[\sum_{|n|\geqslant N}|a_{n}|^{2}\leqslant C_{T^{\prime}}\int_{0}^{2T^{\prime}}\biggl{|}\sum_{|n|\geqslant N}a_{n}e^{i\lambda_{n}t}\biggr{|}^{2}dt. \tag{2.14}\] Let \(\mathcal{Z}_{n}=Span(e_{n})\) for \(n\in\mathbb{Z}\) and \(\mathcal{Z}=\oplus_{n\in\mathbb{Z}}\mathcal{Z}_{n}\subset L^{2}(-L,L).\) Let us now define the following seminorm \(p\) in \(\mathcal{Z}\) by \[p(u)=\biggl{(}\int_{-l}^{l}|u(x)|^{2}dx\biggr{)}^{\frac{1}{2}},\ \forall u\in\mathcal{Z}.\] In this case, \(p\) is a norm in each \(\mathcal{Z}_{n}.\) On the other hand, if \(u_{0}\in\mathcal{Z}\cap(\oplus_{|n|<N}\mathcal{Z}_{n})^{\perp},\) we can rewrite \(u_{0}\) in the following way \[u_{0}=\sum_{|n|\geqslant N}c_{n}e_{n},\] with \(c_{n}=0\) for \(|n|\) large enough. Thus, applying (2.14) with \(a_{n}=\frac{c_{n}}{\sqrt{2L}}e^{i(\lambda_{n}T^{\prime}+n\frac{\pi}{L}x)}\) and integrating in \((-l,l)\) we get \[2l\sum_{|n|\geqslant N}\frac{|c_{n}|^{2}}{2L}\leqslant C_{T^{\prime}}\int_{-l}^{l}\int_{0}^{2T^{\prime}}\biggl{|}\sum_{|n|\geqslant N}e^{i\lambda_{n}t}c_{n}e_{n}(x)\biggr{|}^{2}dtdx.\] Therefore, Fubini's theorem ensures that \[\|u_{0}\|_{L^{2}(-L,L)}^{2}\leqslant\frac{L}{l}C_{T^{\prime}}\int_{0}^{2T^{\prime}}p(S_{L}(t)u_{0})^{2}dt.\] Finally, for \(u_{0}\in L^{2}(-L,L),\) we have \[\int_{0}^{2T^{\prime}}p(S_{L}(t)u_{0})^{2}dt\leqslant\|S_{L}(.)u_{0}\|_{L^{2}((0,2T^{\prime})\times(-L,L))}^{2}=2T^{\prime}\|u_{0}\|_{L^{2}(-L,L)}^{2}.\] Thanks to the fact that \(2T^{\prime}<T\), it follows from [19, Theorem 5.2] that there exists a positive constant, still denoted by \(C\), such that (2.12) is verified for all \(u_{0}\in\mathcal{Z}\); the general case, that is, for all \(u_{0}\in L^{2}(-L,L),\) follows by a density argument, showing the result. ## 3. Global Carleman estimate Consider \(T>0\) and \(L>0\) to be positive numbers. Pick any function \(\psi\in C^{8}[-L,L]\) with \[\psi>0\text{ in }[-L,L];\quad\psi^{\prime}(-L)>0;\quad\psi^{\prime}(L)>0,\quad\psi^{\prime\prime}<0\quad\text{and}\quad|\psi^{\prime}|>0\text{ in }[-L,L].
\tag{3.1}\] Let \(u=e^{-s\varphi}q\) and \(\omega=e^{-s\varphi}P(e^{s\varphi}u).\) Straightforward computations show that \[\omega=L_{1}(u)+L_{2}(u), \tag{3.2}\] with \[L_{1}(u) =Au+C_{1}u_{xx}+Eu_{4x},\] \[L_{2}(u) =Bu_{x}+C_{2}u_{xx}+Du_{xxx}+u_{t}-u_{5x}.\] Here \[A= s(\varphi_{t}+\varphi_{x}+\varphi_{xxx}-\varphi_{5x})-s^{2}(10 \varphi_{xx}\varphi_{xxx}-3\varphi_{x}\varphi_{xx}+5\varphi_{x}\varphi_{4x})\] \[-s^{3}(15\varphi_{x}\varphi_{xx}^{2}+10\varphi_{x}^{2}\varphi_{ xxx}-\varphi_{x}^{3})-s^{4}10\varphi_{x}^{3}\varphi_{xx}-s^{5}\varphi_{x}^{5},\] \[B= +s(3\varphi_{xx}-5\varphi_{4x})-s^{2}(15\varphi_{xx}^{2}+20 \varphi_{x}\varphi_{xxx}-3\varphi_{x}^{2})-s^{3}30\varphi_{x}^{2}\varphi_{xx }-s^{4}5\varphi_{x}^{4},\] \[C_{1}= s(3\varphi_{x}-10\varphi_{xxx})-s^{3}10\varphi_{x}^{3}\] \[C_{2}= C_{2}=-s^{2}30\varphi_{x}\varphi_{xx}\] \[D= -s10\varphi_{xx}-s^{2}10\varphi_{x}^{2},\] \[E= -s5\varphi_{x}.\] On the other hand \(\left\|\omega\right\|^{2}=\left\|L_{1}(u)\right\|^{2}+\left\|L_{2}(u)\right\| ^{2}+2\left(L_{1}(u),L_{2}(u)\right)\) where \[(u,v)=\int_{0}^{T}\int_{-L}^{L}uv\text{ d}x\text{ d}t\] and \(\left\|\omega\right\|^{2}=(\omega,\omega)\). With this in hand, we can prove a global Carleman estimate for the Kawahara equation \[\begin{cases}u_{t}+u_{x}+u_{xxx}-u_{xxxxx}=0&(x,t)\in Q_{T},\\ u\left(-L,t\right)=u\left(L,t\right)=u_{x}\left(-L,t\right)=u_{x}\left(L,t \right)=u_{xx}\left(L,t\right)=0&t\in\left(0,T\right),\\ u\left(x,0\right)=u_{0}\left(x\right)&x\in\left(0,L\right).\end{cases}\] We cite to the reader that the well-posedness theory for this system can be found in [1]. ### Proof of Theorem 1.1 We split the proof in two steps. The first one provides an exact computation of the inner product \((L_{1}(u),L_{2}(u))\), whereas the second step gives the estimates obtained thanks to the pseudoconvexity conditions (3.1). **Step 1.** Exact computation of the scalar product \(2(L_{1}(u),L_{2}(u))\). First, let us compute the following \[\int_{0}^{T}\int_{0}^{L}(Au+C_{1}u_{xx}+Eu_{xxxx})L_{2}(u)dxdt=:J_{1}+J_{2}+J_ {3}\] To do that, observe that \(u\) belongs to \(\mathcal{D}(P)\), thus, we infer by integrating by parts, that \[\begin{split} J_{1}=&-\frac{1}{2}\int_{0}^{T}\int_{-L}^{ L}[A_{t}-A_{5x}-(AC_{2})_{xx}+(AB)_{x}+(AD)_{xxx}]u^{2}dxdt\\ &-\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[5A_{xxx}-3(AD)_{x}+2(AC_{2 })]u_{x}^{2}dxdt\\ &+\frac{5}{2}\int_{0}^{T}\int_{-L}^{L}A_{x}u_{xx}^{2}dxdt,\end{split} \tag{3.3}\] \[\begin{split} J_{2}=&\int_{0}^{T}\int_{-L}^{L}C_{1}u_{ xx}[Bu_{x}+C_{2}u_{xx}+Du_{xxx}-u_{xxxxx}]dxdt\\ &+\int_{0}^{T}\int_{-L}^{L}C_{1}u_{xx}u_{t}dxdt:=I_{1}+I_{2}. \end{split} \tag{3.4}\] and \[\begin{split} J_{3}=&\int_{0}^{T}\int_{-L}^{L}Eu_{ xxxx}[Bu_{x}+C_{2}u_{xx}+Du_{xxx}-u_{xxxxx}]dxdt\\ &+\int_{0}^{T}\int_{-L}^{L}Eu_{xxxx}u_{t}dxdt:=I_{3}+I_{4}.\end{split} \tag{3.5}\] Let us now treat \(I_{i}\), for \(i=1,2,3,4\). Note that \(I_{1}\) is equivalent to \[\begin{split} I_{1}=&-\frac{1}{2}\int_{0}^{T}\int_{- L}^{L}(C_{1}B)_{x}u_{x}^{2}dxdt-\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[(C_{1}D)_{x}-2(C_ {1}C_{2})-C_{1xxx}]u_{xx}^{2}dxdt\\ &-\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}3C_{1x}u_{xxx}^{2}dxdt. 
\end{split} \tag{3.6}\] By other hand, by the definition of \(\omega\), see (3.2), for \(I_{2}\) we have that \[\begin{split} I_{2}=&-\frac{1}{2}\int_{0}^{T}\int_{- L}^{L}(AC_{1x})_{x}u^{2}dxdt-\int_{0}^{T}\int_{-L}^{L}(C_{1x})u_{xxx}^{2}dxdt\\ &-\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[-2(BC_{1x})+(CC_{1x})_{x} -(DC_{1x})_{xx}\\ &+(EC_{1x})_{xxx}-C_{1t}+(C_{1})_{xxxxx}]u_{x}^{2}dxdt\\ &-\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[(2DC_{1x})-3(EC_{1x})_{x} -4C_{1xxx}]u_{xx}^{2}dxdt\\ &-\int_{0}^{T}\int_{-L}^{L}C_{1x}u_{x}\omega dxdt,\end{split} \tag{3.7}\] where we have used that \(u\) belongs to \(\mathcal{D}(P)\) and \(u_{|t=0}=u_{|t=T}=0\). Now, using the same strategy as before, that is, integration by parts, \(u\) belongs to \(\mathcal{D}(P)\) and \(u_{|t=0}=u_{|t=T}=0\) ensures that \[\begin{split} I_{3}=&-\frac{1}{2}\int_{0}^{T}\int_{- L}^{L}(EB)_{xxx}u_{x}^{2}dxdt+\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[3(EB)_{x}+(EC_{2 })_{xx}]u_{xx}^{2}dxdt\\ &-\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[(ED)_{x}+2(EC_{2})]u_{xxx} ^{2}dxdt+\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}E_{x}u_{xxxx}^{2}dxdt,\end{split} \tag{3.8}\] and \[\begin{split} I_{4}=&\int_{0}^{T}\int_{-L}^{L}E_{xx}u_{t }u_{xx}dxdt-2\int_{0}^{T}\int_{-L}^{L}(E_{x}u_{xx})_{x}u_{t}dxdt+\frac{1}{2}\int_ {0}^{T}\int_{-L}^{L}E\frac{d}{dt}u_{xx}^{2}dxdt\\ =&-\int_{0}^{T}\int_{-L}^{L}E_{xx}u_{xx}u_{t}dxdt-2 \int_{0}^{T}\int_{-L}^{L}E_{x}u_{xxx}u_{t}dxdt-\frac{1}{2}\int_{0}^{T}\int_{-L}^ {L}E_{t}u_{xx}^{2}dxdt\\ =&-\int_{0}^{T}\int_{-L}^{L}[E_{xx}u_{xx}+2E_{x}u_{xxx }]u_{t}dxdt-\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}E_{t}u_{xx}^{2}dxdt=:I_{5}+I_{ 6}.\end{split} \tag{3.9}\] Note that \(I_{5}\) can be seen as \[\begin{split} I_{5}=&\frac{1}{2}\int_{0}^{T}\int_{- L}^{L}[(E_{xx}A)_{xx}-2(E_{x}A)_{xxx}]u^{2}dxdt\\ &+\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[-2(E_{xx}A)-(BE_{xx})_{x} +6(E_{x}A)_{x}+2(E_{x}B)_{xx}]\,u_{x}^{2}dxdt\\ &+\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[2(E_{xx}C)-(E_{xx}D)_{x} +(E_{xx}E)_{xx}+E_{xxxxx}-4(E_{x}B)-2(CE_{x})_{x}]u_{xx}^{2}dxdt\\ &+\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[-2(E_{xxx}E)-7E_{xxx}+4(E _{x}D)-2(EE_{x})_{x}]u_{xxx}^{2}dxdt\\ &+\int_{0}^{T}\int_{-L}^{L}E_{x}u_{xxxx}^{2}dxdt-\int_{0}^{T}\int _{-L}^{L}(E_{xx}u_{xx}+2E_{x}u_{xxx})\omega dxdt,\end{split}\] thanks to (3.2). 
So, putting the previous equality into (3.9) we get, \[\begin{split} I_{4}&=\frac{1}{2}\int_{0}^{T}\int_{- L}^{L}[(E_{xx}A)_{xx}-2(E_{x}A)_{xxx}]u^{2}dxdt\\ &+\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[-2(E_{xx}A)-(BE_{xx})_{x}+ (E_{x}A)_{x}+2(E_{x}B)_{xx}]\,u_{x}^{2}dxdt\\ &+\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[2(E_{xx}C)-(E_{xx}D)_{x}+ (E_{xx}E)_{xx}+E_{xxxxx}-4(E_{x}B)-E_{t}-2(CE_{x})_{x}]u_{xx}^{2}dxdt\\ &+\frac{1}{2}\int_{0}^{T}\int_{-L}^{L}[-2(E_{xxx}E)-7E_{xxx}+4(E _{x}D)-2(EE_{x})_{x}]u_{xxx}^{2}dxdt\\ &+\int_{0}^{T}\int_{-L}^{L}E_{x}u_{xxxx}^{2}dxdt-\int_{0}^{T}\int _{-L}^{L}(E_{xx}u_{xx}+2E_{x}u_{xxx})\omega dxdt.\end{split} \tag{3.10}\] Putting together (3.6) and (3.7) in (3.4), (3.8) and (3.10) into (3.5), and adding the result quantities with (3.5), we have that the scalar product \(2(L_{1}(u),L_{2}(u))\) is given by \[\begin{split} 2\int_{0}^{T}\int_{-L}^{L}L_{1}(u)L_{2}(u)dxdt=& -\int_{0}^{T}\int_{-L}^{L}(E_{xx}u_{xx}+2E_{x}u_{xxx})\omega dxdt\\ &-2\int_{0}^{T}\int_{-L}^{L}(\omega C_{1x})u_{x}dxdt+\int_{0}^{T} \int_{-L}^{L}Mu^{2}dxdt\\ &+\int_{0}^{T}\int_{-L}^{L}Nu_{x}^{2}dxdt+\int_{0}^{T}\int_{-L}^{ L}Ou_{xx}^{2}dxdt\\ &+\int_{0}^{T}\int_{-L}^{L}Ru_{xxx}^{2}dxdt+\int_{0}^{T}\int_{-L} ^{L}Su_{4x}^{2}dxdt,\end{split} \tag{3.11}\] where \[M= -(AB)_{x}-A_{t}+A_{5x}+(AC_{2})_{xx}-(AD)_{xxx}-(AC_{1x})_{x}+(E_{xx}A )_{xx}-2(E_{x}A)_{xxx}\] \[N= \ 3(AD)_{x}-2(AC_{2})-(C_{1}B)_{x}+(BC_{1x})+C_{1t}-(CC_{1x})_{x}+(DC _{1x})_{xx}-5A_{xxx}\] \[-(EC_{1x})_{xxx}-C_{15x}-(EB)_{xxx}-2(E_{xx}A)-(BE_{xx})_{x}+6(E_{x }A)_{x}+2(E_{x}B)_{xx}\] \[O= \ 5A_{x}-(C_{1}D)_{x}-2(DC_{1x})+3(EB)_{x}+2(C_{1}C_{2})-4(E_{x}B )+5C_{1xxx}+3(EC_{1x})_{x}\] \[+2(E_{xx}C)+(EC_{2})_{xx}-(E_{xx}D)_{x}+(E_{xx}E)_{xx}+E_{5x}-E_{t }-2(CE_{x})_{x}\] \[R= -5C_{1x}-(ED)_{x}+4(E_{x}D)-2(EC_{2})-2(E_{xxx}E)-7E_{xxx}-2(EE_{x })_{x}\] \[S= \ 3E_{x}\] Now, note that \[2\int_{0}^{T}\int_{-L}^{L}L_{1}(u)L_{2}(u)dxdt\leq\int_{0}^{T}\int_{-L}^{L} \left(L_{1}(u)+L_{2}(u)\right)^{2}dxdt\leq\int_{0}^{T}\int_{-L}^{L}\omega^{2} dxdt, \tag{3.12}\] we have due to (3.11) that \[\int_{0}^{T}\int_{-L}^{L}Mu^{2}dxdt+\int_{0}^{T}\int_{-L}^{L}Nu_{ x}^{2}dxdt+\int_{0}^{T}\int_{-L}^{L}Ou_{xx}^{2}dxdt+\int_{0}^{T}\int_{-L}^{L}Ru_{xxx}^{2}dxdt\] \[+\int_{0}^{T}\int_{-L}^{L}Su_{xxxx}^{2}dxdt-2\int_{0}^{T}\int_{-L }^{L}(\omega C_{1x})u_{x}dxdt-\int_{0}^{T}\int_{-L}^{L}(E_{xx}u_{xx}+2E_{x}u_{ xxx})\omega dxdt\] \[\leq\int_{0}^{T}\int_{-L}^{L}\omega^{2}dxdt. \tag{3.13}\] Let us put each common term of the previous inequality together. 
To do that, note that using Young inequality, for \(\epsilon\in(0,1)\) we get \[2\int_{0}^{T}\int_{-L}^{L}(\omega C_{1x})u_{x}dxdt= \ 2\int_{0}^{T}\int_{-L}^{L}\left(\epsilon^{\frac{1}{2}}C_{1x}u_{x} \right)\left(\epsilon^{-\frac{1}{2}}\omega\right)dxdt\] \[\leq \ \epsilon\int_{0}^{T}\int_{-L}^{L}C_{1x}^{2}u_{x}^{2}dxdt+ \epsilon^{-1}\int_{0}^{T}\int_{-L}^{L}\omega^{2}dxdt.\] In an analogous way, \[\int_{0}^{T}\int_{-L}^{L}(E_{xx}u_{xx}+2E_{x}u_{xxx})\omega dxdt \leq \ \frac{\epsilon}{2}\int_{0}^{T}\int_{-L}^{L}E_{xx}^{2}u_{xx}^{2}dxdt+ \epsilon\int_{0}^{T}\int_{-L}^{L}E_{x}^{2}u_{xxx}^{2}dxdt\] \[+\frac{3}{2}\epsilon^{-1}\int_{0}^{T}\int_{-L}^{L}\omega^{2}dxdt.\] So, we have that \[-\epsilon\int_{0}^{T}\int_{-L}^{L}C_{1x}^{2}u_{x}^{2}dxdt-\epsilon^{-1}\int_{ 0}^{T}\int_{-L}^{L}\omega^{2}dxdt\leq-2\int_{0}^{T}\int_{-L}^{L}(\omega C_{1x} )u_{x}dxdt \tag{3.14}\] and \[-\frac{\epsilon}{2}\int_{0}^{T}\int_{-L}^{L}E_{xx}^{2}u_{xx}^{2} dxdt-\epsilon\int_{0}^{T}\int_{-L}^{L}E_{x}^{2}u_{xxx}^{2}dxdt-\frac{3}{2} \epsilon^{-1}\int_{0}^{T}\int_{-L}^{L}\omega^{2}dxdt\] \[\leq -\int_{0}^{T}\int_{-L}^{L}(E_{xx}u_{xx}+2E_{x}u_{xxx})\omega dxdt. \tag{3.15}\] Replacing (3.14) and (3.15) into (3.13) yields that \[\begin{split}&\int_{0}^{T}\int_{-L}^{L}Mu^{2}dxdt+\int_{0}^{T} \int_{-L}^{L}\left(N-\epsilon C_{1x}^{2}\right)u_{x}^{2}dxdt+\int_{0}^{T}\int_{ -L}^{L}\left(O-\frac{\epsilon}{2}E_{xx}^{2}\right)u_{xx}^{2}dxdt\\ &+\int_{0}^{T}\int_{-L}^{L}\left(R-\epsilon E_{x}^{2}\right)u_{xxx }^{2}dxdt+\int_{0}^{T}\int_{-L}^{L}Su_{xxxx}^{2}dxdt\leq\left(1+\frac{5}{2} \epsilon^{-1}\right)\int_{0}^{T}\int_{-L}^{L}\omega^{2}dxdt.\end{split} \tag{3.16}\] **Step 2.** Estimation of each term of the left hand side of (3.16). The estimates are given in a series of claims. **Claim 1.** There exist some constants \(s_{1}>0\) and \(C_{1}>1\) such that for all \(s\geq s_{1}\), we have \[\int_{0}^{T}\int_{-L}^{L}Mu^{2}dxdt\geq C_{1}^{-1}\int_{0}^{T}\int_{-L}^{L} \left(s\varphi\right)^{9}u^{2}dxdt.\] Observe that \[M= -\left(AB\right)_{x}+\frac{O\left(s^{8}\right)}{t^{8}(T-t)^{8}}=-45s^{9 }\varphi_{x}^{8}\varphi_{xx}+\frac{O\left(s^{8}\right)}{t^{8}(T-t)^{8}}=-45s^{ 9}\frac{\left(\psi^{\prime}\right)^{8}\psi^{\prime\prime}}{t^{9}(T-t)^{9}}+ \frac{O\left(s^{8}\right)}{t^{8}(T-t)^{8}}\] We infer from (3.1) that for some \(k_{1}>0\) and all \(s>0\), large enough, we have \[M\geq k_{1}\frac{s^{9}}{t^{9}(T-t)^{9}}\] Claim 1 follows then for all \(s>s_{1}\), with \(s_{1}\) large enough and some \(C_{1}>1\). **Claim 2.** There exist some constants \(s_{2}>0\) and \(C_{2}>1\) such that for all \(s\geq s_{2}\), we have \[\int_{0}^{T}\int_{-L}^{L}\left(N-\epsilon C_{1x}^{2}\right)u_{x}^{2}dxdt\geq C _{2}^{-1}\int_{0}^{T}\int_{-L}^{L}\left(s\varphi\right)^{7}u_{x}^{2}dxdt.\] Noting that \[\begin{split} N-\epsilon C_{1x}^{2}=& 3(AD)_{x}-2(AC_{2})-(C_{1}B)_{x}+(BC_{1x})+ \frac{O\left(s^{6}\right)}{t^{6}(T-t)^{6}}\\ =&-50s^{7}\varphi_{x}^{6}\varphi_{xx}+\frac{O\left(s ^{6}\right)}{t^{6}(T-t)^{6}}=-50s^{7}\frac{\left(\psi^{\prime}\right)^{6}\psi ^{\prime\prime}}{t^{7}(T-t)^{7}}+\frac{O\left(s^{6}\right)}{t^{6}(T-t)^{6}}, \end{split}\] and using again that (3.1) holds, we get for some \(k_{2}>0\) and all \(s>0\), large enough, that \[N-\epsilon C_{1x}^{2}\geq k_{2}\frac{s^{7}}{t^{7}(T-t)^{7}}\] and Claim 2 follows then for all \(s>s_{2}\), with \(s_{2}\) large enough and some \(C_{2}>1\). 
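The leading powers of \(s\) invoked in Claims 1 and 2 can be cross-checked symbolically. The snippet below is only a verification aid (it keeps just the top-order-in-\(s\) part of each coefficient defined in Section 3, with \(\varphi\) treated as a generic smooth function of \(x\)); it reproduces the factors \(-45\) and \(-50\) appearing above.

```python
import sympy as sp

x, s = sp.symbols('x s')
phi = sp.Function('phi')(x)            # Carleman weight; only x-derivatives matter here
dx = lambda f: sp.diff(f, x)

# Top-order-in-s parts of the coefficients A, B, C1, C2, D of Section 3
# (the dropped lower-order terms only feed the O(s^8) and O(s^6) remainders).
A  = -s**5 * dx(phi)**5
B  = -5  * s**4 * dx(phi)**4
C1 = -10 * s**3 * dx(phi)**3
C2 = -30 * s**2 * dx(phi) * sp.diff(phi, x, 2)
D  = -10 * s**2 * dx(phi)**2

# Claim 1: the s^9 part of M = -(AB)_x equals -45 s^9 phi_x^8 phi_xx.
M9 = sp.expand(-dx(A * B))
print(sp.simplify(M9 + 45 * s**9 * dx(phi)**8 * sp.diff(phi, x, 2)))   # prints 0

# Claim 2: the s^7 part of 3(AD)_x - 2 A C2 - (C1 B)_x + B C1_x equals -50 s^7 phi_x^6 phi_xx.
N7 = sp.expand(3 * dx(A * D) - 2 * A * C2 - dx(C1 * B) + B * dx(C1))
print(sp.simplify(N7 + 50 * s**7 * dx(phi)**6 * sp.diff(phi, x, 2)))   # prints 0
```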
**Claim 3.** There exist some constants \(s_{3}>0\) and \(C_{3}>1\) such that for all \(s\geq s_{3}\), we have \[\int_{0}^{T}\int_{-L}^{L}\left(O-\frac{\epsilon}{2}E_{xx}^{2}\right)u_{xx}^{2} dxdt\geq C_{3}^{-1}\int_{0}^{T}\int_{-L}^{L}(s\varphi)^{5}u_{xx}^{2}dxdt.\] First, see that \[\begin{split} O-\frac{\epsilon}{2}E_{xx}^{2}&=5A_{x}- (C_{1}D)_{x}-2(DC_{1x})+3(EB)_{x}+2(C_{1}C_{2})-4(E_{x}B)+\frac{O\left(s^{4} \right)}{t^{4}(T-t)^{4}}\\ &=-250s^{5}\varphi_{x}^{4}\varphi_{xx}+\frac{O\left(s^{4}\right)} {t^{4}(T-t)^{4}}=-250s^{5}\frac{\left(\psi^{\prime}\right)^{4}\psi^{\prime \prime}}{t^{5}(T-t)^{5}}+\frac{O\left(s^{4}\right)}{t^{4}(T-t)^{4}}.\end{split}\] Next, using (3.1) we have that for some \(k_{3}>0\) and all \(s>0\), large enough, \[O-\frac{\epsilon}{2}E_{xx}^{2}\geqslant k_{3}\frac{s^{5}}{t^{5}(T-t)^{5}}\] is verified, so Claim 3 holds true for all \(s>s_{3}\), with \(s_{3}\) large enough and some \(C_{3}>1\). **Claim 4.** There exist some constants \(s_{4}>0\) and \(C_{4}>1\) such that for all \(s\geqslant s_{4}\), we have \[\int_{0}^{T}\int_{-L}^{L}\left(R-\epsilon E_{x}^{2}\right)u_{xxx}^{2}dxdt \geqslant C_{4}^{-1}\int_{0}^{T}\int_{-L}^{L}(s\varphi)^{3}u_{xxx}^{2}dxdt.\] As the previous Claims, thanks to (3.1) and \[R-\epsilon E_{x}^{2}= -5C_{1x}-(ED)_{x}+4(E_{x}D)-2(EC_{2})+\frac{O\left(s^{2}\right)}{ t^{2}(T-t)^{2}}\] \[= -100s^{3}\varphi_{x}^{2}\varphi_{xx}+\frac{O\left(s^{2}\right)}{ t^{2}(T-t)^{2}}=-100s^{3}\frac{(\psi^{\prime})^{2}\psi^{\prime\prime}}{t^{3}(T-t)^ {3}}+\frac{O\left(s^{2}\right)}{t^{2}(T-t)^{2}},\] we can find some constant \(k_{4}>0\) and all \(s>0\), large enough, such that \[R-\epsilon E_{x}^{2}\geqslant k_{4}\frac{s^{3}}{t^{3}(T-t)^{3}}\] follows and Claim 4 is verified for all \(s>s_{4}\), with \(s_{4}\) large enough and some \(C_{4}>1\). **Claim 5.** There exist some constants \(s_{5}>0\) and \(C_{5}>1\) such that for all \(s\geqslant s_{4}\), we have \[\int_{0}^{T}\int_{-L}^{L}Su_{xxxx}^{2}dxdt\geqslant C_{5}^{-1}\int_{0}^{T} \int_{-L}^{L}(s\varphi)u_{xxxx}^{2}dxdt.\] This is also a direct consequence of the fact that \(S=-s5\varphi_{xx}\) and (3.1) holds. Therefore, Claim 5 is verified. We infer from Steps 1 and 2, that for some positive constants \(s_{0}\), \(C\), and all \(s\geqslant s_{0}\), we have \[\int_{0}^{T}\int_{-L}^{L}\left\{(s\varphi)^{9}|u|^{2}+(s\varphi)^{7}|u_{x}|^{ 2}+(s\varphi)^{5}|u_{xx}|^{2}+(s\varphi)^{3}|u_{xxx}|^{2}+s\varphi|u_{xxxx}|^{2 }\right\}dxdt\] \[\leqslant C\int_{0}^{T}\int_{-L}^{L}|\omega|^{2}dxdt.\] Replacing \(u\) by \(e^{-s\varpi}q\) yields (1.6). ## 4. Approximation Theorem This section is devoted to presenting an application of the Carleman estimate shown in Section 3 for the Kawahara operator \(P\) defined by (1.4)-(1.5). First, we prove a result which is the key to proving the approximation Theorem 1.2. We have the following as a consequence of the Theorem 1.1. 
**Proposition 4.1**.: _For \(L>0\) and \(f=f(t,x)\) a function in \(L^{2}(\mathbb{R}\times(-L,L))\) with \(\operatorname{supp}\ f\subset([t_{1},t_{2}]\times(-L,L))\), where \(-\infty<t_{1}<t_{2}<\infty,\) we have that for every \(\epsilon>0\) there exist a positive number \(C=C(L,t_{1},t_{2},\epsilon)\) (\(C\) does not depend on \(f\)) and a function \(v\in L^{2}(\mathbb{R}\times(-L,L))\) such that_ \[\begin{cases}v_{t}+v_{x}+v_{xxx}-v_{xxxxx}=f\text{ in }\mathcal{D}^{\prime}( \mathbb{R}\times(-L,L)),\\ \operatorname{supp}\ v\subset[t_{1}-\epsilon,t_{2}-\epsilon]\times(-L,L)\end{cases}\] _and_ \[\|v\|_{L^{2}(\mathbb{R}\times(-L,L))}\leqslant C\|f\|_{L^{2}(\mathbb{R}\times(-L,L ))}.\] Proof.: By a change of variable, if necessary, and without loss of generality, we may assume that \(0=t_{1}-\epsilon<t_{1}<t_{2}<t_{2}-\epsilon=T\). Thanks to the Calerman estimate (1.6), we have that \[\int_{0}^{T}\int_{-L}^{L}|q|^{2}e^{-\frac{k}{t(T-t)}}dxdt\leqslant C_{1}\int_{ 0}^{T}\int_{-L}^{L}|P(q)|^{2}dxdt, \tag{4.1}\] for some \(k>0\), \(C_{1}>0\) and any \(q\in\mathcal{Z}\). Here, the operator \(P\) is defined by (1.4). Therefore, we have that \(F:\mathcal{Z}\times\mathcal{Z}\longrightarrow\mathbb{R}\) defined by \[F(p,q)=\int_{0}^{T}\int_{-L}^{L}P(p)P(q)dxdt\] is a scalar product in \(\mathcal{Z}\). Now, let us consider \(H\) the completion of \(\mathcal{Z}\) for \((\cdot,\cdot)\). Note that \(|q|^{2}e^{-\frac{k}{t(T-t)}}\) is integrable on \(Q_{T}\) if \(q\in H\) and (4.1) holds true. By the other hand, we claim that \(T:H\longrightarrow\mathbb{R}\) defined by \[T(q)=-\int_{0}^{T}\int_{-L}^{L}f(t,x)q(x)dxdt,\] is well-defined on \(H\). In fact, due the hypotheses, that is, \(\text{supp }\,f\subset([t_{1},t_{2}]\times(-L,L))\), and thanks to Holder inequality and the relation (4.1), we have \[\int_{0}^{T}\int_{-L}^{L}|f(t,x)q(x)|dxdt\leqslant\int_{t_{1}}^{t_{2}}\int_{- L}^{L}|f(t,x)q(x)|dxdt\leqslant C\|f(t,x)\|_{L^{2}((t_{1},t_{2})\times(-L,L))}(q,q)^ {\frac{1}{2}}, \tag{4.2}\] for some constant positive \(C\). Thus, it follows from the Riesz representation theorem that there exists a unique \(u\in H\) such that \[F(u,q)=T(q),\ \forall q\in H. \tag{4.3}\] Pick \(v:=P(u)\in L^{2}((0,T)\times(-L,L))\), so have that \[\langle P^{*}(v),q\rangle= \langle v,P(q)\rangle=\int_{0}^{T}\int_{-L}^{L}vP(q)dxdt=\int_{0} ^{T}\int_{-L}^{L}P(u)P(q)dxdt\] \[= F(u,q)=T(q)=-\int_{0}^{T}\int_{-L}^{L}fqdxdt=\langle-f,q\rangle,\] where \(\langle\cdot,\cdot\rangle\) denotes the duality pairing \(\langle\cdot,\cdot\rangle_{\mathcal{D}^{\prime}(Q_{T});\mathcal{D}(Q_{T})}\) and \(P^{*}=-P\), hence \[Pv=f\ \text{in}\ \mathcal{D}^{\prime}(Q_{T}).\] Finally, observe that \(v\in H^{1}((0,T);H^{-5}(-L,L))\), since we have \[v_{t}=f+v_{xxxxx}-v_{xxx}-v_{x}\in L^{2}(0,T;H^{-5}(-L,L)),\] thus \(v(0,\cdot)\) and \(v(T,\cdot)\) make sense in \(H^{-5}(-L,L)\). Now, let \(q\in H^{1}(0,T;H^{5}_{0}(-L,L))\), follows by (4.3) that \[-\int_{0}^{T}\int_{-L}^{L}fqdxdt=-\int_{0}^{T}\int_{-L}^{L}fqdxdt+\langle v(t,x),q(t,x)\rangle\bigg{|}_{t=0}^{T},\] where \(\langle\cdot,\cdot\rangle\) denotes the duality pairing \(\langle\cdot,\cdot\rangle_{H^{-5}(-L,L);H^{5}_{0}(-L,L)}\). Since \(q|_{t=0}\) and \(q|_{t=T}\) are arbitrarily in \(\mathcal{D}(-L,L)\), we infer that \(v(T,\cdot)=v(0,\cdot)=0\) in \(H^{-5}(-L,L)\). Therefore, the result follows extending \(v\) by setting \(v(t,x)=0\) for \((t,x)\notin Q_{T}\) Now, we are in a position to prove Theorem 1.2. Proof of Theorem 1.2.: Pick \(\eta>0\), to be chosen later. 
Thanks to the Lemma 2.1, applied for \(L=n+1,\ l_{1}=n-1,\ l_{2}=n,\ 2\delta=\frac{\epsilon}{2}\), there exists \(\tilde{v}\in L^{2}((0,T)\times(-n-1,n+1))\) such that \[P\tilde{v}=0\ \text{in}\ (0,T)\times(-n-1,n+1).\] \[\tilde{v}(t,.)=S_{n+1}(t-t_{1}+\frac{\epsilon}{2})v_{1},\ \text{for}\ t_{1}- \frac{\epsilon}{2}<t<t_{1}-\frac{\epsilon}{4} \tag{4.4}\] and \[\tilde{v}(t,.)=S_{n+1}(t-t_{2}-\frac{\epsilon}{4})v_{2},\ \text{for}\ t_{2}+ \frac{\epsilon}{4}<t<t_{2}+\frac{\epsilon}{2}, \tag{4.5}\] for some \((v_{1},v_{2})\in L^{2}((t_{1}-\frac{\epsilon}{2},t_{2}+\frac{\epsilon}{2}) \times(-n+1,n-1))^{2}\) and \[\|\tilde{v}-u\|_{L^{2}((t_{1}-\frac{\epsilon}{2},t_{2}+\frac{\epsilon}{2}) \times(-n+1,n-1))}<\eta.\] So that (1.8) be fulfilled, we multiply \(\tilde{v}\) by a cut-off function. Now on, consider \(\varphi\in\mathcal{D}(0,T)\) be such that \(0\leqslant\varphi\leqslant 1,\ \varphi(t)=1\), for all \(t\in[t_{1}-\frac{\epsilon}{4},t_{2}+\frac{\epsilon}{4}]\) and \(\text{supp}\ \varphi\subset[t_{1}-\frac{\epsilon}{2},t_{2}+\frac{\epsilon}{2}]\). Picking \(\overline{v}(t,x)=\varphi(t)\tilde{v}(t,x)\), we get \[\text{supp}\ \overline{v}\subset[t_{1}-\frac{\epsilon}{2},t_{2}+\frac{ \epsilon}{2}]\times(-n-1,n+1).\] Therefore, \[\|\overline{v}-u\|_{L^{2}((0,T)\times(-n+1,n-1))}\leqslant \|\tilde{v}-u\|_{L^{2}((t_{1}-\frac{\epsilon}{2},t_{2}+\frac{ \epsilon}{2})\times(-n+1,n-1))}\] \[+\|(\varphi-1)\tilde{v}\|_{L^{2}((t_{1}-\frac{\epsilon}{2},t_{2}+ \frac{\epsilon}{2})\times(-n+1,n-1))}.\] Since \(\text{supp}\ \ u\subset[t_{1},t_{2}]\times(-n,n)\) and \(\varphi(t)=1\), for \(t_{1}-\frac{\epsilon}{4}\leqslant t\leqslant t_{2}+\frac{\epsilon}{4}\), we have \[\|(\varphi-1)\tilde{v}\|_{L^{2}((t_{1}-\frac{\epsilon}{2},t_{2}+ \frac{\epsilon}{2})\times(-n+1,n-1))}^{2}\leqslant \|\tilde{v}\|_{L^{2}(([t_{1}-\frac{\epsilon}{2},t_{1}-\frac{\epsilon}{ 4})\cup(t_{2}+\frac{\epsilon}{4},t_{2}+\frac{\epsilon}{2})]\times(-n+1,n-1))}^ {2} \tag{4.6}\] \[= \|\tilde{v}-u\|_{L^{2}(([t_{1}-\frac{\epsilon}{2},t_{1}-\frac{ \epsilon}{4})\cup(t_{2}+\frac{\epsilon}{4},t_{2}+\frac{\epsilon}{2}))\times(-n +1,n-1))}^{2}\] \[\leqslant \|\tilde{v}-u\|_{L^{2}(([t_{1}-\frac{\epsilon}{2},t_{2}+\frac{ \epsilon}{2})\times(-n+1,n-1))}^{2}\] \[\leqslant \eta^{2}.\] Hence, \[\|\overline{v}-u\|_{L^{2}((0,T)\times(-n+1,n-1))}\leqslant 2\eta, \tag{4.7}\] where we have used the fact that \(\text{supp}\ \ u\subset[t_{1},t_{2}]\times(-n,n)\). Finally, \[P\overline{v}=\frac{d\varphi}{dt}\tilde{v}\quad\text{in}\quad(0,T)\times(-n-1, n+1)\] so \[\|P\overline{v}\|_{L^{2}((0,T)\times(-n-1,n+1))}^{2}\leqslant\|\frac{d\varphi}{dt}\| _{L^{\infty}(0,T)}^{2}\|\tilde{v}\|_{L^{2}((t_{1}-\frac{\epsilon}{2},t_{1}- \frac{\epsilon}{4})\cup(t_{2}+\frac{\epsilon}{4},t_{2}+\frac{\epsilon}{2})) \times(-n-1,n+1))}^{2}\] thanks to the fact that \(\varphi(t)=1\) in \([t_{1}-\frac{\epsilon}{4},t_{1}+\frac{\epsilon}{4}]\). 
On the other hand, since (4.4) and (4.5) holds, we infer by the observability result, that is, by Lemma 2.3, that there exists a constant \(C=C(n,\epsilon)>0\) such that \[\|\tilde{v}\|_{L^{2}((t_{1}-\frac{\epsilon}{2},t_{1}-\frac{\epsilon}{4}) \times(-n-1,n+1))}\leqslant C\|\tilde{v}\|_{L^{2}((t_{1}-\frac{\epsilon}{2},t_ {1}-\frac{\epsilon}{4})\times(-n+1,n-1))}\] and also \[\|\tilde{v}\|_{L^{2}((t_{2}+\frac{\epsilon}{4},t_{2}+\frac{\epsilon}{2}) \times(-n-1,n+1))}\leqslant C\|\tilde{v}\|_{L^{2}((t_{2}+\frac{\epsilon}{4},t_ {1}+\frac{\epsilon}{2})\times(-n+1,n-1))},\] or equivalently, \[\|\tilde{v}\|_{L^{2}((t_{1}-\frac{\epsilon}{2},t_{1}-\frac{\epsilon}{4})\cup(t _{2}+\frac{\epsilon}{4},t_{2}+\frac{\epsilon}{2}))\times(-n-1,n+1))}\leqslant C \|\tilde{v}\|_{L^{2}((t_{1}-\frac{\epsilon}{2},t_{1}-\frac{\epsilon}{4})\cup(t _{2}+\frac{\epsilon}{4},t_{2}+\frac{\epsilon}{2}))\times(-n+1,n-1))}.\] Thus, combining the last inequality with (4.6) yields that \[\|P\overline{v}\|_{L^{2}((0,T)\times(-n-1,n+1))}\leq C\big{\|}\frac{d\varphi}{dt} \big{\|}_{L^{\infty}(0,T)}\eta \tag{4.8}\] Now, to finish the proof, we use Proposition 4.1, to ensure the existence of a constant \(C=C^{\prime}(n,t_{1},t_{2},\epsilon)>0\) and a function \(\omega\in L^{2}((0,T)\times(-n-1,n+1))\) such that \[\begin{cases}P\omega=P\overline{v}\text{ in }(0,T)\times(-n-1,n+1),\\ \operatorname{supp}\ \omega\subset[t_{1}-\epsilon,t_{2}+\epsilon]\times(-n-1,n+1), \end{cases} \tag{4.9}\] and \[\|\omega\|_{L^{2}((0,T)\times(-n-1,n+1))}\leq C^{\prime}\|P\overline{v}\|_{L^ {2}((0,T)\times(-n-1,n+1))}. \tag{4.10}\] Consequently, setting \(v=\overline{v}-\omega\) we get (1.7) and (1.8) by using (4.9). Moreover, thanks to (4.7), (4.8) and (4.10), we get that \[\|v-u\|_{L^{2}((0,T)\times(-n+1,n-1))}\leq\big{(}2+CC^{\prime}\big{\|}\frac{d \varphi}{dt}\big{\|}_{L^{\infty}(0,T)}\big{)}\eta. \tag{4.11}\] Now, choosing \(\eta\) small enough, we have shown (1.9) and so the result is shown. Finally, as a consequence of Theorem 1.2, we prove the next result that gives us information to prove the third main result of the article in the next section. **Corollary 4.2**.: _Let \(t_{1}\), \(t_{2}\), \(T\) real numbers such that \(0<t_{1}<t_{2}<T\) and \(f=f(t,x)\) be a function in \(L^{2}_{loc}(\mathbb{R}^{2})\) such that_ \[\operatorname{supp}\ f\subset[t_{1},t_{2}]\times\mathbb{R}. \tag{4.12}\] _Let \(\epsilon\in(0,min(t_{1},T-t_{2}))\), then there exists \(u\in L^{2}_{loc}(\mathbb{R}^{2})\) such that_ \[\omega_{t}+\omega_{x}+\omega_{xxx}-\omega_{xxxxx}=f\text{ in }\mathcal{D}^{ \prime}(\mathbb{R}^{2}) \tag{4.13}\] _and_ \[\operatorname{supp}\ \omega\subset[t_{1}-\epsilon,t_{2}+\epsilon]\times \mathbb{R}. \tag{4.14}\] Proof.: Consider two sequences of number denoted by \(\{t_{1}^{n}\}_{n\geqslant 2}\) and \(\{t_{2}^{n}\}_{n\geqslant 2}\) such that for all \(n\geqslant 2\) we have \[t_{1}-\epsilon<t_{1}^{n+1}<t_{1}^{n}<t_{1}<t_{2}<t_{2}^{n}<t_{2}^{n+1}<t_{2}+\epsilon. \tag{4.15}\] We construct by induction over \(n\) a sequence \(\{u_{n}\}_{n\geqslant 2}\) of function such that, for every \(n\geqslant 2\) \[\begin{cases}u_{n}\in L^{2}((0,T)\times(-n,n)),\\ \operatorname{supp}\ u_{n}\subset[t_{1}^{n},t_{2}^{n}]\times(-n,n),\\ Pu_{n}=f\text{ in }(0,T)\times(-n,n),\end{cases} \tag{4.16}\] and, if \(n>2\) \[\|\tilde{u}_{n}-u_{n-1}\|_{L^{2}((0,T)\times(-n+2,n-2))}<\frac{1}{2^{n}}. \tag{4.17}\] Here, \(u_{2}\) is given by Proposition 4.1. Now on, let us assume, for \(n\geqslant 2\), that \(u_{2},\cdots,u_{n}\) satisfies (4.16) and (4.17). 
By Proposition 4.1, there exists \(\omega\in L^{2}((0,T)\times(-n-1,n+1))\) such that \[\operatorname{supp}\ \omega\subset[t_{1}^{2},t_{2}^{2}]\times(-n-1,n+1)\] and \[P\omega=f\text{ in }(0,T)\times(-n-1,n+1).\] Note that \(P(u_{n}-\omega)=0\) in \((0,T)\times(-n,n)\) and \[\operatorname{supp}\ (u_{n}-\omega)\subset[t_{1}^{n},t_{2}^{n}]\times(-n,n)\] with \(t_{1}^{n+1}<t_{1}^{n}<t_{2}^{n}<t_{2}^{n+1}\). So, using Theorem 1.2, there exists a function \(v\in L^{2}((0,T)\times(-n-1,n+1))\) such that \[\operatorname{supp}\ v\subset[t_{1}^{n+1},t_{2}^{n+1}]\times(-n-1,n+1),\ \ Pv=0\ \text{in}\ (0,T)\times(-n-1,n+1)\] and \[\|v-(u_{n}-\omega)\|_{L^{2}((0,T)\times(-n+1,n-1))}<\frac{1}{2^{n-1}}.\] Thus, picking \(u_{n+1}=v+\omega\), we get that \(u_{n+1}\) satisfies (4.16) and (4.17). Extending the sequence \(\{u_{n}\}_{n\geqslant 2}\) by \(u_{n}(t,x)=0\) for \((t,x)\in\mathbb{R}^{2}\backslash(0,T)\times(-n,n)\), we deduce, thanks to (4.17), that \[\{u_{n}\}_{n\geqslant 2}\to u\quad\text{in}\quad L^{2}_{loc}(\mathbb{R}^{2})\] with \[\operatorname{supp}\ u\subset[t_{1}-\epsilon,t_{2}+\epsilon]\times\mathbb{R},\] due to (4.15). Additionally, \(Pu=f\) in \(\mathbb{R}^{2}\) by the third equation of (4.16). Thus, the proof is finished. ## 5. Approximation Theorem applied to a control problem In this section, we present a direct application of the approximation Theorem 1.2, which ensures the proof of Theorem 1.3. ### Proof of Theorem 1.3 As is well known (see [8]), there exist \(u_{1}\) and \(u_{2}\) in the class \(C(0,T;H^{s}(0,+\infty))\), for \(s\in\left(-\frac{7}{4},\frac{5}{2}\right)\backslash\left\{\frac{1}{2},\frac{3}{2}\right\}\), solutions of (without specification of the boundary conditions) \[\begin{cases}u_{1t}+u_{1x}+u_{1xxx}-u_{1xxxxx}=0&\text{in}\ (0,T)\times(0,+\infty),\\ u_{1}(0,x)=u_{0}&\text{in}\ (0,+\infty)\end{cases}\] and \[\begin{cases}u_{2t}+u_{2x}+u_{2xxx}-u_{2xxxxx}=0&\text{in}\ (0,T)\times(0,+\infty),\\ u_{2}(0,x)=u_{T}&\text{in}\ (0,+\infty),\end{cases}\] respectively, for \(s\in\left(-\frac{7}{4},\frac{5}{2}\right)\). Now, consider \(\tilde{u}_{2}(t,x)=u_{2}(t-T,x)\). We have that \(P\tilde{u}_{2}=0\) in \([0,T]\times(0,+\infty)\). Now, pick any \(\epsilon^{\prime}\in(\epsilon,\frac{T}{2})\) and consider the function \(\varphi\in C^{\infty}(0,T)\) defined by \[\varphi(t)=\begin{cases}1,&\text{if}\ t\in[0,\epsilon^{\prime}]\\ 0,&\text{if}\ t\in[T-\epsilon^{\prime},T].\end{cases} \tag{5.1}\] Note that the change of variable \[u(t,x)=\varphi(t)u_{1}(t,x)+(1-\varphi(t))\tilde{u}_{2}(t,x)+\omega(t,x),\] transforms (1.11) into \[\begin{cases}\omega_{t}+\omega_{x}+\omega_{xxx}-\omega_{xxxxx}=\frac{d}{dt}\varphi(\tilde{u}_{2}-u_{1})&\text{in}\ \mathcal{D}^{\prime}((0,T)\times(0,+\infty)),\\ \omega(0,x)=\omega(T,x)=0&\text{in}\ (0,+\infty).\end{cases}\] The proof is finished by taking into account Corollary 4.2 with \(f=\frac{d\varphi}{dt}(\tilde{u}_{2}-u_{1})\). **Acknowledgments:** This work was done while the first author was visiting Virginia Tech. The author thanks the host institution for their warm hospitality.
2304.14909
Resilient-Economic Coordinated Robust Operation of Integrated Electric-Gas Systems
Interactions between power and gas systems, which are both large and complex, have been gradually intensified during the last decades, predominantly due to the propagation of large fleets of natural gas-fired power units (GPUs) and the technological developments of power-to-gas (P2G) facilities. These interactions not only bring significant economic benefits to society but also provide additional operating flexibilities, which are essential to handle fluctuations of the large-scale renewable power generation (RPG) and power system contingencies. Moreover, neglecting these interactions in power system operation may not only result in infeasible operation status in the gas systems but also increase the decision-making operation costs of both systems. Previous studies suffered from two significant drawbacks, namely (1) they assumed the existence of only one utility that has full control authority over the power system and gas system; (2) the economic interactions between power systems and gas systems have been neglected, which goes against the current industrial practice. This research revisits the day-ahead resilient and economic operations of power systems considering the economic and physical interactions with gas systems, which are characterized by the modeling of bilateral energy purchase contracts and operational constraints of gas systems. This thesis provides a novel perspective and solution for the resilient and economically coordinated robust operation of integrated electric-gas systems (IEGSs) under uncertainties. The proposed robust scheduling decision frameworks are practically compatible with the existing industrial operations of the IEGSs.
Ahmed Rabee Sayed, Cheng Wang, Tianshu Bi
2023-04-28T15:32:28Z
http://arxiv.org/abs/2304.14909v2
# North China Electric Power University
## Declaration of Authorship

I, Ahmed Rabee Kamel Sayed, declare that this thesis titled, "Resilient-Economic Coordinated Robust Operation of Integrated Electric-Gas Systems", and the work presented in it are my own. I confirm that:

* This work was done wholly or mainly while in candidature for a research degree at North China Electric Power University.
* Where any part of this thesis has previously been submitted for a degree or any other qualification at North China Electric Power University or any other institution, this has been clearly stated.
* Where I have consulted the published work of others, this is always clearly attributed.
* Where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work.
* I have acknowledged all main sources of help.
* Where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself.

Signed:

Date:

###### Abstract

Interactions between power and gas systems, which are both large and complex, have intensified over the last decades, predominantly due to the proliferation of large fleets of natural gas-fired power units (GPUs) and the technological development of power-to-gas (P2G) facilities. These interactions not only bring significant economic benefits to society but also provide additional operating flexibility, which is essential for handling fluctuations of large-scale renewable power generation (RPG) and power system contingencies. Moreover, neglecting these interactions in power system operation may not only lead to infeasible operating states in the gas systems but also increase the operating costs of both systems. Previous studies suffered from two major drawbacks: (1) they assumed the existence of a single utility with full control authority over both the power system and the gas system; and (2) they neglected the economic interactions between power systems and gas systems, which goes against current industrial practice. This research revisits the day-ahead resilient and economic operation of power systems considering the economic and physical interactions with gas systems, characterized by the modeling of bilateral energy purchase contracts and the operational constraints of gas systems, respectively. The main work of the thesis is as follows (a generic sketch of the two-stage robust structure shared by these models is given after the list):

1. Propose a tri-level resilient operational framework to optimize the operational performance of power systems under the worst-case \(N-k\) contingencies. The proposed model considers gas contracts with gas systems, where firm gas supply contracts and gas reserve contracts are formulated in the pre- and post-contingency stages, respectively.
2. With emerging P2G facilities able to absorb surplus RPG outputs, bidirectional gas contracts become inevitable. A two-stage robust model of the energy management problem for power distribution networks (PDNs) is proposed. Under the current gas contracting mechanism, flexible real-time contracts may still be signed in practice for reserved GPU outputs that are utilized with low probability. To balance the robustness and the conservativeness of the operation strategy, a two-stage distributionally robust contracting model is proposed.
3. Propose a robust operational equilibrium solution method for the interactive markets of power and gas systems, where the bidirectional interactions include energy contracts, and the impacts of wind generation uncertainty on the two markets are characterized. To guarantee the robustness of the market equilibrium against uncertainties, the power and gas market-clearing models become two-stage robust ones.
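The robust models in contributions 1-3 share a common mathematical skeleton. As a schematic illustration only (the notation here is generic and is not the exact formulation developed in the later chapters), let \(x\) denote the day-ahead (pre-contingency) decisions, \(u\) the uncertainty realization (an \(N-k\) line-outage vector or a wind-output deviation), and \(y\) the real-time (post-contingency) recourse. The two-stage robust operation problem then takes the min-max-min form

\[
\min_{x\in\mathcal{X}}\ c^{\top}x \;+\; \max_{u\in\mathcal{U}}\ \min_{y\in\mathcal{Y}(x,u)}\ d^{\top}y ,
\]

where \(\mathcal{X}\) collects the day-ahead commitment and gas-contracting constraints, \(\mathcal{U}\) is the uncertainty set (line outages limited by the budget \(k\), or wind deviations limited by the budgets \(\Gamma^{e}\) and \(\Gamma^{t}\)), and \(\mathcal{Y}(x,u)\) contains the power- and gas-flow feasibility constraints of the recourse stage. In the tri-level resilient model the middle level acts as an attacker selecting the worst-case contingency, while in the distributionally robust contracting model the inner worst case is taken over an ambiguity set of wind-output distributions rather than a single deterministic scenario.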
In brief, this thesis provides a novel perspective and solution for the resilient and economic coordinated robust operation of integrated electric-gas systems (IEGSs) under uncertainties. The proposed robust scheduling frameworks are practically compatible with the existing industrial operation of IEGSs. They are expected to be employed in the operation of IEGSs with large-scale integration of RPG, extreme weather and operating failures, to provide optimal energy management that improves the resilient and economic operation of IEGSs, and to realize the secure and economic operation of the integrated systems against uncertainties.

**Keywords:** Economic Dispatch, Resilient Dispatch, Integrated Electric-Gas Systems, Uncertainties, Energy Management, Energy Market
## Table of Contents

* Abstract
* List of Figures
* List of Tables
* Nomenclature
* List of Publications
* 1 Introduction
  * 1.1 Research Background & Motivations
  * 1.2 Literature Review
  * 1.3 Research Objectives and Challenges
  * 1.4 Contributions and Publications
  * 1.5 Thesis Outline
* 2 Modeling the Integrated Electric-Gas Systems
  * 2.1 Natural Gas System Modeling
    * 2.1.1 Physical Structure of Gas System
    * 2.1.2 Dynamic-State Gas Flow Model
    * 2.1.3 Steady-State Gas Flow Model
  * 2.2 Electric Power Systems Modeling
    * 2.2.1 Optimal Power Flow
    * 2.2.2 AC Optimal Power Flow Model
    * 2.2.3 DC Optimal Power Flow Model
  * 2.3 Interdependent Power and Gas Systems
    * 2.3.1 Similarities and Differences
    * 2.3.2 Coupling Components
    * 2.3.3 Coordination Strategies
  * 3.1 Control of Gas Systems
    * 3.1.2 Dynamic Programming Techniques
    * 3.1.3 Linear Programming Techniques
    * 3.1.4 Convex Relaxation Techniques
  * 3.2 Mathematical Formulations
    * 3.2.1 Transmission-level IEGS Model
    * 3.2.2 Distribution-level IEGS Model
  * 3.3 Optimal Power-Gas Flow Calculation for IEGSs
    * 3.3.1 Piecewise Linear Approximation Method
    * 3.3.2 Gas Flow Correction Method
    * 3.3.3 Sequential-MISOCP Algorithm
  * 3.4 Simulation Results
    * 3.4.1 Case Studies with Transmission-level IEGS
    * 3.4.2 Case Studies with Distribution-level IEGS
  * 3.5 Conclusions and Discussions
    * 5.4.1 Test Systems Description
    * 5.4.2 Comparison with the IPS Model
    * 5.4.3 Comparison Between the One-stage Contracting and IEGS Models
    * 5.4.4 Impacts of the Penalty Coefficients
    * 5.4.5 Performance of the S-MISOCP Algorithm in RO Model
    * 5.4.6 Scalability Tests of the Procedure with RO Models
    * 5.4.7 Comparison with SO and RO Models
    * 5.4.8 Comparison Between the Two-stage Contracting and IEGS Models
    * 5.4.9 Comparison Between Two-stage and One-stage Contracting Mechanisms
    * 5.4.10 Scalability Tests of the Procedure with DRO Models
  * 5.5 Conclusions and Discussions
* 6 Robust Operational Equilibrium for Coupled Electricity and Gas Markets
  * 6.1 Introduction
  * 6.2 Mathematical Formulation
    * 6.2.1 Pool-based Market Mechanism
    * 6.2.2 Bilateral Energy and Reserve Contracting
    * 6.2.3 Robust Clearing Model of the Electricity Market
    * 6.2.4 Robust Clearing Model of the Gas Market
  * 6.3 Solution Methodology
    * 6.3.1 Clearing the Electricity Market with Uncertainties
    * 6.3.2 Clearing the Gas Market with Uncertainties
    * 6.3.3 Seeking the Operational Equilibrium
  * 6.4 Simulation Results
    * 6.4.1 Base-case Analysis
    * 6.4.2 Effectiveness of Modeling the Gas Dynamics
    * 6.4.3 Comparison with Deterministic Market Clearing Models
    * 6.4.4 Comparison with the Centralized Clearing Model
    * 6.4.5 Computational Efficiency Analysis
  * 6.5 Conclusions and Discussions
* 7 Conclusions and Future Works
  * 7.1 Conclusions and Discussions
  * 7.2 Future Work Guidelines
    * 7.2.1 Modeling the Integrated Electric-gas Systems
    * 7.2.2 Solving the Integrated Electric-gas Systems
* A Reference Formulations
  * A.1 Nonlinear Gas Compressor Model
  * A.2 Formulation of the Exact Gas System Dynamics
  * A.3 Unit Commitment Problem
  * A.4 Bus Injection Power Flow Model
  * A.5 Exact Separation Approach for GM-I-MP
  * A.6 Incremental Piecewise Linear Approximation Model
* B Energy Test Systems
  * B.1 Power Transmission Systems
    * B.1.1 PJM-5Bus Power Transmission System
    * B.1.2 IEEE-39Bus Power Transmission System
    * B.1.3 IEEE-118Bus Power Transmission System
  * B.2 Power Distribution Networks
    * B.2.1 IEEE-13Bus Power Distribution Network
    * B.2.2 IEEE-123Bus Power Distribution Network
  * B.3 Gas Systems
    * B.3.1 7Nodes Gas System
    * B.3.2 8Nodes Gas System
    * B.3.3 20Nodes Gas System

## List of Figures

* 1.1 Electricity generation in the US from selected fuels. Source: Annual Energy Outlook 2020, http://www.eia.gov
* 1.2 World gas system consumption for OECD (left) and non-OECD (right) countries. Source: Annual Energy Outlook 2020, http://www.eia.gov
* 1.3 World gas system production by gas types. Source: https://www.eia.gov
* 1.4 Projected growth rate in the electricity generation in China. Source: Annual Energy Outlook 2020, http://www.eia.gov
* 1.5 Thesis structure.
* 2.1 Gas System Topology
* 2.2 Schematic geology for the four types of gas resources
* 2.3 Discretization of the PDEs, indicating the terminal and average values of pressures and gas flow.
* 2.4 Optimization and control procedures for power system planning, operation and market
* 3.1 Topology of the test system
* 3.2 Production scheduling of gas wells in both the dynamic- and steady-state conditions
* 3.3 The test system topology.
* 3.4 Maximum \(RCV\) and penalty values for case 1.5GL+1.5PL.
* 3.5 Energy production schedules obtained by the MISOCP relaxation method and the S-MISOCP algorithm.
* 4.1 Operational layout in the pre- and post-contingency stages for the IPS model, IEGS model and the proposed GC model.
* 4.2 Illustration example of minimum output capacity constraint violation.
* 4.3 A simple block diagram of a GPU.
* 4.4 The three levels of the proposed model.
* 4.5 The NC&CG algorithm layout.
* 4.6 Topology of the test system
* 4.7 Pre- and post-contingency costs of the IEGS and proposed GC models for 15 cases.
* 4.8 Generator output adjustment before and after the contingency.
* 4.9 Generator output and the required firm gas in the pre-contingency stage for the IPS model and the proposed GC model for the Case "D1A2" with gas load stress #2.
* 4.10 Physical violations in the IPS model for Case "D1A2" and gas stress #2; (a) gas pressure of all nodes (except node 4) and the boundaries, (b) gas pressure at node 4 and the boundaries, (c) gas production from well 1 and the capacities, (d) gas production from well 2 and the capacities, (e) inlet/outlet pressures of the compressor.
* 4.11 Economic performance of the proposed model with and without considering over-generation for different cases.
* 4.12 Time schedules in the pre-contingency stage for normal and resilient dispatch for the Case "D1A2" in **TS-I**.
* 4.13 Tracing the operational, regulation, and total costs during the inner and outer iterations for case "D3A2" in **TS-I**.
* 4.14 Middle- and upper-level computational times for **TS-I**; (a) Case "D1A1", (b) Case "D1A2", (c) Case "D3A2", (d) Case "D1A4".
* 5.1 Schematic layout of the IPS, the IEGS, and the proposed models.
* 5.2 Flowchart of the proposed quadruple-loop algorithm.
* 5.3 Decision-making process for the PSO.
* 5.4 The schematic diagram of the overall solution procedure.
* 5.5 Topology of the test system.
* 5.6 The test system topology.
* 5.7 Sequences of solving problems and algorithm iterations in the proposed quadruple-loop algorithm for the large test system with wind uncertainty budget being 4.
* 5.8 (a) RC obtained by the inner and outer C&CG and the MRCV of **F2** by S-MISOCP; (b)-(d) RC obtained by the inner C&CG (LB and UB) and the MRCV of **F1** by S-MISOCP for outer iterations (1)-(3), respectively.
* 6.1 Market mechanism for the coupled power and gas systems.
* 6.2 The proposed procedure for the interdependent market mechanism.
* 6.3 The topology of **TS-I**.
* 6.4 Bone Map for energy prices at equilibrium: (a) LMEP ($/MWh); (b) LMFGP ($/kSm\(^{3}\)h); (c) Upward LMRGP ($/kSm\(^{3}\)h); (d) Downward LMRGP ($/kSm\(^{3}\)h)
* 6.5 The performance of the BRD algorithm.
* A.1 Breakpoints of the PLA model used to linearize the nodal pressure at node 1 of the 7Nodes gas system
* B.1 Topology of PJM-5Bus Power Transmission System
* B.2 Topology of IEEE-39Bus Power Transmission System
* B.3 Topology of IEEE-118Bus Power Transmission System
* B.4 Topology of IEEE-13Bus Power Distribution System
* B.5 Topology of IEEE-123Bus Power Distribution System
* B.6 Topology of 7Nodes Gas System
* B.7 Topology of 8Nodes Gas System

## List of Tables

* 1.1 The proposed models and contributions in the thesis
* 2.1 Typical values of gas system parameters
* 2.2 Electricity-Gas analogy
* 3.1 The effect of segments number and breakpoints selection on the Weymouth error and CPU time
* 3.2 GFC method effectiveness under different stress levels on IEGS
* 3.3 Comparison between MISOCP and MILP models under stress levels on IEGS
* 3.4 S-MISOCP algorithm parameters
* 5.7 Comparison with the RO and SO based models.
* 5.8 Comparison between the proposed two-stage contracting model and the IEGS model
* 5.9 Comparison between the proposed two-stage and one-stage contracting mechanisms.
* 5.10 Computation times for the large-scale test system with different confidence levels.
* 5.11 Computational performance of the S-MISOCP algorithm under different parameters with \(\beta=0.95\) and \(S=1000\)
* 6.1 Operating costs and energy prices at equilibrium with different gas system models
* 6.2 Operational equilibria under different wind penetration levels
* 6.3 Economic comparisons between the independent and central market operations under different loading levels.
* B.7 Load Portion - PJM-5Bus System
* B.8 Adjustment costs for non-GPUs and Efficiencies of GPUs - IEEE-39Bus System
* B.29 Parameters of Wind Power Generation 3 [Pmin = 0, Pmax = 0.5MW, Bus 35] - IEEE-123Bus System
* B.37 Connection Lines Between the PJM-5Bus System and 7Nodes Gas System
* B.43 Connection Lines Between IEEE-13Bus System and 8Nodes Gas System
* B.44 G2P Gas Contracts Between IEEE-13Bus System and 8Nodes Gas System
* B.45 P2G Gas Contracts Between IEEE-13Bus System and 8Nodes Gas System
* B.48 Connection Lines Between IEEE-123Bus System and 20Nodes Gas System
* B.50 G2P Gas Contracts Between IEEE-123Bus System and 20Nodes Gas System
* B.53 P2G Gas Contracts Between IEEE-123Bus System and 20Nodes Gas System
* B.54 Connection Lines Between IEEE-118Bus System and 20Nodes Gas System

## Acknowledgments

## Nomenclature

Most of the symbols and notations are listed below as a quick reference. Sets and parameters appear in capital letters, and variables and indices appear in small letters. Other terms are defined where they first appear. The only difference between the decision variables in the day-ahead and real-time (pre- and post-contingency) stages is the hats (zeros) above the symbols. To eliminate any duplication, these explanations are not listed twice; one can refer to the _Decision Variables_ part by adding the hats (zeros).

_A. Sets and indices_

* \(c\in\mathcal{C}\): Gas compressors.
* \(s\in\mathcal{S}\): Gas storages.
* \(d\in\mathcal{D}_{p}/\mathcal{D}_{g}\): Electricity/Gas demands.
* \(h\in\mathcal{H}\): Gas-to-power contracts.
* \(i,o\in\mathcal{I}\): Gas network nodes.
* \(w\in\mathcal{W}\): Gas wells or gas sources.
* \(l\in\mathcal{L}\): Power transmission or distribution lines.
* \(n,m\in\mathcal{N}\): Power transmission buses or distribution nodes.
* \(p\in\mathcal{P}\): Gas passive pipelines; the bidirectional set is \(\mathcal{P}^{\pm}\).
* \(r\in 1...R\): Iteration index of the C&CG Algorithm.
* \(t\in\mathcal{T}\): Time periods.
* \(u\in\mathcal{U}_{g}/\mathcal{U}_{n}\): Gas-fired power units (GPUs) / Non-GPUs, \(\mathcal{U}=\mathcal{U}_{g}\cup\mathcal{U}_{n}\).
* \(z\in\mathcal{Z}\): P2G facilities.
* \(e\in\mathcal{E}\): Wind farms.
* \(j\in\mathcal{J}\): Power-to-gas contracts.

_B. Parameters_

* \(C_{u}(.)\): Quadratic cost function of power units.
* \(\mu_{h}\): Day-ahead or pre-contingency gas prices for firm gas.
* \(\mu_{h}^{+}/\mu_{h}^{-}\): Day-ahead or pre-contingency gas prices for reserved gas.
* \(\mu_{h}^{2+}/\mu_{h}^{2-}\): Real-time gas prices for reserved gas.
* \(C_{j}^{+}/C_{j}^{-}\): Penalties of P2G contract avoidance.
* \(C_{j}^{2-}/C_{j}^{2+}\): Penalties/revenue of P2G production adjustments in two-stage contracting.
* \(C_{j}\): Day-ahead gas prices for P2G gas contract.
* \(C_{w}\): Cost of gas production from wells.
* \(C_{w}^{+}/C_{w}^{-}\): Real-time up/down reserve production costs.
* \(C_{u}^{+}/C_{u}^{-}\): Real-time regulation costs for non-GPUs.
* \(C_{d}/C_{n}\): Penalties of non-served electric demands \(d\)/connected to bus \(n\).
* \(C_{e}\): Penalties of wind power curtailment.
* \(C_{i}^{u}/C_{i}^{y}\): Attacker/Defense costs.
* \(C_{h}\): Penalties of unserved gas demands in G2P contracts.
* \(\beta_{n}\): Electricity prices at power buses.
* \(C_{i}^{+}/C_{i}^{-}\): Prices of upward/downward gas reserves provided by end-users.
* \(\overline{C}_{i}\): Penalties of unbalanced gas nodes.
* \(C_{i}\): Penalties of gas load shedding.
* \(\overline{P}_{u}/\underline{P}_{u}\): Upper/lower active power limits of power units.
* \(\overline{Q}_{u}/\underline{Q}_{u}\): Upper/lower reactive power limits of power units.
* \(\overline{P}_{z}/\underline{P}_{z}\): Upper/lower power limits of P2G units.
* \(\overline{R}_{u}^{+}/\overline{R}_{u}^{-}\): Ramping up/down of power units.
* \(r_{l}/x_{l}\): Series resistance/reactance of power lines.
* \(G_{n}/B_{n}\): Shunt conductance/susceptance of power nodes.
* \(P_{d}/Q_{d}\): Active/reactive power demands.
* \(\hat{W}_{e,t}\): Forecasted power outputs of wind farms.
* \(\overline{P}_{e,t}/\underline{P}_{e,t}\): Maximum/minimum forecast power outputs of wind farms.
* \(\overline{I}_{l}\): Ampacity limit of power lines.
* \(\overline{V}_{n}/\underline{V}_{n}\): Voltage limits of power nodes.
* \(\overline{F}_{w}/\underline{F}_{w}\): Upper/lower gas production limits of well or source \(w\).
* \(\overline{\Pi}_{i}/\underline{\Pi}_{i}\): Pressure limits of gas nodes.
* \(\overline{F}_{p}/\underline{F}_{p}\): Gas flow limits of a pipeline.
* \(\overline{L}_{s}/\underline{L}_{s}\): Upper/lower limits of working volume of storage \(s\) at time \(t\).
* \(\overline{f}_{s}^{in}/\overline{f}_{s}^{out}\): Maximum capacities of the injection/withdrawal rates of storage \(s\).
* \(\alpha_{c}\): Gas consumption factor of compressors.
* \(\gamma_{c}\): Maximum compression factor of compressors.
* \(F_{d}\): Gas demands.
* \(\eta_{z}/\eta_{u}\): Efficiency of P2G units/GPUs.
* \(\Phi\): Electricity-to-gas conversion factor.
* \(\chi_{p}^{m}/\chi_{p}^{f}\): Line pack/Weymouth equation constants (illustrated after this list).
* \(\underline{\Theta}_{n}/\overline{\Theta}_{n}\): Minimum/maximum limits of bus angles.
* \(\widetilde{\pi}\): The mathematical constant \(\widetilde{\pi}\approx 3.1416\).
* \(\overline{P}_{l}\): Power flow capacity of power lines.
* \(\Gamma^{e}/\Gamma^{t}\): Wind budgets based on the number of wind farms/time periods.
* \(\varepsilon/\xi,\varrho\): Convergence parameters.
* \(\tau\): Penalty coefficients for penalized problems in the S-MISOCP algorithm.
* \(\overline{\mu},\ \underline{\mu},\sigma\): Penalty growth rate coefficients.
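As a brief illustration of how the gas-side constants above enter the model (shown here in the textbook steady-state form only; the exact formulation used in the later chapters may differ in sign conventions and units), the Weymouth constant \(\chi_{p}^{f}\) relates the average flow of a passive pipeline to its terminal pressures, and the line-pack constant \(\chi_{p}^{m}\) relates the stored gas to the average pipeline pressure:

\[
f_{p,t}\,\lvert f_{p,t}\rvert \;=\; \chi_{p}^{f}\left[\left(\pi_{p,t}^{+}\right)^{2}-\left(\pi_{p,t}^{-}\right)^{2}\right],
\qquad
m_{p,t} \;=\; \chi_{p}^{m}\,\frac{\pi_{p,t}^{+}+\pi_{p,t}^{-}}{2},
\]

where \(\pi_{p,t}^{+}\) and \(\pi_{p,t}^{-}\) denote the sending- and receiving-end pressures of pipeline \(p\), and \(f_{p,t}\) and \(m_{p,t}\) are its average gas flow and line pack, as listed among the decision variables below.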
### Decision variables

* \(c_{u,t}\): Power unit commitment status (UC).
* \(f_{w,t}\): Gas production of gas wells or sources.
* \(p_{u,t}/q_{u,t}\): Active/reactive power of power units.
* \(p_{l,t}/q_{l,t}\): Active/reactive power flow of power lines.
* \(p_{j,t}/\varrho_{j,t}\): Utilized power by/produced gas from P2G units.
* \(p_{e,t}\): Real-time production power of wind farms.
* \(\rho_{h,t}\): Firm gas in G2P contracts.
* \(\rho_{h,t}^{+}/\rho_{h,t}^{-}\): Upward/downward reserved gas in G2P contracts.
* \(\rho_{h,t}^{2+}/\rho_{h,t}^{2-}\): Upward/downward real-time gas amounts in the two-stage contracting.
* \(g_{j,t}\): Scheduled gas in P2G contracts.
* \(u_{l}/y_{l}\): Attacker/defender decisions for power line \(l\).
* \(h_{l}\): Availability status of power line \(l\).
* \(\triangle p_{u,t}^{+}/\triangle p_{u,t}^{-}\): Upward/downward regulation power from non-GPUs.
* \(\triangle p_{d,t}/\triangle q_{d,t}\): Active/reactive power load shedding.
* \(w_{e,t}\): Real-time wind power outputs.
* \(\triangle w_{e,t}\): Wind power curtailment.
* \(\triangle g_{h,t}^{+}/\triangle g_{h,t}^{-}\): Gas deviations in P2G contracts.
* \(i_{l,t}\): Squared current of power lines.
* \(v_{n,t}\): Squared voltage of power buses.
* \(\theta_{n,t}\): Voltage angle at power buses.
* \(\pi_{i,t}\): Pressure of gas nodes.
* \(\pi_{i,t}^{+}/\pi_{i,t}^{-}\): Pressures of sending/receiving nodes of a pipeline.
* \(f_{p,t}\): Average gas flow of passive pipelines.
* \(f_{c,t}^{in}/f_{c,t}^{out}\): Inlet/outlet gas flow of compressors.
* \(f_{p,t}^{in}/f_{p,t}^{out}\): Inlet/outlet flows of gas pipelines.
* \(f_{s,t}^{in}/f_{s,t}^{out}\): Inlet/outlet flows of gas storage.
* \(m_{p,t}\): Line pack.
* \(l_{s,t}\): Working volume of storages.
* \(u_{w,t}^{u},u_{w,t}^{l}\): Wind uncertainty binaries; they can also be written as \(\xi_{w,t}^{u},\xi_{w,t}^{l}\).
* \(UB/LB\): Upper/lower bound of C&CG algorithms.
* \(Gap\): Optimality gap of C&CG algorithms, \((UB-LB)/LB\).
* \(s\): Auxiliary variables for the penalized problems in S-MISOCP algorithms.

## List of Publications

This document is based on the work of 6 articles (2 in JCR-indexed journals, 2 under review, and 2 in conferences). These are:

**(Paper A)**: Ahmed R. Sayed, Cheng Wang, and Tianshu Bi. "Resilient operational strategies for power systems considering the interactions with natural gas systems." Applied Energy, vol. 241, no. 1, pp. 548-66, May 2019.

**(Paper B)**: Ahmed R. Sayed, Cheng Wang, Junbo Zhao, and Tianshu Bi. "Distribution-level Robust Energy Management of Power Systems Considering Bidirectional Interactions with Gas Systems." IEEE Transactions on Smart Grid, vol. 11, no. 3, pp. 2092-2105, May 2020.

**(Paper C)**: Ahmed R. Sayed, Cheng Wang, Tianshu Bi, and Arsalan Masood. "A Tight MISOCP Formulation for the Integrated Electric-Gas System Scheduling Problem." In 2018 2nd IEEE Conference on Energy Internet and Energy System Integration (EI2), IEEE, 2018, pp. 1-6.

**(Paper D)**: Ahmed R. Sayed, Cheng Wang, Tianshu Bi, Mohamed Abdelkarim Abdelbaky, and Arsalan Masood. "Optimal Power-Gas Flow of Integrated Electricity and Natural Gas System: A Sequential MISOCP Approach." In 2019 3rd IEEE Conference on Energy Internet and Energy System Integration (EI2), IEEE, 2019, pp. 283-288.

**(Paper E)**: Ahmed R. Sayed, Cheng Wang, Sheng Chen, Ce Shang, and Tianshu Bi. "Two-stage Distributionally Robust Gas Contracting for Power System Operation." Submitted for publication to IEEE Transactions on Smart Grid.

**(Paper F)**: Ahmed R. Sayed, Cheng Wang, Wei Wei, Tianshu Bi, and Mohammad Shahidehpour.
"Robust Operational Equilibrium for Electricity and Gas Markets Considering Bilateral Energy and Reserve Contracts." Submitted for publication to IEEE Transactions on Power Systems. During the course of the Ph.D. study, the following publications have been prepared, but they are omitted from the thesis document because they are not related to the main objective. **(Paper G)**: Masood, Arsalan, Junjie Hu, Ai Xin, Ahmed R. Sayed, and Guangya Yang. "Transactive Energy for Aggregated Electric Vehicles to Reduce System Peak Load Considering Network Constraints." IEEE Access 8, 2020, pp. 31519-31529. * [Paper H] Masood, Arsalan, Ai Xin, Junjie Hu, Salman Salman, Ahmed R. Sayed, and Mishkat Ullah Jan. "FLECH Services to Solve Grid Congestion." In 2018 2nd IEEE Conference on Energy Internet and Energy System Integration (EI2), IEEE, 2018, pp. 1-5. * [Paper I] Sayed A. Zaki, Honglu Zhu, Jianxi Yao, Ahmed R. Sayed, and Mohamed Abdelkarim. Abdelbaky "Detection and Localization the Open and Short Circuit Faults in PV Systems: A MILP Approach." Accepted for publication in 2020 The 2nd Asia Energy and Electrical Engineering Symposium (AEES 2020). ## Chapter 1 Introduction ### 1.1 Research Background & Motivations The reliable, resilient and economic operation of critical infrastructures, such as electricity, water, natural gas, cooling, transport and telecommunication, is important to strengthen and support economic and social activities in modern society. The electric power system is the most critical infrastructure system because electricity plays an important role in the secure and continuous operation of these systems. However, existing electric power grids experience different forms of vulnerabilities and random failures, such as extreme weather, terrorism, component aging/failure, unexpected generator or power line outages, and human errors, which may result in widespread economic and social contingencies. For example, extreme weather has caused power outages with damages ranging from $\(20\) to $\(55\) billion in the USA [1], blackouts such as Hurricane Katrina in \(2005\)[2], the Japan Earthquake in \(2011\)[3], Hurricane Sandy in \(2012\) (\(N-90\) event) [4], and transmission line contingencies in South Australia [5]. Natural disasters, such as extreme weather are expected to increase in the future due to climate change [6]. In addition, vulnerabilities to terrorist attacks could cause more severe system disruptions than natural disasters [7]. From \(1999\) to \(2002\), more than \(150\) terrorist attacks on power networks worldwide have been reported [8]. These vulnerabilities make it crucial to evaluate the performance and facilitate decision-making with regard to the power grid under contingencies by analyzing the power system vulnerability. Besides, climate change and environmental concerns have been major driven forces for the utilization of renewable energy resources, such as wind and solar power generation, around globe [9, 10]. In this regard, the top two CO\({}_{2}\) emitters, China and the US, pledged to increase their wind energy utilization to 20% by \(2030\)[11]. According to the US energy information administration (EIA), renewable share in the US grid continue to increase from about \(800\) billion MWh to \(2100\) billion MWh by \(2050\), as shown in Figure 1. 
Therefore, renewable resources are displacing the conventional thermal power units, which today provide many services essential to reliable power system operation, including frequency and voltage control, generation reserves and stability services. However, the utilization of wind power at a large scale brings new challenges for energy management in power systems because of the variable and uncertain output of renewables.

Due to their direct and physical connection with energy consumers, integrated energy systems (IESs) have become the most effective and essential part of energy supply, and IES performance has a great impact on social activities and industrial practice. Therefore, in the context of recent economic development, ensuring the reliability of IESs is a crucial need [13], [14]. In [15], the IES structure is enhanced by integrating communication systems, where the basic steps of the incorporation are discussed. A robust optimization model for IES operation is proposed in [16] to consider wind uncertainty in an electricity-coal-gas system. A general energy flow model is presented in [17], which models a hydrogen system within the IES. Chemical energy utilization is studied together with a combined heating and power system in [18], where the utilization efficiency is improved. Based on cost-benefit theory, an evaluation method is established for urban IESs to increase their utilization efficiency [19]. An operation model for a new type of community-level IES is proposed in [20]. Interested readers can refer to [21, 22, 23, 24, 25] for state-of-the-art reviews of the structures, models, analysis and solution methodologies for IESs.

Natural gas has gained strong importance in the global energy balance as the most cost-effective fossil fuel, and it acts as a bridge in the transition to a near-zero emission IES [26, 27, 28]. This trend has developed since the 1980s, driven by new concerns about potential global warming. Recently, because of advanced technologies in gas extraction, natural gas reserves have encouraged the growth of this energy source worldwide. Besides, among other fuels, natural gas holds a strong position predominantly due to robust production, low carbon emissions and abundant accessibility of gas sources. Moreover, thanks to the shale gas revolution, enabled by developments in horizontal drilling and hydraulic fracturing technologies, gas prices are decreasing significantly [29]. As a result, natural gas has been promoted to the second largest energy source and consumption worldwide [27]. Figure 1.2 displays the growth in gas consumption from 2010 to 2040 for OECD countries (left) and non-OECD countries (right), indicating that worldwide consumption may rise from 113 trillion cubic feet (Tcf) to 185 Tcf. Shale gas is projected to grow to about 30% of world gas production by 2040. Figure 1.3 exhibits the shale gas production, which grows from 42 billion cubic feet per day (Bcf/d) to 168 Bcf/d during 2015-2040 for the six countries that have shale resources. Although coal and nuclear are expected to remain among the main fuels used in electricity generation, global environmental concerns and energy prices have motivated the development of electricity production from renewable and natural gas energies, respectively, as shown in Figure 1.1.
According to the EIA, China will increase its renewable energy utilization from about 1 TWh to 5 TWh, along with a growing deployment of natural gas up to 10% of electricity generation. This situation is illustrated in Figure 1.4.
field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic 
electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic electromagnetic field, the electromagnetic field, the electromagnetic electromagnetic field the natural gas can be stored with large capacities in a cost-effective manner, P2G facilities are recently employed to effectively convert electricity into gas, which further is stored, transported and reutilized by gas networks. P2G facilities allow power system to be interacted with other energy systems, such as transport and heating systems. The idea to convert electricity into gas for storage was firstly developed in \(1978\)[35]. Several pilot sites are constructed throughout the world, indicating the strong importance in this technologies [36]. The existing researches [37, 38, 39] agree that P2G facilities can help the power system operation in mitigating the fluctuations of energy loads, the surplus renewable energy, recycling CO\({}_{2}\) and offering ancillary services. Neglecting the physical interactions between the two systems may not provide the optimal decision for power system operators and it may cause physical violations, such as under/over nodal pressure and/or gas well production capacity violations, in gas systems [40]. Any inadequacy in the coordination between the two systems may result a sequence of shutdown of electric generators, and may lead to a blackout. However, this increased interaction, which is also referred to as an integrated electric-gas systems (IEGSs) in the literature, encounters issues related to the secure, reliable, and resilient operation of the IEGS. ### 1.2 Literature Review The short-term operation of power and gas systems has been influenced with the increasing interdependence on each other. For instance, power system operating decisions are influenced by natural gas prices, which are received from gas system operator (GSO), i.e., consumed gas by GPUs as well as the electricity prices will be directly affected. Also the production schedule of gas system depends on the injected gas to/from GPUs/P2G units, and further will impact on the operational costs. This situation may be more challenge when power and Figure 1.4: Projected growth rate in the electricity generation in China. Source: Annual Energy Outlook 2020 [http://www.eia.gov](http://www.eia.gov) gas demands are simultaneously peak. Furthermore, gas supplied to GPUs has high priority to be curtailed under gas system congestion because this amount of gas is usually signed as interruptible contracts [41]. Therefore, PSO are progressively tending to appeal more flexibility from gas systems to assure continuous supply. On the other hand, facing volatile energy demands, which are difficult to forecast, represents security issues in the operation of IEGS because guarantee gas supply continuity under varying loads is not straightforward task for GSO. Natural gas infrastructure take long time to response due to slow dynamics of gas, therefore, its operating decisions must be early and timely planned. The travelling velocity of the gas is much slower than electricity, its maximum value is about \(50\) km/h [42]. 
To this end, there are many economic and physical interactions through which the operation of one system may impact the other, and appropriate planning and operation are necessary for accurate coordination in production, delivery and utilization, considering the uncertainties of integrated renewable energies as well as system contingencies. Although different research works have been conducted in the pertinent literature over the past \(20\) years to address the issues of interdependency and resilient-economic operation of IEGSs [43], there are significant research gaps between the existing works and practical application that need to be filled.

1. Most studies on the coordinated operation of power and gas systems share one underlying assumption, namely, the existence of a single operator or utility that has full control and operation authority over both systems. This operator minimizes all costs associated with energy production and provides optimal decisions for the combined system. However, in industrial practice, there are significant institutional and administrative barriers to operating the two systems in a holistic manner [44]. Power system(s) and gas system(s) are operated by different utilities, and they are unsynchronized in most countries and regions, as in European countries [45] and in China [46]. This lack of synchronization indicates that the total fuel cost minimization determined by the IEGS models might not be a realistic operational objective for autonomous sub-systems and, therefore, bilateral energy trading is inevitable.

2. For resilient power system operation, the operational mode of the electric power system in post-contingency conditions might be significantly different from that of the pre-contingency stage, such as sudden start-up or shut-down of fast-response generators and rapid increases or decreases in generator outputs to minimize operating losses; a similar trend is observed for the gas demands of GPUs. Moreover, GPUs usually receive interruptible gas supply services according to current gas industrial practices [47], and the gas contracts are usually determined as day-ahead contracts because real-time contracting would be costly and inconvenient [48]. In other words, GPUs cannot execute the planned regulations without appropriate gas contracting. However, these economic interactions between the two systems, including reserved gas contracts, have been neglected in the existing resilient operation models.

3. In power systems integrating large-scale renewable energy sources, the real-time operation of the power system might deviate largely from the day-ahead dispatch owing to renewable uncertainties. This means the outputs of GPUs and P2G facilities might deviate from their day-ahead schedules to mitigate the operation losses or the surplus wind generation. In this regard, modeling bidirectional energy contracts is necessary. Moreover, the purchased gas can be divided into two parts, including the firm part for the day-ahead dispatched outputs and the variable part for the real-time utilized reserves, respectively, which suggests that the size of the contract is directly related to the operation strategies of power systems. Recent studies only consider the modeling of firm gas contracts in power system operation [47], [49], where the reserved gas contracts and the impacts of uncertainty on the contracts, as well as P2G gas contracts, are missing.
4. No attempt has been found in the literature that considers power system uncertainties in a robust optimization (RO) approach to analyze the equilibrium between the electricity and gas markets, where the main difficulty is how to reflect the impacts of power system uncertainties on the gas system, and vice versa.

5. Besides the drawbacks in decision-making framework modeling, there are also computational difficulties in identifying optimal and feasible decisions for the interacting power and gas systems. Finding the optimal gas flow (OGF) has drawn attention from researchers due to its non-convexity, as it originates from nonlinear partial differential equations, which are commonly reformulated by piecewise linear approximation (PLA) methods or second-order cone (SOC) relaxation. However, these reformulations either introduce a high computational burden due to the large number of integer variables or yield an infeasible OGF due to inexact relaxation. It should be noted that the steady-state gas flow model is widely adopted in the literature, neglecting the mass flow rate inside pipelines and the slow gas dynamics, which may lead to suboptimal solutions. Moreover, most recent studies mainly concentrate on the transmission level; however, stronger interactions in the IEGS are observed at the distribution level [50], [51], in which active and reactive power are coupled, as the bus voltages are notably influenced by active power variations. Furthermore, additional computational challenges in the coupled power and gas system operation, especially under uncertainties, would be introduced.

### 1.3 Research Objectives and Challenges

The main purpose of this thesis is to revisit the resilient and economic operation of power systems against contingencies as well as renewable energy uncertainties in terms of decision-making, considering the interactions of power systems with gas systems. With this objective, this work seeks to fill the aforementioned gaps between the existing research and industrial application concerning the neglect of physical and/or economic interactions with gas systems. The proposed operation models must be able to provide the level of reliability and flexibility required by the PSO, and secure and feasible optimal decisions for both systems. To achieve that, this thesis first introduces how to model and optimize accurately, at both transmission and distribution levels, the coupled power and gas system.
Afterward, different power system dispatch models are developed and efficiently optimized against \(N-k\) contingencies and volatile wind power uncertainties, where energy contracts with the gas system are explicitly modeled.
4. ... real-time and day-ahead stages; therefore, a novel method is needed, which motivates the fifth objective.

5. Develop an efficient approach for solving two-stage robust optimization (RO) models with the non-convex nonlinear power and gas flow equations. Convex relaxation methods have been implemented to find the optimal power-gas flow (OPGF) owing to their computational benefits.
However, convex relaxations are exact only under certain conditions; in general they are not tight enough, and the solution exactness cannot be guaranteed. The non-convex equations can be reformulated as difference-of-convex programming (DCP) functions, and a sequential convex procedure (SCP) can be adopted to find a more accurate and feasible solution. Consequently, a quadruple-loop procedure based on the SCP and the nested column-and-constraint generation (NC&CG) algorithm can be adopted to tackle the two-stage RO model.

6. Establish a distributionally robust economic dispatch model for power systems that considers bidirectional two-stage gas contracting. Incorporating both the day-ahead and real-time gas contracts in power system operation is necessary to account for the costly real-time contracts for the reserved GPU outputs that are utilized with low probability in practice. The proposed model can be insensitive to the exact probability distribution of renewable generation, and the robustness and conservativeness of the decisions can be adjusted.

7. Characterize a robust operational equilibrium for the interactive markets of power and gas systems, considering the impacts of the uncertainties of wind generation outputs on the two markets. The proposed framework considers that the two markets must be independently operated and allows limited information exchange, including only the prices and demands of both systems for contract agreements. Besides, under equilibrium, the impacts of power system uncertainties must be reflected on the gas system, and vice versa. The superiority of the robust operational equilibrium over the deterministic one and its effectiveness under limited data exchange must be confirmed.

8. Validate the proposed frameworks and solution methodologies with different numerical simulations and case studies. The data and test systems employed in the case studies must be able to illustrate the applicability of the proposed frameworks and suggested approaches in industrial practice. Unfortunately, during the thesis working period it was not easy to find real system data. However, the proposals are validated on test systems used in the literature, or similar to them.

### 1.4 Contributions and Publications

The major contribution of this research is to develop optimization models for the coordinated power and gas systems that are able to provide optimal resilient and economic operational strategies. For the resilient operation model of Chapter 4, the main contributions are twofold: (1) by considering gas contracts, a new kind of attack strategy emerges, i.e., the consumption of gas below/above the reserved (contracted) values; (2) unlike most tri-level models where the lower-level decision variables are continuous, there are binary variables in the lower-level optimization problem of the proposed model. The additional binaries originate from the linearization of the nonlinear non-convex Weymouth equation, as well as the on/off control of the generators in the post-contingency stage, and they are used to determine the potential attack region [52]. Therefore, the NC&CG algorithm proposed by [53] is applied to solve the proposed tri-level model after adjusting its stopping criteria. This model has been published as:

* Ahmed R. Sayed, Cheng Wang, and Tianshu Bi. "Resilient operational strategies for power systems considering the interactions with natural gas systems."
Applied Energy, vol. 241, pp. 548-566, May 2019, DOI: [https://doi.org/10.1016/j.apenergy.2019.03.053](https://doi.org/10.1016/j.apenergy.2019.03.053)

Then, in Chapter 5, two operational models for optimal power system operation with bidirectional gas contracts are proposed. The first is a robust EM model for the PDN with RPG uncertainties. The main contributions of this study are twofold: (1) A tri-level robust dispatch model is established for the PDN considering both physical and economic interactions with the gas systems. Specifically, the physical interaction is achieved by adding the security and feasibility constraints of the gas system into the EM problem of the PDN, while the economic interaction is completed by modeling firm and reserved gas contracts for both G2P and P2G; (2) A quadruple-loop algorithm for the proposed robust EM problem of the PDN is devised, where the second and fourth loops are S-MISOCP algorithms to enhance the solution feasibility in the day-ahead and real-time dispatch stages, respectively, and the first and third loops are column-and-constraint generation (C&CG) algorithms to tackle the tri-level decision-making structure with binary recourse. This work has been published as:

* Ahmed R. Sayed, Cheng Wang, Junbo Zhao, and Tianshu Bi. "Distribution-level Robust Energy Management of Power Systems Considering Bidirectional Interactions with Gas Systems." IEEE Transactions on Smart Grid, vol. 11, no. 3, pp. 2092-2105, May 2020, DOI: [https://doi.org/10.1109/TSG.2019.2947219](https://doi.org/10.1109/TSG.2019.2947219)

The second model is a distributionally robust two-stage contracting model. Compared with the literature, the salient feature of this study is that a two-stage distributionally robust model is proposed for signing bidirectional energy contracts with gas systems from the perspective of the power system operator (PSO). To the best of the authors' knowledge, this work is the first attempt to incorporate both the day-ahead and real-time gas contracts in power system operation. This model is submitted for publication as:

* Ahmed R. Sayed, Cheng Wang, Sheng Chen, Ce Shang, and Tianshu Bi. "Two-stage Distributionally Robust Gas Contracting for Power System Operation." Submitted for publication to IEEE Transactions on Smart Grid.

Finally, a method for seeking the robust operational equilibrium of the coupled electricity and gas markets is proposed. The main innovations are multifold: (1) A robust operational equilibrium for the coupled electricity and gas markets is characterized considering the uncertainties of RPG as well as bidirectional energy and reserve contracts; (2) Inspired by [54] and [55], the marginal energy and reserve prices for the electricity and gas markets are derived based on the cost causation principle to reflect the impacts of uncertainties; (3) The BRD algorithm is proposed to identify the characterized operational equilibrium, where the electricity and gas markets are separately cleared by the C&CG and NC&CG algorithms, respectively; (4) The superiority of the robust operational equilibrium over the deterministic one, its effectiveness under limited data exchange, the importance of considering the gas dynamics, and the solution procedure performance have been verified by numerical results. This work is submitted for publication as:

* Ahmed R. Sayed, Cheng Wang, Wei Wei, Tianshu Bi, and Mohammad Shahidehpour. "Robust Operational Equilibrium for Electricity and Gas Markets Considering Bilateral Energy and Reserve Contracts."
Submitted for publication to IEEE Transactions on Power Systems.

The abovementioned models and algorithms are intended to be employed by energy companies, planners and operators interested in resilient and economic power system operation with physical and economic interactions with gas systems. We hope that this thesis will be of value to the "State Grid Corporation of China" and the "Egyptian Electricity Holding Company and Subsidiaries" in the near future, in their operation of the Chinese and Egyptian electricity networks with a high level of security, economy and resiliency.

### 1.5 Thesis Outline

The remainder of this document is organized into six chapters, each divided into sections and subsections. Figure 1.5 displays the overall thesis structure to show the connections among chapters, indicating the main motivations of each one. Chapters begin with a brief introduction to describe their motivations and contents. Each mathematical formulation or proposed algorithm is validated with different numerical examples, including decision-making strategies and computational comparisons. Scalability tests have been conducted for all proposed models and approaches. It should be noted that all models in the thesis consider the gas dynamics. Finally, each chapter terminates with its conclusions. The proposed models and the thesis' contributions introduced in each chapter are listed in Table 1.1.

**Chapter 2** starts with a description of the physical structure of the gas system, indicating the mathematical model of the principal components. The set of PDEs, which represents the gas system dynamics, is listed. Then, the approximated dynamic- and steady-state gas flow models are depicted. Chapter 2 also introduces electric power system modeling, focusing on formulations, applications and versions of the optimal power flow (OPF). The similarities and differences between the electricity and gas systems and the coupling components are illustrated. Finally, a survey of the available coordination strategies between the two systems is presented.

**Chapter 3** focuses on identifying the optimal power-gas flow (OPGF) in the coupled power and gas system. Initially, different solution methodologies for solving gas system and power system optimization problems are illustrated. Then, based on convex relaxation methods, a gas flow correction method is proposed to guarantee the exactness and feasibility of the relaxed formulation. Case studies are conducted to illustrate the effectiveness and features of the two proposed methodologies, and to compare them with the widely adopted PLA methods. Then, a sequential-MISOCP algorithm is designed to solve the OPGF, considering the AC-OPF at the distribution level. Case studies validate the accuracy and feasibility of the proposed algorithm, and its performance and convergence are discussed.

**Chapter 4** starts with a detailed introduction of power system resilience models, IEGS resilience models and the proposed model, indicating the main contributions of the work in this chapter. The pre- and post-contingency operational constraints are illustrated, and gas contracts are modeled considering their two subcontracts, namely firm and reserved gas contracts. The proposed model is solved by the NC&CG algorithm. Finally, the necessity of considering economic and physical interactions between power systems and natural gas systems and the effectiveness of the proposed model and algorithm are verified by numerical simulations of two test systems.
**Chapter 5** focuses on the economic operation of the power system against wind power uncertainties. Two optimization models are proposed, namely the robust day-ahead operation with bidirectional gas contracting and the two-stage distributionally robust gas contracting models, respectively. The main features, problem formulation, solution methodology and case studies of each model are separately presented.

**Chapter 6** begins with a detailed introduction of the existing studies from a market perspective, indicating the main contributions of the presented work. A pool-based market mechanism is proposed and the electricity and gas markets are modeled. Each market is individually cleared and the operational equilibrium is identified by the best-response decomposition (BRD) algorithm. Finally, case studies are conducted to verify the superiority of the robust operational equilibrium over the deterministic one and its effectiveness under limited data exchange.

This document finishes with the main conclusions and future works in **Chapter 7**.

Figure 1.5: Thesis structure.

Table 1.1: Proposed models and contributions introduced in each chapter.

| Chapter | Models | Contributions |
| --- | --- | --- |
| 2. Modeling the Integrated Electric-Gas Systems | Formulations for gas and power systems as well as coordination strategies | Introducing a literature review |
| 3. Optimal Energy Flow | Transmission-level integrated electricity and gas system | MISOCP formulation for the Weymouth equation; feasibility and accuracy guarantee by proposing the sequential-MISOCP algorithm |
| 4. Robust Resilient Operational Strategies | Tri-level resilient operational framework of power systems considering interactions with gas systems | Modeling firm and reserved gas contracts for both G2P and P2G |
| 5. Robust Economic Operational Strategies for IEGSs | Robust day-ahead operation with bidirectional gas contracting; two-stage distributionally robust gas contracting | Suggesting a quadruple-loop algorithm to find feasible and optimal strategies; modeling both the day-ahead and real-time gas contracts in power system operation; a tractable formulation of the objective, then solved by the quadruple-loop algorithm |
| 6. Robust Operational Equilibrium of Coupled Electricity and Gas Markets | Robust operational equilibrium considering the uncertainties of renewable energy | Considering bidirectional energy and reserve contracts in energy markets; derivation of the marginal energy and reserve prices based on the cost causation principle; a six-loop procedure suggested to identify the characterized operational equilibrium |
## Chapter 2 Modeling the Integrated Electric-Gas Systems

Predominantly due to the proliferation of large fleets of natural gas power units (GPUs) and the technological developments of power-to-gas (P2G) facilities, interactions between power and gas systems have been noticeably enhanced at the transmission [37] and distribution [56], [57] levels. These interactions not only bring significant economic and environmental benefits to society but also provide additional operating flexibilities, which are essential to handle fluctuations of renewable energy and demands, as well as contingencies [40], [43]. Moreover, neglecting these interactions in power system operation may not only result in an infeasible operation status in the gas systems [40], but also increase the decision-making operation costs of both systems [49]. This intensified interaction has gradually attracted considerable research interest in modeling, simulating and analyzing the coupled power and natural gas systems [58], [59].

This chapter focuses on the advancements in modeling and coordinating the gas and power systems that could be adopted to analyze the resilient and economic operations of the coupled system. Section 2.1 focuses on modeling the gas systems. It starts by discussing the physical structure of the gas system to provide a brief description of the principal components and their mathematical models. A set of partial differential equations (PDEs) is defined, and, consequently, the approximated dynamic- and steady-state gas flow models are obtained. Section 2.2 mainly focuses on modeling the optimal power flow (OPF) and its applications and versions. The AC-OPF and its approximated DC-OPF are arranged at the end. Finally, an analogy between power and gas systems, indicating the similarities and differences between the two energy systems, as well as the modeling of the physical interactions, is provided in Section 2.3, and the existing coordination scenarios in operation are discussed.

### 2.1 Natural Gas System Modeling

Different gas models have been proposed in the literature that mathematically formulate the relationships between the physical quantities of the gas infrastructure. The existing models can be classified into four categories [60], [61]: i) investment models, which are used to provide the planning decisions for site investments; ii) value chain models, in which the complete system stages, including production, transportation, storage and marketing, are simultaneously optimized; iii) transportation models, which are adopted for studying the gas industry. A tradeoff between the accuracy of decision-making strategies and the complexity of the gas transportation model is important for solving the model. A steady-state model is solved by the simplex algorithm in [62].
A multi-period flow model is presented in [63] to minimize the operational costs of compressors; iv) equilibrium models, which are used in gas markets and are usually formulated as complementarity problems [64]. This thesis focuses on the transportation model, which is suitable for incorporation into power system optimization problems.

#### Physical Structure of Gas System

The gas system comprises several components serving the delivery process, starting from the production stage, moving through the transportation stage, and reaching the consumption stage. All components are connected together by gas nodes, similar to the buses in a power system. Figure 2.1 displays a simple gas system topology, which depicts the main components of gas systems, including one gas well, four pipelines \(p_{1}\)-\(p_{4}\), one valve \(v_{1}\), one compressor station \(c_{1}\), and one gas storage \(s_{1}\). A brief description of the gas system components and their mathematical models is provided as follows.

Figure 2.1: Gas System Topology

#### Gas Sources

In this study, natural gas is produced not only from gas wells but also from power-to-gas (P2G) facilities, which are adopted to convert excessive wind energy into gas by the methanation process. The first gas source is discussed in this subsection, while the latter is comprehensively illustrated in Section 2.3.2. Natural gas comes in four different types, namely conventional gas, unconventional gas, associated gas and coalbed gas, depending on the land formation, where the surrounding area has large cracks and layer spaces, pore spaces like shale and sandstone, deposits of crude oil, and coal deposits, respectively, as displayed in Figure 2.2. To find natural gas, the geologists first locate the most likely formations that could contain gas deposits using seismic surveys. Second, if the survey gives a positive indication, an exploratory well is examined to guarantee an acceptable quality and quantity of the available stored gas. Third, more wells are drilled vertically and/or horizontally in case of good information obtained from the examined well [65]. For unconventional gas production, natural gas is extracted by a new technological method called hydraulic fracturing, or fracking [29]. In this method, water, sand and chemicals are forced at high pressure down the well, and the formations are fractured, releasing the gas from the rocks up to the surface. The produced gas is called wet natural gas, because it has impurities such as water vapor and propane. Therefore, it needs to be treated in processing plants before being sent to the pipelines.

In fact, the injected gas volumes are subject to technical or contractual ranges. Technically, the gas reservoir and the installed equipment have their industrial capacities, such as pressure range and maximum gas flow. Contractually, the production fields might be controlled by several owners, who have the right to establish the maximum and minimum levels of production. These levels are usually signed in take-or-pay gas contracts with predefined time intervals, and the producers have to deliver the contracted values [60]. Therefore, the gas well production is modeled as

\[\underline{F}_{w}\leq f_{w,t}\leq\overline{F}_{w},\ \forall w,t, \tag{2.1}\]

where \(f_{w,t}\) is the gas flow rate injected from gas well \(w\) at time \(t\), and \(\underline{F}_{w}\) and \(\overline{F}_{w}\) are the minimum and maximum production capacities.
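To make the production limits in Eq. (2.1) concrete, the following minimal Python sketch encodes them as constraints of a small optimization problem using the cvxpy package. The horizon length and numeric capacities are purely illustrative assumptions, not values from the thesis.

```python
import cvxpy as cp

T = 24                           # assumed number of hourly dispatch periods
F_MIN, F_MAX = 80.0, 400.0       # assumed take-or-pay production limits of one well

# Gas flow rate injected by the well in each period, f_{w,t}
f_w = cp.Variable(T, nonneg=True)

# Eq. (2.1): the injected flow must stay inside the contracted production range
well_limits = [f_w >= F_MIN, f_w <= F_MAX]

# Toy usage: maximize the total injection subject only to the production limits
problem = cp.Problem(cp.Maximize(cp.sum(f_w)), well_limits)
problem.solve()
print(f_w.value)                 # every period sits at F_MAX in this toy example
```

In a full transportation model these bounds would appear alongside the storage, compressor and nodal balance constraints introduced below.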
It should be noted that, for the sake of simplicity, the system operational constraints include only the processed gas; therefore, the whole production process is neglected and the gas is considered to be homogeneous [61], [66], i.e., the gas quality is the same in all situations. Interested readers can refer to [67] for modeling the pooling problem considering gas quality issues.

Figure 2.2: Schematic geology for the four types of gas resources. Source: [https://www.eia.gov/energyexplained/natural-gas](https://www.eia.gov/energyexplained/natural-gas)

#### Gas Storage

Unlike electric power, natural gas can be stored in large volumes. Gas storage provides additional operating flexibilities for the gas system, particularly in short-term operation, to mitigate any congestion introduced by a sudden increase/decrease of gas demands, fluctuations in gas prices, or system contingencies. Therefore, storages are considered an effective technique for increasing gas system reliability in case of insufficient gas production. Peak-load storages are installed close to the gas demands to mitigate unserved gas loads at peak intervals, because the gas storages are in the injection (charging) state during off-peak load periods and in the withdrawal (discharging) state during peak loads or low gas prices. Gas storage can be classified as: i) underground storage, such as aquifers, salt caverns, and abandoned gas reservoirs, which might have a large capacity and is therefore cost-effective; ii) aboveground storage for liquefied natural gas. In practice, there is also gas mass stored inside pipelines, called line pack, which acts as implicit natural gas storage. The line pack is discussed in Section 2.1.2. The gas volume inside a gas storage can be expressed as

\[l_{s,t}=l_{s,t-1}+f_{s,t}^{in}-f_{s,t}^{out},\ \forall s,t, \tag{2.2}\]

where \(l_{s,t}\) is the working volume of storage \(s\) at time \(t\), and \(f_{s,t}^{in}\) and \(f_{s,t}^{out}\) are the in-/outlet gas flows of storage \(s\). The working volume should be limited by the storage minimum and maximum capacities (\(\underline{L}_{s}\) and \(\overline{L}_{s}\)) as

\[\underline{L}_{s}\leq l_{s,t}\leq\overline{L}_{s},\ \forall s,t. \tag{2.3}\]

The injection and withdrawal rates are non-convex functions of the stored gas volume. These functions are linearized by piecewise approximation in [68]. A basic representation of the injection and withdrawal rates can be expressed as

\[f_{s,t}^{in}\leq\overline{f}_{s}^{in},\ \ f_{s,t}^{out}\leq\overline{f}_{s}^{out},\ \forall s,t, \tag{2.4}\]

where \(\overline{f}_{s}^{in}\) and \(\overline{f}_{s}^{out}\) are the maximum capacities of the injection and withdrawal rates, respectively (a short code sketch of constraints (2.2)-(2.4) is given below). In this thesis, most studies assume that all gas storages are nonstrategic elements, i.e., they are in the closed state, to highlight the effectiveness of considering the gas dynamics and the gas line pack inside pipelines.

#### Gas Valves

A gas valve is a controllable device that can regulate and/or direct the gas flow by opening, closing and partially opening pathways, among other functions. It is a strategic element in the gas system, i.e., it is employed to manage the gas flow rates, such as sectionalizing for maintenance, isolating under contingencies, preventing excessive pressure, and forcing the flow in a certain direction.
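Referring back to the storage model above, the following minimal Python sketch (again using cvxpy) encodes the volume bookkeeping (2.2), the capacity window (2.3) and the rate caps (2.4) for a single storage. All numeric values, including the assumed initial volume, are illustrative placeholders rather than data from the thesis.

```python
import cvxpy as cp

T = 24                            # assumed number of hourly periods
L_MIN, L_MAX = 100.0, 1000.0      # assumed working-volume limits of storage s
FIN_MAX, FOUT_MAX = 60.0, 60.0    # assumed injection / withdrawal rate caps
L0 = 400.0                        # assumed initial stored volume

l     = cp.Variable(T)                 # working volume l_{s,t}
f_in  = cp.Variable(T, nonneg=True)    # injection rate f^in_{s,t}
f_out = cp.Variable(T, nonneg=True)    # withdrawal rate f^out_{s,t}

constraints = []
for t in range(T):
    prev = L0 if t == 0 else l[t - 1]
    # Eq. (2.2): volume bookkeeping between consecutive periods
    constraints.append(l[t] == prev + f_in[t] - f_out[t])

# Eq. (2.3): the working volume stays inside the storage capacity window
constraints += [l >= L_MIN, l <= L_MAX]

# Eq. (2.4): simple caps on the injection and withdrawal rates
constraints += [f_in <= FIN_MAX, f_out <= FOUT_MAX]
```

These constraints would normally be embedded in a larger dispatch problem together with the nodal balance of Section 2.1; here they are isolated purely for illustration.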
#### Gas Valves

A gas valve is a controllable device that can regulate and/or direct the gas flow by opening, closing or partially opening pathways, among other functions. It is a strategic element in the gas system, i.e., it is employed to manage the gas flow rates, for example by sectionalizing for maintenance, isolating parts of the network under contingencies, preventing excessive pressure, and forcing the flow in a certain direction. The existing literature presents five types of gas valves, classified according to their functions, as summarized in [61]:

* _Check valve_: it fixes the gas flow direction, i.e., the gas is allowed to move in a specified direction and the other direction is prohibited.
* _Ordinary valve_: it is used to connect the pipelines to the gas nodes if their end terminals have equal pressures.
* _Bypass valve_: it is connected in parallel with the gas compressor to allow reverse gas flow. Hence, the gas compressor is protected while it is turned off, i.e., the valve acts as a flywheel.
* _Block valve_: it is used to block the gas flow in order to isolate and sectionalize a part of the gas system for maintenance or operational reasons.
* _Control valve_: it is employed to reduce the gas pressure at sink nodes or to regulate the gas flow inside pipelines.

Detailed models for the above gas valves can be found in [69]. In this study, the gas nodal pressures are considered uniform; therefore, the gas valves are nonstrategic elements in the gas system, in other words, the gas system optimization problem adopts the valves in static states. This assumption is widely adopted and is a common treatment in recent coupled power and gas models.

#### Gas Compressor Stations

To overcome the pressure drop caused by gas pipeline friction, gas compressor stations are employed to increase the pressure to its desired level, similar to the step-up transformer in a power system. They are commonly installed close to underground storage, at intervals of \(50-100\) miles [70]. Gas compressors can be classified as:

* _Gas-driven compressor_: it is powered by gas turbines that consume the required gas from the pipeline flow. It is the traditional type and is widely installed in existing networks.
* _Electricity-driven compressor_: it is equipped with electric motors. Although it increases the interdependency between the power and gas systems, and additional electricity contracts are needed, it brings environmental benefits to modern society, so it is employed in new gas networks.

A detailed compressor model is given in Appendix A.1. Unfortunately, this model is non-convex, difficult to solve, and poses tractability challenges for the gas system. Many attempts are found in the literature to propose simplified models [47], [60], [61], [66], [71] that optimize the consumed energy, assume a constant pressure ratio, adopt a constant loss factor, linearize the consumption function, or adopt the Newton-Raphson method. Because the objective of this study is to find the optimal economic and resilient operation of the coupled power and gas system, the commonly adopted simplified compressor model is employed in the thesis work. In the simplified model, the gas flow direction inside the compressor is predetermined, so the in- and outlet nodes are known. Besides, according to [72], the consumed gas is usually in the range of \(3\%-5\%\) of the pipeline gas flow. Therefore, the simplified compressor model is defined as \[\pi_{i,t}\leq\pi_{o,t}\leq\gamma_{c}\pi_{i,t},\forall c,t,(i,o)\in c \tag{2.5}\] \[0\leq f_{c,t}^{out}=(1-\alpha_{c})f_{c,t}^{in},\ \forall c\in \mathcal{C},t, \tag{2.6}\] where \(\pi_{i,t}/\pi_{o,t}\) and \(f_{c,t}^{in}/f_{c,t}^{out}\) are the inlet/outlet pressures and gas flows, \(\gamma_{c}\) is the maximum compression factor for the specified direction, and \(\alpha_{c}\) is the gas consumption factor, which equals zero for electricity-driven compressors and \(0.03-0.05\) for gas-driven compressors.
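The simplified compressor model (2.5)-(2.6) can be illustrated with a short sketch that maps an inlet operating point to the admissible outlet quantities; the compression and consumption factors below are assumed example values.

```python
# Sketch of the simplified compressor model (2.5)-(2.6).
# gamma_c and alpha_c are assumed example values, not thesis data.

def compressor_outlet(pi_in, f_in, gamma_c=1.5, alpha_c=0.04):
    """Return the outlet flow (2.6) and the admissible outlet pressure range (2.5)."""
    f_out = (1.0 - alpha_c) * f_in       # a gas-driven compressor consumes alpha_c of the flow
    pi_out_min = pi_in                   # outlet pressure cannot fall below the inlet pressure
    pi_out_max = gamma_c * pi_in         # ... and cannot exceed gamma_c times the inlet pressure
    return f_out, (pi_out_min, pi_out_max)

f_out, (lo, hi) = compressor_outlet(pi_in=40.0, f_in=0.8)   # bar, MSm^3/h
print(f"outlet flow = {f_out:.3f}, outlet pressure in [{lo:.1f}, {hi:.1f}] bar")
```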
Note that these simplifications do not greatly affect the solution accuracy for the coupled power and gas system; however, they provide a more tractable model that can be solved to optimality.

#### Gas Nodes

A gas node is the connection point between gas system elements, including gas sources, gas loads, compressors and valves. It also connects the gas system with the power system through the coupling components, such as GPUs and P2G facilities. The gas pressure is the same for all sections of a node, i.e., the node has a constant gas flow per period and there are no gas dynamics within the node. The gas nodal balancing equation of gas node \(i\) at time \(t\) is given by \[\sum_{w\in\mathcal{W}(i)}f_{w,t}+\sum_{p\in\mathcal{P}_{1}(i)}f_{p,t}^{out}-\sum_{p\in\mathcal{P}_{2}(i)}f_{p,t}^{in}+\sum_{s\in\mathcal{S}(i)}(f_{s,t}^{out}-f_{s,t}^{in})+\sum_{c\in\mathcal{C}_{1}(i)}f_{c,t}^{out}\] \[\quad-\sum_{c\in\mathcal{C}_{2}(i)}f_{c,t}^{in}+\sum_{z\in\mathcal{Z}(i)}\varrho_{z,t}=\sum_{u\in\mathcal{U}_{g}(i)}\rho_{u,t}+\sum_{d\in\mathcal{D}_{g}(i)}F_{d,t},\ \forall i,t, \tag{2.7}\] where \(f_{w,t}\) is the gas flow injected from gas well \(w\); \(f_{p,t}^{in}/f_{p,t}^{out}\), \(f_{s,t}^{in}/f_{s,t}^{out}\) and \(f_{c,t}^{in}/f_{c,t}^{out}\) are the in-/outlet gas flows of pipeline \(p\), storage \(s\) and compressor \(c\), respectively; \(\varrho_{z,t}\) is the gas produced by P2G facility \(z\); \(\rho_{u,t}\) is the gas utilized by GPU \(u\); and \(F_{d,t}\) is gas load \(d\) at node \(i\) and time \(t\). \(\mathcal{W}(i),\ \mathcal{S}(i),\ \mathcal{Z}(i)\) and \(\mathcal{D}_{g}(i)\) are the subsets of gas wells, gas storage, P2G units and gas demands connected with node \(i\); and \(\mathcal{P}_{1}(i)/\mathcal{P}_{2}(i)\) and \(\mathcal{C}_{1}(i)/\mathcal{C}_{2}(i)\) are the subsets of pipelines and compressors whose ending/beginning terminals are connected with node \(i\), respectively.

Additionally, the gas pressures should lie in a specified range according to the consumer requirements, technical restrictions or signed contracts. Therefore, the gas pressure \(\pi_{i,t}\) of node \(i\) at time \(t\) is limited by the nodal upper and lower bounds, i.e., \(\underline{\Pi}_{i}\) and \(\overline{\Pi}_{i}\), as follows: \[\underline{\Pi}_{i}\leq\pi_{i,t}\leq\overline{\Pi}_{i},\ \forall i,t. \tag{2.8}\]

#### Gas Loads

Natural gas is consumed by final users at any gas node. The gas pressure is reduced to a contractual value for distribution. Gas loads are contracted with different levels and priorities that should be handled by the system operator in an effective way. In some situations, the gas system cannot satisfy all demands; then, higher-priority clients are served first, by adjusting the penalties of the unserved gas loads (\(\triangle f_{d,t}\)) in the objective function of the optimization problem and relaxing the gas nodal balancing equation as \[\sum_{w\in\mathcal{W}(i)}f_{w,t}+\sum_{p\in\mathcal{P}_{1}(i)}f_{p,t}^{out}-\sum_{p\in\mathcal{P}_{2}(i)}f_{p,t}^{in}+\sum_{s\in\mathcal{S}(i)}(f_{s,t}^{out}-f_{s,t}^{in})+\sum_{c\in\mathcal{C}_{1}(i)}f_{c,t}^{out}\] \[\quad-\sum_{c\in\mathcal{C}_{2}(i)}f_{c,t}^{in}+\sum_{z\in\mathcal{Z}(i)}\varrho_{z,t}=\sum_{u\in\mathcal{U}_{g}(i)}\rho_{u,t}+\sum_{d\in\mathcal{D}_{g}(i)}(F_{d,t}-\triangle f_{d,t}),\ \forall i,t. \tag{2.9}\]
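The role of the shedding variable in the relaxed balance (2.9) can be sketched as follows; the aggregated injections and loads are hypothetical single-node quantities, so the node-indexed sets of (2.7) are not modeled explicitly.

```python
# Sketch of the relaxed nodal balance (2.9) for a single node and period.
# The aggregated terms are passed in directly; in the full model they are sums
# over the sets W(i), P1(i)/P2(i), S(i), C1(i)/C2(i), Z(i), U_g(i), D_g(i).

def required_load_shedding(supply_in, supply_out, gpu_use, total_load):
    """Return the unserved gas (Delta f) needed to close the nodal balance."""
    net_injection = supply_in - supply_out - gpu_use
    shed = total_load - net_injection        # (2.9) rearranged for the shedding term
    return max(0.0, shed)                    # shedding cannot be negative

# Example: the injections fall 0.3 short of the load, so 0.3 must be shed.
print(required_load_shedding(supply_in=5.0, supply_out=1.2, gpu_use=1.1, total_load=3.0))
```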
#### Gas Pipelines

The most important and critical elements in the gas system are the pipelines, because of the pressure gradients and physical characteristics of natural gas and the large quantity of gas stored within them. Pipelines can be categorized into three major types according to their location: i) gathering pipelines, which are installed in the production plants to collect the gas from the wellheads to the refining stations; ii) transmission pipelines, which deliver the refined gas in large quantities to the market area, i.e., the distribution systems, and have wide diameters and long lengths; iii) distribution pipelines, which are used to distribute the transmitted gas to the final users. Because the refining process is not included in the gas model, as discussed above, the gathering pipelines are not modeled in this study. The typical values of the working pressures/diameters of pipelines in the transmission and distribution systems are \(1.5\times 10^{5}\)\(\sim\)\(8.5\times 10^{5}\,\mathrm{Pa}/0.15\)\(\sim\)\(1.22\,\mathrm{m}\) and \(0.4\times 10^{5}\)\(\sim\)\(1.5\times 10^{5}\,\mathrm{Pa}/0.025\)\(\sim\)\(0.61\,\mathrm{m}\), respectively (see Footnotes 1 and 2).

Footnote 1: [https://naturalgas.org/](https://naturalgas.org/)

Footnote 2: [https://blog.miragemachines.com/types-of-pipeline-every-oil-and-gas-engineer-should-know-about](https://blog.miragemachines.com/types-of-pipeline-every-oil-and-gas-engineer-should-know-about)

Compared with power flow, gas flow travels at limited velocities due to the slow dynamics of the gas system. Therefore, gas flow needs a response time to be delivered, and this circumstance should be considered in modeling the short-term operation. The optimization of gas dynamics is found in the existing literature under different names, such as transient optimization, time-dependent optimization, the partial differential equation (PDE) gas problem, nonlinear mixed-integer optimization, and gas dynamics optimization. For the sake of clarity, in this thesis the gas system is described by either the dynamic-state or the steady-state gas model, where the latter neglects the line pack. These two models are introduced in the following sections.

To represent the gas system dynamics, a set of PDEs is derived from the physics of the gas particles. This set guarantees that the system is affected by the transportation process only, and that there is no lost/gained energy or gas mass. According to [61], this set can be defined as summarized in Appendix A.2. The PDEs of the gas dynamics cannot be directly incorporated into a power system optimization problem. In what follows, the dynamic- and steady-state gas flow models are presented in tractable algebraic formulations.

#### Dynamic-State Gas Flow Model

The equations in Appendix A.2, namely the continuity equation, momentum equation, energy equation and state equation, express the gas dynamics in detail in time and space. In order to derive a tractable formulation of the gas optimization model, some simplifications, which have previously been employed by researchers to achieve acceptable results and satisfy the gas industry requirements, are adopted. These simplifications are as follows:

1. The gas temperature is assumed to be equal to that of the surroundings [73], considering that gas pipelines are close to the ground, or as a result of the slow dynamics of the gas. According to [74], this assumption may introduce results with an error of up to \(2\%\). Note that, with this assumption, the energy equation holds, i.e., it can be dropped from the PDE set.
2. All pipelines are installed horizontally [66, 68, 41]. Therefore, the second term of (A.6) is constant, and the terms in (A.7) that include the pipeline height \(h\) become less challenging.
One can refer to [75, 76, 77] for modeling and solving gas dynamic models with inclined pipelines.

3. The accelerating forces in the momentum equation, i.e., the kinetic energy \(\upsilon\partial\lambda/\partial t\) and the gravity force \(\partial(\lambda\upsilon^{2})/\partial x\), amount to less than \(1\%\) of the friction force \(G\lambda\partial h/\partial x\) [73]. Therefore, the third and fourth terms of this equation can be dropped, and the gas pressure-flow relationship depends only on the friction of the pipelines [66, 68, 47].
4. Based on the first assumption, the compressibility function in the state equation can be linearized in terms of pressure only, or it can be assumed to be a fixed value based on the pressure range [60, 61, 68].
5. The widely used Darcy friction factor is adopted. Therefore, the friction factor \(F\), on the right-hand side of the momentum equation, can be calculated by the Colebrook-White formula or its modified version (see equations (2.16)-(2.17) in [61]).

By adopting the above simplifications, and replacing the density and velocity with the mass flow rate, i.e., \(f=\frac{\tilde{\pi}D^{2}}{4}\frac{\lambda\upsilon}{\rho_{0}}\), the continuity and momentum equations are expressed in simplified forms. However, these forms are still PDEs, which cannot be incorporated into the optimization problem. Therefore, they need to be converted into ordinary algebraic equations by discretizing the PDEs in time and space. The discretization methods for transient equations can be explicit or implicit. The explicit method uses the current variables to calculate the next ones, and it imposes restrictions on the time step [41, 73]. The implicit method is based on finite-difference approximations (i.e., several short pipeline segments) or finite-volume schemes (i.e., several finite volumes in a meshed geometry), which offer numerical stability for large gas networks [41, 60, 68, 69]. Owing to the simplicity, applicability and stability of the finite-difference approximations, a novel set of algebraic equations that represents the gas flow dynamics is formulated in [61]. Figure 2.3 depicts the average and terminal values of the pressures and gas flows for a pipeline discretized in space.

Figure 2.3: Discretization of the PDEs, indicating the terminal and average values of pressures and gas flow.

The general flow equation, known as the Weymouth equation, is derived from the momentum equation. The Weymouth equation describes the relationship between the average gas flow and the terminal pressures of a pipeline. It is defined as \[f_{p,t}|f_{p,t}| =\chi_{p}^{f}(\pi_{i,t}^{2}-\pi_{o,t}^{2}),\ \forall p,t,(i,o)\in p \tag{2.10}\] \[f_{p,t} =\frac{f_{p,t}^{in}+f_{p,t}^{out}}{2},\ \forall p,t, \tag{2.11}\] where \(f_{p,t}\) is the average gas flow rate inside pipeline \(p\in\mathcal{P}\) at time \(t\in\mathcal{T}\), \(\mathcal{P}\) and \(\mathcal{T}\) are the sets of gas pipelines and time intervals, and \(\pi_{i,t}\) and \(\pi_{o,t}\) are the terminal pressures of a pipeline connected with gas nodes \(i\) and \(o\), respectively. \(\chi_{p}^{f}\) is the Weymouth equation coefficient, which depends on the physical characteristics of the pipeline.
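Equation (2.10) ties the signed average flow to the difference of the squared terminal pressures, which can be read directly as a signed square root. A minimal sketch, with \(\chi_{p}^{f}\) treated as a known constant (its calculation is given in (2.12) below) and hypothetical numbers, is:

```python
import math

# Sketch of the Weymouth relation (2.10)-(2.11): recover the signed average flow
# from the terminal pressures, with chi_f treated as a known pipeline constant.
# The numbers in the example are hypothetical (pressures in bar, flow in MSm^3/h).

def weymouth_average_flow(pi_i, pi_o, chi_f):
    """f |f| = chi_f (pi_i^2 - pi_o^2)  =>  f = sign(.) * sqrt(chi_f |pi_i^2 - pi_o^2|)."""
    dp2 = pi_i**2 - pi_o**2
    return math.copysign(math.sqrt(chi_f * abs(dp2)), dp2)

def terminal_flows(f_avg, delta):
    """Invert (2.11) for a given average flow and an assumed inlet/outlet split delta."""
    return f_avg + delta, f_avg - delta      # f_in, f_out with (f_in + f_out)/2 = f_avg

print(weymouth_average_flow(pi_i=55.0, pi_o=50.0, chi_f=2.0e-3))
print(terminal_flows(f_avg=1.0, delta=0.1))
```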
The Weymouth coefficient \(\chi_{p}^{f}\) is calculated by \[\chi_{p}^{f}=\Phi^{f}\left(\frac{\tilde{\pi}}{4}\right)^{2}\frac{T_{0}}{\pi_{0}\rho_{air}}\frac{D_{p}^{5}}{L_{p}F_{p}TZG_{s}},\ \forall p \tag{2.12}\] where \(\tilde{\pi}\approx 3.1416\) is the mathematical constant; the base temperature is \(T_{0}=\)\(273.15\,\mathrm{K}\), the base pressure is \(\pi_{0}=\)\(1.01325\,\mathrm{bar}\), the base density of air is \(\rho_{air}=\)\(1.2922\,\mathrm{kg}\,\mathrm{m}^{-3}\), the gas temperature is \(T=\)\(281.15\,\mathrm{K}\), the relative gravity of the gas is \(G_{s}=\)\(0.6106\), and the compressibility factor adopted in the thesis models is \(Z=\)\(0.8\). \(D_{p},\ L_{p}\) and \(F_{p}\) are the physical parameters of pipeline \(p\), namely its diameter, length and friction factor. Finally, \(\Phi^{f}\) is the unit conversion factor; for example, if the pressure and gas flow rate units are \(\mathrm{bar}\) and \(\mathrm{MSm}^{3}/\mathrm{h}\), respectively, then \(\Phi^{f}=(3600)^{2}\times 10^{-12}\).

The approximated continuity equation depicts the relationship between the gas flow difference of a pipeline in space, i.e., the difference between the in- and outlet gas flows, and the average pressure difference in time, i.e., the difference in gas pressure between the current and previous time intervals. Therefore, it originates a time-dependent optimization model, which may introduce computational challenges. However, it formulates the gas stored in pipelines, known as the line pack, by considering the gas flow difference in space. The line pack provides additional operating flexibility for the gas system operator to instantaneously balance the gas system in case of contingencies, peak demands or varying loads, under insufficient gas production schedules, which are based on longer time intervals (\(15\)-\(120\) minutes). Subsequently, the continuity equation can be expressed in terms of the line pack as \[m_{p,t}=\chi_{p}^{m}(\pi_{i,t}+\pi_{o,t}),\ \forall p,t,(i,o)\in p \tag{2.13}\] \[f_{p,t}^{in}-f_{p,t}^{out}=m_{p,t}-m_{p,t-1},\ \forall p,t, \tag{2.14}\] where \(m_{p,t}\) is the mass of gas stored in pipeline \(p\) at time \(t\) and \(\chi_{p}^{m}\) is the line pack coefficient, which is calculated by \[\chi_{p}^{m}=\Phi^{m}\frac{\tilde{\pi}}{8}\frac{T_{0}}{\pi_{0}}\frac{D_{p}^{2}L_{p}}{TZ},\ \forall p \tag{2.15}\] where \(\Phi^{m}\) is a unit conversion factor; for example, if the pressure and gas flow rate units are \(\mathrm{bar}\) and \(\mathrm{MSm^{3}/h}\), respectively, then \(\Phi^{m}=3600\times 10^{-6}\).

Based on the above discussion, the dynamic-state model is summarized as follows:

* _Gas production capacities_: (2.1).
* _Gas storage constraints_: working volume limits (2.2)-(2.3) and in-/outlet flow capacities (2.4).
* _Gas compressor constraints_: compression and consumption constraints (2.5)-(2.6).
* _Nodal balancing equation_: (2.7), or (2.9) with gas load shedding.
* _Nodal pressure bounds_: (2.8).
* _Weymouth equation_: (2.10) and average flow rate (2.11).
* _Line pack_: (2.13).
* _Continuity equation_: (2.14).
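To make the dynamic-state model concrete, the following sketch evaluates the Weymouth coefficient (2.12) and the line pack coefficient (2.15) using the physical constants quoted above; the pipeline diameter, length and friction factor are assumed example values, not data from a specific test system.

```python
import math

# Numerical sketch of the coefficients (2.12) and (2.15).
# Physical constants are those quoted in the text; the pipeline data (D, L, F)
# are assumed example values.

T0, PI0   = 273.15, 1.01325          # base temperature [K] and base pressure [bar]
RHO_AIR   = 1.2922                   # base density of air [kg/m^3]
T, GS, Z  = 281.15, 0.6106, 0.8      # gas temperature, relative gravity, compressibility
PHI_F     = (3600.0**2) * 1e-12      # unit conversion for (2.12): bar and MSm^3/h
PHI_M     = 3600.0 * 1e-6            # unit conversion for (2.15)

def weymouth_coefficient(D, L, F):
    """chi_p^f from (2.12); D and L in metres, F dimensionless."""
    return PHI_F * (math.pi / 4.0)**2 * T0 / (PI0 * RHO_AIR) * D**5 / (L * F * T * Z * GS)

def linepack_coefficient(D, L):
    """chi_p^m from (2.15)."""
    return PHI_M * (math.pi / 8.0) * (T0 / PI0) * D**2 * L / (T * Z)

D, L, F = 0.6, 80_000.0, 0.01        # example pipeline: 0.6 m diameter, 80 km, friction 0.01
print(weymouth_coefficient(D, L, F), linepack_coefficient(D, L))
```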
#### Steady-State Gas Flow Model

When the transient fluctuations are neglected, the resulting formulation is the steady-state gas flow model. Compared with the dynamic-state gas model, the steady-state one does not consider the time-dependent equations, i.e., the continuity equation (2.14). Therefore, the line pack is neglected or assumed to be fixed; consequently, the inlet and outlet gas flows are the same (\(f_{p,t}^{in}=f_{p,t}^{out}\)), according to (2.14). Moreover, the model depends on \(\pi_{i,t}^{2}\) instead of \(\pi_{i,t}\), as (2.13) is dropped. Therefore, as suggested in [62], a simple substitution can be adopted to simplify the steady-state model, where the nonlinear variables \(\pi_{i,t}^{2},\ \forall i,t\), are replaced with linear ones \(\bar{\pi}_{i,t},\ \forall i,t\). The resulting formulation is then the following:

* _Gas production capacities_: (2.1).
* _Gas storage constraints_: working volume limits (2.2)-(2.3) and in-/outlet flow capacities (2.4).
* _Gas compressor constraints_: the compression constraints become \[\bar{\pi}_{i,t}\leq\bar{\pi}_{o,t}\leq\gamma_{c}^{2}\bar{\pi}_{i,t},\forall c,t,(i,o)\in c,\] (2.16) and the consumption constraints are defined in (2.6).
* _Nodal balancing equation_: considering the gas load shedding, \[\sum_{w\in\mathcal{W}(i)}f_{w,t}+\sum_{p\in\mathcal{P}_{1}(i)}f_{p,t}-\sum_{p\in\mathcal{P}_{2}(i)}f_{p,t}+\sum_{s\in\mathcal{S}(i)}(f_{s,t}^{out}-f_{s,t}^{in})+\sum_{c\in\mathcal{C}_{1}(i)}f_{c,t}^{out}\] \[\quad-\sum_{c\in\mathcal{C}_{2}(i)}f_{c,t}^{in}+\sum_{z\in\mathcal{Z}(i)}\varrho_{z,t}=\sum_{u\in\mathcal{U}_{g}(i)}\rho_{u,t}+\sum_{d\in\mathcal{D}_{g}(i)}(F_{d,t}-\triangle f_{d,t}),\;\forall i,t,\] (2.17)
* _Nodal pressure bounds_: \[\underline{\Pi}_{i}^{2}\leq\bar{\pi}_{i,t}\leq\overline{\Pi}_{i}^{2},\;\forall i,t,\] (2.18)
* _Weymouth equation_: \[f_{p,t}|f_{p,t}|=\chi_{p}^{f}(\bar{\pi}_{i,t}-\bar{\pi}_{o,t}),\;\forall p,t,(i,o)\in p.\] (2.19)

Equations (2.16) and (2.18) are obtained by squaring each term in (2.5) and (2.8), respectively. In the nodal balance equation (2.17), compared with (2.9), the inlet and outlet gas flows of each pipeline are replaced with the average one, and the average flow rate equation (2.11) is dropped, as \(f_{p,t}=f_{p,t}^{in}=f_{p,t}^{out}\).

Although the steady-state gas flow model introduces inaccurate decisions for the gas system operator, because it neglects the slow dynamics of the gas flow and disregards the line pack, it merits study for several reasons:

* The simplicity of the mathematical model allows it to be easily incorporated with power system optimization problems. This model can be adopted to find a quick solution for the main variables of the gas system. Therefore, it is commonly employed in the existing coupled power and gas studies.
* From a planning perspective, this model identifies acceptable solutions for large-scale systems within reasonable times. Because it neglects the line pack, which provides economic benefits to the system, its solutions are conservative, as they consider the worst-case line pack scenario, which is zero.
* At the distribution level, the line pack quantities are small; therefore, this model can find high-quality decisions for system control and operation. Table 2.1 presents a comparison between the transmission and distribution levels in terms of the gas system parameters, including nodal pressures and pipeline dimensions. The range of gas stored within one unit length (\(1\,\mathrm{m}\)) is calculated by (2.13) for the given ranges of pressures and diameters. It is quite clear that the gas dynamics can be neglected at the distribution level under low pressures.
* In this thesis, many studies are presented to solve the dynamic-state gas flow model, and numerical simulations are conducted to show the effectiveness of considering the gas flow dynamics. Therefore, the steady-state gas flow model is discussed as a reference.

\begin{table} \begin{tabular}{c c c c} \hline \hline Level & Diameter (inch) & Pressure (psi) & Line pack (Sm\({}^{3}\)/m) \\ \hline Transmission & 6 \(\sim\) 48 & 200 \(\sim\) 1200 & 0.4 \(\sim\) 140 \\ \hline Distribution & 1 \(\sim\) 24 & 6 \(\sim\) 200 & 0.0003 \(\sim\) 6 \\ \hline \hline \end{tabular} \end{table} Table 2.1: Typical values of gas system parameters

### 2.2 Electric Power Systems Modeling

The power system is modeled as a set of nodes (buses) interconnected by a set of branches, where the branches represent the power lines, transformers and cables, and the buses are the physical points for connecting the system components, including generators and loads. In the electric power system, identifying the optimal power flow (OPF) is one of the fundamental issues in power system planning, operation and markets. In this section, due to its importance in the coupled power and gas system modeling, the OPF is discussed, including its definitions, extensions, applications, and mathematical formulations.

#### Optimal Power Flow

The OPF optimization problem was first introduced in \(1962\) by Carpentier [78]. It is well studied in the literature, formulating the physical and economic constraints according to the electrical laws and engineering decisions (see the recent surveys [79, 80, 81, 82]). Figure 2.4 schematically displays the optimization and control procedures for power system management, indicating that the accuracy of the OPF is important at short time intervals. The OPF is applied for long-term transmission-level planning decisions, security-constrained unit commitment (SCUC), and day-ahead/real-time economic dispatch (ED). Before presenting the mathematical formulations of the OPF models, the conventional power flow (CPF) is first discussed.

Figure 2.4: Optimization and control procedures for power system planning, operation and market

#### Conventional Power Flow

The CPF computes a feasible solution of the system equations, neglecting the operational costs, i.e., without an objective function. Its compact form can be expressed as \[\sum_{u\in\mathcal{U}(n)}p_{u,t}+\sum_{l\in\mathcal{L}(n)}f_{l_{p},t}(\mathbf{v},\mathbf{\theta})=\sum_{d\in\mathcal{D}_{p}(n)}P_{d,t},\ \forall n,t, \tag{2.20}\] \[\sum_{u\in\mathcal{U}(n)}q_{u,t}+\sum_{l\in\mathcal{L}(n)}f_{l_{q},t}(\mathbf{v},\mathbf{\theta})=\sum_{d\in\mathcal{D}_{p}(n)}Q_{d,t},\ \forall n,t, \tag{2.21}\] where \(p_{u,t}\) and \(q_{u,t}\) are the active and reactive power generated by unit \(u\) at time \(t\); \(P_{d,t}\) and \(Q_{d,t}\) are the active and reactive power of demand \(d\); and \(f_{l_{p},t}\) and \(f_{l_{q},t}\) are the active and reactive power flow functions of line \(l\), respectively. These functions depend on the vectors of the voltage magnitudes \(\mathbf{v}\) and phase angles \(\mathbf{\theta}\) at the system buses, and their detailed expressions are discussed in Section 2.2.2. \(\mathcal{U}(n),\ \mathcal{L}(n)\) and \(\mathcal{D}_{p}(n)\) are the subsets of power units, power lines and power demands connected with bus \(n\), respectively. For each bus, there are four variables, namely the net active power \(p_{n,t}\), net reactive power \(q_{n,t}\), voltage magnitude \(v_{n,t}\) and phase angle \(\theta_{n,t}\).
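Although the detailed expressions of \(f_{l_{p},t}\) and \(f_{l_{q},t}\) are developed later through the branch flow model, the nodal quantities in (2.20)-(2.21) can be made concrete using the standard textbook bus-injection (polar) form. The sketch below uses a hypothetical two-bus network and is given only for illustration; it is not the formulation adopted in this thesis.

```python
import numpy as np

# Illustration of the balance quantities in (2.20)-(2.21) via the standard
# bus-injection (polar) power flow form; the thesis itself develops the branch
# flow expressions in Section 2.2.2, so this form is shown only as an example.

def net_injections(Y, v, theta):
    """Return (P, Q) net injections given the bus admittance matrix Y, magnitudes v, angles theta."""
    G, B = Y.real, Y.imag
    n = len(v)
    P, Q = np.zeros(n), np.zeros(n)
    for i in range(n):
        for k in range(n):
            dth = theta[i] - theta[k]
            P[i] += v[i] * v[k] * (G[i, k] * np.cos(dth) + B[i, k] * np.sin(dth))
            Q[i] += v[i] * v[k] * (G[i, k] * np.sin(dth) - B[i, k] * np.cos(dth))
    return P, Q

# Two-bus example (per unit): a single line with series admittance 1 - 5j, no shunts.
y = 1.0 - 5.0j
Y = np.array([[y, -y], [-y, y]])
P, Q = net_injections(Y, v=np.array([1.0, 0.98]), theta=np.array([0.0, -0.05]))
print(P, Q)   # bus 1 exports roughly what bus 2 imports, up to small losses
```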
To find a deterministic solution of the CPF, two out of the four variables are fixed by assigning the system buses to one of the following three bus types [81]: i) the slack bus, which sets a voltage reference for all system buses, i.e., \(v_{n,t}=1\) p.u. and \(\theta_{n,t}=0\); ii) load buses, at which the net power is fixed, i.e., \(p_{n,t}=P_{n,t}\) and \(q_{n,t}=Q_{n,t}\); and iii) voltage-controlled buses, at which the active power and voltage magnitude are fixed by a local reactive source acting as a voltage regulator, i.e., \(p_{n,t}=p_{n,t}^{*}\) and \(v_{n,t}=v_{n,t}^{*}\). As the real power injection at the slack bus is left free in order to provide a feasible solution of the CPF, only one slack bus can be assumed in the system model. It should be noted that the CPF is a deterministic problem that is solved with a number of equations equal to the number of unknowns; however, it may provide an impractical solution, such as a negative voltage magnitude or excessive power generation.

#### Optimal Power Flow Versions

Compared with the CPF, the OPF is an optimization problem that combines the CPF with an objective function and a set of technical constraints to avoid any physical or technical violation [79]. Several OPF versions are found in the existing works, including: i) static OPF, which handles the problem over a single time interval [83]; ii) dynamic OPF, which handles the problem over multiple time periods, i.e., a multi-period optimization problem [84]; iii) transient stability-constrained OPF, which combines the static and dynamic OPF models in the same problem [85]; iv) security-constrained OPF, which considers the system constraints under contingencies [86]; v) deterministic OPF, which neglects any uncertainty in the power system parameters; vi) stochastic or robust OPF, which considers the uncertainties of the power system [87]; vii) AC-OPF, which accurately formulates the system power flow equations, considering the reactive power injections, transmission losses and voltage constraints [88]; viii) DC-OPF, which simplifies the AC-OPF; ix) mixed AC/DC OPF, which is adopted in integrated AC-DC grids [89]; x) multi-phase OPF, which considers \(n\) conductors in the optimization problem [90]; xi) unbalanced three-phase OPF, which is adopted for unbalanced distribution systems [91]. The above list does not cover all OPF versions, and other extensions can be obtained by combining two or more versions from it, for example a dynamic stochastic OPF. In fact, the OPF version is selected based on the solution accuracy, reliability and optimality, which are mainly influenced by the objective function. Different objective functions are reported to consider one or more sub-objectives, such as generation costs, power losses, voltage violations, reactive power costs, carbon emissions, power shedding, energy reserves, and energy imports.

##### Optimal Power Flow Applications

Most of the recent OPF models are based on the classic formulations presented in [78], where a classic ED is formulated. The objective of the classic ED model is to minimize the total operational costs of power production as \[\min_{p_{u,t},q_{u,t},v_{n,t},\theta_{n,t},p_{l,t},q_{l,t}} \sum_{t\in\mathcal{T}}\sum_{u\in\mathcal{U}}C_{u}(p_{u,t}) \tag{2.22}\] \[s.t.
\text{(2.20)}-\text{(2.21)},\] (2.23) \[\underline{P}_{u}\leq p_{u,t}\leq\overline{P}_{u},\ \forall u,t,\ \ \underline{Q}_{u}\leq q_{u,t}\leq\overline{Q}_{u},\ \forall u,t,\] (2.24) \[\underline{V}_{n}\leq v_{n,t}\leq\overline{V}_{n},\ \forall n,t,\ \ \underline{\Theta}_{n}\leq\theta_{n,t}\leq\overline{\Theta}_{n},\ \forall n,t. \tag{2.25}\] where the production cost \(C_{u}(.)\) is a quadratic convex function, and \(\underline{P}_{u}/\overline{P}_{u},\ \underline{Q}_{u}/\overline{Q}_{u},\ \underline{V}_{n}/\overline{V}_{n}\) and \(\underline{\Theta}_{n}/\overline{\Theta}_{n}\) are the minimum/maximum physical and technical limits of the power generation and bus voltages, respectively.

Besides the classic ED model, there are many applications of the OPF that can be employed to overcome difficulties in operation, control, planning and marketing. These applications include: i) optimal reactive power flow, known as VAR control, which includes the effects of tap-changing and phase-shifting transformers [79]; ii) reactive power planning, in which new reactive power sources are optimally allocated [80]; iii) network-constrained unit commitment (NCUC), which couples the unit commitment (UC) problem with the power flow equations [92]. The UC problem refers to the optimal operating schedule (on-off status) of all power units. Appendix A.3 presents the tight, compact and computationally efficient UC model proposed by Morales-Espana et al. [93]. In this thesis, this model is employed to obtain predefined UC decisions to be used in the proposed operational models for the coupled power and gas systems; iv) security-constrained ED (SCED), which identifies an optimal ED considering the power system contingencies [94].

#### AC Optimal Power Flow Model

The AC-OPF is the transformation of the complex power flow equations into algebraic ones to be employed in a mathematical optimization problem. There are two different models for the AC-OPF, namely the bus injection model and the branch flow model. The bus injection model is the most compact form; however, the branch flow model is more convenient to reformulate into convex relaxation OPF problems. It should be noted that this thesis concentrates on solving the branch flow model, whose formulation is provided in this subsection, while the possible formulations of the bus injection model are given in Appendix A.4.

The branch flow model is derived from the voltage-current relationship of a power line \(l\) connected with two buses \(m\) and \(n\), which can be expressed as \[\vec{i}_{l,t}=\vec{y}_{l}(\vec{v}_{m,t}-\vec{v}_{n,t}),\ \forall l,(m,n)\in l,t, \tag{2.26}\] where \(\vec{i}_{l,t},\ \vec{v}_{n,t}\) and \(\vec{y}_{l}\) are the phasors of the branch current, bus voltage and branch admittance, respectively. Baran and Wu introduced the original branch flow model to optimize the capacitor allocation problem in \(1989\) [95]. The branch flow model is a set of three complex equations: \[\vec{v}_{n,t}-\vec{v}_{m,t}=\vec{z}_{l}\,\vec{i}_{l,t},\ \forall l,(m,n)\in l,t, \tag{2.27}\] \[\vec{s}_{l,t}=\vec{v}_{n,t}\,(\vec{i}_{l,t})^{\star},\ \forall l,(m,n)\in l,t,\] (2.28) \[\sum_{l\in\mathcal{L}(n)}f_{l_{p},t}(\boldsymbol{v},\boldsymbol{\theta})+j\sum_{l\in\mathcal{L}(n)}f_{l_{q},t}(\boldsymbol{v},\boldsymbol{\theta})=\sum_{l\in\mathcal{L}_{1}(n)}\vec{s}_{l,t}-\sum_{l\in\mathcal{L}_{2}(n)}(\vec{s}_{l,t}-\vec{z}_{l}\,i_{l,t}^{2})+\vec{y}_{n}^{h}\,v_{n,t}^{2},\ \forall n,t.
\tag{2.29}\] where \(\vec{z}_{l}=r_{l}+j\,x_{l}\) is the total series impedance of power line \(l\); \(\vec{y}_{n}^{h}=G_{n}+j\,B_{n}\) is the total shunt admittance at bus \(n\); \(\mathcal{L}_{1}(n)\) and \(\mathcal{L}_{2}(n)\) are the subsets of power lines whose final and initial terminals are connected with bus \(n\), respectively; \(v_{n,t}\) and \(i_{l,t}\) are the magnitudes of the vectors \(\vec{v}_{n,t}\) and \(\vec{i}_{l,t}\), respectively; \(\vec{s}_{l,t}\) is the apparent power of branch \(l\); and \({}^{\star}\) denotes complex conjugation.

The set (2.27)-(2.29) can be decomposed into pairs of real and imaginary terms as follows: i) using the squared voltage variable \(\bar{v}_{n}=v_{n}^{2}\) instead of \(v_{n}\); ii) using the squared current variable \(\bar{i}_{l}=i_{l}^{2}\) instead of \(i_{l}\); iii) incorporating the nodal balancing equations (2.20)-(2.21) into the resulting forms. Therefore, the complete AC branch flow-based OPF model is \[\sum_{u\in\mathcal{U}(n)}p_{u,t}+\sum_{l\in\mathcal{L}_{1}(n)}(p_{l,t}-r_{l}\bar{i}_{l,t})-\sum_{l\in\mathcal{L}_{2}(n)}p_{l,t}-G_{n}\bar{v}_{n,t}=\sum_{d\in\mathcal{D}_{p}(n)}(P_{d,t}-\triangle p_{d,t}),\ \forall n,t, \tag{2.30}\] \[\sum_{u\in\mathcal{U}(n)}q_{u,t}+\sum_{l\in\mathcal{L}_{1}(n)}(q_{l,t}-x_{l}\bar{i}_{l,t})-\sum_{l\in\mathcal{L}_{2}(n)}q_{l,t}-B_{n}\bar{v}_{n,t}=\sum_{d\in\mathcal{D}_{p}(n)}(Q_{d,t}-\triangle q_{d,t}),\ \forall n,t, \tag{2.31}\] \[\bar{v}_{n,t}=\bar{v}_{m,t}-2(r_{l}\,p_{l,t}+x_{l}\,q_{l,t})+(r_{l}^{2}+x_{l}^{2})\,\bar{i}_{l,t},\ \forall l,t, \tag{2.32}\] \[p_{l,t}^{2}+q_{l,t}^{2}=\bar{v}_{n,t}\,\bar{i}_{l,t},\ \forall l,t, \tag{2.33}\] where the active power shedding \(\triangle p_{d,t}\) and reactive power shedding \(\triangle q_{d,t}\) are considered. For the sake of simplicity, the squared voltage and current variables are directly denoted as \(v_{n,t}\) and \(i_{l,t}\) in the thesis studies when the branch flow model is adopted.

In order to fully prepare the AC-OPF model for inclusion in an optimization problem, a set of boundary constraints is required to define the upper and lower limits of the decision variables. The bus voltages are bounded by the physical and engineering limits as \[\underline{V}_{n}^{2}\leq v_{n,t}\leq\overline{V}_{n}^{2},\ \forall n,t, \tag{2.34}\] and the branch current is limited by the power line capacity as \[\underline{I}_{l}^{2}\leq i_{l,t}\leq\overline{I}_{l}^{2},\ \forall l,t. \tag{2.35}\]

#### DC Optimal Power Flow Model

The AC-OPF is a nonlinear and non-convex model, which brings additional complexity to the coupled power and gas system optimization problems. Therefore, the DC-OPF model, which is a linear approximation of the AC-OPF, is usually employed to formulate the power network constraints. It is named the DC-OPF because its equations resemble the power flows in a DC network; however, it still applies to AC networks. Some assumptions are required to derive the DC-OPF: i) the reactive power flow is neglected; ii) the power system is lossless, i.e., all power line resistances are very small (\(\approx 0\)); iii) there are reactive power sources that can regulate the voltage magnitudes of all system buses to be equal to \(1\) p.u.; and iv) the difference between the voltage angles of two connected buses is very small, i.e., \(\sin(\theta_{n}-\theta_{m})\approx\theta_{n}-\theta_{m}\). Therefore, the DC-OPF model is formulated as \[p_{l,t}=\frac{\theta_{m,t}-\theta_{n,t}}{x_{l}},\ \forall l,t,(m,n)\in l.
\tag{2.36}\] \[\sum_{u\in\mathcal{U}(n)}p_{u,t}+\sum_{l\in\mathcal{L}_{1}(n)}p_{l,t}-\sum_{l\in\mathcal{L}_{2}(n)}p_{l,t}=\sum_{d\in\mathcal{D}_{p}(n)}(P_{d,t}-\triangle p_{d,t}),\ \forall n,t.\] (2.37) \[-\tilde{\pi}\leq\theta_{n,t}\leq\tilde{\pi},\ \forall n,t,\] (2.38) \[-\overline{p}_{l,t}\leq p_{l,t}\leq\overline{p}_{l,t},\ \forall l,t. \tag{2.39}\]

It should be noted that, under normal operating conditions, the DC-OPF model provides quite accurate power flows with a low execution time due to its linearity. However, it may lead to inaccurate solutions for stressed power systems, where the bus angle differences are large and the bus voltages are overestimated. Besides, the active and reactive power are coupled at the distribution level, and the voltage is significantly influenced by the reactive power flows; therefore, adopting the DC-OPF model at the distribution level introduces considerable errors into the power system decisions. In this thesis, the DC-OPF and AC-OPF models are employed to model the power network at the transmission and distribution levels, respectively.
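Before turning to the coupled system, the DC-OPF of (2.36)-(2.39) can be illustrated with a minimal linear-programming sketch. The three-bus network, generator limits and linear costs below are hypothetical (the thesis cost function (2.22) is quadratic), and SciPy's linprog is used only as a convenient off-the-shelf LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal DC-OPF sketch in the spirit of (2.36)-(2.39), solved as a linear program.
# Network data, generator limits and linear costs are hypothetical example values.

# Lines: (from_bus, to_bus, reactance x_l, capacity)
lines = [(0, 1, 0.1, 1.0), (1, 2, 0.1, 1.0), (0, 2, 0.2, 1.0)]
load = np.array([0.0, 1.0, 0.0])          # p.u. demand per bus
gens = [(0, 1.5, 10.0), (2, 1.0, 30.0)]   # (bus, p_max, linear cost)

nb, nl, ng = len(load), len(lines), len(gens)
# Decision vector: [p_g (ng entries), theta (nb entries)]; bus 0 is the slack with theta = 0.
c = np.concatenate([[cost for _, _, cost in gens], np.zeros(nb)])

# Equality constraints: nodal balance  gen - sum of B * angle differences = load.
A_eq = np.zeros((nb + 1, ng + nb)); b_eq = np.zeros(nb + 1)
for g, (bus, _, _) in enumerate(gens):
    A_eq[bus, g] = 1.0
for (m, n, x, _) in lines:
    b = 1.0 / x                            # line susceptance
    A_eq[m, ng + m] -= b; A_eq[m, ng + n] += b
    A_eq[n, ng + n] -= b; A_eq[n, ng + m] += b
b_eq[:nb] = load
A_eq[nb, ng + 0] = 1.0                     # slack bus angle fixed to zero

# Inequality constraints: line flow limits  |(theta_m - theta_n)/x| <= cap, cf. (2.39).
A_ub = np.zeros((2 * nl, ng + nb)); b_ub = np.zeros(2 * nl)
for k, (m, n, x, cap) in enumerate(lines):
    A_ub[2 * k, ng + m], A_ub[2 * k, ng + n], b_ub[2 * k] = 1.0 / x, -1.0 / x, cap
    A_ub[2 * k + 1] = -A_ub[2 * k]; b_ub[2 * k + 1] = cap

bounds = [(0.0, pmax) for _, pmax, _ in gens] + [(-np.pi, np.pi)] * nb   # cf. (2.38)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:ng], res.fun)                 # optimal dispatch and cost
```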
### 2.3 Interdependent Power and Gas Systems

As discussed earlier, the shale gas revolution has been driven by developments in horizontal drilling and hydraulic fracturing technologies; therefore, gas prices are decreasing significantly [29]. Besides its economic advantages, growing concerns about potential global warming have raised the importance of natural gas in the global energy balance, owing to its low carbon emissions and high environmental benefits [26, 27, 28]. Natural gas has been promoted to the second largest energy source in the world [27]. Therefore, and as an outcome of the progress in deregulation and competition, new investments have entered the electricity markets through the installation of GPUs, due to their economical and quick construction, environmental friendliness, and high operational efficiency and flexibility, which are necessary to handle the power system uncertainties, such as demand response, renewable power generation (RPG) output fluctuations, and contingencies. The installation rate of GPUs has been very high; for example, it is expected that \(60\%\) of new electric power units will be fueled by natural gas by \(2035\) [27]. This explains why researchers have paid growing attention to modeling and solving the interdependent power and gas systems during the last \(20\) years; before that, the two systems were analyzed and optimized separately. Furthermore, advanced P2G facilities have recently been employed to effectively convert electricity into gas that can be stored, transported and reutilized by gas networks; P2G facilities are a well-qualified solution for providing more flexibility to mitigate RPG output fluctuations [33]. Therefore, the interactions between power and gas systems are being enhanced, and neglecting them may not provide the optimal decision for power system operators and may cause physical violations. In this section, an analogy between power and gas systems is presented, indicating the similarities and differences between the two energy systems, and the physical interactions (coupling components) are modeled. Finally, the existing coordination scenarios in operation and planning are provided.

It should be noted that modeling the economic interactions in the coupled system depends on the coordination strategy, the considered coupling components and the type of optimization technique, i.e., stochastic optimization (SO), robust optimization (RO) or distributionally robust optimization (DRO). These interactions are rarely introduced in the recent studies, and their models are systematically proposed in Chapters 4-5.

#### Similarities and Differences

The two energy systems, power and gas, are network industries, where the energy production units are connected to energy users through transmission or distribution sub-systems. These two networks share some similarities while also having distinct characteristics. Traditionally, electronic-hydraulic analogies have been prepared to explain how electricity works, with the electric components represented by hydraulic ones, since the electric current is invisible and the electrical operations are hard to illustrate. Nowadays, however, the modeling and solution techniques in power systems are more developed than those in natural gas systems. Therefore, a power-gas analogy is needed to understand the similarities and differences and to apply the previous knowledge of power system analysis. Table 2.2 presents a list of the equivalent variables, components and models that are employed in this study. In energy production, electricity is generated by non-GFUs, RPG sources and GFUs, and natural gas is produced from gas wells and P2G facilities, where GFUs and P2G facilities consume natural gas and electricity from the gas and power systems, respectively.

#### Coupling Components

GPUs, P2G facilities and electricity-driven compressors represent the linkages between the power and gas systems. More specifically, the electricity network relies on the gas network for providing the gas fuel to GPUs and for withdrawing the produced gas from P2G facilities, while the gas network relies on the power grid to supply the electricity-driven compressor stations that enhance the gas transportation process.

##### Gas-fired Power Units

GPUs are the most important and critical coupling components for the following reasons:

1. Compared with other power units, such as coal- and oil-fueled units, natural gas is usually not stockpiled on site. In other words, GPUs utilize the gas delivered from the gas network just in time [43]. Therefore, their operational flexibility directly depends on the gas network adequacy and capacity.
2. In industrial practice, there are two major types of gas delivery service, namely the firm and interruptible gas services, and GPUs usually get the latter due to its cost-effectiveness (Footnote 3) [56]. Moreover, the natural gas end-users have higher priority than GPUs to be supplied with gas. Therefore, under insufficient gas supply or tight pipeline capacity, peak gas loads can affect the interruptible gas delivery to GPUs, which introduces operational issues in the power system [96].

Footnote 3: [https://learn.pjm.com/three-priorities/keeping-the-lights-on/gas-electric-industry/natural-gas-electric-market.aspx](https://learn.pjm.com/three-priorities/keeping-the-lights-on/gas-electric-industry/natural-gas-electric-market.aspx)
3. Due to environmental concerns, large-scale renewable energy is expected to make a great contribution to the future energy system. However, this contribution introduces additional challenges for power system operation. In such conditions, GPUs are necessary to mitigate the renewable uncertainties by providing their flexible dispatchability and high ramping capacity. In addition, gas prices are decreasing significantly because of the shale gas revolution [29]. Therefore, the installation of GPUs has increased significantly around the world in the last decade, taking the largest share of new power generation capacity [37, 56, 97], and this is expected to continue in the future as well. In turn, the operational flexibility of the gas system is important to cope with the volatile gas demands of GPUs.

Three types of GFUs that are industrially employed are introduced in the existing studies, namely the single-cycle gas-fired turbine, the combined-cycle GFU, and dual-fuel units. The single-cycle turbine is the simplest GFU, where a combustion engine is employed to convert the gas into mechanical power, which is further transformed into electric power. The combined-cycle GFU comprises a steam unit and multiple gas turbines, where the waste heat from the turbines is utilized in the steam unit to increase the power plant efficiency [98, 99]. The dual-fuel power plant can switch from gas fuel to other fuel types under insufficient gas delivery. Therefore, such plants have the ability to cope with peak gas demands and to provide power system operational reliability and security [100].
##### Electric-driven Compressor Stations

As discussed in Section 2.1.1, compressor stations are installed along the pipelines to facilitate the gas transportation. They keep the gas pressure at its technical and contractual levels, which might otherwise drop due to pipeline friction, long distances and elevation differences. The detailed compressor model and its simplified version, as well as the solution methods, are presented in Appendix A.1 and Section 2.1.1.

#### Coordination Strategies

Due to the increasingly strong interdependencies between the power and gas systems (Footnotes 4 and 5), it may not be practical or technically reasonable to separately model and optimize the two energy systems without considering their physical and economic interactions. In order to address this interdependency, three types of coordination strategies are discussed in the existing studies:

Footnote 4: [https://www.pjm.com/markets-and-operations.aspx](https://www.pjm.com/markets-and-operations.aspx)

Footnote 5: [https://www.eia.gov/todayinenergy/detail.php?id=34612](https://www.eia.gov/todayinenergy/detail.php?id=34612)

1. **Power system optimization models considering physical constraints of gas system** In this coordination, the impact of the gas infrastructure capacities on the gas fuel delivery to GPUs is addressed. Due to the high priority of gas end-users, especially when the gas and power loads peak together, power system operating decisions may be sub-optimal or infeasible when the gas system operational constraints are not considered.
Therefore, quite a few researchers have incorporated the physical constraints of the gas system into power system optimization models, focusing on the security-constrained unit commitment (SCUC) problem. A simplified set of linear gas system constraints is included in the SCUC problem in [109], while the transient behavior of the gas system is considered in [110]. Gas marginal prices are defined to optimize the SCUC [111]. A SCUC problem is proposed to consider the AC-OPF and gas dynamics in a two-stage optimization model, from which the reactive power dispatch is optimally obtained. Gas network awareness is analyzed in [112] through a SCUC model.

2. **Gas system optimization models considering gas consumptions of the power system** In this coordination, the impact of the gas demands of GPUs on the nodal pressure levels, line pack and gas system operational security has been investigated. Specifically, the time-varying gas consumptions are incorporated in the gas system optimization models to explore the risk of RPG fluctuations [113]. In [114], a gas network simulator combined with a stochastic optimization technique is established to find the optimal planning strategy under future uncertainties introduced by GPU demands. The PDEs of gas dynamics have been discretized in [115] to find the optimal control of transient gas flow inside pipelines. Interested readers can refer to the up-to-date models [116, 117, 118], which focus on this coordination.

3. **Co-optimization models for power and gas systems**
In this coordination, both energy systems are modeled and optimized jointly in a single framework, which accounts for all costs associated with energy production and provides optimal decisions for the combined system. However, in industrial practice, there are significant institutional and administrative barriers to operating the two systems in a holistic manner [44].
Power system(s) and gas system(s) are operated by different utilities, and they are unsynchronized in most countries and regions, such as in European countries [45] and in China [46]. This lack of synchronization indicates that the total fuel cost minimization determined by the IEGS models might not be a realistic operational objective for autonomous sub-systems. Therefore, the first coordination strategy is the most suitable and practical one to be used in the current operation mechanisms of the coupled power and gas system. The existing studies that characterize this coordination suffer from three major drawbacks, namely: (1) they concentrate on the physical interactions with the gas system, neglecting the economic ones; energy contracts, including firm and reserved gas contracts, are absent, which may lead to unrealistic decisions for power system operators; (2) they focus only on the UC problem, neglecting the resilient and economic dispatches (ED), which admit computational challenges due to the non-convexity of gas dynamics before and after uncertainty realization; in fact, ED against contingencies or RPG fluctuations is used to find the locational marginal energy and reserve prices in the energy markets; (3) the recent solution methodologies cannot guarantee the solution feasibility when the dynamic-state gas flow model is adopted. It should be noted that chapters 4-6 provide operational strategies compatible with the existing industrial practices for the power system and energy markets, based on this coordination strategy while remedying its shortcomings in the recent literature. Moreover, Chapter 3, which focuses on the optimal energy flow by proposing novel solution methodologies and sets aside the decision-making challenges under uncertainty, employs the third coordination strategy through deterministic models.

### 2.4 Conclusions and Discussions

The interdependencies and interactions between the two largest energy systems, the electric power system and the natural gas system, are intensified due to the wide deployment of coupling components, namely GPUs, P2G facilities and electric-driven compressors, as a result of their operational flexibility, advanced technologies, and environmental benefits. This chapter provides the mathematical models and formulations that represent the physical structure of the two energy systems, and recent developments in the coupling and coordination strategies. The presented formulations are employed in the proposed models and solution methods of the next chapters. This chapter started with discussing the physical structure of the natural gas system and the mathematical formulations of its main components. The system components used in the thesis work are gas sources, compressors, gas nodes, pipelines, valves and storages. The dynamic-state gas flow model, which is rarely used in recent works due to its complexity, is adopted in the thesis work to provide additional operating flexibility and a practical system representation. The steady-state gas flow model is also formulated to be compared with the dynamic-state model. Then, the electric power system modeling is presented. OPF, which is the fundamental issue in power system planning and operation, is formulated, and its versions and applications are discussed. The AC-OPF and DC-OPF are mathematically modeled to be employed in the distribution- and transmission-level electric systems, respectively.
Finally, the similarities and differences between the electric power system and the gas system are discussed, indicating the physical interactions between them. Different types of coordination between the two systems are listed, and their applicability to recent industrial practice is demonstrated. Different simplifications are employed to obtain tractable formulations for the power and gas systems to be incorporated in their operational optimization problems. However, more developments are required to control and systematize the final solution accuracy and the computational burden. For example, the simplified dynamic-state gas flow model is formulated under some assumptions applied to the PDEs, and the gas flow inside compressors and their consumed energy are usually approximated by linear constraints. In the IEGS optimization problems, it is common to apply the DC-OPF, neglecting the reactive power flow, transformer models, voltage tolerances, and the exact operation of power generators. These simplifications could introduce errors in the integrated system operation. The open questions are: 1) is the accuracy of the final decisions acceptable for the interacting utilities; 2) can the solution quality be assessed under such assumptions; 3) can the gas system dynamics be better represented with a low computational burden. In fact, IEGS research is still in its first era, and it is developing very fast in both modeling and solution methodologies. Modern modeling techniques are suggested to represent the energy systems, such as [150, 151, 152], which can be employed in IEGS decision-making frameworks.

## Chapter 3 Optimal Energy Flow for Integrated Electric-Gas Systems

Optimal energy flow is the most fundamental problem in energy system operation, and the enhanced interdependencies between power and natural gas systems provide additional challenges to this problem from the transmission to the distribution level. For the gas system operational constraints, the steady-state gas flow model is extensively adopted, assuming that the inlet and outlet gas flows of a pipeline are equal and neglecting the gas line pack. The non-convex Weymouth equation imposes a major complexity on seeking the feasible and optimal gas flow (OGF). For the power system operational constraints, the DC optimal power flow (OPF) model is commonly adopted for simplicity in the transmission-level IEGS. The accurate AC-OPF must be adopted in the distribution-level IEGS; however, it is also non-convex and introduces further difficulties into the IEGS optimization problems. This chapter is about how to find the optimal power-gas flow (OPGF) in the IEGS, guaranteeing its feasibility and optimality. Two different efficient methods based on convex optimization approaches are proposed for the transmission and distribution levels, respectively. A comprehensive study of the existing methods and approaches suggested to solve the OPGF problem is presented in Section 3.1. In Section 3.2, day-ahead multi-period frameworks for economic dispatch are formulated for the transmission- and distribution-level IEGSs, respectively. In these models, the gas flow compressibility and slow traveling velocity as well as bidirectional gas flow are considered. Three different solution methodologies are provided to solve the OPGF problems in Section 3.3.
The first method is to reformulate the transmission-level IEGS model into a mixed integer linear programming (MILP) framework using a piecewise linear approximation (PLA) of the quadratic Weymouth equation; this approach is extensively adopted in the literature for IEGS optimization problems. In the second method, which is presented in Section 3.3.2, the Weymouth equations are relaxed as second-order-cone programming (SOCP) constraints. As a result of considering bidirectional gas flow inside pipelines, the proposed model is converted into a mixed integer SOCP (MISOCP) framework. A gas flow correction (GFC) method is proposed, based on the multi-slack-node method and the Levenberg-Marquardt algorithm, to provide a tight relaxation and find exact production schedules for the IEGS. This work has been published as

* Ahmed R. Sayed, Cheng Wang, Tianshu Bi, and Arsalan Masood. "A Tight MISOCP Formulation for the Integrated Electric-Gas System Scheduling Problem." In 2018 2nd IEEE Conference on Energy Internet and Energy System Integration (EI2), IEEE, 2018, pp. 1-6. DOI: [https://doi.org/10.1109/EI2.2018.8582239](https://doi.org/10.1109/EI2.2018.8582239)

A further contribution adopts the AC-OPF in the power system operational constraints. Section 3.3.3 focuses on finding the OPGF in the coupled system at the distribution level, where the AC-OPF is adopted. A sequential-MISOCP (S-MISOCP) algorithm is proposed to find the OPGF in the formulated IEGS optimization model. The non-convex power flow and gas flow equations are decomposed as difference-of-convex programming (DCP) functions, which are reformulated as MISOCP constraints. Starting with an initial point, a sequence of penalized MISOCP problems is solved to find a feasible OPGF close to, if not equal to, the optimal one. Feasibility and quick convergence are ensured by designing an adaptive penalty growth rate and suggesting a high-quality initial point, respectively. Moreover, bidirectional gas flow inside pipelines is considered to incorporate meshed gas networks. This work has been published as

* Ahmed R. Sayed, Cheng Wang, Tianshu Bi, Mohamed Abdelkarim Abdelbaky, and Arsalan Masood. "Optimal Power-Gas Flow of Integrated Electricity and Natural Gas System: A Sequential MISOCP Approach." In 2019 3rd IEEE Conference on Energy Internet and Energy System Integration (EI2), IEEE, 2019, pp. 283-288. DOI: [https://doi.org/10.1109/EI247390.2019.9062250](https://doi.org/10.1109/EI247390.2019.9062250)

Finally, in Section 3.4, the reformulated MISOCP model with the GFC method is compared with the widely adopted analogous models from the literature, namely the MILP model and the MISOCP relaxed model without the proposed GFC method. Case studies are conducted to highlight the importance of considering the line pack in the integrated system operation. Further studies have been conducted on a distribution-level IEGS test system to validate the S-MISOCP algorithm's performance and convergence with both the dynamic-state gas flow and the AC power flow models. In fact, the above solution methods are adopted in this chapter to optimize deterministic IEGS models, which disregard the system uncertainties. However, the S-MISOCP algorithm can be employed to guarantee the feasibility of decisions in each stage of non-deterministic optimization models due to its tractability. This algorithm is modified and applied to solving two-stage robust and distributionally robust optimization models against renewable generation uncertainties in chapters 5-6.
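To make the SOCP relaxation used in the second method above more concrete, the following is a minimal sketch (not the thesis implementation) of how the Weymouth equation of a single pipeline with a fixed flow direction can be relaxed into a second-order-cone constraint using CVXPY; the constant `C`, the pressure limits and the flow-maximizing objective are illustrative assumptions only.

```python
# Minimal sketch (not the thesis implementation): SOC relaxation of the
# Weymouth equation for one pipeline with a fixed flow direction m -> n,
# written with CVXPY.  The constant C and all bounds are illustrative.
import cvxpy as cp

C = 0.5                            # illustrative Weymouth constant
pi_m = cp.Variable(nonneg=True)    # squared pressure at the sending node m
pi_n = cp.Variable(nonneg=True)    # squared pressure at the receiving node n
f    = cp.Variable(nonneg=True)    # gas flow from m to n (direction fixed)

constraints = [
    # exact (non-convex) Weymouth equation:   f**2 == C * (pi_m - pi_n)
    # SOC relaxation: keep only the convex "<=" side
    cp.square(f) <= C * (pi_m - pi_n),
    pi_m <= 70.0**2,               # illustrative upper pressure limit (bar^2)
    pi_n >= 30.0**2,               # illustrative lower pressure limit (bar^2)
]

# Maximizing the delivered flow tends to make the relaxation tight at the
# optimum; when it is not tight, a correction step such as the GFC method
# or a sequential convex step is needed to recover feasibility.
prob = cp.Problem(cp.Maximize(f), constraints)
prob.solve()
print(f"flow = {f.value:.2f}, squared pressures = ({pi_m.value:.1f}, {pi_n.value:.1f})")
```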
### 3.1 The State-of-the-Art Methods

The objective of system operators is to minimize the operating costs of energy transportation in their systems while fulfilling all physical, economic, technical, contractual, and legal constraints as well as any type of interaction with other systems. The optimization problems of both power and gas networks in planning and operation are challenging. Nowadays, planners and operators manage larger and more complex transport grids with significant growth in production and consumption, with bilateral energy transactions, and with higher levels of interconnection between energy networks. Identifying an accurate, feasible and optimal solution of the mathematical formulations for the interconnected systems, whether transmission or distribution for power systems or dynamic- or steady-state for gas systems, represents a major challenge. In this section, the existing solution methods adopted for the OGF and the OPF are presented separately.

For power system modeling, due to the simplicity and linearity of the DC-OPF model, which can be efficiently solved by most commercial solvers, it is usually used to formulate the electric power system in the IEGS literature (for example, but not limited to, [44], [52], [56], [66], [127], [128], [132], [153], [154]). These studies mainly concentrate on the transmission level, where the difference between the voltage angles of two connected buses is very small and the power lines are assumed lossless. However, stronger interactions in the IEGS are observed at the distribution level [50], [155], [156]. In the power distribution network (PDN), the AC-OPF is a fundamental issue of PDN operation. The AC-OPF problem, which can be formulated as a branch flow model [157] or a bus injection model [158], is a non-convex framework with quadratic constraints; please refer to Section 2.2 for more details. Various studies have been conducted to find the OPF in power system operation. Because different review articles and surveys have addressed the applicable and efficient solution methods and approaches in the pertinent literature, there is no need to comprehensively discuss the recent methodologies here. Up-to-date comprehensive surveys of AC-OPF solution methodologies are presented in [79] and [80] for deterministic and non-deterministic models with AC-OPF formulations, respectively. Convex relaxation methods, such as SOC relaxation [159], convex quadratic relaxation [160], and semi-definite relaxation [161], can provide the optimal objective value if the relaxation is exact. In [159], it is proven that the SOC relaxation is exact for a PDN with a radial network under mild conditions. These conditions require the objective function to be convex, increasing with all power injecting sources, non-decreasing with power loads, and increasing with line losses. These conditions are sensitive to the objective function and system data; therefore, the SOC relaxation method may fail to find a feasible AC-OPF solution. In such circumstances, valid inequalities are suggested to enhance the relaxation tightness [162]; however, they cannot guarantee a zero optimality gap. Therefore, the local heuristic penalty convex-concave procedure (P-CCP) based on difference-of-convex programming (DCP), introduced in [163], is utilized to locate a feasible and (locally) optimal solution of AC-OPF problems [164].

For natural gas system modeling, there are two major approaches for gas networks, namely numerical simulation and optimization frameworks.
The simulation approaches are implemented to identify the actual response of the gas network through a certain number of runs under different control variables [165]. The simulation results can describe the original nonlinear gas flow equations with a level of accuracy that depends on how finely the PDEs are discretized. However, the simulation approaches cannot guarantee solution optimality, and they rely on the operator's knowledge and experience to improve their performance [67]. To simulate the gas network, some numerical methods can be found in [166], [167]. On the other hand, the optimization approaches embed the gas flow equations directly as constraints in mathematical programming models; the main families of such methods are reviewed below.
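Before turning to the optimization-based methods, the following minimal sketch illustrates the simulation viewpoint: an explicit Lax-Friedrichs integration of the commonly used simplified isothermal pipeline equations relating pressure `p` and mass flow `M` (the exact PDEs adopted in this thesis are those of Section 2.1.2). All parameters, grid sizes, boundary values and the friction treatment below are illustrative assumptions, not the thesis's data.

```python
# Minimal simulation sketch (illustrative, not the thesis code): Lax-Friedrichs
# integration of simplified isothermal pipeline dynamics
#   dp/dt = -(c^2/A) dM/dx,
#   dM/dt = -A dp/dx - lam * c^2 * M|M| / (2*D*A*p),
# with a constant inlet pressure and a stepped-up outlet mass-flow demand.
import numpy as np

L_pipe, D = 50e3, 0.6                  # pipeline length [m], diameter [m]
A         = np.pi * D**2 / 4           # cross-sectional area [m^2]
c, lam    = 350.0, 0.01                # speed of sound [m/s], friction factor
nx        = 101
dx        = L_pipe / (nx - 1)
dt, T     = 1.0, 3600.0                # time step [s] (c*dt/dx < 1), horizon [s]

p = np.full(nx, 50e5)                  # initial pressure profile [Pa]
M = np.full(nx, 100.0)                 # initial mass flow profile [kg/s]

for _ in range(int(T / dt)):
    fric = lam * c**2 * M * np.abs(M) / (2 * D * A * p)
    p_new, M_new = p.copy(), M.copy()
    # Lax-Friedrichs update of the interior grid points
    p_new[1:-1] = 0.5 * (p[2:] + p[:-2]) - dt / (2 * dx) * (c**2 / A) * (M[2:] - M[:-2])
    M_new[1:-1] = (0.5 * (M[2:] + M[:-2]) - dt / (2 * dx) * A * (p[2:] - p[:-2])
                   - dt * fric[1:-1])
    # boundary conditions: fixed inlet pressure, fixed (increased) outlet demand
    p_new[0], M_new[0] = 50e5, M_new[1]
    p_new[-1], M_new[-1] = p_new[-2], 120.0
    p, M = p_new, M_new

print(f"outlet pressure after one hour: {p[-1] / 1e5:.2f} bar")
```

Such a run only evaluates one control trajectory; finding the best one requires repeated runs, which is exactly the drawback the optimization approaches address.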
#### Dynamic Programming Techniques

Dynamic programming (DP) algorithms have proved to efficiently solve the gas problem with its nonlinearity and discontinuity constraints [180]. A multi-time-period optimization model is solved by a hybrid approximate DP algorithm in [181], taking advantage of model predictive control (MPC). A two-level model is suggested for the transient gas flow model in [182], where the levels are the pipeline and compressor levels, respectively, and the subproblems are handled by a DP algorithm. In [183], a DP-based decentralized algorithm is proposed to find the optimal energy flow of the IEGS. In [184], a multi-objective optimization model is suggested to combine the power consumption minimization and the gas delivery maximization in one optimization problem for the IEGS. Based on a tree decomposition algorithm, the DP algorithm is applied for optimal gas flows considering compressor stations and fuel cost minimization with a reduction technique [185]. An integrated gas and hydrothermal system is optimally operated by a DP algorithm based on dual decomposition and Lagrangian relaxation [186]. However, the DP algorithms might have some limitations. Kelling et al.
[187] have raised some concerns about the several partial solutions, which require monitoring of the variables and standard solution ranges. Furthermore, it is argued that DP algorithms are limited to radial gas networks and that their execution time increases exponentially with the network size [67].

#### Linear Programming Techniques

One way to overcome many of the aforementioned disadvantages of NLP techniques is to replace the nonlinear equations by approximated linear functions. The optimal solution then depends on the linearization accuracy, which can be measured and controlled. Therefore, with suitable and predefined tolerances, linear techniques guarantee that the global optimal solution can be found, which is not achieved by NLP or heuristic methods due to the presence of non-convexities. Owing to the trustworthiness and straightforwardness of linear programming (LP) algorithms, linearization methods, including LP and mixed-integer LP (MILP), have been extensively applied in the literature on gas flow optimization [125], [188], [189]. The gas flow equations are approximated by a set of upper planes obtained from a first-order Taylor expansion with fixed gas flow directions, and they are replaced by several linear inequality constraints in [190]. However, the resulting convex feasible set may not provide the optimal objective in some cases, such as when maximizing the line pack inside pipelines, which is needed to mitigate gas load variations [191]. Consequently, this approximation is not able to consider bidirectional gas flow, and it forces the gas flow to lie in a certain range [127]. In [191], a successive-LP (SLP) algorithm [192] is applied to better represent the flow equations, where the lower bound of the convex set obtained from [190] is updated by penalizing some deviation variables in the objective function, so that the feasible region is gradually narrowed. This approach is employed to solve a multi-period optimization model for the IEGS [193], and to check the subproblem gas feasibility [41] for a coupled power and gas system. In [62], the steady-state gas model is iteratively solved by the simplex algorithm, where two linear problems are formulated: the first one finds an initial point while neglecting the compressor model; the second one considers all constraints. Another iterative algorithm is proposed in [194], where the nonlinear function is approximated by a dynamically adjusted linear plane to mitigate the solution error. The Newton-Raphson method is adopted in [47] with a starting point obtained from the projection method, which is well addressed in [127]. To avoid the iterative algorithms, which could introduce convergence difficulties, especially in the integrated energy system, the gas flow equation can be expressed in a MILP formulation. In [195], the sign function of the Weymouth equations is relaxed by introducing binary directional indicator variables, and the resulting equalities are linearized by a Taylor series expansion method. Due to their speed, robustness and applicability, piecewise linear approximation (PLA) methods are widely adopted and solved by MILP techniques. A review of PLA methods is provided in [196] to analyze their computational efficiency. Martin et al. [63] introduce MILP formulations for the steady-state gas model. A multi-choice method is proposed in [197] to approximate the squared variables of gas flow for the steady-state case in multi-energy systems.
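As a small numerical illustration of the PLA idea, the sketch below measures how the worst-case error of a piecewise linear interpolation of the non-convex Weymouth term φ(f) = f·|f| shrinks as breakpoints are added. The flow range is an arbitrary assumption, and the binary/SOS machinery that selects the active segment in an actual MILP model (e.g. the incremental model of Appendix A.6) is deliberately omitted.

```python
# Minimal sketch (illustrative): accuracy of a piecewise linear approximation
# of the non-convex Weymouth term phi(f) = f*|f| versus the number of
# breakpoints over an assumed symmetric flow range.
import numpy as np

f_max   = 10.0                                    # assumed flow range [-f_max, f_max]
f_dense = np.linspace(-f_max, f_max, 2001)        # dense evaluation grid
phi     = f_dense * np.abs(f_dense)               # exact non-convex term

for n_break in (3, 5, 9, 17, 33):
    bp  = np.linspace(-f_max, f_max, n_break)     # equally spaced breakpoints
    pla = np.interp(f_dense, bp, bp * np.abs(bp)) # piecewise linear interpolant
    err = np.max(np.abs(pla - phi))
    print(f"{n_break:3d} breakpoints -> max |error| = {err:7.4f}")
```

The error falls roughly quadratically with the segment width, which is the usual trade-off between approximation accuracy and the number of binary variables in the MILP model.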
To consider gas dynamics, novel special ordered set (SOS) constraints solved by a branch-and-cut algorithm [198], a two-dimensional PLA [199], a one-dimensional PLA [125], and a generalized incremental method [200] have been developed to formulate the gas flow nonlinearities in MILP form. Valuable studies employ these formulations in coupled power and gas optimization problems, e.g., [66], [125], [132], [189]. Motivated by the above, Correa and Sanchez [188] provide theoretical and computational comparisons of MILP formulations for the gas network in both steady- and dynamic-state conditions, indicating the benefits and drawbacks of each PLA model. This study includes seven different PLA models, and it was shown that the incremental model outperformed the others in terms of computational time and accuracy. The incremental PLA model, which is the most widely adopted method to reformulate the Weymouth equations into a tractable form for non-deterministic optimization models, is utilized in chapter 4 to identify the resilient operational strategies of the power system with gas system interactions. This model is theoretically and computationally compared with the proposed methods developed in this chapter. A detailed formulation of the incremental PLA model is presented in Appendix A.6. #### Convex Relaxation Techniques Besides the LP techniques, other convex relaxations have drawn much attention in the pertinent literature to reformulate the non-convex general flow equations. A geometric programming approach is proposed for the fuel cost minimization for gas networks in [201], [202]. Different convex reformulations are introduced in [203] for general nonlinear problems, where they are computationally analyzed. As an effective and efficient convexification method, second-order cone (SOC) relaxations have been rapidly developed. In [71], the quadratic equality gas flow equations are relaxed into SOC programming (SOCP) for the distribution-level IEGS under the assumption of fixed gas flow directions. However, assuming unidirectional gas flow is improper for the IEGS. Consequently, an SOC relaxation is proposed in [204] to derive high-quality solutions considering flow directions and on/off constraints. The resulting model is formulated as a mixed-integer SOCP (MISOCP) framework, which is applied to different practical large-scale gas systems. Further, this model is applied in IEGS, e.g., [58], [133], [205]. Nevertheless, when the convex relaxations are not tight enough, the solution exactness cannot be guaranteed, yielding infeasible or suboptimal operating decisions. Therefore, a provable feasibility guarantee is non-trivial. To find a more accurate solution, two methods are suggested: 1. Gas flow correction (GFC) method. In [128], a novel SOC relaxation is proposed to mathematically transform the non-convex programming problem into an MISOCP model. Then, the Newton-Raphson algorithm is employed to correct the gas flow feasibility based on a multi-slack-node gas flow calculation method. Although this study provides high-quality decisions within a suitable solution time, the line pack inside pipelines is neglected, and no attempt has been made to employ the GFC method for solving IEGS optimization models under dynamic-state conditions. 2. Sequential convex programming (SCP) approach. 
In [51], the Weymouth equations are reformulated as difference-of-convex programming (DCP) functions, and the SCP algorithm proposed in [163] is designed for the steady-state gas flow model in the IEGS, considering fixed gas flow directions. A decentralized operation model for IEGS is solved by an alternating direction method of multipliers (ADMM) in [206], where the gas flow feasibility is guaranteed by the SCP algorithm and the DC-OPF model is adopted. Based on the above discussion, this chapter provides two novel methodologies to solve the OPGF problem considering both gas dynamics and bidirectional gas flow. The methodologies are based on the GFC and SCP methods to solve IEGS at transmission and distribution levels, respectively. ### 3.2 Mathematical Formulations In this section, day-ahead multi-period models for economic dispatch are formulated for transmission- and distribution-level IEGSs, respectively. In these models, the gas flow compressibility and slow traveling velocity, as well as bidirectional gas flow, are considered. Before presenting the mathematical formulation of these IEGS models, some commonly adopted simplifications and assumptions from the literature are stated as follows: 1. In general, (i) this study focuses on solving the OPGF with a deterministic model, and the uncertainties of power and gas systems are not considered [51][164]; (ii) the generated power prices and produced gas prices are known before optimization; (iii) a single operator has full authority to manage and control both the power and gas systems. 2. In power system modeling: * In the transmission-level IEGS model, (i) the power system operates in a steady state, and the DC power flow model is adopted; (ii) all power units are fast-response; (iii) unit commitment (UC) is known. However, the proposed model can easily be extended to include traditional coal-fired units with minimum on/off times and to include UC variables; please refer to Appendix A.3 for the UC problem. * In the distribution-level IEGS model, (i) the PDN is radial with a balanced three-phase system; (ii) the branch flow model is employed, prohibiting bidirectional power flow [164]; (iii) the gas required by GPUs is determined by the generated active power only. 3. In gas system modeling, (i) the approximated Weymouth equation and dynamic-state gas flow model presented in Section 2.1.2 are used; (ii) the linear models of P2G facilities and compressors [61], [127], [153] are adopted. #### Transmission-level IEGS Model Starting with the IEGS objective function definition, the power and gas operational constraints are listed, respectively. Finally, the holistic model is presented. ##### Objective Function The objective function is to minimize the total operating costs associated with all energy suppliers by optimizing the power system production costs along with the gas well production costs. In (3.1), the production costs of both systems are defined in the first two terms, and the non-served energy demands are penalized in the last two terms. \[\min_{\Omega}\sum_{\forall t}\big{[}\sum_{\forall u\in\mathcal{U}_{n}}C_{u}(p_ {u,t})+\sum_{\forall w}C_{w}f_{w,t}+\sum_{\forall d\in\mathcal{D}_{p}}C_{d} \triangle p_{d,t}+\sum_{\forall d\in\mathcal{D}_{g}}C_{f}\triangle f_{d,t} \big{]} \tag{3.1}\] where \(\Omega\) is the set of all decision variables; \(C_{u}(.)\) is the cost function of non-GPUs; \(C_{w}\) is the cost of gas production at gas wells; and \(C_{d}/C_{f}\) is the penalty of power/gas load shedding. 
##### Power System Operational Constraints The power system operational constraints are derived from Section 2.2.3, considering the power generation capacities. They comprise: Power flow equation: (2.36), (3.2) Bus angle limits: (2.38), (3.3) Power flow limits: (2.39), (3.4) Nodal balancing equation: (2.37), (3.5) Generation capacities: \[c_{u,t}\underline{P}_{u,t}\leq p_{u,t}\leq c_{u,t}\overline{P}_{u,t},\;\forall u,t,\] (3.6) Maximum ramping up and down limits: \[-\overline{R}_{u}^{-}\leq p_{u,t}-p_{u,t-1}\leq\overline{R}_{u}^{+},\;\forall u,t\] (3.7) where \(c_{u,t}\) is a predetermined UC decision; \(\underline{P}_{u,t}/\overline{P}_{u,t}\) is the minimum/maximum limit of power generation; and \(\overline{R}_{u}^{-}/\overline{R}_{u}^{+}\) is the maximum ramping down/up capacity. ##### Natural Gas Operational Constraints Gas system operational constraints are derived from Section 2.1.2 to consider the gas flow dynamics. They comprise: Gas production capacities: (2.1), (3.8) Gas compressors constraints: (2.5)-(2.6), (3.9) Nodal pressure bounds: (2.8), (3.10) Weymouth equation: (2.10), (3.11) Average flow rate equation: (2.11), (3.12) Mass flow equation: (2.13), (3.13) Continuity equation: (2.14), (3.14) GFUs gas consumption: (2.40) can be simplified as \[\rho_{u,t}=\frac{\Phi}{\eta_{u}}p_{u,t},\ \forall u\in\mathcal{U}_{g},t,\] (3.15) Nodal balancing equation: \[\sum_{w\in\mathcal{W}(i)}f_{w,t}+\sum_{p\in\mathcal{P}_{1}(i)}f_{ p,t}^{out}-\sum_{p\in\mathcal{P}_{2}(i)}f_{p,t}^{in}+\sum_{c\in\mathcal{C}_{1}(i) }f_{c,t}^{out}\] \[-\sum_{c\in\mathcal{C}_{2}(i)}f_{c,t}^{in}=\sum_{u\in\mathcal{U} _{g}(i)}\rho_{u,t}+\sum_{d\in\mathcal{D}_{g}(i)}(F_{d,t}-\triangle f_{d,t}), \ \forall i,t.\] (3.16) ##### The Holistic IEGS The holistic non-convex IEGS model, at transmission level, can be cast as 
\[\min_{\Omega}\ \text{(3.1)} \tag{3.17a}\] \[s.t:\ \text{Power system constraints: (3.2)-(3.7)}\] (3.17b) \[\text{Gas system constraints: (3.8)-(3.16)}\] (3.17c) \[\Omega=\{\triangle p_{d,t},\triangle f_{d,t},p_{u,t},\rho_{u,t},p_{l,t},\theta_{i,t},f_{w,t},f_{p,t},\pi_{i,t},f_{p,t}^{in},f_{p,t}^{out},f_{c,t}^{in},f_{c,t}^{out},m_{p,t}\} \tag{3.17d}\] 
#### Distribution-level IEGS Model Similar to the transmission-level model, the distribution-level IEGS model is composed of the power distribution network (PDN) operational constraints, the gas distribution network (GDN) operational constraints, and the coupling constraints. ##### PDN Operational Constraints The PDN operational constraints, based on the branch flow model, are given by \[\underline{P}_{u}\leq p_{u,t}\leq\overline{P}_{u};\ \ \underline{Q}_{u}\leq q_{u,t} \leq\overline{Q}_{u},\forall u,t \tag{3.18}\] \[-\overline{R}_{u}^{-}\leq p_{u,t}-p_{u,t-1}\leq\overline{R}_{u}^{+},\ \forall u,t\] (3.19) \[0\leq p_{z,t}\leq\overline{P}_{z},\forall z,t\] (3.20) \[0\leq p_{l,t};\ \ 0\leq q_{l,t},\ \forall l,t\] (3.21) \[0\leq i_{l,t}\leq I_{l}^{2},\ \forall l,t\] (3.22) \[\underline{V}_{i}^{2}\leq v_{i,t}\leq\overline{V}_{i}^{2},\ \forall i>1,t;\ \ v_{1,t}=1,\ \forall t\] (3.23) \[v_{m,t}=v_{n,t}-2(r_{l}p_{l,t}+x_{l}q_{l,t})+(r_{l}^{2}+x_{l}^{2})i_{l,t},\ \forall l,t\] (3.24) \[p_{l,t}^{2}+q_{l,t}^{2}=v_{i,t}i_{l,t},\ \forall l,t \tag{3.25}\] In (3.18), the generated active power and reactive power from all power units are restricted by their capacities. Ramping up and ramping down capacities of all power units are defined in (3.19). The active power consumption by P2G facilities is limited in (3.20). To fix the power flow direction, (3.21) defines the lower boundary of active and reactive power. Squared line current and squared nodal voltage are restricted in (3.22) and (3.23) for power lines and power nodes, respectively. The line voltage drop is defined in (3.24), where \(r_{l},x_{l},p_{l,t},\) and \(q_{l,t}\) are the series resistance, series reactance, active and reactive power flows of line \(l\), respectively. Finally, the non-convex power flow equation is expressed in (3.25). ##### GDN Operational Constraints The GDN operational constraints are derived from Section 2.1.2 to consider the gas flow dynamics. They comprise: Gas production capacities: (2.1), (3.26) Gas compressors constraints: (2.5)-(2.6), (3.27) Nodal pressure bounds: (2.8), (3.28) Average flow rate equation: (2.11), (3.29) Mass flow equation: (2.13), (3.30) Continuity equation: (2.14), (3.31) GFUs gas consumption: (2.40) can be simplified as \[\rho_{u,t}=\frac{\Phi}{\eta_{u}}p_{u,t},\ \forall u\in\mathcal{U}_{g},t,\] (3.32) P2G gas production: (2.41), (3.33) Weymouth equation: (2.10), (3.34) ##### Coupling Constraints All types of coupling components, namely GPUs, P2G facilities and electric-driven compressors, are considered. These components mainly interact in the nodal power and gas balancing equations, which are relaxed by adding the active and reactive power and gas load shedding, as follows. 
\[\sum_{u\in\mathcal{U}(n)}p_{u,t}+\sum_{e\in\mathcal{E}(n)}\hat{P}_{e,t}+\sum_{l\in\mathcal{L}_{1}(n)}(p_{l,t}-r_{l}i_{l,t})-\sum_{l\in\mathcal{L}_{ 2}(n)}p_{l,t}\] \[=G_{n}v_{n,t}+\sum_{z\in\mathcal{Z}(n)}p_{z,t}+\sum_{c\in \mathcal{C}^{e}(n)}\alpha_{c}f_{c,t}^{in}/\Phi+\sum_{d\in\mathcal{D}_{p}(n)}P_ {d,t}(1-\delta_{d,t}),\;\forall n,t \tag{3.35}\] \[\sum_{u\in\mathcal{U}(n)}q_{u,t}+\sum_{l\in\mathcal{L}_{1}(n)}(q_ {l,t}-x_{l}i_{l,t})-\sum_{l\in\mathcal{L}_{2}(n)}q_{l,t}=B_{n}v_{n,t}+\sum_{d \in\mathcal{D}_{p}(n)}Q_{d,t}(1-\delta_{d,t}),\;\forall n,t\] (3.36) \[\sum_{p\in\mathcal{P}_{1}(i)}f_{p,t}^{out}-\sum_{p\in\mathcal{P} _{2}(i)}f_{p,t}^{in}+\sum_{c\in\mathcal{C}_{1}(i)}f_{c,t}^{out}-\sum_{c\in \mathcal{C}_{2}(i)}f_{c,t}^{in}+\sum_{z\in\mathcal{Z}(i)}\varrho_{z,t}+\sum_{w \in\mathcal{W}(i)}f_{w,t}\] \[=\sum_{u\in\mathcal{U}_{g}(i)}\rho_{u,t}+\sum_{d\in\mathcal{D}_{g }(i)}(F_{d,t}-\triangle f_{d,t}),\;\forall i,t\] (3.37) \[0\leq\delta_{d,t}\leq 1,\forall t,d\in\mathcal{D}_{p};\;\; \triangle f_{d,t}\leq F_{d,t},\forall t,d\in\mathcal{D}_{g}. \tag{3.38}\] In the above expressions, the nodal active and reactive power balancing equations are defined in (3.35) and (3.36), respectively. \(\mathcal{U}(n),\mathcal{E}(n),\mathcal{Z}(n),\mathcal{C}^{e}(n)\), and \(\mathcal{D}_{p}(n)\) are subsets of power generators, wind farms, P2G facilities, electric-driven compressors, and power loads connected to node \(n\), respectively. Subsets \(\mathcal{L}_{1}(n)/\mathcal{L}_{2}(n)\) are the feeders whose initial/final node is \(n\). The balance equation for gas nodes is (3.37), where \(\mathcal{W}(i),\mathcal{Z}(i),\mathcal{U}_{g}(i)\), and \(\mathcal{D}_{g}(i)\) are subsets of gas sources, P2Gs, GPUs, and gas loads connected with node \(i\). \(\mathcal{C}_{1}(i)/\mathcal{C}_{2}(i)\) are subsets of compressors whose final/initial node is \(i\). \(\mathcal{P}_{1}(i)/\mathcal{P}_{2}(i)\) are subsets of pipelines whose terminal/starting node is \(i\). The upper boundaries of active and reactive power load shedding, as well as gas load shedding, are defined in (3.38), where \(\delta_{d,t}\) is the proportion of electric power load shedding. #### The Holistic IEGS Model The objective of the IEGS model is to minimize the total operational costs for the integrated system, as defined in (3.39a). Similar to (3.1), the objective of the proposed model includes the costs of power generated from non-GPUs, costs of the gas consumed from all gas sources, and costs of power and gas load shedding. Note that the model objective is a convex quadratic function due to the presence of fuel consumption cost functions in the first term. The holistic model can be cast as \[\min_{\Omega}\ \sum_{\forall t}\big{[}\sum_{\forall u\in\mathcal{U}_{n}}C _{u}(p_{u,t})+\sum_{\forall w}C_{w}f_{w,t}+\sum_{\forall d\in\mathcal{D}_{p}}C_{d }^{p}P_{d,t}\delta_{d,t}+\sum_{\forall d\in\mathcal{D}_{g}}C_{d}^{f}\triangle f_ {d,t}\big{]} \tag{3.39a}\] \[s.t:\ \text{Power system constraints: (3.18)-(3.25)}\] (3.39b) \[\text{Gas system constraints: (3.26)-(3.34)}\] (3.39c) \[\text{Coupling constraints: (3.35)-(3.38)}\] (3.39d) \[\Omega=\{\delta_{d,t},\triangle f_{d,t},p_{u,t},q_{u,t},\rho_{u,t},p_{l,t},i_{l,t},v_{n,t},f_{w,t},f_{p,t},\pi_{i,t},f_{p,t}^{in},f_{p,t}^{out}, f_{c,t}^{in},m_{p,t}\} \tag{3.39e}\] In fact, because of the non-convexity of the gas flow equations (3.11) or (3.34) and the power flow equation (3.25), the above two models are not ready to be solved by commercial solvers. 
In what follows, the given problems are reformulated and solution methods are proposed to find feasible decisions. ### 3.3 Optimal Power-Gas Flow Calculation for IEGSs In this section, three solution methodologies are provided to solve the above IEGS models. #### Piecewise Linear Approximation Method This method can only be employed in the transmission-level IEGS model (3.17) because it can only handle the non-convexity of the Weymouth equations, and a PLA of the non-convex power flow equations produces unacceptable errors. The idea is to reformulate the IEGS model into an MILP framework using a PLA of the quadratic Weymouth equation (3.11). Various models based on PLA are presented in [188], and the incremental model outperforms the others in terms of computational time and accuracy. In Appendix A.6, the incremental PLA model for a general nonlinear function \(\Im(x)\), such as the squared nodal pressure, i.e., \(\pi_{i,t}^{2},\pi_{o,t}^{2}\), and the squared pipeline flow, i.e., \(f_{p,t}|f_{p,t}|\), is introduced. Decreasing the linearization error can be accomplished by: (i) increasing the number of segments \(S\); (ii) selecting the breakpoint values \((x_{i},\Im(x_{i}))\); the optimal breakpoints are derived in [127] to reduce the maximum approximation tolerances; (iii) using practical conditions to reduce the operating intervals of nodal pressures [66], [128]. Therefore, the MILP model for the IEGS dispatch problem is \[\min_{\Omega}\ \text{(3.1)} \tag{3.40a}\] \[s.t:\ \text{Power system constraints: (3.2)-(3.7)}\] (3.40b) \[\text{Gas system constraints: (3.8)-(3.10), (3.12)-(3.16)}\] (3.40c) \[\text{PLA models for }\pi_{i,t}^{2},\ \pi_{o,t}^{2}\text{ and }f_{p,t}|f_{p,t}|\] (3.40d) \[\Omega=\{\triangle p_{d,t},\triangle f_{d,t},p_{u,t},\rho_{u,t},p_{ l,t},\theta_{i,t},f_{w,t},f_{p,t},\pi_{i,t},f_{p,t}^{in},f_{p,t}^{out},f_{c,t}^{in},f_{c,t}^{ out},m_{p,t},Var_{PLA}\} \tag{3.40e}\] where \(Var_{PLA}\) denotes the linearization variables, which include all continuous and binary variables for the squared nodal pressures and the squared pipeline average flows. #### Gas Flow Correction Method This method is developed for the transmission-level IEGS model (3.17). The novel SOC relaxation method presented in [128] is adopted to convexify the quadratic Weymouth equation (3.11) considering both the gas flow dynamics and bidirectional gas flow, resulting in an MISOCP framework. A GFC method, which is based on the multi-slack-node method and the Levenberg-Marquardt algorithm, is designed to calculate the optimal energy flow solution for the IEGS. ##### Formulating the MISOCP Model Before convexifying the Weymouth equation, directional binary variables \(z_{p,t}\) are introduced with the Big-M method to reformulate the equation as in (3.43) without the sign function. If \(\pi_{i,t}\) is greater/lower than \(\pi_{o,t}\), then the direction variable \(z_{p,t}\) equals \(1/0\) by (3.41), and \(f_{p,t}\) is forced to be a positive/negative value by (3.42). The gas flow limits, i.e., \(\underline{F}_{p}\) and \(\overline{F}_{p}\), can be obtained from (2.10) by substituting the nodal pressure ranges. 
\[(1-z_{p,t})(\underline{\Pi}_{i}-\overline{\Pi}_{o})\leq\pi_{i,t} -\pi_{o,t}\leq z_{p,t}(\overline{\Pi}_{i}-\underline{\Pi}_{o}), \tag{3.41}\] \[(1-z_{p,t})\underline{F}_{p}\leq f_{p,t}\leq z_{p,t}\overline{F} _{p}\] (3.42) \[f_{p,t}^{2}=\begin{cases}\chi_{p}^{f}(\pi_{i,t}^{2}-\pi_{o,t}^{2 }),z_{p,t}=1,\\ \chi_{p}^{f}(\pi_{o,t}^{2}-\pi_{i,t}^{2}),z_{p,t}=0\end{cases},\;\;\forall p,t,(i,o)\in p. \tag{3.43}\] The inlet and outlet pressures of a pipeline \(p\), i.e., \(\pi_{p,t}^{+}\) and \(\pi_{p,t}^{-}\), are assigned in (3.44)-(3.48), where (3.44)-(3.45) and (3.46)-(3.47) represent the pressures in case of positive and negative flow directions, respectively. In order to decrease the number of binary variables, (3.44)-(3.47) are adopted only for the bidirectional pipelines \(p\in\mathcal{P}^{\pm}\). The inlet and outlet pressures of unidirectional pipelines, which are connected with gas sources or gas loads at far terminals, are assigned directly by (3.48). Therefore, the Weymouth equations (3.43) are replaced with (3.44)-(3.49), without the absolute value function. \[(1-z_{p,t})(\overline{\Pi}_{o}-\underline{\Pi}_{i})\geq\pi_{p,t} ^{+}-\pi_{i,t}\geq(1-z_{p,t})(\underline{\Pi}_{o}-\overline{\Pi}_{i}),\;\; \forall p\in\mathcal{P}^{\pm},t,(i,o)\in p \tag{3.44}\] \[(1-z_{p,t})(\overline{\Pi}_{i}-\underline{\Pi}_{o})\geq\pi_{p,t} ^{-}-\pi_{o,t}\geq(1-z_{p,t})(\underline{\Pi}_{i}-\overline{\Pi}_{o}),\;\; \forall p\in\mathcal{P}^{\pm},t,(i,o)\in p\] (3.45) \[z_{p,t}(\overline{\Pi}_{i}-\underline{\Pi}_{o})\geq\pi_{p,t}^{+ }-\pi_{o,t}\geq z_{p,t}(\underline{\Pi}_{i}-\overline{\Pi}_{o}),\;\;\forall p \in\mathcal{P}^{\pm},t,(i,o)\in p\] (3.46) \[z_{p,t}(\overline{\Pi}_{o}-\underline{\Pi}_{i})\geq\pi_{p,t}^{- }-\pi_{i,t}\geq z_{p,t}(\underline{\Pi}_{o}-\overline{\Pi}_{i}),\;\;\forall p \in\mathcal{P}^{\pm},t,(i,o)\in p\] (3.47) \[\pi_{p,t}^{+}=\pi_{i,t},\;\;\pi_{p,t}^{-}=\pi_{o,t},\;\;\forall p \in\mathcal{P}/\mathcal{P}^{\pm},t,(i,o)\in p\] (3.48) \[f_{p,t}^{2}=\chi_{p}^{f}(\pi_{p,t}^{+2}-\pi_{p,t}^{-2}),\;\; \forall p,t. \tag{3.49}\] The quadratic equation (3.49) can be relaxed as an inequality constraint as shown in (3.50). Note that (3.50) is a proper cone and its canonical form is presented in (3.51). \[f_{p,t}^{2}+\chi_{p}^{f}\pi_{p,t}^{-2}\leq\chi_{p}^{f}\pi_{p,t}^{+2} \tag{3.50}\] \[\left\|\begin{array}{c}f_{p,t}\\ \sqrt{\chi_{p}^{f}}\pi_{p,t}^{-}\end{array}\right\|_{2}\leq\sqrt{\chi_{p}^{f}}\pi_{p,t}^{+} \tag{3.51}\] Therefore, the MISOCP model for the IEGS dispatch problem is \[\min_{\Omega}\ \text{(3.1)} \tag{3.52a}\] \[s.t:\ \text{Power system constraints: (3.2)-(3.7)}\] (3.52b) \[\text{Gas system constraints: (3.8)-(3.10), (3.12)-(3.16)}\] (3.52c) \[\text{Gas flow direction: (3.41)-(3.42), (3.44)-(3.48)}\] (3.52d) \[\text{SOC relaxation: (3.51)}\] (3.52e) \[\Omega= \{\triangle p_{d,t},\triangle f_{d,t},p_{u,t},\rho_{u,t},p_{l,t}, \theta_{i,t},f_{w,t},f_{p,t},\pi_{i,t},z_{p,t},\pi_{p,t}^{+},\pi_{p,t}^{-},f_ {p,t}^{in},f_{p,t}^{out},f_{c,t}^{in},f_{c,t}^{out},m_{p,t}\} \tag{3.52f}\] ##### Multi-Slack-Node Based Newton-Raphson Algorithm The MISOCP model (3.52) is provided to optimize the day-ahead dispatch for IEGS. It fails to find the exact gas flow as it is merely an approximation of the original model, especially for gas transmission networks. Accordingly, it is necessary to provide a correction method for the gas flow after obtaining the solution of the MISOCP model. 
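To make the source of this inexactness concrete, the short sketch below (an illustrative check with made-up numbers, not part of the proposed model) evaluates, for a fixed flow direction, the residual of the exact Weymouth relation (3.49) together with the slack of the relaxed cone (3.50); a positive cone slack combined with a nonzero residual is precisely the gap that the correction step is designed to remove.

```python
import numpy as np

def weymouth_gap(f, pi_plus, pi_minus, chi):
    """Residual of the exact Weymouth relation (3.49) and slack of the SOC relaxation (3.50).

    f        : pipeline flow (direction already fixed, so f >= 0)
    pi_plus  : inlet (higher) pressure pi^+
    pi_minus : outlet (lower) pressure pi^-
    chi      : Weymouth constant chi_p^f
    """
    exact_residual = f**2 - chi * (pi_plus**2 - pi_minus**2)   # = 0 iff (3.49) holds
    cone_slack = chi * (pi_plus**2 - pi_minus**2) - f**2       # >= 0 iff (3.50) holds
    return exact_residual, cone_slack

# Toy numbers (illustrative only): the relaxed solution under-uses the pressure drop,
# so the cone is satisfied with slack while the exact equation is violated.
res, slack = weymouth_gap(f=30.0, pi_plus=60.0, pi_minus=55.0, chi=2.5)
print(f"exact residual = {res:.1f}, cone slack = {slack:.1f}")
```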
In [207], a multi-slack-node method with the Newton-Raphson algorithm is suggested to calculate the steady-state OPGF, assuming unidirectional gas flow. This method is improved to consider bidirectional flows in an economic dispatch IEGS model under \(K-1\) contingency criteria [128]. However, the improved method neglects the line pack. In this work, the gas flow is corrected by a multi-slack-node model with the Levenberg-Marquardt algorithm, considering gas flow dynamics. Gas flow equations (3.53)-(3.56) are reformulated from the gas system operational constraints, namely the nodal balance equation (3.16), the Weymouth equation (3.11), the mass flow equation, and the compressor flow constraints (3.9), which can be written as \(\pi_{o,t}=\gamma_{c}\pi_{i,t},\)\(1\leq\gamma_{c}\leq\overline{\gamma}_{c},\)\(\forall c,t,(i,o)\in c\). \(\overline{\gamma}_{c}\) is the maximum compression ratio. \(g_{i,t}^{1}\) is the nodal flow unbalance for node \(i\) at time \(t\), \(g_{p,t}^{2}\) is the pipeline flow unbalance for pipeline \(p\) at time \(t\), \(g_{p,t}^{3}\) is the line pack unbalance for pipeline \(p\) at time \(t\), and \(g_{c,t}^{4}\) is the compressor pressure unbalance for compressor \(c\) at time \(t\). Each node connected with a gas well is considered as a slack node [207], and the amount of gas unbalance \(\triangle g_{t}\) of all nodes at time \(t\) is adjusted by multiple gas wells according to (3.57). \(f_{w,t}^{0}\) is the optimal gas flow from gas wells obtained from the MISOCP model (3.52). \(\beta_{w,t}\) is the participation factor of supplier \(w\). \[g_{i,t}^{1}=\sum_{w\in\mathcal{W}(i)}f_{w,t}+\sum_{p\in\mathcal{P }_{1}(i)}f_{p,t}^{out}-\sum_{p\in\mathcal{P}_{2}(i)}f_{p,t}^{in}+\sum_{c\in \mathcal{C}_{1}(i)}f_{c,t}^{out}\] \[-\sum_{c\in\mathcal{C}_{2}(i)}f_{c,t}^{in}-\sum_{u\in\mathcal{U} _{g}(i)}\rho_{u,t}-\sum_{d\in\mathcal{D}_{g}(i)}(F_{d,t}-\triangle f_{d,t})=0,\;\forall i,t \tag{3.53}\] \[g_{p,t}^{2}=(f_{p,t}^{in}+f_{p,t}^{out})|f_{p,t}^{in}+f_{p,t}^{out}|-4\chi_{p}^{f} (\pi_{i,t}^{2}-\pi_{o,t}^{2})=0,\ \forall p,t,(i,o)\in p, \tag{3.54}\] \[g_{p,t}^{3}=2f_{p,t}^{in}-2f_{p,t}^{out}-\chi_{m}^{f}(\pi_{i,t}+\pi_{o,t})+\chi _{m}^{f}(\pi_{i,t-1}+\pi_{o,t-1})=0,\ \forall p,t,(i,o)\in p, \tag{3.55}\] \[g_{c,t}^{4}=\pi_{o,t}-\gamma_{c}\pi_{i,t}=0,\ \forall c,t,(i,o)\in c, \tag{3.56}\] \[f_{w,t}=f_{w,t}^{0}-\beta_{w,t}\triangle g_{t},\ \beta_{w,t}=\frac{f_{w,t}^{0} }{\sum_{\forall w}f_{w,t}^{0}},\ \ \forall w,t,i\in w. \tag{3.57}\] Let \(Y_{t}\) and \(X_{t}\) be the unbalance and state vectors at time \(t\) as defined in (3.58)-(3.59), respectively. \[X_{t}=[f_{t}^{in},f_{t}^{out},f_{c},\pi_{t},\triangle g_{t}],\ \ \forall t, \tag{3.58}\] \[Y_{t}(X_{t})=[g_{t}^{1},g_{t}^{2},g_{t}^{3},g_{t}^{4}]_{X_{t}},\ \ \forall t, \tag{3.59}\] where \(g_{t}^{1},g_{t}^{2},g_{t}^{3},\)\(g_{t}^{4},f_{t}^{in},\)\(f_{t}^{out},\)\(f_{c},\) and \(\pi_{t}\) are vectors for all \(g_{i,t}^{1},g_{p,t}^{2}\), \(g_{p,t}^{3},g_{c,t}^{4},f_{p,t}^{in}\), \(f_{p,t}^{out},f_{c,t},\) and \(\pi_{i,t}\), respectively. 
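Before assembling the full block-structured Jacobian, the correction step can be illustrated on a deliberately simplified toy case. The sketch below (one pipeline, one period, no compressors, line pack dropped; all data are stand-ins rather than the thesis test systems) builds residuals in the spirit of (3.53)-(3.55) and refines the MISOCP solution with a Levenberg-Marquardt routine from SciPy.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy data (illustrative stand-ins, not the thesis test system)
chi = 10.0       # Weymouth constant of the single pipeline
pi_i = 60.0      # fixed slack-node pressure at the source node
fw0 = 100.0      # gas-well schedule returned by the MISOCP model (3.52)
load = 95.0      # gas demand at the receiving node

def residuals(x):
    """Unbalances in the spirit of (3.53)-(3.55) for one pipeline and one period."""
    f_in, f_out, pi_o, dg = x
    g1_src = (fw0 - dg) - f_in                                            # nodal balance at the well node
    g1_dem = f_out - load                                                 # nodal balance at the demand node
    g2 = (f_in + f_out) * abs(f_in + f_out) - 4 * chi * (pi_i**2 - pi_o**2)  # Weymouth unbalance
    g3 = f_in - f_out                                                     # mass balance (line pack dropped here)
    return [g1_src, g1_dem, g2, g3]

x0 = np.array([fw0, fw0, pi_i, 0.0])             # warm start from the MISOCP solution
sol = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt correction
f_in, f_out, pi_o, dg = sol.x
print(f"corrected flow = {f_in:.2f}, outlet pressure = {pi_o:.2f}, well adjustment = {dg:.2f}")
```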
The derivative matrix (Jacobian) between \(Y_{t}\) and \(X_{t}\) can be calculated by \[A_{t}=\begin{bmatrix}\frac{\partial g_{t}^{1}}{\partial f_{t}^{in}}&\frac{ \partial g_{t}^{1}}{\partial f_{t}^{out}}&\frac{\partial g_{t}^{1}}{\partial f_{c}}&0&\frac{\partial g_{t}^{1}}{\partial\triangle g_{t}}\\ \frac{\partial g_{t}^{2}}{\partial f_{t}^{in}}&\frac{\partial g_{t}^{2}}{ \partial f_{t}^{out}}&0&\frac{\partial g_{t}^{2}}{\partial\pi_{t}}&0\\ \frac{\partial g_{t}^{3}}{\partial f_{t}^{in}}&\frac{\partial g_{t}^{3}}{ \partial f_{t}^{out}}&0&\frac{\partial g_{t}^{3}}{\partial\pi_{t}}&0\\ 0&0&0&\frac{\partial g_{t}^{4}}{\partial\pi_{t}}&0\end{bmatrix},\ \forall t, \tag{3.60}\] As a result of considering the continuity equation for line pack in (3.55), the derivative matrix between \(Y_{t}\) and \(X_{t-1}\) can be calculated by \[B_{t,t-1}=\begin{bmatrix}0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&\frac{\partial g_{t}^{3}}{\partial\pi_{t-1}}&0\\ 0&0&0&0&0\end{bmatrix}_{X_{t-1}},\ \forall t, \tag{3.61}\] The initial line pack is considered to be equal to the final line pack, so that it can be used on the next day, i.e., \(m_{p,1}=m_{p,T},\ \ \forall p\). Therefore, the overall Jacobian is presented in (3.64) and all derivatives for \(X\) are obtained from (3.62)-(3.63). The Levenberg-Marquardt algorithm is then applied to find a more exact solution than the proposed MISOCP model. \[X=[X_{1}^{\top},X_{2}^{\top},X_{3}^{\top}\dots X_{T}^{\top}]^{\top}, \tag{3.62}\] \[Y(X)=[Y_{1}(X_{1})^{\top},Y_{2}(X_{2})^{\top},Y_{3}(X_{3})^{\top}\dots Y_{T}(X _{T})^{\top}]^{\top} \tag{3.63}\] \[J(X)=\begin{bmatrix}A_{1}&0&0&\dots&0&B_{1,T}\\ B_{2,1}&A_{2}&0&\dots&0&0\\ 0&B_{3,2}&A_{3}&\dots&0&0\\ \vdots&\vdots&\vdots&\ddots&0&0\\ 0&0&0&\dots&A_{T-1}&0\\ 0&0&0&\dots&B_{T,T-1}&A_{T}\end{bmatrix}_{X} \tag{3.64}\] #### Sequential-MISOCP Algorithm Motivated by the discussion in Section 3.1, this study proposes a computational framework for the distribution-level multi-period OPGF problem based on DCP. Owing to the emerging P2G facilities and renewable energy outputs in the IEGS, bidirectional energy conversion is inevitable, and the conditions discussed in [159] are not fulfilled. Therefore, the SOC relaxation of the power flow generally provides an inexact AC-OPF solution for the PDN. For the gas distribution network (GDN), in order to allow bidirectional gas flow, the sign function of the Weymouth equation is replaced with MILP constraints and quadratic equalities, similar to the treatment discussed in Section 3.3.2. These equalities and the branch power flow quadratic equalities are reformulated as DCP functions. Following the algorithm proposed in [51], the P-CCP proposed in [163] is designed to solve the OPGF for IEGS. Its convergence is proved in [163] for general DCP problems, and another proof has been presented in [164] for the branch flow OPF model. It should be noted that our work is an extension of [51], and the main differences are: (i) our work considers bidirectional gas flow pipelines to deal with meshed GDNs; (ii) [51] directly adopts the SOC relaxation of the power flow without an exactness guarantee. Moreover, three different types of coupling components are considered in this study: GPUs, P2G facilities, and electric-driven gas compressors. In addition, the dynamic gas model is adopted. The main contributions of this method are summarized as: 1. Feasibility and accuracy guarantee. An S-MISOCP algorithm is proposed to find the OPGF for IEGS. 
Based on DCP, the non-convex branch power flow and Weymouth gas flow equalities are decomposed as MISOCP constraints, which are easier to solve than the original nonlinear problem. The proposed algorithm solves a sequence of penalized MISOCP problems, and its feasibility is guaranteed by controlling the penalties. 2. Fast and reliable convergence. Because the S-MISOCP algorithm is a local heuristic approach, it is influenced by the initial point. Therefore, a high-quality initial point is suggested, and an adaptive penalty growth rate is designed to adjust the weight of the main objective in the penalized problem. ##### DCP Reformulation Solving the above model requires considerable computational effort due to the presence of the nonlinear and non-convex power flow and Weymouth equations. Fortunately, the power flow constraints and Weymouth equations can be formulated as a DCP problem by expressing the corresponding constraints as differences of two convex functions. Note that the concave function \(g(x)\) of a DCP constraint can be linearized as \(\hat{g}(x,\overline{x})\) at point \(\overline{x}\) by (3.65), which is the first-order Taylor expansion. \[\hat{g}(x,\overline{x})\cong g(\overline{x})+\nabla g(\overline{x})^{\top}(x- \overline{x}) \tag{3.65}\] 1. Power flow equation reformulation: The quadratic power flow equation can be written as two inequality constraints as \[4p_{l,t}^{2}+4q_{l,t}^{2}+(v_{n,t}-i_{l,t})^{2}\leq(v_{n,t}+i_{ l,t})^{2},\ \ \forall l,t,\] (3.66) \[(v_{n,t}+i_{l,t})^{2}\leq 4p_{l,t}^{2}+4q_{l,t}^{2}+(v_{n,t}-i_{ l,t})^{2},\ \ \forall l,t.\] (3.67) The first inequality (3.66) is an SOC constraint, and its canonical form is (3.68). Using (3.65), the right-hand side of (3.67) can be replaced by its linear approximation. Given \([\overline{p}_{l,t}\ \overline{q}_{l,t}\ \overline{v}_{n,t}\ \overline{i}_{l,t}]^{\top}\) as an initial point, the constraint (3.67) can be substituted with the approximated canonical form (3.69), where \(\Gamma_{l,t}\) is an auxiliary variable. \[\left\|\begin{array}{c}2p_{l,t}\\ 2q_{l,t}\\ (v_{n,t}-i_{l,t})\end{array}\right\|_{2}\leq(v_{n,t}+i_{l,t}),\ \ \forall l,t,\] (3.68) \[\left\|\begin{array}{c}2(v_{n,t}+i_{l,t})\\ \Gamma_{l,t}-1\end{array}\right\|_{2}\leq\Gamma_{l,t}+1,\ \ \forall l,t,\] (3.69) \[\Gamma_{l,t}=8\overline{p}_{l,t}p_{l,t}+8\overline{q}_{l,t}q_{l,t}+2( \overline{v}_{n,t}-\overline{i}_{l,t})(v_{n,t}-i_{l,t})\] \[-4\overline{p}_{l,t}^{2}-4\overline{q}_{l,t}^{2}-(\overline{v}_{n,t}-\overline{i}_{l,t})^{2},\ \ \forall l,t.\] 2. Gas flow equation reformulation: The Weymouth equations are first rewritten without the sign function by using the directional binary variables \(z_{p,t}\), as introduced in Section 3.3.2, i.e., by (3.41)-(3.42) and (3.44)-(3.48). Secondly, (3.49) is converted into two opposite inequality constraints as \[f_{p,t}^{2}+\chi_{p}^{f}\pi_{p,t}^{-2}\leq\chi_{p}^{f}\pi_{p,t}^{ +2},\ \ \forall p,t,\] (3.70) \[\chi_{p}^{f}\pi_{p,t}^{+2}-(f_{p,t}^{2}+\chi_{p}^{f}\pi_{p,t}^{-2}) \leq 0,\ \ \forall p,t\] (3.71) The first inequality is an SOC constraint, and its canonical form is (3.72). Similar to (3.67), given \([\overline{f}_{p,t}\ \overline{\pi}_{p,t}^{+}\ \overline{\pi}_{p,t}^{-}]^{\top}\) as an initial point, the second inequality (3.71) is substituted with the approximated canonical form (3.73), after linearizing the right-hand side by (3.65), where \(\Lambda_{p,t}\) is an auxiliary variable. 
\[\left\|\begin{array}{c}f_{p,t}\\ \sqrt{\chi_{p}^{f}}\pi_{p,t}^{-}\end{array}\right\|_{2}\leq\sqrt{\chi_{p}^{f}}\pi_{p,t}^{+},\ \ \forall p,t, \tag{3.72}\] \[\left\|\begin{array}{c}2\sqrt{\chi_{p}^{f}}\pi_{p,t}^{+}\\ \Lambda_{p,t}-1\end{array}\right\|_{2}\leq\Lambda_{p,t}+1,\] (3.73) \[\Lambda_{p,t}=2\chi_{p}^{f}\overline{\pi}_{p,t}^{-}\pi_{p,t}^{-}+2 \overline{f}_{p,t}f_{p,t}-\chi_{p}^{f}\overline{\pi}_{p,t}^{-2}-\overline{f}_{p,t}^ {2},\ \ \forall p,t\] ##### The Compact Form The compact form of the proposed model, after the above reformulations of the nonlinear equations, is \[\min_{\mathbf{x},\hat{\mathbf{x}}}f(\mathbf{x}) \tag{3.74a}\] \[s.t.\ \mathbf{A}\mathbf{x}\leq\mathbf{B}\] (3.74b) \[\|\mathbf{D}_{h,t}\mathbf{x}\|_{2}\leq\mathbf{d}_{h,t}\mathbf{x},\forall h,t,\] (3.74c) \[\|\mathbf{E}_{h,t}(\hat{\mathbf{x}})\mathbf{x}+\mathbf{F}_{h,t}(\hat{\mathbf{x}})\|_ {2}\leq\mathbf{e}_{h,t}(\hat{\mathbf{x}})\mathbf{x}+\mathbf{f}_{h,t}(\hat{\mathbf{x}}),\forall h,t \tag{3.74d}\] where \(\mathbf{x}\) denotes the decision variables of both systems, including the continuous and binary variables. Due to the need for suitable linearization points in the approximated cones (3.69) and (3.73), \(\hat{\mathbf{x}}\) is considered as a decision variable of the IEGS problem. \(\mathbf{A}\) and \(\mathbf{B}\) can be easily obtained from the MILP constraints (3.41)-(3.42), (3.44)-(3.48) and (3.39b)-(3.39d). The exact SOC constraints (3.68) and (3.72) are collected in (3.74c), while the approximated SOC constraints (3.69) and (3.73) are defined in (3.74d). ##### The Proposed Algorithm Structure The S-MISOCP algorithm starts with an initial, possibly infeasible, linearization point \(\hat{\mathbf{x}}\); a sequence of MISOCP problems, which penalize the constraint violations, is then solved while updating the linearization point at each iteration. Therefore, with suitable algorithm parameters, quick convergence can be achieved by shifting the infeasible solution to a feasible one very close to, or equal to, the optimum. The convergence proof is discussed in [163]. In fact, the S-MISOCP algorithm is a local heuristic approach, and its performance and the solution quality are influenced by: 1. Problem infeasibility: if the original problem is infeasible, the algorithm fails to converge. For this reason, power and gas load shedding are added to relax the operational constraints. If load shedding occurs, upgrading the system components is important to provide secure operation of the IEGS. 2. Initial point selection: the proposed algorithm starts with a convexified counterpart of the original model, which needs to be parameterized by an initial point. According to [51] and [208], the initial point has a crucial impact on the quality of the final solution, the solver time, and the number of iterations. Therefore, to find a high-quality initial point, we recommend solving the relaxed MISOCP problem that penalizes the right-hand sides of (3.69) and (3.73), as follows. \[\min_{\mathbf{x}}f(\mathbf{x})+\lambda_{p}\sum_{\forall t}\sum_{\forall l}i_ {l,t}+\lambda_{g}\sum_{\forall t}\sum_{\forall p}\pi^{+}_{p,t} \tag{3.75a}\] \[s.t.\ \text{(3.74b)-(3.74c)} \tag{3.75b}\] where \(\lambda_{p}\) and \(\lambda_{g}\) are small values that control the focus of the model objective on the constraint violations. 3. Algorithm parameters: selecting suitable parameters and using the adaptive penalty growth rate suggested in this study provides fast and feasible solutions; a schematic outline of the resulting iteration is sketched below. 
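The following skeleton summarizes one way the outer loop could be organized. It is a minimal sketch of the control logic only: `solve_penalized_misocp` and `update_penalties` are placeholders supplied by the caller (in this work the actual solve is done in MATLAB/YALMIP with Gurobi), and the tolerance values are illustrative.

```python
from typing import Callable, Sequence, Tuple

def s_misocp(
    solve_penalized_misocp: Callable[[Sequence[float], Sequence[float]],
                                     Tuple[Sequence[float], float, Sequence[float]]],
    update_penalties: Callable[[Sequence[float], Sequence[float]], Sequence[float]],
    x0: Sequence[float],      # initial linearization point, e.g. from problem (3.75)
    tau0: Sequence[float],    # one initial penalty per approximated cone
    eps_feas: float = 1e-5,   # feasibility tolerance on the cone violations
    eps_obj: float = 1e-4,    # tolerance on the objective improvement
    k_max: int = 100,
):
    """Outer loop of the S-MISOCP algorithm (control logic only; solver supplied by the caller)."""
    x_bar, tau, f_prev = x0, tau0, float("inf")
    for _ in range(k_max):
        # Solve the penalized MISOCP linearized at x_bar; the callable returns the new
        # iterate, its objective value f(x), and the violations of the approximated cones.
        x, f_val, violations = solve_penalized_misocp(x_bar, tau)
        if max(violations) <= eps_feas and abs(f_prev - f_val) <= eps_obj:
            return x                                  # feasible and converged
        tau = update_penalties(tau, violations)       # e.g. the adaptive rule (3.77)
        x_bar, f_prev = x, f_val                      # re-linearize at the new iterate
    return x_bar                                      # iteration budget exhausted
```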
Compared with the standard penalty growth rate introduced in [163], where a global penalty coefficient \(\tau\) is selected for all the convexified constraints, each convexified constraint is assigned its own penalty coefficient, and an adaptive rule is designed for updating it. This allows us to better capture the impact of slack variables on the objective and to facilitate convergence. The proposed rate depends on the relative constraint violation (\(RCV\)), which can be calculated by \[RCV_{h,t}=\varphi_{h,t}/(\mathbf{e}_{h,t}(\hat{\mathbf{x}})\mathbf{x}+\mathbf{f}_{h,t}(\hat{ \mathbf{x}})),\ \ \forall h,t \tag{3.76}\] where \(\varphi_{h,t}\) is the violation of the corresponding approximated cone. The adaptive penalty, which is used in step 5 of the proposed algorithm at iteration \(k\), is obtained by \[\text{If}\ RCV_{h,t}\leq\varepsilon,\text{Then,}\ \tau^{k}_{h,t}=\mu\tau^{k-1}_{h,t};\] \[\text{Else,}\ \tau^{k}_{h,t}=\tau^{k-1}_{h,t}\ \min\{\overline{\mu},\ \max[\underline{\mu},\ \sigma\,RCV_{h,t}]\}. \tag{3.77}\] In the above formula, \(\overline{\mu}\) and \(\underline{\mu}\) are the limits of the penalty rate coefficient, and \(\sigma\) is a fixed constant that controls the rate. Selecting suitable parameters provides fast and reliable convergence with a high-quality solution. Note that the solution might become suboptimal at high iteration counts with high penalties, because the weight of the violations then dominates the main objective function. In order to decrease this weight, \(\mu\) is introduced to reduce the penalties of non-violated constraints, and its range should be \(1\geq\mu>1/\underline{\mu}\) to avoid fluctuations in the penalties between iterations. Theoretically, the convergence of P-CCP only holds for continuous problems [163], and it is not always guaranteed for MISOCP models due to their discontinuity. Based on our experience, the directional binaries obtained from the relaxed problem usually remain fixed after the first few iterations, which is consistent with the observation in [206]. Therefore, the binary variables can be fixed after the initial iterations, the number of which is tuned to \(5\) in this work. Then, the original MISOCP model can be converted into an SOCP with fixed binary variables, which means the S-MISOCP algorithm degenerates to a standard P-CCP and its convergence can be guaranteed. ### 3.4 Simulation Results #### Case Studies with Transmission-level IEGS A transmission-level IEGS test system, containing a \(5\)-bus power system and a \(7\)-bus gas system, is examined to illustrate the effectiveness and features of the proposed GFC method as well as the incremental PLA method. Figure 3.1 shows the topology of the integrated system infrastructure. The details of all parameters of the integrated system and unit commitment are found in Appendix B.1.1 and Appendix B.3.1. In the figure, \(B,G,L,pl,W,C,\) and \(gl\) are used with subscripts to denote the power buses, generators, power lines, power loads, gas wells, compressors, and gas loads, respectively. #### Effectiveness of Break-Points on the MILP Model The MILP model (3.40) is applied to the test system to demonstrate the effect of the number of segments and the selection of breakpoints used in the PLA of the Weymouth equation. To make the results clear, note that the Weymouth equation contains three nonlinear elements, namely \(\pi_{i,t}^{2},\pi_{o,t}^{2}\) and \(f_{p,t}|f_{p,t}|\); the squared pressures are linearized with \(S\) segments each, while the gas flow is linearized with \(2S\) segments due to the absolute value function. 
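The trade-off between the number of segments and the interpolation accuracy can be illustrated with a few lines of NumPy. The sketch below builds uniform breakpoints for the two kinds of nonlinear terms and reports the maximum interpolation error; the pressure and flow ranges are illustrative values, not the test-system data, and uniform (rather than optimal) breakpoints are used for simplicity.

```python
import numpy as np

def pla_max_error(func, lo, hi, n_segments):
    """Maximum interpolation error of a piecewise linear approximation with uniform breakpoints."""
    xb = np.linspace(lo, hi, n_segments + 1)   # breakpoints (x_i, func(x_i))
    yb = func(xb)
    x = np.linspace(lo, hi, 2001)              # dense evaluation grid
    y_pla = np.interp(x, xb, yb)               # piecewise linear interpolant
    return np.max(np.abs(y_pla - func(x)))

sq = lambda x: x**2                            # squared pressure term
signed_sq = lambda x: x * np.abs(x)            # signed squared flow term
for s in (2, 3, 5, 10):
    e_pi = pla_max_error(sq, 40.0, 70.0, s)            # pressure range, illustrative
    e_f = pla_max_error(signed_sq, -50.0, 50.0, 2 * s)  # bidirectional flow, 2S segments
    print(f"S={s:2d}: max interpolation error  pi^2 -> {e_pi:7.2f},  f|f| -> {e_f:7.2f}")
```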
It should be noted that these nonlinear terms are replaced by piecewise linearization variables. As a result, they are not treated as decision variables in the optimization problem. The Weymouth equation error is more important than the interpolation error of each variable. Correa et al. [61] provide an analysis of the interpolation error, which affects the Weymouth error. The Weymouth equation error (\(error\%\)) is the maximum relative difference between the two sides of the Weymouth equation over all pipelines and time periods, as defined in (3.81). Table 3.1 presents the results for different numbers of segments. By increasing \(S\), \(error\%\) decreases while the CPU time increases. \[error\%=\max\left(\left|\frac{f_{p,t}^{2}-\chi_{p}^{f}|\pi_{i,t}^{2}-\pi_{o,t}^ {2}|}{f_{p,t}^{2}}\right|\times 100\right),\ \ \forall p,t \tag{3.81}\] Using optimal breakpoints improves the incremental model without increasing \(S\), as shown in Table 3.1. We use the formulation of [188] to decrease the squared linearization error, and the maximum error is forced to be less than a tolerance value. The optimal breakpoints clearly decrease the Weymouth error with almost the same CPU time. The objective value is only slightly changed by the breakpoint selection. \begin{table} \begin{tabular}{c|c c c c c c} \hline \hline & \multicolumn{3}{c}{Without optimal breakpoints} & \multicolumn{3}{c}{With optimal breakpoints} \\ \cline{2-7} \(S\) & Obj (\(10^{6}\)\(\$\)) & error\% & Time (s) & Obj (\(10^{6}\)\(\$\)) & error\% & Time (s) \\ \hline 2 & 5.037 & 24.5\% & 0.988 & 5.036 & 18.61\% & 0.86 \\ \hline 3 & 5.030 & 11.7\% & 34.31 & 5.031 & 9.85\% & 34.20 \\ \hline 5 & 5.027 & 3.97\% & 174.19 & 5.027 & 3.8\% & 170.89 \\ \hline 10 & 5.026 & 1.02\% & 735.97 & 5.026 & 0.901\% & 740.57 \\ \hline \hline \end{tabular} \end{table} Table 3.1: The effect of the number of segments and breakpoint selection on the Weymouth error and CPU time Figure 3.1: Topology of the test system #### Performance of the MISOCP Model The MISOCP model (3.52) is applied to the test system to demonstrate its effectiveness and computational efficiency. Table 3.2 presents different stress levels (loading) on the gas and electricity infrastructure, denoted as G and E, respectively. It shows the maximum error of the Weymouth equation obtained by (3.81) before and after using the proposed GFC method, denoted as Error1 and Error2, respectively. As the gas stress increases, the gas production cost increases to feed the additional gas load; therefore, the total IEGS cost increases. It is notable that the GFC method is able to decrease the maximum error of the Weymouth equation. The MISOCP model with the proposed GFC method is compared with the MILP model in Table 3.3 under different stresses on the IEGS. Moreover, as the MILP solution exactness depends on the number of segments \(S\) used in the PLA model, two MILP models are formulated with different \(S\), namely \(S=10\) and \(S=20\). The proposed GFC method outperforms the MILP model, especially for high load stress. In the first two cases, the MILP model reaches the optimal objective value but with a longer execution time and a greater maximum error compared with the proposed MISOCP model. With \(20\) segments, although the MILP model needs a very long time, it cannot provide the optimal objective with accurate decisions compared with the proposed method, as shown in the last row of Table 3.3. 
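For reference, the error metric (3.81) can be evaluated directly from the solved pressures and flows. The short NumPy sketch below does this over arrays indexed by pipeline and period; the array names and the numbers in the example are illustrative only.

```python
import numpy as np

def weymouth_error_percent(f, pi_in, pi_out, chi):
    """Maximum relative Weymouth mismatch over all pipelines and periods, as in (3.81).

    f, pi_in, pi_out : arrays of shape (n_pipelines, n_periods) with solved flows and pressures
    chi              : array of shape (n_pipelines,) with the Weymouth constants chi_p^f
    """
    mismatch = f**2 - chi[:, None] * np.abs(pi_in**2 - pi_out**2)
    return np.max(np.abs(mismatch / f**2)) * 100.0

# Tiny illustrative example: one pipeline, two periods
f = np.array([[30.0, 28.0]])
pi_in = np.array([[60.0, 59.0]])
pi_out = np.array([[55.0, 56.0]])
chi = np.array([0.6])
print(f"error% = {weymouth_error_percent(f, pi_in, pi_out, chi):.2f}")
```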
\begin{table} \begin{tabular}{c c c c c c c} \hline Stress & Obj & MISOCP time & Error1 & Correction time & Error2 & Iter. \\ Level & (\(10^{6}\$\)) & (s) & (\%) & (s) & (\%) & \# \\ \hline \(1.0\)G & 3.934 & 0.678 & 16.64\% & 3.201 & 0.098\% & 82 \\ \hline \(1.3\)G & 4.753 & 0.559 & 25.98\% & 4.615 & 0.013\% & 117 \\ \hline \(1.6\)G & 5.573 & 13.536 & 19.29\% & 7.773 & 0.017\% & 113 \\ \hline \(1.6\)G+\(1.1\)E & 5.734 & 18.844 & 22.75\% & 4.255 & 0.044\% & 114 \\ \hline \end{tabular} \end{table} Table 3.2: GFC method effectiveness under different stress levels on IEGS \begin{table} \begin{tabular}{c c c c c} \hline Stress & Model & Obj (\(10^{6}\$\)) & Solution time (s) & Error (\%) \\ \hline \multirow{3}{*}{\(100\%\)loading} & MISOCP & 3.9345 & 3.8792 & 0.098\% \\ \cline{2-5} & MILP, S=10 & 3.9345 & 24.475 & 1.020\% \\ \cline{2-5} & MILP, S=20 & 3.9345 & 1411.1 & 0.260\% \\ \hline \multirow{3}{*}{\(130\%\)loading} & MISOCP & 4.7535 & 5.1751 & 0.013\% \\ \cline{2-5} & MILP, S=10 & 4.7535 & 574.24 & 1.070\% \\ \cline{2-5} & MILP, S=20 & 4.7535 & 6079.5 & 0.201\% \\ \cline{2-5} & MISOCP & 5.4499 & 5.4372 & 0.184\% \\ \cline{2-5} & MILP, S=10 & 5.4736 & 232.22 & 1.002\% \\ \cline{2-5} & MILP, S=20 & 5.4731 & 1350.8 & 0.261\% \\ \hline \end{tabular} \end{table} Table 3.3: Comparison between MISOCP and MILP models under stress levels on IEGS #### Comparison with the Steady-State Gas Model The main purpose of the proposed model is to find the optimal day-ahead schedule for both power units and gas suppliers. Any assumption that may lead to suboptimal decisions should therefore be examined carefully. With the consideration of the line pack and the traveling velocity of gas, the proposed model has a significant effect on the production scheduling. The following two patterns are presented to illustrate this effect. Pattern \(1\) corresponds to the steady-state gas flow model; please refer to Section 2.1.3 for the model formulations. In pattern \(2\), the dynamic-state gas flow model considers the line pack as formulated in the proposed model. Figure 3.2 plots the day-ahead gas production for the two patterns in the case of a 150% gas stress level. Considering the line pack inside pipelines provides more operational flexibility in pattern \(2\) compared with pattern \(1\); therefore, the schedules are different. Additionally, the objective costs differ: \(\$5317.2\times 10^{3}\) for pattern \(1\) versus \(\$5311.7\times 10^{3}\) for pattern \(2\). Figure 3.2: Production scheduling of gas wells in both the dynamic- and steady-state conditions #### Case Studies with Distribution-level IEGS In this subsection, the S-MISOCP algorithm performance is evaluated, and the impact of the algorithm parameters and initial point selection on the solution quality is discussed. Moreover, computational comparisons between the proposed algorithm and the MISOCP relaxation method are presented. All the results below are obtained on a personal PC with \(8\) GB memory and an Intel(R) Core(TM) \(\mathrm{i}5-3320\)M CPU, using the MATLAB environment with the YALMIP toolbox [209] and the Gurobi solver. #### Test System Description The test system is a \(13\)-bus PDN integrated with an \(8\)-node meshed gas network, and its topology is shown in Figure 3.3. 
The PDN has one non-GFU (G1), one wind farm (W1), seven power demands (\(pl_{1}-pl_{7}\)), and \(11\) power lines, whereas the gas system has one gas source, four gas demands (\(gl_{1}-gl_{4}\)), one electric-driven compressor (C1), one P2G facility (P2G), and seven pipelines (\(p_{1}-p_{7}\)). Note that the computational burden mainly depends on the number of power lines and gas pipelines due to their two pairs of cones. This burden is also influenced by the number of bidirectional pipelines, of which there are three, namely \(p_{3},p_{4}\), and \(p_{7}\). System details, including wind power forecasting, are found in Appendix B.2.1 and Appendix B.3.2. Algorithm parameters are listed in Table 3.4. Figure 3.3: The test system topology. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline \hline Parameter & \(\tau^{0}\) & \(\tau^{max}\) & \(\overline{\mu}\) & \(\underline{\mu}\) & \(\sigma\) & \(K^{max}\) & \(\epsilon\) & \(\underline{\varepsilon}\) & \(\lambda_{p}\) & \(\lambda_{g}\) \\ \hline Value & 0.1 & \(10^{5}\) & 3 & 1.5 & \(10^{3}\) & 100 & \(10^{-4}\) & \(10^{-5}\) & \(10^{-2}\) & \(10^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 3.4: S-MISOCP algorithm parameters #### Impact of Initial Point The impact of the initial point on the S-MISOCP algorithm is presented in this section. To better demonstrate this impact, the final objective values and execution times are reported for three different initial points: (i) the zero initial point; (ii) the relaxed MISOCP point, which is the solution vector of problem (3.74) excluding the approximated cones (3.74d); (iii) the proposed initial point, which is obtained by (3.75). The effect of \(\lambda_{p}\) and \(\lambda_{g}\) is also examined in this study. Note that the algorithm convergence parameters are kept the same for all cases, as given in Table 3.4. The zero vector yields the worst objective value with the longest computational time, while the relaxed MISOCP vector obtains a better solution within about \(30\,\%\) of the zero-vector time. The proposed vector, which outperforms the other two methods in computational burden, is affected by the penalties used in (3.75). With low values of \(\lambda_{p}\) and \(\lambda_{g}\), the solution acts as a relaxed MISOCP vector solution due to their low impact on the objective in (3.75). As these penalties increase, the solution time decreases, as shown in the first three rows of Table 3.5. However, high values of \(\lambda_{p}\) and \(\lambda_{g}\) may provide a poor initial vector, so the algorithm takes a longer time to converge, as shown in the last row of the table. The reason is the suboptimal objective provided by the initial vector, which needs more iterations to be recovered. We conclude that, with proper values of the penalties \(\lambda_{p}\) and \(\lambda_{g}\), fast, accurate, and optimal solutions can be identified. #### Effectiveness of Adaptive Penalty Compared with the fixed penalty growth rate (see, e.g., [51], [163], [164]), the proposed adaptive penalty growth rate has computational benefits, which are discussed in this section. Because the adaptive rate depends on the \(RCV\), it provides less focus on low \(RCV\) and concentrates more on high \(RCV\) and the main objective. 
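As a concrete reading of the update rule (3.76)-(3.77), the sketch below updates one cone's penalty from its relative constraint violation. The parameter values are illustrative choices in the spirit of Table 3.4 and of the value \(\mu=0.95\) used later in this section; they are not prescriptive.

```python
def update_penalty(tau_prev, rcv, mu=0.95, mu_lo=1.5, mu_hi=3.0, sigma=1e3,
                   eps=1e-4, tau_max=1e5):
    """Adaptive penalty growth rate of (3.77) for a single approximated cone.

    tau_prev : penalty coefficient tau_{h,t}^{k-1} from the previous iteration
    rcv      : relative constraint violation RCV_{h,t} from (3.76)
    """
    if rcv <= eps:
        tau = mu * tau_prev                                   # cone satisfied: shrink its penalty
    else:
        tau = tau_prev * min(mu_hi, max(mu_lo, sigma * rcv))  # cone violated: grow with its RCV
    return min(tau, tau_max)                                  # keep penalties bounded

# Example: a badly violated cone is penalized harder than a mildly violated one
print(update_penalty(tau_prev=0.1, rcv=0.5))    # large RCV  -> growth capped at mu_hi
print(update_penalty(tau_prev=0.1, rcv=2e-3))   # small RCV  -> growth rate sigma*rcv = 2.0
print(update_penalty(tau_prev=0.1, rcv=1e-5))   # satisfied  -> penalty shrinks by mu
```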
In order to show the effectiveness of the adaptive rate, numerical cases are conducted based on different combinations of power and gas demands. Table 3.6 shows a numerical comparison between the fixed and adaptive rates under five cases. Note that the power and gas load shedding are added to the power and gas nodal balancing equations; therefore, a feasible solution can be found for any load stress. For the fixed penalty rate, the coefficients of (3.77) are set at \(\underline{\mu}=\mu=\overline{\mu}=2\), while the parameters listed in Table 3.4 are used for the adaptive penalty rate. In the first three cases, the fixed penalty rate provides the same results as the adaptive one because both start with the same initial vector, which has small values of \(RCV\), obtained by (3.75). Therefore, the algorithm converges quickly. In the last two cases, increasing the gas demands introduces a more stressed IEGS and the OPGF cannot be identified easily; therefore, the algorithm executes a larger number of iterations. To better present the effect of the adaptive rate on the penalty values, Figure 3.4 displays all penalties at each iteration for the last case of Table 3.6. Note that the number of penalties is \(|\mathcal{T}|\times|\mathcal{P}\cup\mathcal{L}|\) at each iteration. It can be seen that the median value of the penalties is very small compared with the largest penalty value in each iteration. Due to the adoption of \(\mu=\)\(0.95\), penalties start to decrease after the 9\({}^{\text{th}}\) iteration to provide a high weight for the main objective \(f(\mathbf{x})\). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Case\({}^{*}\)} & \multicolumn{3}{c}{Fixed Penalty Rate} & \multicolumn{3}{c}{Adaptive Penalty Rate} \\ \cline{2-7} & Obj. (\$) & Time (s) & Iter.\({}^{**}\) & Obj. (\$) & Time (s) & Iter.\({}^{**}\) \\ \hline 0.5GL+0.5PL & 4488.092 & 2.11 & 2 & 4488.092 & 1.95 & 2 \\ \hline 1GL+1PL & 6476.257 & 2.3 & 3 & 6476.257 & 2.2 & 3 \\ \hline 1GL+1.5PL & 9389.173 & 3.8 & 3 & 9389.173 & 3.2 & 3 \\ \hline 1.5GL+1PL & 8737.334 & 140.23 & 22 & 8736.851 & 123.88 & 31 \\ \hline 1.5GL+1.5PL & 11825.742 & 158.45 & 20 & 11825.203 & 71.29 & 15 \\ \hline \hline \multicolumn{7}{l}{\({}^{*}\)PL= Power load, GL = Gas load; \({}^{**}\) Iteration no.} \\ \end{tabular} \end{table} Table 3.6: Numerical comparisons between fixed and adaptive penalty rates \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{\(\lambda_{p}\)} & \multirow{2}{*}{\(\lambda_{g}\)} & \multicolumn{2}{c}{Zero vector} & \multicolumn{2}{c}{MISOCP vector} & \multicolumn{2}{c}{Proposed vector} \\ \cline{3-8} & & Obj. (\$) & Time (s) & Obj. (\$) & Time (s) & Obj. (\$) & Time (s) \\ \hline 0.001 & 0.001 & 6479.346 & 15.64 & 6477.196 & 4.51 & 6477.196 & 4.51 \\ \hline 0.01 & 0.01 & 6479.346 & 15.64 & 6477.196 & 4.51 & 6476.257 & 2.63 \\ \hline 0.1 & 0.1 & 6479.346 & 15.64 & 6477.196 & 4.51 & 6477.322 & 2.04 \\ \hline 1 & 1 & 6479.346 & 15.64 & 6477.196 & 4.51 & 6477.359 & 20.28 \\ \hline \hline \end{tabular} \end{table} Table 3.5: Computational comparisons between different initial vectors #### Comparison with MISOCP Relaxation Method The MISOCP relaxation (MISOCPR) method, which is adopted in many studies (see, e.g., [204], [210]), is compared with the S-MISOCP algorithm. The MISOCPR method is obtained by solving (3.75) without constraints (3.74c). It is clear that the MISOCPR method takes a shorter solution time; however, it may introduce infeasible solutions. 
Therefore, the objective values and maximum \(RCV\) are reported for each method with different numbers of time periods, and the results are listed in Table 3.7. The proposed algorithm provides a total objective very close to, if not equal to, the one obtained by the MISOCPR method. The maximum \(RCV\) of power lines (MRCV_P) and that of gas pipelines (MRCV_G) provided by the MISOCPR method are much larger than those of the proposed S-MISOCP algorithm. This is because the S-MISOCP algorithm terminates only after checking the solution feasibility; therefore, both MRCV_P and MRCV_G are below \(10^{-5}\). We can conclude that the decisions obtained by the MISOCPR method are infeasible and may lead to insecure operation of the IEGS. Figure 3.5 displays the dramatic difference in energy production schedules between the two methods. For the power generation dispatch, the active power of each generator from the two methods is almost the same because of the low MRCV_P of the MISOCPR method. However, for the gas schedules, the two methods provide completely different gas production decisions. Figure 3.4: Maximum \(RCV\) and penalties values for case 1.5GL+1.5PL. Figure 3.5: Energy production schedules obtained by MISOCP relaxation method and S-MISOCP algorithm. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Periods} & \multicolumn{3}{c}{Relaxed MISOCP method} & \multicolumn{3}{c}{S-MISOCP algorithm} \\ \cline{2-7} & Obj. & MRCV\_P & MRCV\_G & Obj. & MRCV\_P & MRCV\_G \\ & (\$) & (\%) & (\%) & (\$) & (\%) & (\%) \\ \hline 6 & 1387.76 & 0.13 & 219.27 & 1387.96 & 2x10\({}^{-5}\) & 2x10\({}^{-5}\) \\ \hline 12 & 2933.55 & 0.41 & 194.20 & 2933.58 & 5x10\({}^{-6}\) & 2x10\({}^{-5}\) \\ \hline 18 & 4735.12 & 0.06 & 198.09 & 4735.20 & 6x10\({}^{-6}\) & 9x10\({}^{-6}\) \\ \hline 24 & 6476.25 & 0.28 & 211.16 & 6476.26 & 8x10\({}^{-6}\) & 2x10\({}^{-5}\) \\ \hline \hline \end{tabular} \end{table} Table 3.7: Effectiveness of the proposed algorithm compared with MISOCP relaxation method ### 3.5 Conclusions and Discussions The optimal power-gas flow (OPGF) is the most fundamental problem in the interdependent power and natural gas systems. This chapter first reviews the state-of-the-art techniques employed to solve the OPGF problem, indicating the major advantages and drawbacks of these techniques. Then, two different efficient methods based on convex optimization approaches have been proposed for the transmission and distribution levels, respectively. Moreover, the commonly adopted PLA method, which reformulates the IEGS model into an MILP framework, has been presented for comparison with the proposed ones. The first proposed method is the gas flow correction (GFC) method, in which the multi-slack-node method and the Levenberg-Marquardt algorithm are designed to consider the gas dynamics and bidirectional gas flow. Different case studies are conducted to show the effectiveness and the computational performance of the proposed method, and the main conclusions are as follows. 1. The GFC method decreases the maximum Weymouth error from \(25\%\) to \(0.013\%\) in the worst case. 2. Increasing the number of PLA segments in the MILP model decreases the Weymouth error but requires a much longer computation time. Using optimal breakpoints provides a lower error with little increase in CPU time. 3. The GFC method is compared with 1) the MILP model based on PLA and 2) the MISOCP model based on SOC relaxation, and it is concluded that the proposed method provides a more accurate optimal solution with a shorter time than the MILP model and the plain SOC relaxation. 
The second method is the S-MISOCP algorithm, which finds the OPGF for the IEGS at the distribution level, considering bidirectional energy conversions via GPUs, electric-driven compressors, and P2G facilities. In order to incorporate meshed gas networks, the sign function of the Weymouth equations, for pipelines with unfixed gas flow directions, is relaxed into MILP and quadratic constraints. The latter and the power flow quadratic equations are decomposed into MISOCP constraints using DCP. The proposed algorithm is enhanced by (i) suggesting a high-quality initial point instead of traditional or random ones, and (ii) adopting an adaptive penalty growth rate to control the weights of the main objective and the violations in the penalized MISOCP problems. Finally, numerical experiments are conducted to evaluate the algorithm performance, showing the effectiveness of the suggested initial point and the adaptive penalty growth rate. In fact, the S-MISOCP algorithm can be employed to solve not only the distribution-level IEGS with radial power networks but also the transmission-level IEGS with the DC-OPF model, because it respects the gas flow directions inside pipelines and the DC-OPF is a linear set of constraints. In other words, the transmission-level IEGS can be solved by the S-MISOCP algorithm after removing the two opposite cones of the AC-OPF for each branch. The main advantage of the proposed S-MISOCP algorithm is that it can be adopted to solve two-stage robust optimization (RO) models with the non-convex power and gas flow equations in both stages. The algorithm is called twice in the quadruple-loop procedure, which is proposed and employed in Chapters 5-6; this has been done in the thesis to consider power and gas system uncertainties, such as renewable outputs (see Chapter 5), and to integrate the algorithm with a bilateral gas-electricity marketing model (see Chapter 6). Future work includes solving the IEGS operational model under contingencies and demand response uncertainties. Enhancing the algorithm performance opens up quite a few new research directions, such as proposing a better adaptive penalty rate with adjustable parameters or with additional factors that could be controlled and updated at each iteration. Moreover, considering the AC-OPF model with bidirectional power flow and comparing with up-to-date studies, such as adjustable breakpoints for the PLA methods, are our subsequent works.

## Chapter 4 Robust Resilient Operational Strategies for IEGSs

Threatened by natural disasters and man-made attacks, the resilient operation of power systems has drawn increased attention, which gives rise to a greater demand for power generation assets with high operational flexibility, such as natural gas-fired power units (GPUs). This, in turn, results in a greater proportion of GPUs and greater interdependence between power and gas systems. As a consequence, modeling the interactions between power systems and natural gas systems to achieve operational resilience in power systems becomes extremely vital. This topic has been discussed by quite a few researchers; however, previous studies suffer from two major drawbacks, namely (1) they assume the existence of only one utility that has full control authority over the power system and the gas system; and (2) the economic interactions between power systems and gas systems have been neglected, which goes against current industrial practice.
In this study, the power system and the gas system are regarded as two individual utilities, and their physical and economic interactions are modeled by considering the fuel consumption of the GPUs and the gas contracts, and by guaranteeing the fuel availability of the GPUs in the pre- and post-contingency stages, respectively. The proposed model is developed based on a two-stage robust decision-making framework to optimize the operational performance of power systems under the worst-case \(N-k\) contingencies. To deal with the binary variables introduced by the linearization of the Weymouth equation and the on/off operation of generators, the nested column-and-constraint generation (NC&CG) algorithm is adopted. The necessity of considering the economic and physical interactions between power systems and natural gas systems and the effectiveness of the proposed model and algorithm are verified by numerical simulations on two test systems. This work has been published as

* Ahmed R. Sayed, Cheng Wang, and Tianshu Bi. "Resilient operational strategies for power systems considering the interactions with natural gas systems." Applied Energy, 2019 May, 1;241:548-66, DOI: [https://doi.org/10.1016/j.apenergy.2019.03.053](https://doi.org/10.1016/j.apenergy.2019.03.053)

This chapter is organized as follows. Section 4.1 provides a state-of-the-art review of the existing independent power system (IPS) resilience models, which disregard the interactions with gas infrastructures, and of the integrated electric-gas system (IEGS) resilience models, which co-optimize the two infrastructures from economic and security perspectives; it also presents the main contributions of the work in this chapter. Section 4.2 presents the mathematical formulations, including the operational constraints of the power and gas systems in both the pre- and post-contingency stages, the defender and attacker behavior models, the firm and reserved gas contracts, and the over-generation considerations. Section 4.3 describes the solution methodology by formulating the proposed non-convex model in a mixed-integer linear programming (MILP) framework and designing the NC&CG algorithm with some recommendations to increase its performance. Numerical examples are provided in Section 4.4 to evaluate the performance and effectiveness of the proposed model. A thorough comparison is provided between the proposed model and (1) the IPS resilience models and (2) the IEGS resilience models. The effectiveness of considering the over-generation issues and the gas line pack is discussed. The performance of the proposed solution methodology is validated with two large-scale IEGS test systems. Finally, the main conclusions and a brief discussion are drawn in Section 4.5.

### 4.1 Introduction

The reliable and resilient operation of critical infrastructures, such as electricity, water, gas, and telecommunication, is important to strengthen and support economic and social activities in modern society. The electric power system is the most critical infrastructure because electricity plays an important role in the secure and continuous operation of these systems. However, existing electric power grids experience different forms of vulnerabilities and random failures, such as extreme weather, terrorism, component aging/failure, unexpected generator or power line outages, and human errors, which may result in widespread economic and social consequences.
For example, extreme weather has caused power outages with damages ranging from \(\$20\) billion to \(\$55\) billion in the USA [1], blackouts caused by Hurricane Katrina in \(2005\) [2], the Japan Earthquake in \(2011\) [3], Hurricane Sandy in \(2012\) (an \(N-90\) event) [4], and transmission line contingencies in South Australia [5]. Natural disasters such as extreme weather are expected to increase in the future due to climate change [6]. In addition, vulnerabilities to terrorist attacks could cause more severe system disruptions than natural disasters [7]. From \(1999\) to \(2002\), more than \(150\) terrorist attacks on power networks worldwide were reported [8]. These vulnerabilities make it crucial to evaluate the performance of, and facilitate decision-making for, the power grid under contingencies by analyzing the power system vulnerability. Vulnerability analysis of power systems has drawn much attention in the pertinent literature, with the focus on boosting power system resilience and reliability. Quantitative and qualitative evaluations of system resilience were summarized in [24], and weather-affected resilience measures were discussed in [211]. Different studies have presented numerous vulnerability analysis models, which can be grouped into two categories. In the first category, the model identifies the critical components of the power system [134, 135, 136, 137, 138, 139, 140]. In [134], [135], different models for the analysis of the vulnerability under random failures were presented. Reference [134] formulates a simple model based on cascading failures by eliminating a number of system components randomly. In [135], the resilience management is improved based on the component failure probability and hazard frequencies. Likewise, natural disasters, which can result in the failure of multiple power system components simultaneously in a certain area, are usually simulated based on the probability of the damage states, such as hurricanes [136] and earthquakes [137]. The vulnerability of critical components may not represent the worst-case scenario in terms of disruption; therefore, mixed-integer bi-level max-min models [138], topology (graph) models [139], and worst-case interdiction models [140] have been used to determine the worst attack strategies. However, the mere identification and protection of the vulnerable components does not assure an optimal defense plan in case of serious system disturbances. Therefore, models in the second category were developed to determine the optimal protection strategies against such vulnerabilities [141, 142, 143, 144, 145, 146, 147]. In [141], a min-max model was used to identify the optimal defense strategy and the worst attack scenarios under the optimum defense. To reduce the decision-making costs of the bi-level min-max model, a min-max-min model, which performs corrective actions after the attack and considers the interaction between the power system defender and attacker, was employed in [142].
Under the same tri-level decision-making framework as [142], quite a few variations and extensions have been reported in the literature, such as the consideration of multi-zone uncertainty in [143], the integration of preventive and emergency responses in [144], the consideration of cyber-physical attacks in the communication network in [145], the combination of system expansion (long-term planning) and switching operations (short-term operation) in [146], and the distribution-level topology reconfiguration and DG islanding formulation in [147]. The aforementioned models can be used to determine the optimal protection plan and economic re-dispatch schedules based on the requirements of the electricity utilities; however, these models neglect the physical interactions between power systems and other energy systems, such as natural gas systems, which provide the fuel for GPUs. Therefore, the "independent power system" (IPS) resilience model may not provide the optimal decision for power system operators (PSOs), and it may cause physical violations in other interacting systems, such as under/over nodal pressure and/or well production capacity violations in gas systems [40]. It should be noted that the interactions between power systems and gas systems have been significantly enhanced due to the increasing proportion of GPUs [10]. In fact, the wide deployment of GPUs could also mitigate the fluctuations and uncertainties in power systems in a cost-effective manner [119], such as the outputs from renewable energy as well as contingencies, due to their high operational flexibility and efficiency [127]. However, this increased interaction, which is also referred to as an integrated electricity and gas system (IEGS) in the literature, raises issues related to the secure, reliable, and resilient operation of the power plants. In fact, quite a few studies have been conducted to address the issues of interdependency and the economic and resilient operation of IEGSs [43]. The solvability of the IEGS energy flow is discussed in [212]. The gas system optimization problems are illustrated in detail in [168]. A steady-state economic dispatch of the IEGS is established considering GPUs and power-to-gas (P2G) facilities in [122] to enable bi-directional energy flow, and the impact of demand response is investigated in [123]. The importance of considering the gas infrastructure in an IEGS with high penetration of wind power generation was investigated in [120]. To mitigate the uncertainties associated with wind generation, an interval optimization IEGS model is presented in [124], and a stochastic unit commitment (UC) for IEGSs is introduced in [125] to address random power outages and electricity load forecasting errors, based on which the impacts of P2G facilities are analyzed in [126]. The \(N-1\) contingency model for IEGSs was analyzed from economic and security-related aspects for a single outage in [127], and the model was improved by considering the spinning and regulation reserves in [128]. It should be noted that the aforementioned power and gas co-optimization resilience models, namely the IEGS models, share one underlying assumption, namely, the existence of one operator or utility that has full authority over both the power system and the gas system. This operator minimizes all costs associated with energy production and provides optimal decisions for the combined system.
However, in most cases, the power system(s) and gas system(s) are operated by different utilities, and they are unsynchronized in most countries and regions, as in European countries [45] and in China [46]. This lack of synchronization indicates that the total fuel cost minimization determined by the IEGS model might not be a realistic operational objective for autonomous sub-systems and, therefore, bilateral energy trading is inevitable. In fact, the operational mode of the electric power system in post-contingency conditions might be significantly different from that of the pre-contingency stage, with, for example, the sudden start-up or shut-down of fast-response generators and rapid increases or decreases in generator outputs to minimize operating losses; a similar trend is observed for the gas demands of GPUs. Moreover, GPUs are usually provided with interruptible gas supply services according to current gas industrial practices [47], and the gas contracts are usually determined in the day-ahead stage because real-time contracting would be costly and inconvenient [48]. In other words, GPUs cannot execute the planned regulations without appropriate gas contracting. In addition, both systems act in conjunction during contingencies, and gas systems are ready to assist the power utility according to real-time contracts. In this chapter, the resiliency of power systems against contingencies in terms of decision-making is revisited considering the interactions of power systems with gas systems. A robust day-ahead dispatch model for the electric power system against \(N-k\) contingencies is proposed. The model detects the worst-case attack against the power system and identifies the optimal gas contracts with preventive and corrective actions; this is accomplished by optimizing the economic generation dispatch in both the pre-contingency and post-contingency stages. In addition, the steady-state constraints of the gas system [127] and the approximated flow dynamics are also considered, which increases the operational flexibility to satisfy the power system prerequisites. This model not only provides optimal decisions in the pre-contingency stage, including the gas contracts, defensive strategy, and generation outputs, but also provides the optimal adjustable strategy of the generator outputs in the post-contingency stage. In this study, the gas contract is considered as a combination of two sub-contracts, namely, the firm gas contract and the reserved gas contract. The firm gas contract determines the scheduled amount of gas required by the power utility to supply the GPUs listed in the gas contract during the pre-contingency stage. In other words, if there is no contingency, the gas demands of the power utility will be satisfied by the amount of gas scheduled in this contract. In the post-contingency stage, by contrast, as discussed above, the gas demands of GPUs may change. This change should obey the reserved gas contract, which describes the scheduled positive (above the firm value) and negative (below the firm value) gas reserves. Alternatively, the power utility could sign a costly real-time contract. The interdependency and interactions between the power and gas systems are considered during normal and abnormal conditions; thus, the gas network security can be guaranteed by suitable contracts and is not affected by the utilization of gas reserves.
The operational goal of the power system operator is to achieve economic and secure operation with optimal gas contracts, while the operating costs of the gas systems are neglected. The main contributions of this chapter are as follows. 1. A tri-level resilient operational framework of power systems that considers contracts with gas systems is established, where firm gas supply contracts and gas reserve contracts are considered in the pre-contingency and the post-contingency stages, respectively. To the best of our knowledge, this is the first attempt to consider gas reserve contracts in a robust model. The operational constraints of the gas systems are considered, which guarantees their operational feasibility and security in both the pre- and post-contingency stages. Compared with the IEGS robust models presented in the literature, the proposed model considers the dynamic state of the gas system. Moreover, as a result of considering contracts, a new kind of attack strategy emerges, i.e., the consumption of gas below/above the reserved (contracted) values. 2. Unlike most tri-level models, where the lower-level decision variables are continuous, there are binary variables in the lower-level optimization problem of the proposed model. The additional binaries originate from the linearization of the nonlinear non-convex Weymouth equation, as well as from the on/off control of the generators in the post-contingency stage, and they are used to determine the potential attack region [52]. Therefore, the NC&CG algorithm proposed by [53] is applied to solve the proposed tri-level model after adjusting its stopping criteria.

### 4.2 Problem Formulation

For ease of illustration, a schematic diagram of the aforementioned power resilience models, i.e., the IPS model, the IEGS model, and the proposed gas contracting (GC) model, is presented in Figure 4.1. The salient features of these models are as follows. 1. IPS model: this model treats gas systems as black boxes, where no operational constraints of the gas system are considered. This treatment may lead to violations of the gas constraints, particularly in the post-contingency stage, such as over-/under-pressure at gas nodes, and might trigger cascading failures in gas systems. 2. IEGS model: this model treats the power system and the gas system as one system and considers all operational constraints of both systems while neglecting the shared energy contracts. Because there are different utilities for the two systems, this model may result in over-optimistic solutions and cause contract avoidance in practice. 3. GC model: this model minimizes the operational costs of the power system while considering the operational constraints of both the power system and the gas system. The physical and economic interactions between the power system and the gas system are modeled by adding the fuel consumed by the GPUs to the nodal gas balancing constraint and the gas contract costs to the power system objective function. The mathematical formulation of the proposed GC model is presented in this section after describing the prerequisite assumptions and simplifications commonly adopted in the literature:
1. In power system modeling, (i) the power system operates in a steady state and the transient state after the attack is ignored; (ii) the DC power flow model presented in Section 2.2.3 is adopted; (iii) all power generators are fast-response generators, so the cut-off/connection process is completed without any delay; and (iv) the UC is predetermined, and the interested reader can refer to Appendix A.3 for the UC problem formulation. These simplifications are commonly used in power system planning [142, 143, 52, 144]. However, the proposed model can easily be extended to include the start-up and shut-down decisions of traditional coal-fired units with minimum on/off time constraints. 2. In gas system modeling, (i) all gas storages are considered closed (not modeled) to highlight the significance of the gas dynamics; (ii) the simplified compressor model [43, 52, 127], which is presented in Section 2.1.1, is adopted. 3. In contract modeling, the prices of both the firm gas in the pre-contingency stage and the reserved gas below/above the firm value that remains/is consumed during the post-contingency stage are provided by the gas system operator, and the power system operator has received these prices before identifying the optimal gas contracts. 4. In defense and attack strategy modeling, (i) a deterministic malicious attack analysis is implemented; (ii) any component is unavailable only if it is attacked and is not defended; (iii) in the proposed model, it is considered that only power lines are attacked, but different component types, such as generators and power-gas connection lines, can also be included in the model.

Figure 4.1: Operational layout in the pre- and post-contingency stages for the IPS model, the IEGS model and the proposed GC model.

The overall objective function of the proposed model is presented in (4.1). It consists of two parts, which are expressed by (4.2) and (4.3), respectively. (4.2) depicts the operational costs \(\Gamma_{pre}\) in the pre-contingency stage, including the power generation costs of all generators and the costs of the reserved gas. Therefore, the day-ahead gas contracts are optimized by considering their two parameters in the objective function. The first parameter, which is the firm gas contract to be consumed by the GPUs in the pre-contingency stage, is relevant to the power generation costs of the GPUs; the second parameter is the reserved gas contract. (4.3) provides the regulation costs \(\Gamma_{post}\) in the post-contingency stage, namely, the re-dispatch costs of the non-GPUs, denoted by the first two terms, and the penalties for the non-served electricity load, described by the third term. Note that there is no need to add the re-dispatch costs of the GPUs to the regulation costs because they are optimized by considering the reserved gas in the operational costs. In this model, the best defense strategy and the available resources in the pre-contingency stage, such as the firm/reserved gas contracts and generation outputs, are identified by the upper-level problem. The virtual attacker strives to maximize the worst-case disruption (regulation costs). In the lower-level decision-making, the feasible resources are deployed and the generator re-dispatch is optimized to mitigate this disruption.
\[\min\ \Gamma_{pre}+\max\ \min\ \Gamma_{post}, \tag{4.1}\]
\[\Gamma_{pre}=\sum_{\forall t}\Big[\sum_{u\in\mathcal{U}_{n}}C_{u}(p^{0}_{u,t})+\sum_{h\in\mathcal{H}}(\mu_{h,t}\rho_{h,t}+\mu^{+}_{h,t}\rho^{+}_{h,t}+\mu^{-}_{h,t}\rho^{-}_{h,t})\Big] \tag{4.2}\]
\[\Gamma_{post}=\sum_{\forall t}\Big[\sum_{u\in\mathcal{U}_{n}}(C^{+}_{u}\triangle p^{+}_{u,t}+C^{-}_{u}\triangle p^{-}_{u,t})+\sum_{d\in\mathcal{D}_{p}(n)}C_{d}\triangle p_{d,t}\Big] \tag{4.3}\]

In this study, the decision-maker of the proposed GC model is the PSO, although the operational constraints of the gas system are involved. Different from existing works, which assume that the power systems and gas systems are controlled by one utility, economic behaviors, such as gas purchase contracts, are modeled in the formulation to reflect the multiple-decision-maker reality. The gas operational constraints are modeled in the power system decision-making problem to guarantee the feasibility of the signed contracts with respect to the gas system operation. In other words, the gas operational constraints reflect the influence of the signed gas contracts on the gas system from the feasibility perspective, and the gas system can always reach a feasible operating status as long as the proposed tri-level model is successfully solved. The economic influences of the gas purchased by the power system on the gas system have been considered in the prices of the contracts.

#### Pre-contingency constraints

To highlight the interdependency between the power system and the gas system, their operational constraints are introduced separately.

##### Power System Operational Constraints

The power system operational constraints are derived from Section 2.2.3, using the pre-contingency decision variables. They are composed of
\[\underline{P}_{u}c_{u,t}\leq p_{u,t}^{0}\leq\overline{P}_{u}c_{u,t},\ \forall u,t, \tag{4.4}\]
\[-\overline{R}_{u}^{-}\leq p_{u,t}^{0}-p_{u,t-1}^{0}\leq\overline{R}_{u}^{+},\ \forall u,t, \tag{4.5}\]
\[\sum_{u\in\mathcal{U}(n)}p_{u,t}^{0}+\sum_{l\in\mathcal{L}_{1}(n)}p_{l,t}^{0}-\sum_{l\in\mathcal{L}_{2}(n)}p_{l,t}^{0}=\sum_{d\in\mathcal{D}_{p}(n)}P_{d,t},\ \forall n,t, \tag{4.6}\]
\[-\tilde{\pi}\leq\theta_{n,t}^{0}\leq\tilde{\pi},\ \forall n\in\mathcal{N}-1,t,\ \ \theta_{1,t}^{0}=0,\ \forall t, \tag{4.7}\]
\[-\overline{P}_{l,t}\leq p_{l,t}^{0}\leq\overline{P}_{l,t},\ \forall l,t, \tag{4.8}\]
\[p_{l,t}^{0}=\frac{\theta_{m,t}^{0}-\theta_{n,t}^{0}}{x_{l}},\ \forall l,t,(m,n)\in l, \tag{4.9}\]
where the superscript \((\cdot)^{0}\) indicates the pre-contingency variables; \(c_{u,t}\) is a pre-determined UC decision; \(\underline{P}_{u}/\overline{P}_{u}\) is the minimum/maximum limit of power generation; \(\overline{R}_{u}^{-}/\overline{R}_{u}^{+}\) is the maximum ramping down/up capacity; and \(\tilde{\pi}\approx 3.1416\) is the mathematical constant \(\pi\).
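To make the role of the DC power flow constraints (4.7)-(4.9) concrete, the short sketch below checks a candidate pre-contingency operating point against the angle and thermal limits; the network data and variable names are illustrative placeholders and are not taken from the test systems used later in this chapter.

```python
# Minimal sketch of the DC power flow checks (4.7)-(4.9): line flows follow
# from angle differences and reactances, and must stay within thermal limits.
# All data below are illustrative placeholders, not the thesis test systems.
import math

lines = [                       # (from_bus, to_bus, reactance x_l, limit P_l)
    ("n1", "n2", 0.10, 1.0),
    ("n2", "n3", 0.20, 0.8),
]
theta = {"n1": 0.0, "n2": -0.05, "n3": -0.12}   # bus angles; slack n1 fixed at 0

for m, n, x_l, p_max in lines:
    flow = (theta[m] - theta[n]) / x_l          # equation (4.9)
    ok_flow = abs(flow) <= p_max                # equation (4.8)
    ok_angle = all(abs(a) <= math.pi for a in (theta[m], theta[n]))  # equation (4.7)
    print(f"{m}-{n}: flow={flow:+.3f} p.u., within limit={ok_flow}, angles ok={ok_angle}")
```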
##### Natural Gas Operational Constraints

Considering the gas consumption of GPUs in the nodal balancing equation, the dynamic-state gas flow model presented in Section 2.1.2 is composed of
\[\underline{F}_{w}\leq f_{w,t}^{0}\leq\overline{F}_{w},\ \forall w,t, \tag{4.10}\]
\[\underline{\Pi}_{i}\leq\pi_{i,t}^{0}\leq\overline{\Pi}_{i},\ \forall i,t, \tag{4.11}\]
\[\pi_{i,t}^{0}\leq\pi_{o,t}^{0}\leq\gamma_{c}\pi_{i,t}^{0},\ \forall c,t,(i,o)\in c, \tag{4.12}\]
\[0\leq f_{c,t}^{out,0}=(1-\alpha_{c})f_{c,t}^{in,0},\ \forall c\in\mathcal{C},t, \tag{4.13}\]
\[\sum_{w\in\mathcal{W}(i)}f_{w,t}^{0}+\sum_{p\in\mathcal{P}_{1}(i)}f_{p,t}^{out,0}-\sum_{p\in\mathcal{P}_{2}(i)}f_{p,t}^{in,0}+\sum_{c\in\mathcal{C}_{1}(i)}f_{c,t}^{out,0}-\sum_{c\in\mathcal{C}_{2}(i)}f_{c,t}^{in,0}=\sum_{h\in\mathcal{H}(i)}\rho_{h,t}+\sum_{d\in\mathcal{D}_{g}(i)}F_{d,t},\ \forall i,t, \tag{4.14}\]
\[m_{p,t}^{0}=\chi_{p}^{m}(\pi_{i,t}^{0}+\pi_{o,t}^{0}),\ \forall p,t,(i,o)\in p, \tag{4.15}\]
\[f_{p,t}^{in,0}-f_{p,t}^{out,0}=m_{p,t}^{0}-m_{p,t-1}^{0},\ \forall p,t, \tag{4.16}\]
\[f_{p,t}^{0}=\frac{f_{p,t}^{in,0}+f_{p,t}^{out,0}}{2},\ \forall p,t, \tag{4.17}\]
\[f_{p,t}^{0}|f_{p,t}^{0}|=\chi_{p}^{f}(\pi_{i,t}^{0^{2}}-\pi_{o,t}^{0^{2}}),\ \forall p,t,(i,o)\in p, \tag{4.18}\]
where \(\mathcal{H}(i)\) is the subset of gas contracts whose GPUs are supplied from node \(i\); the firm gas amounts \(\rho_{h,t}\) are defined in Section 4.2.3.

#### Behaviors of the Attacker and the Defender

It has been demonstrated that transmission lines are one of the most vulnerable assets in power systems [138, 142, 213]. Therefore, we assume that there is a virtual attacker who attacks transmission lines and has a limited attack budget, which is expressed as
\[\sum_{\forall l}C_{l}^{u}u_{l}\leq k,\ u_{l}\in\{0,1\},\ \forall l, \tag{4.19}\]
where \(k\) is the attack budget and \(C_{l}^{u}\) is the cost of attack \(u_{l}\). To mitigate the impacts of the attack, the power system operator can deploy defensive resources, such as hardening the lines or sending out line patrol crews, aside from dispatching recourse resources in the post-contingency stage. Similarly, we assume that the defense budget of the PSO is limited [52, 138, 141, 142, 143, 144, 145, 146], as defined by
\[\sum_{\forall l}C_{l}^{y}y_{l}\leq a,\ y_{l}\in\{0,1\},\ \forall l, \tag{4.20}\]
where \(a\) is the defense budget and \(C_{l}^{y}\) is the cost of defense \(y_{l}\). Combining the defense resource deployment and the attack strategies, the availability of the lines is defined by (4.21). A line is off-line only if it is attacked and is not defended; in other words, a line cannot be off-line if it is defended.
\[h_{l}=1-u_{l}+u_{l}y_{l},\ h_{l}\in\{0,1\},\ \forall l. \tag{4.21}\]
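The interplay of (4.19)-(4.21) can be illustrated with a small enumeration: for a toy set of lines with unit attack and defense costs (an illustrative assumption, echoing the choice made later in the case studies), the snippet lists which lines remain available for every attack vector within the budget, given a fixed defense plan.

```python
# Illustrative enumeration of line availability h_l = 1 - u_l + u_l*y_l under
# an attack budget (4.19), for a fixed defense decision satisfying (4.20).
# Line names and unit costs are toy assumptions, not the thesis test system.
from itertools import product

lines = ["L1", "L2", "L3"]
defended = {"L1": 1, "L2": 0, "L3": 0}      # y_l: defense plan within budget a
attack_budget_k = 1                          # with unit attack costs C_l^u = 1

for u in product([0, 1], repeat=len(lines)):          # all candidate attacks
    if sum(u) > attack_budget_k:                      # enforce (4.19)
        continue
    availability = {l: 1 - u_l + u_l * defended[l]    # equation (4.21)
                    for l, u_l in zip(lines, u)}
    print(f"attack={dict(zip(lines, u))} -> availability={availability}")
```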
#### Gas Contracts

The gas consumed by the GPUs is subject to the pipeline capacities and the contracts. The two terms of a gas contract are the firm gas and the reserved gas. The firm gas contract specifications are mathematically presented in [47] with UC, and in [48] as take-or-pay contracts. The proposed model provides the mathematical equations of the contracts signed between the gas utility and the electric utility; the interested reader is referred to [47]. The firm gas amount is defined by (4.22). In the proposed model, the gas contract costs include the cost of the firm gas in the pre-contingency stage and the cost of the reserved gas below/above the firm gas, which may remain or be consumed in the post-contingency stage. The common contract types are take-or-pay contracts and interruptible contracts [48]. The gas consumed by generator \(u\) after an attack (\(\frac{\Phi}{\eta_{u}}p_{u,t}\)) is subject to (4.23), where \(\mathcal{U}_{g}(h)\) is the subset of the GPUs listed in contract \(h\). The reserved gas below/above the firm gas is limited by the lower boundaries defined in (4.24).
\[\rho_{h,t}=\sum_{\forall u\in\mathcal{U}_{g}(h)}\frac{\Phi}{\eta_{u}}p^{0}_{u,t},\ \forall h,t, \tag{4.22}\]
\[-\rho^{-}_{h,t}\leq\sum_{\forall u\in\mathcal{U}_{g}(h)}\frac{\Phi}{\eta_{u}}(p_{u,t}-p^{0}_{u,t})\leq\rho^{+}_{h,t},\ \forall h,t, \tag{4.23}\]
\[\rho^{+}_{h,t},\ \rho^{-}_{h,t}\geq 0,\ \forall h,t. \tag{4.24}\]

#### Post-contingency Constraints

In the post-contingency stage, the model constraints include the transmission line availability (4.21) and the GPUs' gas consumption constraints (4.23). The constraints of the power and gas systems are described in the next section.

##### Power System Constraints

Equations (4.25)-(4.26) and (4.28) define the generation capacities of the GPUs and non-GPUs, respectively. The ramping-up and ramping-down limits of the generated power from all units are defined in (4.27). Over-generation (OG), which is defined as a larger energy supply than demand, was first modeled in [52]. It was introduced to detect two attack strategies that may cause more damage than the non-served power load and the reserved generated power. The first attack strategy is the violation of the ramping-down limit of a generator, and the second is the violation of the minimum output of a generator. A third attack strategy is introduced in the proposed model as a result of considering the gas contracts: forcing the GPUs to consume gas below the reserved values. For example, if the reserved gas of contract \(h\) is optimized as \(\rho^{+}_{h,t}=\rho^{-}_{h,t}=0\), the outgoing feeders of the contracted GPUs cannot be attacked, and these power plants have to generate the same amount of power in the post-contingency stage as in the pre-contingency stage according to the constraints defined by (4.23).
\[p_{u,t}=\delta_{u,t}p^{*}_{u,t},\ \forall u\in\mathcal{U}_{g},t, \tag{4.25}\]
\[\underline{P}_{u}c_{u,t}\leq p^{*}_{u,t}\leq\overline{P}_{u}c_{u,t},\ \forall u,t, \tag{4.26}\]
\[-\overline{R}^{-}_{u}-(1-\delta_{u,t})\overline{P}_{u}\leq p_{u,t}-p_{u,t-1}\leq\overline{R}^{+}_{u}+(1-\delta_{u,t})\overline{P}_{u},\ \forall u,t, \tag{4.27}\]
\[\underline{P}_{u}\delta_{u,t}\leq p_{u,t}\leq\overline{P}_{u}\delta_{u,t},\ \forall u,t. \tag{4.28}\]
Besides the two kinds of OG attack strategies reported in [52], the following example is introduced to illustrate the third kind of attack strategy. In Figure 4.2, the test system has \(4\) buses, one GPU \(G1\), and two loads \(Load1\) and \(Load2\). The firm and reserved gas contracts have been signed in the pre-contingency stage; say the firm value is \(65\) MW, and \(40\) MW and \(0\) MW are the reserved values below and above the firm value, respectively. Therefore, the minimum and maximum outputs of \(G1\) are \(25\) MW and \(65\) MW, respectively. According to the assumption that only power lines can be attacked, \(L1\), \(L2\), and \(L3\) are the possible attack decisions. For simplicity, the attack and defense budgets are set to \(1\) and \(0\), respectively, and all cost coefficients in (4.19)-(4.20) are ones. In general, the worst attack scenario should be attacking \(L1\), as the non-served load would reach \(65\) MW. However, without considering OG, this attack is not allowed, as the reserved gas lower bound would be violated.
In this regard, the only feasible attack scenario is \(L2\), and the corresponding unserved load is \(25\) MW, which is over-optimistic. Therefore, a binary variable \(\delta_{u,t}\) is added to consider OG during the post-contingency stage. As shown in Figure 4.3, the GPU needs gas with the value \(\Phi p_{u,t}/\eta_{u}\) to generate the power \(p_{u,t}\). On the other hand, the allowable gas \(\Phi p_{u,t}^{*}/\eta_{u}\) is always restricted by the reserve constraints defined in (4.23). Therefore, if the required gas \(\Phi p_{u,t}/\eta_{u}\) fulfills the contracts, then \(\delta_{u,t}\) equals \(1\); otherwise, \(\delta_{u,t}\) equals \(0\) and the GPU is suddenly shut down. For non-GPUs, the first two attack strategies are detected using (4.27) and (4.28), respectively. For GPUs, the first strategy is detected using (4.27), whereas the last two strategies are detected using (4.25)-(4.26). To provide supply and attack relaxations during the post-contingency stage, load shedding is added to the power nodal balance equation (4.29). The range of load shedding is defined in (4.30). The bus angle is defined by (4.31). If the line is off-line (\(h_{l}=0\)), (4.32) forces the power flow of that line to be equal to zero; otherwise, it can be calculated from the phase angle difference defined in (4.33). The reserved power generated by the non-GPUs is defined by (4.34)-(4.35).
\[\sum_{u\in\mathcal{U}(n)}p_{u,t}+\sum_{l\in\mathcal{L}_{1}(n)}p_{l,t}-\sum_{l\in\mathcal{L}_{2}(n)}p_{l,t}=\sum_{d\in\mathcal{D}_{p}(n)}(P_{d,t}-\triangle p_{d,t}),\ \forall n,t, \tag{4.29}\]
\[0\leq\triangle p_{d,t}\leq P_{d,t},\ \forall d\in\mathcal{D}_{p},t, \tag{4.30}\]

Figure 4.2: Illustration example of the minimum output capacity constraint violation.

Figure 4.3: A simple block diagram of a GPU.

#### The Proposed GC Model Levels

Figure 4.4 illustrates the purpose and decision variables of each level. In other words, it displays the roles of the system operator in both the pre-contingency and post-contingency stages and the purpose of the virtual disruptive agent. In the following subsections, each level is discussed in detail.

#### Upper Level Problem

In this level, the decision variables are the defense action, the gas contracts, and the variables of the power and gas systems in the pre-contingency stage, including the generator dispatch. The system operator seeks to minimize the overall cost in the pre-contingency (operation) and post-contingency (regulation) stages, as shown in Figure 4.4.
\[\min_{\mathbf{y},\ \mathbf{w},\ \mathbf{\alpha}}\ \Gamma^{pre}+\Gamma^{post} \tag{4.38a}\]
\[s.t.:\ \text{Power system constraints: (4.4)-(4.9),} \tag{4.38b}\]
\[\text{Gas system constraints: (4.10)-(4.17) and PLA models for (4.18),} \tag{4.38c}\]
\[\text{Defense budget: (4.20),} \tag{4.38d}\]
\[\text{Firm gas and reserved boundaries in contracting: (4.22) and (4.24),} \tag{4.38e}\]
\[\mathbf{y}=\{y_{l,t}\},\ \ \mathbf{w}=\{\rho_{u,t},\rho_{u,t}^{+},\rho_{u,t}^{-}\},\ \ \mathbf{y},\mathbf{w}\in\arg\{\text{Middle-level problem (4.39)}\}, \tag{4.38f}\]
\[\mathbf{\alpha}=\{p_{u,t}^{0},p_{l,t}^{0},\theta_{n,t}^{0},f_{w,t}^{0},f_{p,t}^{0},\pi_{i,t}^{0},f_{p,t}^{in,0},f_{p,t}^{out,0},f_{c,t}^{in,0},f_{c,t}^{out,0},m_{p,t}^{0},PLA_{Pre}\}, \tag{4.38g}\]
where \(PLA_{Pre}\) denotes the linearization variables, which include all continuous and binary variables for the squared nodal pressures and the squared pipeline average flows of the Weymouth equation (4.18).
Because some variables in the upper level will influence the middle/lower-level decision-making, the upper-level decision variables are divided into three vectors: \(\mathbf{y}\), which is the defense action for all power lines; \(\mathbf{w}\), which comprises the generator dispatch in the pre-contingency stage and the contracting decisions; and \(\mathbf{\alpha}\), which represents the remaining variables of the power and gas systems in the pre-contingency stage. It should be noted that \(f_{p,t},\pi_{i,t},\ f_{p,t}|f_{p,t}|\) and \(\pi_{i,t}^{2}\) are represented by piecewise linearization variables, as discussed in Appendix A.6; as a result, they are not considered as decision variables.

#### Middle Level Problem

The objective of the middle-level problem is to maximize the regulation costs by finding the optimal (worst) attack strategy \(\mathbf{u}\), which is the only decision variable in this level. Thus, the problem is defined as
\[\max_{\mathbf{u}}\ \Gamma^{post} \tag{4.39a}\]
\[s.t.:\ \text{Attack budget: (4.19)}. \tag{4.39b}\]

Figure 4.4: The three levels of the proposed model.

#### Lower Level Problem

In this level, the available resources are deployed and the generator re-dispatch is optimized to minimize the regulation costs \(\Gamma^{post}\) under the attack strategy fixed by the middle level; the corresponding problem (4.40) is subject to the post-contingency power and gas system constraints and the gas contract limits (4.23).

#### The Compact Form of the Proposed Model

For ease of analysis, the compact form of the proposed GC model is given below:
\[\min_{\mathbf{y},\,\mathbf{w},\,\mathbf{\alpha}}\ f(\mathbf{w})+\max_{\mathbf{u}}\ \min_{\mathbf{x},\,\mathbf{z}}\ \mathbf{dx} \tag{4.41a}\]
\[s.t.:\ \mathbf{R}\mathbf{y}+\mathbf{A}\mathbf{w}+\mathbf{M}\mathbf{\alpha}\geq\mathbf{J}, \tag{4.41b}\]
\[\mathbf{h}=\mathbf{1}-\mathbf{u}+\mathbf{u}\cdot\mathbf{y}, \tag{4.41c}\]
\[\mathbf{Bu}\leq k, \tag{4.41d}\]
\[\mathbf{Ex}+\mathbf{G}\mathbf{z}\geq\mathbf{F}-\mathbf{Q}\mathbf{h}-\mathbf{D}\mathbf{w}, \tag{4.41e}\]
where \(f(\mathbf{w})\) denotes the operational costs in the pre-contingency stage (\(\Gamma^{pre}\)) and \(\mathbf{dx}\) denotes the regulation costs in the post-contingency stage (\(\Gamma^{post}\)). \(\mathbf{R},\mathbf{A},\mathbf{M},\mathbf{J},\mathbf{B},\mathbf{E},\mathbf{G},\mathbf{F}\), \(\mathbf{Q}\) and \(\mathbf{D}\) are the coefficient matrices of the model constraints. \(f(\mathbf{w})\) and constraints (4.41b) can be derived from the upper-level problem. The constraint of the middle-level problem (attack budget) is defined in (4.41d). \(\mathbf{dx}\) and the constraints defined in (4.41e) can be determined from the lower-level problem. Finally, constraint (4.41c) can be derived from the availability status presented in (4.21), where \(\mathbf{u}\cdot\mathbf{y}\) denotes the element-wise product. There is no need to linearize this product because the two variables are determined at different levels.

#### The NC&CG Algorithm

As shown in Figure 4.5, once an arbitrary feasible decision has been derived from the upper- and middle-level problems (i.e., \(\mathbf{y},\mathbf{w}\) and \(\mathbf{u}\)), the inner C&CG deals with the recourse problem to provide the optimal binaries as a primal cut to the middle-level attack problem. The inner gap, which equals the relative difference between the middle- and lower-level objectives, is verified twice at each iteration to guarantee that the optimal decision vectors \(\mathbf{u}^{*},\mathbf{x}^{*}\), and \(\mathbf{z}^{*}\) are achieved. The inner C&CG provides the worst attack strategies as primal cuts to the upper-level defense problem in the outer C&CG algorithm. Similar to the inner gap, the outer gap is checked twice at each iteration to find the optimal preventive actions \(\hat{\mathbf{y}}\) and \(\hat{\mathbf{w}}\) in the pre-contingency stage (upper level).

#### Outer C&CG

We assume that the inner C&CG algorithm can solve the sub-problem defined in (4.44) and derive an optimal attack scenario \(\mathbf{u}^{*}\) together with the worst-case value of the sub-problem \(\Psi^{*}(\hat{\mathbf{y}},\hat{\mathbf{w}})\); subsequently, the outer C&CG is implemented to solve the bi-level min-max (first-stage) problem. The details are presented in Algorithm 3.
Figure 4.5: The NC&CG algorithm layout.

#### Inner C&CG

The inner C&CG is used to solve sub-problem (4.44) by expanding it into the tri-level form defined in (4.42). A strong-duality reformulation of the last level (a linear problem) outperforms the use of the Karush-Kuhn-Tucker (KKT) conditions [134] because of the large number of binary variables needed to linearize the KKT conditions; this number equals the sum of the total numbers of variables and constraints in the last level. In contrast, with strong duality, the variables used in the linearization are continuous, and their number is small, equal to the number of components that might be attacked or have to be defended, such as the power lines in this study. The model (4.42) represents the sub-problem in the tri-level formulation based on strong duality. The vectors \(\hat{\boldsymbol{y}}\) and \(\hat{\boldsymbol{w}}\) are derived from the outer C&CG and become parameters in the inner C&CG; \(\boldsymbol{\lambda}\) denotes the dual variables of the last level. In the objective, the only nonlinear product is \(\boldsymbol{\lambda}^{\top}\boldsymbol{Q}\boldsymbol{h}\), which is linearized by \(\boldsymbol{\gamma}\) with the additional constraints defined in (4.45), where \(\sum\boldsymbol{\gamma}^{r}\) means the summation of the members of vector \(\boldsymbol{\gamma}^{r}\), and \(\overline{M}\) is a sufficiently large number.
\[\max_{\mathbf{u}}\ \min_{\mathbf{z}}\ \max_{\boldsymbol{\lambda}}\ (\mathbf{F}-\mathbf{Q}\mathbf{h}-\mathbf{D}\hat{\mathbf{w}}-\mathbf{G}\mathbf{z})^{\top}\boldsymbol{\lambda} \tag{4.42a}\]
\[s.t.:\ \mathbf{h}=\mathbf{1}-\mathbf{u}+\mathbf{u}\cdot\hat{\mathbf{y}}, \tag{4.42b}\]
\[\mathbf{B}\mathbf{u}\leq k, \tag{4.42c}\]
\[\mathbf{E}^{\top}\boldsymbol{\lambda}=\mathbf{d}^{\top},\ \ \boldsymbol{\lambda}\geq 0. \tag{4.42d}\]
``` 1:Set boundary parameters \(UB^{outer}=\infty,LB^{outer}=-\infty,R=0\), convergence parameters \(\varepsilon,\ last^{outer}=0\) and select an arbitrary feasible \(\hat{\mathbf{y}},\hat{\mathbf{w}}\), then go to Step \(4\). 2:Solve problem MP (4.43) to update \(\hat{\mathbf{y}},\hat{\mathbf{w}},\hat{\mathbf{\alpha}},\varphi^{*},LB^{outer}=\textbf{MP}^{*}\) and \(Gap^{outer}=(UB^{outer}-LB^{outer})/UB^{outer}\). \[\textbf{MP}: \min_{\mathbf{y},\ \mathbf{w},\ \mathbf{\alpha},\ \varphi,\ x^{r},\ z^{r}} f(\mathbf{w})+\varphi\] (4.43a) \[s.t: \mathbf{R}\mathbf{y}+\mathbf{A}\mathbf{w}+\mathbf{M}\mathbf{\alpha}\geq\mathbf{J},\] (4.43b) \[\varphi\geq\mathbf{d}\mathbf{x}^{r},\ \ \forall r=\{1\dots R\},\] (4.43c) \[\mathbf{h}^{r}=\mathbf{1}-\mathbf{u}^{*,r}+\mathbf{u}^{*,r}\cdot\mathbf{y},\ \ \forall r=\{1\dots R\},\] (4.43d) \[\mathbf{E}\mathbf{x}^{r}+\mathbf{G}\mathbf{z}^{r}\geq\mathbf{F}-\mathbf{Q}\mathbf{h}^{r}-\mathbf{D}\mathbf{w},\ \ \forall r=\{1\dots R\}.\] (4.43e) 3:If \(Gap^{outer}\leq\varepsilon\ \&\ last^{outer}\), terminate; else, \(R=R+1\), \(\mathbf{u}^{*,R}=\mathbf{u}^{*},last^{outer}=0\). 4:Solve problem SP (4.44) to update \(\mathbf{u}^{*},\mathbf{x}^{*},\mathbf{z}^{*},UB^{outer}=\min\{UB^{outer},f(\hat{\mathbf{w}})+\textbf{SP}^{*}\}\) and \(Gap^{outer}=(UB^{outer}-LB^{outer})/UB^{outer}\). \[\textbf{SP}: \max_{\mathbf{u}} \min_{\mathbf{x},\ \mathbf{z}}\mathbf{d}\mathbf{x}\] (4.44a) \[s.t: \mathbf{h}=\mathbf{1}-\mathbf{u}+\mathbf{u}\cdot\hat{\mathbf{y}},\] (4.44b) \[\mathbf{B}\mathbf{u}\leq k,\] (4.44c) \[\mathbf{E}\mathbf{x}+\mathbf{G}\mathbf{z}\geq\mathbf{F}-\mathbf{Q}\mathbf{h}-\mathbf{D}\hat{\mathbf{w}}.\] (4.44d) 5:If \(Gap^{outer}\leq\varepsilon\), set \(last^{outer}=1\); else, \(last^{outer}=0\), and go to Step \(2\). ```
**Algorithm 3** Outer C&CG Algorithm
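To make the control flow of Algorithm 3 easier to follow, the sketch below mirrors its loop in plain Python. The functions `solve_master` and `solve_subproblem` are placeholders for the MP (4.43) and SP (4.44) solvers (in the thesis these are solved as MILP/tri-level problems with Gurobi), and the numerical stub values are purely illustrative; the sketch only demonstrates how the bounds, gap, and the \(last^{outer}\) flag interact.

```python
# Skeleton of the outer C&CG loop of Algorithm 3. solve_master / solve_subproblem
# are stubs standing in for MP (4.43) and SP (4.44); the values they return are
# made up so the bound/gap/last-flag logic can be executed and inspected.
import math

def solve_master(cuts):
    # Placeholder for MP (4.43): returns (y_hat, w_hat, alpha_hat, f(w) + phi).
    return "y_hat", "w_hat", "alpha_hat", 100.0 + 10.0 * len(cuts)

def solve_subproblem(y_hat, w_hat):
    # Placeholder for SP (4.44), solved by the inner C&CG: returns (u*, SP value).
    return "u_star", 25.0

def outer_ccg(eps=1e-3, max_iter=20):
    # Step 1: initialise bounds, the cut pool, and an arbitrary feasible (y, w).
    UB, LB, last, cuts = math.inf, -math.inf, False, []
    y_hat, w_hat = "y0", "w0"
    for _ in range(max_iter):
        # Step 4: worst-case attack for the current defense/contract decision.
        u_star, sp_val = solve_subproblem(y_hat, w_hat)
        UB = min(UB, 100.0 + sp_val)        # f(w_hat) + SP*; 100.0 is a stub for f(w_hat)
        # Step 5: remember whether the gap was already closed after the subproblem.
        last = UB < math.inf and (UB - LB) / UB <= eps
        # Step 2: re-optimise defense, contracts and dispatch against all cuts.
        y_hat, w_hat, _alpha, LB = solve_master(cuts + [u_star])
        # Step 3: terminate only if the gap is closed both before and after the master.
        if (UB - LB) / UB <= eps and last:
            break
        cuts.append(u_star)
    return y_hat, w_hat, UB

print(outer_ccg())
```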
In the traditional NC&CG proposed in [215], the stopping criterion of the outer/inner C&CG algorithm is fulfilled when the duality gap drops below the convergence tolerance \(\varepsilon^{outer}/\varepsilon^{inner}\), where the duality gap is measured only after the master problem. This means that if the master problem achieves its optimal objective, i.e., it converges to the previous sub-problem objectives (minimal duality gap), the algorithm terminates even if the master decision is not optimal. To illustrate this for the inner C&CG algorithm: at each iteration, the **MPs** in Step \(2\) provides the worst attack strategy based on the experience gained from previous iterations. For example, at iteration \(R\), suppose there are two strategies \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) that provide the same objective value \(UB_{inner}\) based on the experience of \(R-1\) iterations. If the **MPs** decision is \(\mathbf{u}_{1}\), the **SPs** in Step \(4\) provides \(LB_{inner}\simeq UB_{inner}\). For the next iteration, \(R+1\), the **MPs** decision is either \(\mathbf{u}_{1}\) or \(\mathbf{u}_{2}\), which results in the same \(UB_{inner}\). But if the **MPs** decision is \(\mathbf{u}_{2}\) and the **SPs** decision deploys the available resources to minimize the damage resulting from \(\mathbf{u}_{2}\), the inner C&CG terminates with a sub-optimal attack strategy. This stopping criterion provides the optimal objective value, but it may fail to find the optimal attack strategy or the optimal values of the lower-level variables. This problem may perturb the outer C&CG if it is not handled in the inner C&CG, because the optimal attack strategies cannot be found and, consequently, the outer C&CG may not converge. Therefore, the concept of \(last^{outer}/last^{inner}\) is introduced in the outer/inner C&CG algorithms, and their values are updated before and after the execution of the sub-problem **SP/SPs**, respectively.

### 4.4 Simulation Results

In this section, a \(5\)-Bus-\(7\)-Node electricity-gas integrated energy system is examined to illustrate the effectiveness and features of the proposed model and algorithm; the IEEE \(39\)-Bus-\(20\)-Node and the IEEE \(118\)-Bus-\(20\)-Node test systems are employed to study the scalability of the implemented algorithm. The load-shedding penalty price is set at \(\$1000\)/MWh, and the \(\overline{M}\) used for the linearization in the inner C&CG algorithm is set at \(10^{5}\); in fact, this value affects the computational time and depends on the expected value of the inner C&CG objective [52]. The convergence tolerance value \(\varepsilon\) is set at \(0.1\%\). The number of segments used in the linearization of the Weymouth equation is set at \(6\) for both the gas flow and the gas pressure. The numerical experiments are performed using MATLAB R\(2020\)a with the YALMIP toolbox [209] and Gurobi \(8.1\) on a personal laptop with an Intel(R) Core(TM) \(\mathrm{i}5-3320\)M CPU and \(8.00\) GB RAM. Figure 4.6 shows the topology of the test system. The details and UC of the system are described in Appendix B.1.1 and Appendix B.3.1. The system has \(6\) power lines, \(2\) GPUs, \(1\) non-GPU, \(3\) power loads, \(2\) gas wells, \(1\) compressor, \(5\) passive pipelines, and \(3\) gas loads. In the figure, \(G,L,pl,W,C\), and \(gl\) are used with subscripts to denote the power generators, power lines, power loads, gas wells, compressors, and gas loads, respectively.
The targets of the defender and attacker are the \(6\) power lines; no other components of the system can be attacked. We consider that the attack and defense cost coefficients \(C_{l}^{u}\) and \(C_{l}^{y}\) are all set to one. There are \(2\) contracts, \(h_{1}\) and \(h_{2}\), for generators \(G2\) and \(G3\), respectively. To demonstrate the effectiveness and practicability of the proposed model, various tests are performed under different defense and attack budgets, where the defense budget range is \([1,5]\) and the attack budget range is \([1,5-\text{defense budget}]\). For brevity, D and A are paired with numbers to denote the defense and attack budgets. For example, "D1A2" means the defense budget is \(1\) and the attack budget is \(2\).

#### Comparison with the Co-optimization Models

To demonstrate the importance of optimizing the gas contracts for improving the power system resilience, a comparison in terms of economics and security is performed with the IEGS models, which do not consider contracts between utilities and gas suppliers. Different cases are evaluated based on different defense and attack budget combinations. The following two models are compared:

Figure 4.6: Topology of the test system

1. The IEGS model represents the tri-level IEGS resilience models, which disregard the contracts during the IEGS resilient operation. In this study, to provide a fair comparison, the objective of this model is to minimize the operation and regulation costs only for the power system, while the cost of the reserved gas (day-ahead contracts) in the upper-level problem is not considered. The required gas, which would be consumed if the attack occurred, is calculated using (4.47)-(4.48), where \(p_{u,t}^{*}\) is the power generated by GPU \(u\) in the post-contingency stage under the worst-case attack. This value will be used for the real-time contracts, which are more expensive than day-ahead contracts.
\[\rho_{h,t}^{+}=\max\Big\{0,\ \sum_{u\in\mathcal{U}_{g}(h)}\frac{\Phi}{\eta_{u}}(p_{u,t}^{*}-p_{u,t}^{0})\Big\},\ \ \forall h,t, \tag{4.47}\]
\[\rho_{h,t}^{-}=\max\Big\{0,\ \sum_{u\in\mathcal{U}_{g}(h)}\frac{\Phi}{\eta_{u}}(p_{u,t}^{0}-p_{u,t}^{*})\Big\},\ \ \forall h,t. \tag{4.48}\]
2. The proposed GC model, which considers the reserved gas in the upper-level problem.

Each model provides an optimal defense strategy to minimize the power system disruption (\(\Gamma^{post}\)) under any malicious attack. The optimal protection strategy may be the same for both models because the cost of the reserved gas (if it is considered in the GC model) is small compared to the non-served power load penalty. Table 4.1 displays the economic performance of the proposed model for \(3\) different cases. The numerical results for the two stages are shown, i.e., the pre-contingency and post-contingency stages. In the pre-contingency stage, each model provides the optimal operation cost (\(\Gamma^{pre}\)), which includes the cost of the day-ahead contracts for the reserved gas in the GC model. Therefore, \(\Gamma^{pre}\) is higher for the GC model than for the IEGS model, as shown in the table. Under the optimal defense and generator dispatch, the inner C&CG is applied to determine the worst attack strategy (limited by the attack budget of the case), which results in the re-dispatch of all generators to mitigate the power load shedding during the post-contingency stage. This re-dispatch indicates the changes in the gas consumed by the GPUs during the post-contingency stage.
Therefore, in the GC model, the reserved gas is contracted and optimized as a day-ahead contract; consequently, the GPUs operate according to the contracted values and there is no additional gas cost. In the IEGS model, by contrast, real-time contracts may be commissioned to adjust to the changes in the consumed gas after the generators have been re-dispatched. The total amount of firm gas used daily in the pre-contingency stage and the total reserved gas below/above the firm gas (\(\rho_{h,t}^{+},\ \rho_{h,t}^{-}\)) for each contract are listed in the table. Based on the total cost, the proposed GC model provides a more economical and resilient operation than the IEGS model. To generalize this conclusion, both models are applied to all \(15\) cases with different attack and defense budgets. In Figure 4.7, the post-contingency cost includes the non-served power load penalty, the cost of the reserved power generated by the non-GPUs, and the cost of the real-time contracts. The pre-contingency cost includes the operational cost and the cost of the day-ahead contracts. The post-contingency cost and the total cost of the IEGS model are much greater than those of the GC model for all \(15\) cases. The adjustment of the generator outputs during the post-contingency stage is displayed in Figure 4.8 for the case "D1A2". For the same defense and attack strategies (see Table 4.1 for this case), the IEGS model seeks to minimize the reserved power from the non-GPUs and aims for minimum power shedding. Therefore, in the post-contingency stage, the schedule of \(G1\) (non-GPU) is the same as in the pre-contingency stage, provided that there are no reserves in \(G1\), and the GPUs share the remaining power load (i.e., high gas reserves). In contrast, the GC model protects the outgoing feeder \(L1\) of \(G2\) (GPU), which has the largest power capacity, to ensure that \(pl_{1}\) and \(pl_{3}\) are satisfied. Therefore, \(G2\) will produce the same power and will not use any reserved gas during the contingency, as shown in Figure 4.8.
\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Case} & \multirow{2}{*}{Model} & \multicolumn{3}{c}{Pre-contingency stage} & \multicolumn{5}{c}{Post-contingency stage} & Total costs \\ \cline{3-11} & & DS\({}^{*}\) & FG\({}^{*}\) & OC\({}^{*}\) & AS\({}^{*}\) & \(\triangle pl^{*}\) & \(\rho^{+*}\) & \(\rho^{-*}\) & RTC\({}^{*}\) & (\(10^{6}\)\$) \\ \hline D1A1 & IEGS & L1 & v1: 4.901, v2: 1.148 & 0.7449 & L5 & 6.81 & v1: 0.000, v2: 0.351 & 0.000 & 3.5601 & 5.047 \\ & GC & L1 & v1: 4.845, v2: 1.194 & 0.7537 & L5 & 6.81 & v1: 0.000, v2: 0.006 & 0.000 & No need & 1.496 \\ \hline D1A2 & IEGS & L1 & v1: 4.901, v2: 1.148 & 0.7449 & L4, L5 & 114.1 & v1: 0.000, v2: 0.009 & 0.214 & 7.3076 & 19.466 \\ & GC & L1 & v1: 4.469, v2: 1.498 & 0.7499 & L4, L5 & 114.1 & v1: 0.000, v2: 0.000 & 0.000 & No need & 12.316 \\ \hline D1A3 & IEGS & L5 & v1: 4.901, v2: 1.148 & 0.7449 & L2, L3, L6 & 116.4 & v1: 0.046, v2: 0.000 & 0.000 & No need & 16.836 \\ & GC & L5 & v1: 4.901, v2: 1.148 & 0.9391 & L3, L6 & 117.4 & v1: 0.046, v2: 0.000 & 0.231 & No need & 12.965 \\ \hline \hline \multicolumn{11}{l}{\({}^{*}\)DS: Defense strategy, FG: Firm gas (MSm\({}^{3}\)/day), OC: Operational costs (\(10^{6}\)\$), AS: Attack strategy, RTC: Real-time contracts (\(10^{6}\)\$),} \\ \multicolumn{11}{l}{\(\triangle pl^{*}=\sum_{\forall}\triangle p_{d,t}\) (MWh/day), \(\rho^{+*}=\sum_{\forall}\rho^{+}_{h,t}\) (MSm\({}^{3}\)/day), \(\rho^{-*}=\sum_{\forall}\rho^{-}_{h,t}\) (MSm\({}^{3}\)/day).} \\ \end{tabular} \end{table} Table 4.1: Economic comparison between the IEGS model and the proposed GC model for three attack/defense budget cases

#### Comparison with the Independent Operation Models

In this subsection, case studies are provided to demonstrate the effectiveness of considering the gas system operational constraints in the proposed model. Physical and economic comparisons are conducted between the proposed GC model and the IPS model, which disregards the interactions with the gas system in both the pre- and post-contingency stages. The two models are: 1. The IPS model, which optimizes the resilient operation of the power system without considering the effect on the gas system; therefore, (4.10)-(4.18) are not used in this model for either the pre- or the post-contingency stage. To be fair in this comparison, the objective includes the day-ahead contracts. To check the gas system feasibility and security, the power system requirements, which are the firm gas in the pre-contingency stage and the reserved gas in the post-contingency stage, are considered as a gas load in the gas model. 2. The proposed GC model. In this comparison, case "D1A2" is selected as a benchmark with different levels of gas load stress. It is shown in Table 4.2 that each model identifies the best defense strategy and the required firm gas during the pre-contingency operation and detects the worst attack strategy, which affects the system with a high regulation cost.
Table 4.2 displays the numerical results of the two models for three cases of gas load stress. In Case \(\#1\) (no additional stress), the gas system is able to supply gas according to the power system requirements; therefore, the two models are feasible and result in the same total cost. In Case \(\#2\), increasing the gas load results in a stressed gas system, especially when this load is connected to node \(3\), which supplies the largest GPU \(G2\). The proposed model considers all physical constraints of the gas system; therefore, it optimizes the required gas in the pre- and post-contingency stages according to the gas system's ability and feasibility, unlike the IPS model, which seeks the minimum total cost and disregards the interactions. Similarly, in Case \(\#3\), although the gas load is only redistributed and not increased, the IPS model fails to find a gas production schedule that satisfies the requirements of the power system. In addition, the IPS model may provide an incorrect protection strategy against \(N-k\) contingencies, particularly in large power systems. Figure 4.9 shows the generator dispatch in the pre-contingency stage and the required firm gas for Case \(\#2\). The power system requirements in the IPS model exceed \(0.2\,\mathrm{MSm}^{3}\), which results in an infeasible gas system. In contrast, the proposed GC model yields a feasible and economical generator dispatch with a maximum firm gas of \(0.12\,\mathrm{MSm}^{3}\) by increasing the power generated from the non-GPU \(G1\). Because the power system requirements under the gas stress of this case render the gas system infeasible, the gas well capacities, nodal pressure boundaries, and compressor pressure capacity are relaxed by replacing (4.10)-(4.12) with

\[\underline{F}_{w}(1-\triangle f_{w,t})\leq f_{w,t}^{0}\leq \overline{F}_{w}(1+\triangle f_{w,t}),\ \forall w,t, \tag{4.49}\]
\[\underline{\Pi}_{i}(1-\triangle\pi_{i,t})\leq\pi_{i,t}^{0}\leq \overline{\Pi}_{i}(1+\triangle\pi_{i,t}),\ \forall i,t, \tag{4.50}\]
\[\pi_{i,t}^{0}\leq\pi_{o,t}^{0}\leq(\triangle\gamma_{c}+1)\pi_{i,t}^{0},\ \forall c,t,(i,o)\in c \tag{4.51}\]

where \(\triangle f_{w,t}\), \(\triangle\pi_{i,t}\), and \(\triangle\gamma_{c}\) are relaxation tolerances for the gas production, nodal pressure, and compressor pressure ratio, respectively. The optimal gas flow and production schedule is strongly affected by these tolerances. We therefore ran a series of tests in which the tolerances were increased gradually from zero in steps of \(5\%\) until the gas system became feasible.
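A minimal sketch of this tolerance search is given below. The feasibility oracle is a toy stand-in (tuned to reproduce the minimum tolerances reported next), not the actual relaxed gas model of (4.49)-(4.51); only the search strategy is the point of the example.

```python
import itertools

def gas_model_is_feasible(df_w, dpi_i, dgamma_c):
    """Toy stand-in for solving the relaxed gas model with tolerances (4.49)-(4.51)."""
    return df_w >= 0.20 and dpi_i >= 0.05 and dgamma_c >= 0.25

def minimum_tolerances(step=0.05, max_tol=0.50):
    n_steps = int(round(max_tol / step))
    levels = [round(k * step, 2) for k in range(n_steps + 1)]
    # scan candidate tolerance triples, smallest total relaxation first
    for cand in sorted(itertools.product(levels, repeat=3), key=sum):
        if gas_model_is_feasible(*cand):
            return cand
    raise RuntimeError("infeasible even at the maximum relaxation")

print(minimum_tolerances())  # (0.2, 0.05, 0.25) with the toy oracle above
```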
The minimum values are \(20\%\), \(5\%\), and \(25\%\) for \(\triangle f_{w,t}\), \(\triangle\pi_{i,t}\), and \(\triangle\gamma_{c}\), respectively, to provide a feasible gas flow. In Figure 4.10, the physical violations are plotted for the selected case.

Table 4.2: Computational results of the Case "D1A2" for the proposed GC model and the IPS model (DS: defense strategy; FG: firm gas, MSm³/day; AS: attack strategy; \(\rho^{+}\), \(\rho^{-}\): reserved gas, MSm³/day).
Figure 4.9: Generator output and the required firm gas in the pre-contingency stage for the IPS model and the proposed GC model for the Case “D1A2” with gas load stress \(\#2\). Figure 4.10: Physical violations in the IPS model for Case “D1A2” and gas stress \(\#2\); (a) gas pressure of all nodes (except node \(4\)) and the boundaries, (b) gas pressure at node \(4\) and the boundaries, (c) gas production from well \(1\) and the capacities, (d) gas production from well \(2\) and the capacities, (e) inlet/outlet pressures of the compressor. #### Significance of Considering Over-generation The consideration of OG in the proposed model is important for the deployment of the defense resources and to optimize the reserved gas of all GPUs to defend against any possible attack strategy. As mentioned before, the optimal protection, gas contracts, and generator dispatch are affected by all attack strategies, which are generated in previous iterations of the NC&CG algorithm. If some of these attack strategies are not detected, the algorithm provides suboptimal decisions. In the proposed methodology, the inner C&CG uses strong duality, which creates a new variable \(\mathbf{\lambda}^{r}\) in each iteration that is independent of \(\hat{\mathbf{w}},~{}\hat{\mathbf{y}},~{}\mathbf{u}\) and \(\mathbf{z}\). This situation always provides a feasible solution for the middle-level problem and ignores the decision from the upper-level problem (i.e., \(\hat{\mathbf{w}},~{}\hat{\mathbf{y}}\)), whereas \(\mathbf{u}\) is restricted only by the attacker budget and it is not restricted by \(\hat{\mathbf{w}},~{}\hat{\mathbf{y}},~{}\mathbf{\lambda}\) and \(\mathbf{z}\) (i.e., it is not affected by the gas contracts). As a result, the **MPs** may provide \(\mathbf{u}\), which leads to the shutdown of power plants or forces a GPU to consume below/above the allowable reserved gas. Therefore, if the OG is not considered, the **SPs** will be infeasible. Therefore, (4.52) is added to the **MPs** (4.45) to provide a feasible attack strategy only if OG is not considered. \[\mathbf{E}\mathbf{x}+\mathbf{G}\mathbf{z}\geq\mathbf{F}-\mathbf{Q}\mathbf{h}-\mathbf{D}\hat{\mathbf{w}}. \tag{4.52}\] Table 4.3 lists the results for four cases based on different defense and attack budgets with and without OG. In case "D1A1", the OG has no effect because the worst attack strategies are detected and defended. Therefore, the optimal defense strategy and reserved gas are same with and without OG. In case "D2A2", however, the defense plans are the same with and without OG and the optimal reserves of gas are not the same because using the OG constraints provides the possibility of new attacks that violate the gas contracts. Consequently, the upper-level problem detects these attacks and adjusts the reserve gas. As shown in Table 4.3, for the last three cases, the models that disregard the OG issues identify the optimal defense, gas contracts, and unit dispatch when there are no system disruptions (zero regulation cost \(\Gamma^{pre}\), zero load shedding) under any malicious attacks. Table 4.4 lists the OG states for the three generators and three cases. In the first Case "D2A1", no OG is detected (all binaries are ones); therefore the optimal decision-making is achieved without considering OG, as shown in Table 4.3. In Case "D2A2", the worst attack strategy is detected, i.e., the violation of the \(G1\) output below the minimum capacity. Therefore, OG only occurs for this generator. 
In the third Case "D2A3", a larger attack budget provides different violations for the three generators. To fully demonstrate the benefits, Figure 4.11 displays the operating costs in the pre-contingency stage \(\Gamma^{pre}\) and the regulation costs in the post-contingency stage \(\Gamma^{post}\) for the \(15\) possible cases with and without OG. It is evident that the operating costs of the cases with OG are not always less than those of the cases without OG. However, when random attack strategies are included in the lower level problem to determine the worst attack using the same decision (\(\hat{\mathbf{w}},~{}\hat{\mathbf{y}},~{}\hat{\mathbf{\alpha}}\)) obtained without OG, the regulation costs are much greater than for the OG cases. In the aforementioned situation of a random attack strategy, the lower level problem is always feasible by forcing to shut down a generator if (i) the required power is lower than its minimum capacity, (ii) ramping up/down of the required power exceeds its limits, (iii) for GPUs, the consumed gas exceeds the reserved gas. #### Significance of Considering the Dynamic Gas Model As shown in subsections 4.4.1 and 4.4.2, it is very important to determine the optimal contracts while considering the gas interactions. The main purpose of the proposed GC model is to find the best protection plan with the optimal reserved gs under malicious attacks. Therefore, any assumption in the gas system model that may provide suboptimal or infeasible decisions for power system operators has to be known. The consideration of the \begin{table} \begin{tabular}{|l line pack and traveling velocity of the gas has an effect on the proposed model. Therefore, we have 1. Steady-state model: The gas model in the pre- and post-contingency stages is in a steady state without considering the line pack. Please, refer to section 2.1.3 for more details. 2. Dynamic-state model: the dynamic gas system is modeled by considering the line pack as formulated in the proposed model. Table 4.5 lists the defense strategies, optimal gas contracts, worst attack strategy, and the associated total cost for four cases with different gas loads. Under normal loading, the steady-state model cannot provide the power system with the required gas during the pre-contingency stage (there is no power load-shedding) and it provides an infeasible solution, whereas the dynamic model does provide the required amount of gas. Similarly, in the second case, even though the gas load is lower, the steady-state model results in an infeasible solution. Therefore, we present a feasible gas load distribution with different defense and attack budgets "D1A3" and "D2A3". If the line pack is not considered as in the steady-state model, the model may fail to find the best defense strategy, as is the case for the dynamic model in case "D1A3". Another drawback occurs if the steady-state model is adopted, i.e., the contract parameter values. This model provides different amounts of firm and reserved gas for both the pre- and post-contingency stages, resulting in costly contracts, as shown for the last two cases. It is clear that the dynamic model offers more flexibility because it handles the bidirectional gas flows and gas line pack. Therefore, it is important to consider the dynamic gas model during the optimization of the power system's resilient operation. Figure 4.11: Economic performance of the proposed model with and without considering over-generation for different cases. 
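For reference, a commonly used discrete-time approximation of the line-pack behaviour exploited by the dynamic model can be written as follows; this is a generic textbook form, with \(K_{p}\) a pipeline storage constant, and the exact formulation adopted from Section 2.1.2 may differ in its constants and discretization:

\[
m_{p,t}=K_{p}\,\frac{\pi_{i,t}+\pi_{o,t}}{2},\qquad
m_{p,t}=m_{p,t-1}+f^{in}_{p,t}-f^{out}_{p,t},\qquad
f_{p,t}=\frac{f^{in}_{p,t}+f^{out}_{p,t}}{2},
\]

so that, unlike the steady-state model in which \(f^{in}_{p,t}=f^{out}_{p,t}\), gas can temporarily be stored in (or withdrawn from) a pipeline between periods, which is precisely the flexibility the steady-state model lacks in the cases above.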
#### Impacts of Defense and Attack Budgets In general, the planning and protection of power lines are always beneficial [142]. Table 4.6 lists the total cost for all possible cases (\(15\) cases). Using only one defender reduces the total cost by about half if this defender is not used. The results show that the power system operator can choose a suitable protection design based on the expected attack and the trade-off between benefits and costs. For example, if the attack budget is one, protecting one power line reduces the resilient operation cost to below half and there is no need to defend more than two power lines. Similarly, defending \(3\) power lines is sufficient to provide full protection if the attack budget equals \(2\). To provide full protection against any attack, it is not necessary to protect all power lines because full protection is achieved by hardening only \(4\) power lines. The results demonstrate that we can identify the suitable defense budget based on the expected attack budgets, especially for large power systems. Adding the UC to the upper level provides very important benefits during optimization [212]. However, the proposed model uses only economic dispatch in the upper level and it considers \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \(\mathrm{A}\mathrm{D}\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) \\ \hline \(1\) & \(3.6884\) & \(1.4957\) & \(0.7449\) & \(0.7449\) & \(0.7449\) & \(0.7449\) \\ \hline \(2\) & \(23.5955\) & \(12.3155\) & \(4.7294\) & \(0.7449\) & \(0.7449\) & \(0.7449\) \\ \hline \(3\) & \(23.5955\) & \(12.9653\) & \(4.7344\) & \(2.6717\) & \(0.7449\) & \(0.7449\) \\ \hline \(4\) & \(23.5955\) & \(14.3151\) & \(4.9023\) & \(2.6717\) & \(0.7449\) & \(0.7449\) \\ \hline \(5\) & \(23.5955\) & \(14.3151\) & \(4.9023\) & \(2.6717\) & \(0.7449\) & \(0.7449\) \\ \hline \hline \end{tabular} \end{table} Table 4.6: Total cost during the pre- and post-contingency stages for all cases based on different defense and attack budget combinations \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multicolumn{2}{c}{\(\mathrm{N}\mathrm{G}\mathrm{\ load}\) stress} & & & & & \multicolumn{3}{c}{Gas contracts} & Total cost \\ \hline \(\mathrm{g}\mathrm{l}_{1}\) & \(\mathrm{g}\mathrm{l}_{2}\) & \(\mathrm{g}\mathrm{l}_{3}\) & & Case & Model & \(DS^{*}\) & \(AS^{*}\) & \(FG^{*}\) & \(\rho_{{}_{r}}^{-}\) & \((10^{6}\,\mathrm{\SIUnitSymbols})\) \\ \cline{1-1} \cline{6-10} \(10\) & \(45\) & \(45\) & \(\boldsymbol{-}\) & Steady-state & & & & \multicolumn{3}{c}{Infeasible for normal gas load} \\ \cline{1-1} \cline{6-10} \(30\) & \(20\) & \(20\) & \(\boldsymbol{-}\) & & Dynamic & & & Feasible & \\ \cline{1-1} \cline{6-10} \(45\) & \(20\) & \(0\) & \(\boldsymbol{-}\) & Steady-state & & & & \multicolumn{3}{c}{Infeasible for decreased gas load} \\ \cline{1-1} \cline{6-10} \(45\) & \(20\) & \(0\) & \(\boldsymbol{-}\) & Dynamic & & & & Feasible & \\ \cline{1-1} \cline{6-10} \(45\) & \(20\) & \(0\) & \(\mathrm{D}1A3\) & Steady-state & & L1 & \(\mathrm{L}2,\mathrm{L}4,\mathrm{L}6\) & \(v1\): \(0.0724\) & \(2.8160\) & \(0.1606\) & \multirow{2}{*}{\(13.115\)} \\ \cline{1-1} \cline{6-10} & \(0\) & \(0\) & \(\mathrm{D}1A3\) & \(\mathrm{D}1A\) & \(\mathrm{L}5\) & \(\mathrm{L}1,\mathrm{L}2,\mathrm{L}6\) & \(v2\): \(0.0440\) & \(1.7740\) & \(0.0163\) & \\ \cline{1-1} \cline{6-10} & \(0\) & \(\mathrm{D}1A3\) & \(\mathrm{D}1A\) & \(\mathrm{L}5\) & \(\mathrm{L}1,\mathrm{L}2,\mathrm{L}6\) & \(v1\): \(0.0948\) & \(4.8526\) & \(0.2028\) & 
\multirow{2}{*}{\(12.754\)} \\ \cline{1-1} \cline{6-10} & \(0\) & \(\mathrm{D}2A3\) & Steady-state & & \(\mathrm{L}1,\mathrm{L}5\) & \(\mathrm{L}3,\mathrm{L}6\) & \(v2\): \(0.6959\) & \(3.4715\) & \(0.0124\) & \\ \cline{1-1} \cline{6-10} \(45\) & \(20\) & \(0\) & \(\mathrm{D}2A3\) & \(\mathrm{D}1A\) & \(\mathrm{L}1,\mathrm{L}5\) & \(\mathrm{L}2,\mathrm{L}3,\mathrm{L}6\) & \(v1\): \(0.0115\) & \(4.6498\) & \(0.0000\) & \\ \cline{1-1} \cline{6-10} & \(0\) & \(\mathrm{D}2A3\) & \(\mathrm{D}1A\) & \(\mathrm{L}1,\mathrm{L}5\) & \(\mathrm{L}2,\mathrm{L}3,\mathrm{L}6\) & \(v2\): \(0.0000\) & \(1.7860\) & \(0.0474\) & \(2.6552\) \\ \hline \hline \end{tabular} *DS: Defense strategy, AS: Attack strategy, FG: Firm gas (\(\mathrm{MSm}^{3}/\mathrm{day}\)), \(\rho_{{}_{r}}^{-}\underset{{}_{u}}{=}\sum\rho_{{}_{r}}^{{}_{r}}\) (\(\mathrm{MSm}^{3}/\mathrm{day}\)), \(\rho_{{}_{r}}^{-}\underset{{}_{u}}{=}\sum\rho_{{}_{r}}^{{}_{r}}\) (\(\mathrm{MSm}^{3}/\mathrm{day}\)). \end{table} Table 4.5: Computational results for four cases with/without the consideration of over-generation that the UC is already known. In the upper curves in Figure 4.12, the dashed lines represent a normal dispatch (ND), which is achieved by optimizing the UC without considering the power system uncertainties (deterministic model). The solid lines represent resilient dispatch (RD), which is optimized in the upper level of the proposed model (robust model) for the Case "D1A2". Unit re-dispatch under the worst attack scenario (\(L4,L5\)) is used in the lower curves. The change in the time schedules during the normal state is to decrease the value of the reserved gas from all units so that it can be used under any attack strategy, even if RD is more costly than ND. The normal operating costs for resilient dispatch (\(\$749928.8\)) are higher than those for normal dispatch (\(\$744889.8\)) by only \(0.6\%\). #### Scalability Tests of the Proposed Algorithm In this section, we select an IEEE \(39\)-Bus power system coupled with a \(20\)-Node gas system (modified high-calorific gas network), denoted as **TS-I**, as well as IEEE \(118\)-Bus power system coupled with the same gas system, denoted as **TS-II**, as large-scale power-gas test systems. The proposed model and algorithm is used to determine their scalability. The test systems topology, parameters, and UC are described in further detail in Appendix B.1.1 and Appendix B.3.1. The load-shedding penalty price is set at \(\$1000\)/MWh and \(\overline{M}\) used for the linearization in the inner C&CG algorithm is set at \(10^{4}\). The convergence tolerance values and are \(0.1\%\). The number of segments used in the linearization of the Weymouth equation is \(4\) for both the gas flow and gas pressure. We consider a problem with \(2\) periods from \(t=2\) to \(t=3\) as the target slot [52]. Figure 4.13 displays the upper and lower boundaries of the inner and outer C&CG algorithms, as well as the operational and regulation costs in the NC&CG iterations for case "D3A2" in the **TS-I**. In the first outer iteration, the recourse action (lower-level problem) can alone reduce the system disruption from about \(\$680,000\) to \(\$320,000\) without the optimal decision-making in the upper level. Defending the most vulnerable components obtained from the first iteration of the outer C&CG does not provide any advantages as shown in the second Figure 4.12: Time schedules in the pre-contingency stage for normal and resilient dispatch for the Case “D1A2” in **TS-I**. 
iteration (same regulation cost as the inner C&CG). In the first \(6\) outer iterations, the outer C&CG seeks to maintain the operational cost at \(\$25{,}460\), but the outer C&CG gap is still high (see the lower graph in Figure 4.13). Therefore, the operational cost increases to \(\$26{,}250\) in the \(7^{\text{th}}\) outer iteration. The last outer iteration is executed to confirm that the optimal solution has been reached. Table 4.7 lists the total computation time of the NC&CG algorithm and the inner and outer C&CG iteration numbers for four cases, "D1A1", "D1A2", "D3A2", and "D1A4", in the two test systems, namely **TS-I** for the IEEE \(39\)-Bus-\(20\)-Node system and **TS-II** for the IEEE \(118\)-Bus-\(20\)-Node system. The computation time increases as the defender and attacker budgets increase. The outer C&CG iteration number is influenced more by the defender budget than by the attacker budget, whereas the inner C&CG iteration number is affected only by the attacker budget. Figure 4.14 displays the computational time of the middle- and upper-level problems for each outer iteration in **TS-I**. In Case "D1A1", the middle-level time is less than \(18\) s for all inner iterations, whereas it reaches \(200\), \(300\), and \(1500\) s in Cases "D1A2", "D3A2", and "D1A4", respectively, due to the increase in the attacker budget. The upper-level time ranges from \(0.2\) s to \(6\) s when the defender budget equals \(1\) (Figures 4.14.a, 4.14.b, and 4.14.d), while it is doubled when the defender budget equals \(3\) (Figure 4.14.c). The solver time of the lower-level problem is small (less than \(0.02\) s in any iteration) compared with the middle- and upper-level times and is therefore not plotted in Figure 4.14. Most of the computational burden lies in the middle-level problem, which suffers from the additional primal cut constraints and variables added in every inner iteration. Therefore, the attack budget has a strong effect on the inner C&CG execution time and, consequently, on the overall computation time. The binary variables resulting from the linearization of the Weymouth equation and from the OG modeling have little effect on the computation time because they enter the middle-level problem as parameters. Therefore, the selection of a suitable value of \(\overline{M}\) will reduce the computational cost [52]. This value depends on the maximum regulation cost under any possible attack strategy (limited by the attack budget). Table 4.8 presents the computation time for Case "D1A1" with different \(\overline{M}\) values. It should be kept in mind that very low \(\overline{M}\) values do not result in a realistic evaluation of the attack decision in the middle-level problem, as shown in Table 4.8.
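The timing results above refer to the three problem levels of the nested procedure. As a point of reference, the control flow of the nested C&CG can be sketched as follows; the three solver calls are placeholders for the MILP/LP models defined earlier in this chapter, and the stopping tolerances mirror the \(0.1\%\) gaps used in the experiments.

```python
# Skeleton of the nested C&CG loops (all solver calls are placeholders).
def solve_outer_master(attack_pool):
    """Upper level: defense plan and contracts against all attacks seen so far.
    Returns (defense, lower_bound_on_total_cost, first_stage_cost)."""
    raise NotImplementedError

def solve_inner_master(defense, recourse_pool):
    """Middle level: worst attack; theta is capped only by big-M when the pool is empty.
    Returns (attack, upper_bound_on_worst_case_regulation_cost)."""
    raise NotImplementedError

def solve_recourse(defense, attack):
    """Lower level: optimal re-dispatch. Returns (regulation_cost, recourse_solution)."""
    raise NotImplementedError

def nested_ccg(eps=1e-3, max_outer=50):
    attack_pool, best_ub = [], float("inf")
    for _ in range(max_outer):                                     # outer C&CG
        defense, outer_lb, stage1_cost = solve_outer_master(attack_pool)
        pool, inner_lb, inner_ub = [], -float("inf"), float("inf")
        while inner_ub - inner_lb > eps * max(1.0, abs(inner_ub)):  # inner C&CG
            attack, inner_ub = solve_inner_master(defense, pool)
            reg_cost, recourse = solve_recourse(defense, attack)
            inner_lb = max(inner_lb, reg_cost)
            pool.append(recourse)            # primal cut for the middle level
        attack_pool.append(attack)           # primal cut for the upper level
        best_ub = min(best_ub, stage1_cost + inner_ub)
        if best_ub - outer_lb <= eps * max(1.0, abs(best_ub)):
            return defense, best_ub
    raise RuntimeError("outer C&CG did not converge within max_outer iterations")
```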
| Case | Test system | Time (s) | Outer C&CG iterations | Inner C&CG iterations |
|---|---|---|---|---|
| "D1A1" | TS-I | 250 | 5 | 11, 7, 10, 8, 8 |
| "D1A1" | TS-II | 424 | 6 | 13, 23, 24, 30, 14, 15 |
| "D1A2" | TS-I | 6655 | 5 | 17, 17, 24, 23, 23 |
| "D1A2" | TS-II | 8403 | 7 | 13, 26, 14, 19, 24, 16, 16 |
| "D3A2" | TS-I | 14088 | 7 | 17, 17, 17, 22, 28, 35, 35 |
| "D3A2" | TS-II | 15258 | 6 | 19, 24, 23, 14, 4, 9 |
| "D1A4" | TS-I | 91605 | 6 | 19, 25, 25, 32, 38, 35 |
| "D1A4" | TS-II | 99123 | 9 | 13, 14, 12, 10, 9, 9, 18, 30, 26 |

Table 4.7: Computation times for the selected four cases.

Figure 4.14: Middle- and upper-level computational times for **TS-I**; (a) Case "D1A1", (b) Case "D1A2", (c) Case "D3A2", (d) Case "D1A4".

Based on the aforementioned discussion, we recommend the following assumptions and suggestions to reduce the computational burden:

1. Reduce the defender and attacker budgets; this can be achieved by selecting the most frequently attacked and most vulnerable components based on the power system operators' experience and by ensuring that healthy and protectable components cannot be attacked.
2. Provide suitable defender and attacker strategies based on the operators' experience rather than using arbitrary strategies.
3. Select a suitable \(\overline{M}\) value for the middle-level problem based on the expected regulation cost.
4. Increase the inner and outer convergence tolerance values (\(\varepsilon\)) to obtain a balance between accuracy and computational burden.
5. Reduce the number of binaries used in the linearization of the Weymouth equation, or use a convexification method for the Weymouth equation such as the convex-concave procedure [163].

### 4.5 Conclusions and Discussions

Natural disasters and malicious attacks pose serious threats to electric infrastructures and affect the normal operation of power systems, especially under heavy loads. Due to their fast response, good regulation capacity, relatively high efficiency, and low generation costs, GPUs have been playing an increasingly large role in the resilient operation of power systems, such as quick power flow adjustments in the pre-contingency stage and picking up important loads in the post-contingency stage. These actions have significantly strengthened the physical interdependency between power systems and gas systems. In addition, in most cases power systems and gas systems are operated by different utilities, which implies inevitable economic interactions between the two energy systems, such as gas purchase contracts. Because the utilization of the superior regulation capabilities of GPUs relies on a reliable gas supply, it is essential to model the physical and economic interactions between power systems and gas systems for resilient decision-making.
| \(\overline{M}\) | Time (s) | Outer iterations | Inner iterations |
|---|---|---|---|
| \(10^{6}\) | 271.2682 | 5 | 11, 7, 10, 8, 8 |
| \(10^{5}\) | 250.0085 | 5 | 11, 7, 10, 8, 8 |
| \(10^{4}\) | 248.4415 | 5 | 11, 7, 10, 8, 8 |
| \(10^{3}\) | 105.2347 | 5 | 6, 6, 7, 8, 8 |
| \(10^{2}\) | Inner C&CG does not converge | | |

Table 4.8: Computation times for case "D2A1" with different \(\overline{M}\) values.

In this chapter, the interactions between power systems and gas systems during resilient operations are considered from both the physical perspective, i.e., the consideration of the operational and security constraints of the gas system, and the economic perspective, which is addressed by modeling the gas contracts, including the here-and-now gas demands in the pre-contingency stage and the wait-and-see fuel consumption in the post-contingency stage. To improve the operational performance of the power system in the worst-case contingency scenario and to reflect practical decision-making, a two-stage robust optimization framework with a defender-attacker-defender structure is used in the mathematical model. Due to the linearization of the non-convex Weymouth equation used in the gas network and the modeling of the on/off grid operation of the generators, binary variables are used in the post-contingency stage decision-making. In this regard, an additional decomposition algorithm is needed to coordinate the middle- and lower-level problems on top of the regular decomposition between the upper level and the combined middle and lower levels. This results in a nested structure, which is solved by the nested column-and-constraint generation algorithm. Various numerical simulations on the two test systems demonstrate the necessity of considering both physical and economic interactions with the gas system and of avoiding the over-optimistic results produced by fully integrated modeling. The results of this study open quite a few new research directions. As this study presents the first attempt to consider reserved gas contracts along with firm gas contracts in a robust optimization (RO) model, it can be upgraded to 1) consider bidirectional gas contracts; 2) adopt a more practical two-stage (day-ahead and real-time) contracting mechanism; and 3) incorporate more uncertainty sources such as renewable power generation. These improvements are addressed in Chapter 5. Other future works are to decrease the conservativeness of the resilient robust strategy of the IEGS using data-driven decision-making theories, and to consider different types of contingencies [208], such as power unit outages and gas pipeline malfunctions [52], as well as the regional and temporal behavior of the attacker [216].

## Chapter 5 Robust Economic Operational Strategies for IEGSs

Due to the increasing penetration of variable and uncertain renewable power generation (RPG), as well as the stronger interdependency with gas systems facilitated by the deployment of gas-fired power units (GPUs) and power-to-gas (P2G) facilities, the secure and economic energy management (EM) of power systems has become more challenging from both the decision-making and the computational perspectives.
Most existing works on the EM problem of the power systems (1) neglect the bidirectional interactions of the power system with other energy systems or (2) assume that the power and gas systems are run by one utility while neglecting the fact that there are different regulation authorities for them in most occasions. This chapter revisits the day-ahead operation of large-scale RPG integrated power systems considering the physical and economic interactions with gas systems. Specifically, the physical interaction is simulated by incorporating the gas system operation constraints, and the economic interaction is realized by modeling the gas contracting mechanism. The previous chapter proposed a two-stage robust decision-making framework to optimize the operational performances of power systems under the worst-case \(N-k\) contingencies, where the day-ahead gas contracts are modeled. Emerging P2G facilities to mitigate the surplus RPG outputs, bidirectional gas contracts are inevitable. This chapter develops two operational models for optimal EM in the power system with bidirectional gas contracts. The first model is to find the economic and reliable EM decisions for power distribution networks (PDNs) with RPG integration. A tri-level robust dispatch model is established for the PDN considering bidirectional interactions with the gas distribution networks (GDNs). Two types of gas contracts are modeled, namely, gas-to-power (G2P) contracts for GPUs and P2G contracts for P2G facilities. A quadruple-loop solution procedure is developed for the proposed tri-level EM model, including two column-and-constraint (C&CG) loops for the two-stage decision-making framework, and two sequential mixed integer second-order cone program (S-MISOCP) loops to enhance the solution feasibility with respect to the non-convex power flow and Weymouth equations. This model is illustrated in Section 5.2, indicating the mathematical formulation and the proposed quadruple-loop procedure. This model has been published as * Ahmed R. Sayed, Cheng Wang, Junbo Zhao, and Tianshu Bi. "Distribution-level Robust Energy Management of Power Systems Considering Bidirectional Interactions with Gas Systems." IEEE Transactions on Smart Grid, vol. 11, no. 3, pp. 2092-2105, May 2020. DOI: [https://doi.org/10.1109/TSG.2019.2947219](https://doi.org/10.1109/TSG.2019.2947219) The second model incorporates both the day-ahead and real-time gas contracts in power system operation. To balance the robustness and the conservativeness of the operation strategy, a distributionally robust optimization (DRO) based decision-making framework is derived, and two-stage contracting mechanism is proposed. The quadruple-loop solution procedure is designed to tackle the computation burden brought by the DRO as well as the non-convexities in gas system modeling. This model is illustrated in Section 5.3, indicating the mathematical formulation and the solution procedure. This model is submitted for publication as * Ahmed R. Sayed, Cheng Wang, Sheng Chen, Ce Shang, and Tianshu Bi. "Two-stage Distributionally Robust Gas Contracting for Power System Operation." Submitted for publication to IEEE Transactions on Smart Grid. 
The effectiveness of the proposed models and solution methodologies are verified by simulation results of several moderate test systems from distribution to transmission levels in Section 5.4, where the significance of the proposed models compared with the literature analogues, the DRO-based modelling compared with SO- and RO-based ones, the two-stage contracting compared with one-stage contracting, and the solution method scalability are verified by the simulation results. Finally, the main conclusions and discussions are drawn in Section 5.5. ### 5.1 Introduction Climate change and environmental concerns have been major driven forces for the utilization of renewable energy resources, such as wind and solar power generation, around globe [9], [10]. In this regard, the top two CO\({}_{2}\) emitters, China and the US, pledged to increase their wind energy utilization to 20% by 2030 [11]. However, the integration of wind power at a large scale brings new challenges for EM in power systems, especially in the distribution level. This is because of the variable and uncertain output features of RPG as well as the complex operation conditions in PDN. Therefore, it becomes essential to make economic and reliable EM decisions for PDNs with RPG integration. To this end, many efforts have been made on the EM of PDNs with a concentration on optimal power flow (OPF) problem, which is one of the basis of PDN operational analysis. An up-to-date survey on OPF optimization methods is provided in [79], [80]. In [87], reactive power management is developed and solved with the decentralized algorithm. A stochastic EM problem is formulated in [217] considering slow- and fast-timescale controls. Unlike stochastic approaches, robust optimization approaches consider all the possible realizations of renewable uncertainties within a prescribed uncertainty set irrespective of their probabilities. An adjustable robust OPF is proposed in [218], where controllable generators are adjusted affinely according to the wind generation outputs. Meanwhile, quite a few inspiring works have been carried out on coordinating the active and reactive power of PDNs in a robust manner with additional modeling efforts on the continuous and discrete reactive power compensators [219], the on-load tap changer ratios and energy storage systems [155], the price-based demand response [220] and the photovoltaic output uncertainties [221]. The aforementioned works can provide economic and robust EM decision for PDNs in the view of power systems. However, they neglect economic and physical interactions with other energy systems, such as GDNs, which feed GPUs and could damp the surplus RPG through P2G facilities. Therefore, these models, which are referred as the independent power system (IPS) model in this study, may cause physical violations such as under/over- pressure in gas systems, as discussed in Chapter 4. Owing to the rapid development of the P2G technology and wide deployment of GPUs, interactions between power and gas systems have been noticeably enhanced in transmission [37] and distribution [56, 57] levels. This intensified interaction gradually appreciates the concept of integrated electricity and gas system (IEGS) and brings quite a few research interests on the IEGS model [58, 59], in which power and gas systems are coordinated and co-optimized for planning, economic dispatch, and resilient operation. 
To utilize the gas system flexibility and reliability, the IEGS dispatch problem is formulated in [66], where the gas dynamics are modeled. The mutual dependence in IEGS and importance of considering gas systems, especially, under high wind penetrations, are investigated in [119]. To consider wind uncertainties in the IEGS, different models are proposed in the literature, such as stochastic optimization models [125], the interval optimization model [124], the adjustable robust dispatch model [56], and the two-stage robust model [132]. Note that the aforementioned IEGS dispatch models share one core assumption, that is, power and gas systems are supervised and controlled by one system operator. This operator has full authority to control and optimize all energy resources. However, in most cases, the two systems are operated by different utilities and information synchronization policy may be restricted [45]. Therefore, the IEGS model might provide suboptimal, or even infeasible decisions for subsystems. Under the multi-party decision-making reality as well as the necessity of bidirectional energy conversion, signing energy purchase contracts become inevitable. In industrial practice, GPUs are usually supplied with interruptible gas services under day-ahead G2P contracts, which are more economical and convenient than real-time contracts [40, 48]. Moreover, the gas produced from P2G facilities is injected into the gas system under P2G contracts. Nevertheless, the real-time operation of PDNs might be changed from the day-ahead dispatch largely owing to the renewable uncertainties. This means the outputs of GPUs and P2G facilities might deviate from their day-ahead schedules, to mitigate the operation losses or the surplus wind generation. In this regard, there should also be contracts for the reserved gas besides the contracts for firm energy. ### 5.2 Robust Day-ahead Operation with Bidirectional Gas Contracting To overcome the drawbacks of the IPS and IEGS models, the EM problem of PDNs is revisited considering interactions with gas systems. In this section, a tri-level robust dispatch model of the PDN is presented, considering energy contracts with gas systems. The proposed model not only identifies the optimal gas contracts and generation dispatch in the day-ahead stage, but also provides the optimal re-dispatch strategy in the real-time stage. In addition, the approximated gas dynamics are adopted in both day-ahead and real-time stages to offer additional operational flexibility for the gas system. To consider bidirectional energy trade with the gas system, two types of gas contracts are modeled, namely, gas-to-power (G2P) contracts for GPUs and P2G contracts for P2G facilities. The main objective of the PDN operator is to manage available energy resources economically and to identify the optimal energy contracts with interacted systems irrespective of their operational cost (OC). Beside the challenge in decision-making framework modeling, there is also computational difficulty in the EM of the interacted power and gas systems. For the PDN, the active and reactive power are coupled as the bus voltages are notably influenced by active power variations. Recently, convex relaxation methods have been implemented to solve OPF owing to their computational benefits. As discussed in Section 3.1, that SOC relaxation is exact under mild conditions for radial networks [159], however, the exactness depends on the objective function. 
In the proposed model, the objective function is not strictly increasing with all injected active power such as wind power (zero-cost). Based on penalty convex-concave procedure (P-CCP), the S-MISOCP algorithm proposed in Section 3.3.3 is employed to guarantee the power and gas feasibility and the accuracy of SOC relaxation. Based on the above discussion, this work is the first attempt to embed the S-MISOCP algorithm into the nested column-and-constraint (NC&CG) algorithm to solve the tri-level model, whose upper and lower levels are non-convex. Compared with recent works, the main contributions of this study are twofold. 1. A tri-level robust dispatch model is established for the PDN considering both physical and economic interactions with the gas systems. Specifically, the physical interaction is achieved by adding the security and feasibility constraints of the gas system into the EM problem of the PDN, while the economic interaction is completed by modeling firm and reserved gas contracts for both G2P and P2G. 2. A quadruple-loop algorithm for the proposed robust EM problem of the PDN is devised, where the second and forth loops are S-MISOCP algorithms to enhance the solution feasibility in the day-ahead and real-time dispatch stages, respectively, and the first and third loops are column-and-constraint (C&CG) algorithms to tackle the tri-level decision-making structure with binary recourse. #### Mathematical Formulation ##### Model Descriptions Based on the discussion above, three models are available to optimize and manage energy distributed for the PDN. Figure 5.1 displays the schematic layout of these models and simplifies the salient features of each one: (i) the IPS model [80], [87], [155], [217], [218], which does not consider the operational and security constraints of the gas system, may cause physical violations, yielding a cascading failure in gas systems, particularly with high penetration of wind generation; (ii) the IEGS model [37], [40], [43], [56], [57], [58], [66], [119], [124], [125], [132], which considers power and gas systems as one system and co-optimizes the total OCs, and neglects the energy transaction contracts, rendering a high probability of providing over-optimistic decisions and contract avoidance in practice; (iii) the proposed model, which finds the optimal decisions for the power system operator (PSO) considering the operational constraints of the gas system to avoid any physical violations and optimizes gas contracts to prevent any contract avoidance under the prescribed uncertainties in the day-ahead stage. Before presenting the detailed formulation of the proposed model, commonly used assumptions and simplifications are presented as follows 1. In general, (i) the distribution system may have some loops as well with the integration of renewable energy and storage. We may just say that this study only considers the scenario of radial structure [159], [219] and the mesh network will be addressed in Figure 5.1: Schematic layout of the IPS, the IEGS, and the proposed models. our future work; (ii) cost coefficients of generated power from GPUs and non-GPUs are known; (iii) PDN and GDN are operated by different utilities. 2. In PDN modeling, (i) the branch flow model in [51], [164] is adopted, where the three-phase system is balanced and the power flow direction is fixed without any reverse power; (ii) for GPUs, the consumed gas and the OC depend only on the active power [51]; (iii) all P2G facilities are owned by the power system utility. 3. 
In GDN modeling, (i) the approximated dynamic-state model for gas system presented in Section 2.1.2 is adopted. 4. In contract modeling, P2G and G2P contracts are signed in the day-ahead stage and there are no real-time contracts. The gas prices of sale and purchase gas contracts are fixed. These prices and penalties of contract avoidance are obtained from the gas system operator in the day-ahead stage before determining the gas contracts. The proposed two-stage model aims to identify the optimal EM strategy with the best gas contracts by minimizing both OC and regulation cost (RC). The optimal OC is identified in the first (day-ahead) stage based on the forecasted outputs wind generation. While, in the second (real-time) stage, the model seeks to minimize the RC with the worst-case realization of the uncertainties. The overall objective function is presented in (5.1), where \(\Omega_{1},\Omega_{2}\), and \(\Omega_{3}\) are the day-ahead, wind uncertainty, and real-time decision variables, respectively. Please refer to (5.39) for their detailed expressions. Equation (5.2) depicts the OC in the day-ahead stage, including the power production costs from all generators, costs of reserved gas in the G2P contracts, and revenue from sale of gas in the P2G contracts. Equation (5.3) provides the RC in the real-time stage, including the costs of adjustable power production from non-GPUs and the penalties for the non-served power load, wind curtailment, and P2Gs output deviation from their contracts. \[\min_{\Omega_{1}}\ OC\ +\ \max_{\Omega_{2}}\ \min_{\Omega_{3}}\ RC \tag{5.1}\] \[OC=\sum_{t}\Big{[}\sum_{u}C_{u}(\hat{p}_{u,t})+\sum_{y}(\mu_{h} \rho_{h,t}+\mu_{h}^{+}\rho_{h,t}^{+}+\mu_{h}^{-}\rho_{h,t}^{-})-\sum_{j}C_{j} g_{j,t}\Big{]}\] (5.2) \[RC=\sum_{t}\Big{[}\sum_{u\in\mathcal{U}_{n}}(C_{u}^{+}\triangle p _{u,t}^{+}+C_{u}^{-}\triangle p_{u,t}^{-})+\sum_{d}C_{d}\triangle p_{d,t}\] \[\qquad\qquad+\sum_{e}C_{e}\triangle w_{e,t}+\sum_{j}(C_{j}^{+} \triangle g_{j,t}^{+}+C_{j}^{-}\triangle g_{j,t}^{-})\Big{]} \tag{5.3}\] #### Day-ahead Operational Constraints Day-ahead constraint set constitutes of two parts. The first one is the operational constraints of PDN defined in (5.4)-(5.13), which are derived from Section 3.2.2, using the day-ahead decision variables, i.e., adding the hat ( ) symbol on the decision variables for those constraints. They are composed by Active and reactive power capacities of all units: (3.18), (5.4) Maximum ramping up and down limits: (3.19), (5.5) P2G power consumption capacities: (3.20), (5.6) Active and reactive power flow direction: (3.21), (5.7) Squared line current limits: (3.22), (5.8) Squared node voltage limits: (3.23), (5.9) Voltage drop equation: (3.24), (5.10) Power flow equation: (3.25), Active power nodal balancing equation: \[\sum_{u\in\mathcal{U}(n)}\hat{p}_{u,t}+\sum_{e\in\mathcal{E}(n)} \hat{W}_{e,t}+\sum_{l\in\mathcal{L}_{1}(n)}(\hat{p}_{l,t}-r\hat{i}_{l,t})- \sum_{l\in\mathcal{L}_{2}(n)}\hat{p}_{l,t}\] \[\qquad=G_{n}\hat{v}_{n,t}+\sum_{z\in\mathcal{Z}(n)}\hat{p}_{z,t}+ \sum_{d\in\mathcal{D}_{p}(n)}P_{d,t},\ \forall n,t\] (5.12) Reactive power nodal balancing equation: \[\sum_{u\in\mathcal{U}(n)}\hat{q}_{u,t}+\sum_{l\in\mathcal{L}_{1}(n)}(\hat{q} _{l,t}-x\hat{i}_{l,t})-\sum_{l\in\mathcal{L}_{2}(n)}\hat{q}_{l,t}=B_{n}\hat{v }_{n,t}+\sum_{d\in\mathcal{D}_{p}(n)}Q_{d,t},\ \forall i,t \tag{5.13}\] Similarly, the second part is GDN constraints are derived in (5.14)-(5.88). 
Gas production capacities: (2.1), (5.14) Gas compressors constraints: (2.5)-(2.6), (5.15) Nodal pressure bounds: (2.8), (5.16) Average flow rate equation: (2.11), (5.17) Mass flow equation: (2.13), (5.18) Continuity equation: (2.14), (5.19) GPU gas consumption: (3.15), (5.20) P2G gas production: (2.41), (5.21) Weymouth equation: (2.10). Gas nodal balancing equation: \[\sum_{p\in\mathcal{P}_{1}(i)}\hat{f}^{out}_{p,t} -\sum_{p\in\mathcal{P}_{2}(i)}\hat{f}^{in}_{p,t}+\sum_{c\in \mathcal{C}_{1}(i)}\hat{f}^{out}_{c,t}-\sum_{c\in\mathcal{C}_{2}(i)}\hat{f}^{ in}_{c,t}+\sum_{z\in\mathcal{Z}(i)}\hat{\varrho}_{z,t}+\sum_{w\in\mathcal{W}(i)} \hat{f}_{w,t}\] \[=\sum_{u\in\mathcal{U}_{g}(i)}\rho_{u,t}+\sum_{d\in\mathcal{D}_{ g}(i)}F_{d,t},\ \forall i,t \tag{5.23}\] It should be emphasized that, to guarantee the exactness of SOC relaxation, the cost function should be strictly increasing in all injected active power [164], even for radial PDNs. In this study, the objective function includes zero-cost wind power in the day-ahead stage and indirect negative-cost for injected power from GPUs, non-GPUs, surplus wind, and P2G units in the real-time stage, i.e., \(\rho_{h,t}^{-}\), \(\triangle p_{u,t}^{-}\), \(\triangle w_{e,t}\), and \(\triangle g_{j,t}^{+}\). Therefore, relaxation of power flow equation presented in [159] is generally inexact for the proposed model. Consequently, the S-MISOCP algorithm should be applied on the power flow equations besides the gas flow equations. ##### Gas Contracts Modeling According to current electricity industrial practice, GPUs usually have the interruptible gas delivery service in terms of cost-saving [47], and the gas delivery contracts are usually determined day-ahead or even earlier because real-time contracting would be costly and inconvenient [48]. Gas contracting models proposed in Section 4.2.3, is adopted in the proposed model. The G2P contract is characterized by two subcontracts, namely, contracts for firm and reserved outputs of GPUs, which is consistent with the two-stage power system dispatch. The firm gas contract provides the scheduled gas amounts in the day-ahead stage, which can be calculated by (5.24). The gap between the actual gas consumption of GPUs in the real-time stage and the amount in the firm gas contract must obey the reserved gas contract, which is defined in (5.25)-(5.26). \[\rho_{h,t}\geq\sum_{\forall u\in\mathcal{U}_{g}(h)}\frac{\Phi}{ \eta_{u}}p_{u,t},\ \forall h,t \tag{5.24}\] \[-\rho_{h,t}^{-}\leq\sum_{\forall u\in\mathcal{U}_{g}(h)}\frac{ \Phi}{\eta_{u}}(p_{u,t}-\hat{p}_{u,t})\leq\rho_{h,t}^{+},\ \forall h,t\] (5.25) \[\rho_{h,t}^{+},\ \rho_{h,t}^{-}\geq 0,\ \forall h,t. \tag{5.26}\] There are two methods to cope with the surplus wind energy, which are curtailment by the wind sector management and methanation by P2G facilities, respectively. The proposed model systematizes the two manners based on the curtailment penalty, which is managed by the PDN operator, and the G2P contract avoidance penalty, which is assigned from the gas system operator. The P2G output gas, which is injected into the gas system pipelines, should obey the P2G contracts. The scheduled sale values can be calculated by (5.27) in the day-ahead stage. However, in the real-time stage, any gas variation (upward and downward) detected by (5.28)-(5.29) will be penalized. 
\[g_{j,t}\leq\sum_{j\in\mathcal{Z}(j)}\Phi\eta_{z}\hat{p}_{z,t}, \forall j,t, \tag{5.27}\] \[-\triangle g_{j,t}^{-}\leq\sum_{z\in\mathcal{Z}(j)}\Phi\eta_{z}( p_{z,t}-\hat{p}_{z,t})\leq\triangle g_{j,t}^{+},\forall j,t,\] (5.28) \[\triangle g_{j,t}^{-},\ \triangle g_{j,t}^{+}\geq 0,\forall j,t. \tag{5.29}\] ##### Wind Power Generation Uncertainty Modeling Wind power generation uncertainties can be formulated with various modeling approaches, such as interval-based model and continuous probability distribution function approximation [124]. In [222], different uncertainty set approaches for robust models are discussed. In this study, the uncertainty set used in [102] is adopted. The uncertainty budgets are defined in (5.30)-(5.32). The real-time outputs under uncertainty is defined in (5.33) based on the average, maximum, and minimum forecasted values. \[\sum_{e}(\xi_{e,t}^{+}+\xi_{e,t}^{-})\leq\Gamma^{e},\forall t \tag{5.30}\] \[\sum_{t}(\xi_{e,t}^{+}+\xi_{e,t}^{-})\leq\Gamma^{t},\forall e \tag{5.31}\] \[\xi_{e,t}^{+}+\xi_{e,t}^{-}\leq 1,\xi_{e,t}^{+},\xi_{e,t}^{-}\in\{0,1\}, \forall e,t \tag{5.32}\] \[w_{e,t}=\hat{W}_{e,t}+(\overline{P}_{e,t}-\hat{W}_{e,t})\xi_{e,t}^{+}+( \underline{P}_{e,t}-\hat{W}_{e,t})\xi_{e,t}^{-},\forall e,t \tag{5.33}\] ##### Real-time Operational Constraints Most of the operation constraints in the real-time stage can be obtained by replacing the day-ahead decision variables with real-time ones in (5.4)-(5.11) and (5.14)-(5.22), namely removing the hat symbols of the decision variables in those constraints. Additionally, in real-time operation of the power network, wind generation curtailment and electrical load shedding are also practical means to recover the power balancing condition, whose adjustment ranges are shown in (5.34)-(5.35). Meanwhile, the nodal power balancing condition should be modified by adding the wind generation curtailment and electrical load shedding terms, resulted in (5.36)-(5.37). To quantify the regulation costs of non-GPUs in the real-time stage, (5.38) is added to describe the outputs adjustment of non-GPUs. \[0\leq\delta_{d,t}\leq 1,\ \forall d,t, \tag{5.34}\] \[0\leq\triangle w_{e,t}\leq w_{e,t},\ \forall e,t, \tag{5.35}\] \[\sum_{u\in\mathcal{U}(n)}p_{u,t}+\sum_{e\in\mathcal{E}(n)}(w_{e,t}-\triangle w _{e,t})+\sum_{l\in\mathcal{L}_{1}(n)}(p_{l,t}-r_{l}i_{l,t})-\sum_{l\in \mathcal{L}_{2}(n)}p_{l,t}\] \[=G_{n}v_{n,t}+\sum_{z\in\mathcal{Z}(n)}p_{z,t}+\sum_{d\in\mathcal{D}_{p}(n)}P_ {d,t}(1-\delta_{d,t}),\ \forall n,t, \tag{5.36}\] \[\sum_{u\in\mathcal{U}(n)}q_{u,t}+\sum_{l\in\mathcal{L}_{1}(n)}(q_{l,t}-x_{l} i_{l,t})-\sum_{l\in\mathcal{L}_{2}(n)}q_{l,t}=B_{n}v_{n,t}+\sum_{d\in\mathcal{D}_{p}(n) }Q_{d,t}(1-\delta_{d,t}),\ \forall i,t, \tag{5.37}\] \[-\triangle p_{u,t}^{-}\leq p_{u,t}-\hat{p}_{u,t}\leq\triangle p_{u,t}^{+},\ \triangle p_{u,t}^{-},\triangle p_{u,t}^{+}\geq 0,\ \forall t,u\in\mathcal{U}_{n}. 
\tag{5.38}\] ### The Overall Model The holistic model of power system EM problem can be cast as follows \[\min_{\Omega_{1}}\ OC\ +\ \max_{\Omega_{2}}\ \min_{\Omega_{3}}\ RC \tag{5.39a}\] \[s.t.\] Day-ahead constraints: ( 5.4 )- ( 5.88 ), ( 5.39b ) Real-time constraints: ( 5.33 ), ( 5.4 )- ( 5.11 ), ( 5.14 )- ( 5.22 ), ( 5.34 ), ( 5.39c ) Gas contracts: ( 5.24 )- ( 5.26 ), ( 5.27 ), ( 5.39d ) Uncertainty set: ( 5.30 )- ( 5.32 ), ( 5.39e ) \[\Omega_{1}=\{\rho_{h,t}^{+},\rho_{h,t}^{-},\rho_{h,t},g_{j,t},\hat{p }_{u,t},\hat{q}_{u,t},\hat{p}_{z,t},\hat{p}_{l,t},\hat{q}_{l,t},\hat{i}_{l,t}, \hat{v}_{n,t},\hat{f}_{w,t},\hat{\pi}_{i,t},\hat{f}_{c,t}^{in},\] \[\hat{f}_{c,t}^{out},\ \hat{f}_{p,t}^{in},\hat{f}_{p,t}^{out},\ \hat{f}_{p,t},\hat{m}_{p,t}\}\] (5.39f) \[\Omega_{2}=\{u_{e,t}^{+},u_{e,t}^{-}\}\] (5.39g) \[\Omega_{3}=\{\triangle p_{u,t}^{+},\triangle p_{u,t}^{-}, \triangle g_{j,t}^{+}\triangle g_{j,t}^{-},\triangle w_{e,t},\delta_{d,t},w_{e,t},p_{u,t},q_{u,t},p_{z,t},p_{l,t},q_{l,t},v_{n,t},\] \[f_{p,t},\pi_{i,t},f_{c,t}^{in},f_{c,t}^{out},f_{p,t}^{in},f_{p,t} ^{out},f_{p,t},m_{p,t}\} \tag{5.39h}\] which can be viewed as a two-stage or tri-level model. The model reformulation and the corresponding solution methodology are introduced in the following section. #### Solution Methodology In addition to the relatively complicated structure of the overall model, solving its each level also require much computation efforts, due to the presence of the nonlinear and nonconvex power flow and Weymouth equations. Fortunately, the power flow constraints and Weymouth equations can be formulated as DCP problem by expressing the proposed model constraints as difference of two convex functions. Referring to Section 3.3.3, equations (5.11) and (5.22) are reformulated as MISOCP constraints, which are convenient with decomposition algorithms. ##### Power Flow Equation Reformulation For ease of analysis, the general form of day-ahead and real-time power flow equations is given as \[p^{2}+q^{2}=vi \tag{5.40}\] which can be further divided into two opposite inequality constraints. The first inequality is an SOC constraint, and its canonical form is (5.41). Using (3.65), the second inequality is reformulated as (5.41), where \(\ [\overline{p}\ \overline{q}\ \overline{v}\ \overline{i}]^{\top}\) is used aa a linearization point. \[\left\|\begin{matrix}2p\\ 2q\\ (v-i)\end{matrix}\right\|_{2}\leq(v+i), \tag{5.41}\] \[\left\|\begin{matrix}2(v+i)\\ \varpi-1\end{matrix}\right\|_{2}\leq\varpi+1,\ \ \varpi_{l,t}=8\overline{p}p+8 \overline{q}q+2(\overline{v}-\overline{i})(v-i)-4\overline{p}^{2}-4\overline{ q}^{2}-(\overline{v}-\overline{i})^{2}. \tag{5.42}\] ### Gas Flow Equation Reformulation The general form of the Weymouth equations in the day-ahead and real-time stages is presented as \[f|f|=\chi^{f}(\pi_{i}^{2}-\pi_{o}^{2}) \tag{5.43}\] The sign function in (5.43) is firstly removed by introducing set of MILP constraints (5.44)-(5.48), where \(z\) is the directional binary variable and \(\pi^{+}/\pi^{-}\) is the inlet and outlet pressures of the pipeline, respectively. Then the resultant equality constraint is split into two opposite inequality constraints. The first inequality is an SOC constraint, and its canonical form is (5.49). Similar to (5.42), given \([\overline{f}\ \overline{\pi}^{+}\ \overline{\pi}^{-}]^{\top}\) as an initial point, the second inequality is substituted with the approximated canonical form (5.50). 
\[(1-z)(\overline{\Pi}_{o}-\underline{\Pi}_{i})\geq\pi^{+}-\pi_{i}\geq(1-z)(\underline{\Pi}_{o}-\overline{\Pi}_{i}),\ \ \forall\mathcal{P}^{\pm} \tag{5.44}\]
\[(1-z)(\overline{\Pi}_{i}-\underline{\Pi}_{o})\geq\pi^{-}-\pi_{o}\geq(1-z)(\underline{\Pi}_{i}-\overline{\Pi}_{o}),\ \ \forall\mathcal{P}^{\pm} \tag{5.45}\]
\[z(\overline{\Pi}_{i}-\underline{\Pi}_{o})\geq\pi^{+}-\pi_{o}\geq z(\underline{\Pi}_{i}-\overline{\Pi}_{o}),\ \ \forall\mathcal{P}^{\pm} \tag{5.46}\]
\[z(\overline{\Pi}_{o}-\underline{\Pi}_{i})\geq\pi^{-}-\pi_{i}\geq z(\underline{\Pi}_{o}-\overline{\Pi}_{i}),\ \ \forall\mathcal{P}^{\pm} \tag{5.47}\]
\[\pi^{+}=\pi_{i},\ \ \pi^{-}=\pi_{o},\ \ \forall\mathcal{P}\setminus\mathcal{P}^{\pm} \tag{5.48}\]
\[\left\|\begin{matrix}f\\ \sqrt{\chi^{f}}\pi^{-}\end{matrix}\right\|_{2}\leq\sqrt{\chi^{f}}\pi^{+}, \tag{5.49}\]
\[\left\|\begin{matrix}2\sqrt{\chi^{f}}\pi^{+}\\ \Lambda-1\end{matrix}\right\|_{2}\leq\Lambda+1,\ \ \Lambda=2\chi^{f}\overline{\pi}^{-}\pi^{-}+2\overline{f}f-\chi^{f}(\overline{\pi}^{-})^{2}-\overline{f}^{2}. \tag{5.50}\]

Therefore, the compact form of the proposed model after reformulation of the nonlinear equations is
\[\min_{\mathbf{y},\mathbf{w},\overline{\mathbf{y}}}\ f(\mathbf{w})+\max_{\mathbf{u}}\min_{\mathbf{x},\mathbf{z},\overline{\mathbf{x}}}\ \mathbf{e}^{\top}\mathbf{x} \tag{5.51a}\]
\[s.t.\ \ \mathbf{I}\mathbf{y}+\mathbf{J}\mathbf{w}\leq\mathbf{C} \tag{5.51b}\]
\[||\mathbf{A}_{v,t}\mathbf{y}||_{2}\leq\mathbf{a}_{v,t}\mathbf{y},\ \forall v\in\mathcal{P}\cup\mathcal{L},t, \tag{5.51c}\]
\[||\mathbf{B}_{v,t}(\overline{\mathbf{y}})\mathbf{y}+\mathbf{D}_{v,t}(\overline{\mathbf{y}})||_{2}\leq\mathbf{b}_{v,t}(\overline{\mathbf{y}})\mathbf{y}+\mathbf{d}_{v,t}(\overline{\mathbf{y}}),\ \forall v\in\mathcal{P}\cup\mathcal{L},t, \tag{5.51d}\]
\[\mathbf{S}\mathbf{u}\leq\mathbf{K} \tag{5.51e}\]
\[\mathbf{E}\mathbf{x}+\mathbf{G}\mathbf{z}\geq\mathbf{F}-\mathbf{Q}\mathbf{u}-\mathbf{P}\mathbf{w} \tag{5.51f}\]
\[||\mathbf{H}_{v,t}\mathbf{x}||_{2}\leq\mathbf{h}_{v,t}\mathbf{x},\ \forall v\in\mathcal{P}\cup\mathcal{L},t, \tag{5.51g}\]
\[||\mathbf{M}_{v,t}(\overline{\mathbf{x}})\mathbf{x}+\mathbf{N}_{v,t}(\overline{\mathbf{x}})||_{2}\leq\mathbf{m}_{v,t}(\overline{\mathbf{x}})\mathbf{x}+\mathbf{n}_{v,t}(\overline{\mathbf{x}}),\ \forall v\in\mathcal{P}\cup\mathcal{L},t, \tag{5.51h}\]
where \(\mathbf{w}\) collects the reserved gas in the G2P contracts, the scheduled gas in the P2G sale contracts, and the committed power from all generators; \(\mathbf{y}\) collects the remaining variables of the upper-level problem; \(\mathbf{u}\) is the middle-level decision; \(\mathbf{x}\) collects the continuous variables of the lower-level problem; and \(\mathbf{z}\) collects the remaining binary variables. Because of the approximation in the cones (5.42) and (5.50), initial points are required; therefore, \(\overline{\mathbf{y}}\) and \(\overline{\mathbf{x}}\) are treated as decision variables of the upper- and lower-level problems, respectively. \(\mathbf{I}\), \(\mathbf{J}\), and \(\mathbf{C}\) are coefficients that can be derived from the linear constraints in (5.4)-(5.23), (5.24), (5.26), (5.27), (5.29), and (5.44)-(5.48). \(\mathbf{S}\) and \(\mathbf{K}\) are the coefficients of (5.30)-(5.32). \(\mathbf{E}\), \(\mathbf{G}\), \(\mathbf{F}\), \(\mathbf{Q}\), and \(\mathbf{P}\) are the coefficients of the linear constraints in (5.25), (5.28), (5.4)-(5.11) (with real-time variables), (5.14)-(5.21) (with real-time variables), (5.33), (5.38), and (5.44)-(5.48). Constraint (5.51c) ((5.51g)) is obtained from (5.41) and (5.49) for the upper-level (lower-level) constraints. Similarly, the SOC constraints (5.42) and (5.50) are compacted into (5.51d) ((5.51h)) for the upper (lower) level.
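To make the DCP split concrete, a minimal sketch of one branch-flow constraint in CVXPY is given below: the exact SOC relaxation (5.41) is kept as a constraint, while the reverse inequality is enforced through the linearization (5.42) around a given point. The problem data and the linearization point are illustrative, not taken from the test systems.

```python
import cvxpy as cp

# One branch, one period; bar values are an assumed linearization point.
p_bar, q_bar, v_bar, i_bar = 0.5, 0.2, 1.0, 0.29

p = cp.Parameter(value=0.5)   # active power flow, held fixed in this toy example
q = cp.Parameter(value=0.2)   # reactive power flow
v = cp.Variable(nonneg=True)  # squared voltage
i = cp.Variable(nonneg=True)  # squared current

# (5.41): exact SOC relaxation of p^2 + q^2 <= v * i
soc_relax = cp.norm(cp.hstack([2 * p, 2 * q, v - i])) <= v + i

# (5.42): linearization of the reverse inequality around the bar point
varpi = (8 * p_bar * p + 8 * q_bar * q + 2 * (v_bar - i_bar) * (v - i)
         - 4 * p_bar**2 - 4 * q_bar**2 - (v_bar - i_bar)**2)
reverse_lin = cp.norm(cp.hstack([2 * (v + i), varpi - 1])) <= varpi + 1

prob = cp.Problem(cp.Minimize(0), [soc_relax, reverse_lin, v >= 0.9, v <= 1.1])
print(prob.is_dcp())   # True: both sides of the split are convex constraints
prob.solve()
# The recovered point stays close to the linearization point, where p^2 + q^2 = v*i
# holds; away from that point the single linearization is restrictive, which is why
# the sequential loop updates the bar values iteration by iteration.
print(v.value, i.value)
```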
#### The Quadruple-loop Algorithm: Nested C&CG + Two S-MISOCP Tri-level models can be solved by different techniques based on decomposition methods, such as Benders decomposition [214] and C&CG algorithm [53]. The presence of binary variables \(\mathbf{z}\) and finding a suitable point \(\overline{\mathbf{x}}\) for the approximated SOC prevent the lower-level problem to be dualized with zero-gap. Therefore, the NC&CG algorithm [215] with its improvement discussed in Chapter 4 (see Algorithm 3) is adopted in this study. In addition, the S-MISOCP algorithm proposed in Chapter 3 (see Algorithm 2), is suggested to find the values of \(\overline{\mathbf{y}}\) and \(\overline{\mathbf{x}}\) in each C&CG process, resulting in a quadruple-loop solution procedure. Figure 5.2 shows the proposed methodology and the interactions among different algorithmic loops. With an arbitrary feasible decision \(\mathbf{w}^{*}\) and \(\mathbf{u}^{*}\), the inner C&CG algorithm, i.e., the third loop, starts to solve the lower-level problem using the S-MISOCP algorithm, namely the fourth loop, which provides a primal cut (i.e., \(\mathbf{z}^{*}\), \(\overline{\mathbf{x}}\)) to the middle-level problem. The inner C&CG stops when the inner gap is below a tolerance value. The inner C&CG algorithm provides a primal cut (i.e., \(\mathbf{u}^{*}\)) to the upper-level problem, which is solved by the S-MISOCP algorithm (second loop) in the outer C&CG algorithm (first loop). The outer C&CG stops when the outer gap is below a tolerance value and the optimal decision \(\mathbf{w}^{*}\) is achieved. 1. _The first loop, i.e., outer C&CG_: It is considered that the inner C&CG algorithm can find the worst-case realization of wind generation uncertainties \(\mathbf{u}^{*}\) by solving the max-min subproblem, denoted as **P2**, with a fixed value of \(\mathbf{w}^{*}\). \[\textbf{P2:}\ \max_{\mathbf{u}}\min_{\mathbf{z},\mathbf{x},\mathbf{\overline{x}}} \textbf{e}^{\top}\mathbf{x}\] (5.57a) \[s.t. 
(\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:
eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eqeq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eqeq:eqeq:eq:eq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeqeq:eq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeqeq:eqeqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eq:eqeqeq:eqeq:eqeq:eqeq:eqeqeq:eq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq: * _The second loop, i.e., S-MISOCP algorithm for **P1**:_ Algorithm 5 starts with finding an initial vector \(\overline{\mathbf{y}}\) and \(\overline{\mathbf{x}}^{r}\), which can be obtained by solving the relaxed **P1** as follows \[\min_{\mathbf{y},\mathbf{w},\eta,\mathbf{x}^{r},\mathbf{z}}f(\mathbf{w})+\eta\] (5.59a) \[s.t. (\ref{eq:P1})-(\ref{eq:P1}),(\ref{eq:P1}),(\ref{eq:P1})-(\ref{eq: P1}),(\ref{eq:P1})-(\ref{eq:P1}).\] (5.59b) Not that \(\overline{\mathbf{y}}\) and \(\overline{\mathbf{x}}^{r}\) may yield violations in (5.58f) owing to the missing nonconvex half of each quadratic inequality pair when solving the relaxed **P1**. Therefore, auxiliary variables are added to convexify (5.58f), and their weighted sum is added in the objective function of the penalized **P1** (5.60) with a changeable penalty coefficient. Algorithm 5 is adopted to enhance the feasibility of the solution of the convexified problem with respect to the original problem. \[\min_{\begin{subarray}{c}\mathbf{y},\mathbf{w},\mathbf{\eta},\overline{\mathbf{y}} \\ \mathbf{x}^{r},\mathbf{z}^{r},\overline{\mathbf{x}}^{r}\end{subarray}}f(\mathbf{w})+\eta+\sum_ {t}\sum_{v}\left[\tau^{0}_{v,t}s^{0}_{v,t}+\sum_{r}\tau^{r}_{v,t}s^{r}_{v,t}\right]\] (5.60a) \[s.t. 
(\ref{eq:P1})-(\ref{eq:P1}),(\ref{eq:P1})\text{(\ref{eq:P1})},( \ref{eq:P1})\text{(\ref{eq:P1})}-(\ref{eq:P1}),\] (5.60b) \[s^{0}_{v,t}\geq 0,\forall v,t,\ \ s^{r}_{v,t}\geq 0,\forall v,t,r\] (5.60c) \[||\mathbf{B}_{v,t}(\overline{\mathbf{y}})\mathbf{y}+\mathbf{D}_{v,t}(\overline{ \mathbf{y}})||_{2}\leq\mathbf{b}_{v,t}(\overline{\mathbf{y}})\mathbf{y}+\mathbf{d}_{v,t}(\overline {\mathbf{y}})+s^{0}_{v,t},\forall v,t\] (5.60d) \[||\mathbf{M}_{v,t}(\overline{\mathbf{x}}^{r})\mathbf{x}^{r}+\mathbf{N}_{v,t}( \overline{\mathbf{x}}^{r})||_{2}\leq\mathbf{m}_{v,t}(\overline{\mathbf{x}}^{r})\mathbf{x}^{r} +\mathbf{n}_{v,t}(\overline{\mathbf{x}}^{r})+s^{r}_{v,t},\forall v,t,r.\] (5.60e) Compared with the standard P-CCP introduced in [163], where a global penalty coefficient \(\tau\) is selected for all the convexified constraints, each convexified constraint is assigned with its own penalty coefficient, and an adaptive rule designed for Algorithm 2 is employed to update penalty coefficients. This allows us to better capture the impact of slack variables on the objective and to facilitate convergence. The relative constraint violation (\(RCV\)), which can be calculated by \[RCV^{0}_{v,t}=s^{0}_{v,t}/\Big{(}\mathbf{b}_{v,t}(\overline{\mathbf{y}}) \mathbf{y}+\mathbf{d}_{v,t}(\overline{\mathbf{y}})\Big{)},\ \ \forall v,t\] (5.61) \[RCV^{r}_{v,t}=s^{r}_{v,t}/\Big{(}\mathbf{m}_{v,t}(\overline{\mathbf{x}}^ {r})\mathbf{x}^{r}+\mathbf{n}_{v,t}(\overline{\mathbf{x}}^{r})\Big{)},\ \ \forall v,t,r.\] (5.62) is assigned to the adaptive penalty growth rate equation defined in (3.77) to update the Algorithm 5 penalties in Step \(5\). * _The third and fourth loops, i.e., solving the **P2**_: Problem **P2** indicates a bi-level programming with integer decision variables in the inner level, which needs to be decomposed into two subproblems and solved iteratively. With fixed values of \(\mathbf{w}^{*}\) and \(\mathbf{u}^{*}\), the inner-level problem of **P2**, denoted as **P4**, is as follows \[\text{\bf P4:} \min_{\mathbf{x},\mathbf{z},\overline{\mathbf{x}}}\mathbf{e}^{\top}\mathbf{x}\] (5.63a) \[s.t. (\ref{eq:P1})-(\ref{eq:P1}),\] \[\mathbf{E}\mathbf{x}+\mathbf{G}\mathbf{z}\geq\mathbf{F}-\mathbf{Q}\mathbf{u}^{*}-\mathbf{P}\mathbf{w}^ {*}.\] (5.63b) Note that an initial vector is needed to formulate the approximated SOC constraint (5.51h) in problem **P4**, which can be obtained by solving the relaxed **P4** (5.64). \[\min_{\mathbf{x},\mathbf{z}}\ \mathbf{e}^{\top}\mathbf{x} \tag{5.64a}\] \[s.t.\ \eqref{eq:s1},\] (5.64b) \[\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathbf{E}\mathbf{x}+\mathbf{G}\mathbf{z}\geq\mathbf{F}-\mathbf{Q}\mathbf{u}^{*}-\mathbf{P}\mathbf{w}^{*}. \tag{5.64c}\] For the same reason of adding the slack variables in (5.60), \(s_{v,t}\) is added to the penalized **P4** (5.65). \[\min_{\mathbf{x},\mathbf{z},\overline{\mathbf{x}}}\ \mathbf{e}^{\top}\mathbf{x}+\sum_{t} \sum_{v}\tau_{v,t}s_{v,t}\] (5.65a) \[s.t.\ \eqref{eq:s1},\] \[\ (5.68), denoted as **P3**. 
**P3:** \[\max_{\mathbf{u},\mathbf{\lambda},\mathbf{\beta},\mathbf{\omega},\mathbf{\theta},\mathbf{ \sigma},\mathbf{\gamma},\mathbf{\phi}}\mathbf{\phi}\] (5.68a) \[s.t.\] (5.51e), (5.68b) \[\widetilde{\mathbf{\varepsilon}}=\widetilde{\mathbf{E}}^{\top}\mathbf{ \lambda}^{r}+\sum_{t}\sum_{v}\Big{[}\widetilde{\mathbf{H}}_{v,t}^{\top}\mathbf{\beta}_ {v,t}^{r}+\widetilde{\mathbf{h}}_{v,t}^{\top}\mathbf{\omega}_{v,t}^{r}+\widetilde{\mathbf{ M}}_{v,t}^{r\top}\mathbf{\theta}_{v,t}^{r}+\widetilde{\mathbf{m}}_{v,t}^{r\top}\mathbf{ \sigma}_{v,t}^{r}\Big{]},\;\forall r,\] (5.68c) \[\mathbf{\phi}\leq\mathbf{\lambda}^{r\top}\Big{[}\widetilde{\mathbf{F}}- \widetilde{\mathbf{G}}\mathbf{z}^{r}-\widetilde{\mathbf{Q}}\mathbf{u}-\widetilde{\mathbf{P}}\mathbf{w} ^{*}\Big{]}-\sum\mathbf{\gamma}^{r}-\] \[\sum_{t}\sum_{v}\Big{[}\widetilde{\mathbf{N}}_{v,t}^{r\top}\mathbf{\theta }_{v,t}^{r}+\widetilde{\mathbf{n}}_{v,t}^{r\top}\mathbf{\sigma}_{v,t}^{r}\Big{]},\; \forall r,\] (5.68d) \[\|\mathbf{\beta}_{v,t}^{r}\|_{2}\leq\mathbf{\omega}_{v,t}^{r},\;\;\|\mathbf{ \theta}_{v,t}^{r}\|_{2}\leq\mathbf{\sigma}_{v,t}^{r},\forall v,t,r\;\;\mathbf{\lambda }^{r}\geq 0,\;\forall r,\] (5.68e) \[-\overline{M}\mathbf{u}\leq\mathbf{\gamma}^{r}\leq\overline{M}\mathbf{u},\; \forall r,\] (5.68f) \[-\overline{M}(1-\mathbf{u})\leq\widetilde{\mathbf{Q}}^{\top}\mathbf{\lambda}^ {r}-\mathbf{\gamma}^{r}\leq\overline{M}(1-\mathbf{u}),\;\forall r.\] (5.68g) where \(\mathbf{\gamma}^{r}\) is the auxiliary variables to linearize the bilinear terms in **P3**, indices \(r\in\{1...R\}\), and \(v\in\mathcal{P}\cup\mathcal{L}\).. ``` 1: Select an arbitrary feasible \({}^{\text{a}}\,\mathbf{w}^{*}\,\text{/}\,^{\text{b}}\,\mathbf{u}^{*}\), set convergence parameters \(\varepsilon,\;UB=\infty,\;LB=-\infty,\;R=0\), go to Step \(4\). 2:\({}^{\text{a}}\) Call Algorithm 5 to solve problem **P1** (5.58), update \(\mathbf{w}^{*},\;LB=\textbf{P1}^{*}\), and \(Gap=(UB-LB)/LB\). 3: Solve problem **P3** (5.68) to update \(\mathbf{u}^{*},\;UB=\textbf{P3}^{*}\) and \(Gap=(UB-LB)/UB\). 4: If \(Gap\leq\varepsilon\;\&\;\) same \({}^{\text{a}}\,\mathbf{w}^{*}\,\text{/}\,^{\text{b}}\,\mathbf{u}^{*}\), terminate; else, go to Step \(4\). 5:\({}^{\text{a}}\) Call Algorithm 6 to solve problem **P2** (5.57), and update \(\mathbf{u}^{*},\;UB=\min\{UB,\textbf{P3}^{*}+f(\mathbf{w}^{*})\}\). 6: Call Algorithm 5 to solve problem **P4** (5.63), and update \(\mathbf{z}^{*},\overline{\mathbf{x}},\;LB=\max\{LB,\textbf{P4}^{*}\}\). 7: Calculate \(Gap=(UB-LB)/UB\), If \(Gap\leq\varepsilon\), terminate; else, \(R=R+1\), \({}^{\text{a}}\,\mathbf{u}^{*^{r}}=\mathbf{u}^{*}\) / \({}^{\text{b}}\,\mathbf{z}^{*^{r}}=\mathbf{z}^{*},\overline{\mathbf{x}}^{r}=\overline{\mathbf{x}}\), and create new matrices for (5.67), go Step \(2\). ``` **Algorithm 6** The NC&CG Algorithm for RO model As aforementioned, there are four loops in the developed algorithm, where the first and third loops are standard C&CG procedures and their convergence property has been justified by [215], and the rest two loops are P-CCP with binary variables, i.e. S-MISOCP algorithm, to identify feasible solutions for the optimal gas-power flow (OGPF) problem in different decision stages. Its convergence property has been discussed in Section 3.3.3. Directional binaries obtained from the relaxed problems **P1** and **P4** would remain fixed after the first few iterations, which is consistent with the observation in [206]. 
Therefore, the binary variables can be fixed after the beginning iterations, which is tuned as \(5\) in this work. Then, the original MISOCP model can be converted into an SOCP with fixed binary variables, and Algorithm 5 convergence can be guaranteed. ### 5.3 Two-stage Distributionally Robust Gas Contracting Most of the recent studies only consider the modeling of firm gas contracts in power system operation [47], [49], where the reserved gas contracts and the impacts of uncertainty on the contracts are missing. In [223], a day-ahead market clearing model for IEGS is presented, where the reserved gas amounts are introduced and effectively priced. According to current gas contracting mechanism, though the gas fuel for the firm and part of the reserved outputs of GPUs follows the day-ahead contracts, which are much cheaper than the real-time ones [48], flexible real-time contracts may still be signed for the low-probability utilized reserved GPU outputs in practice, as the corresponding costs are _wait-and-see_ rather than _here-and-now_. The common treatment is to minimize the real-time gas adjustments in the IEGS optimization models, while they are not optimized with the day-ahead gas contracts [58], [224]. In addition, the emerging P2G technologies not only provide an alternative way to accommodate the excessive wind power generation, but also create new opportunities in energy trading markets, as P2G contracts have to be signed for the injected gas from external gas sources. To determine the optimal power system operation strategy, various advanced optimization approach based decision-making frameworks have been proposed, including stochastic optimization (SO) [223], [225], robust optimization (RO) [195], and distributionally robust optimization (DRO) [208], [226] based ones, where the uncertainties in the first two approaches are described by deterministic distributions and uncertainty sets, respectively, and the ambiguity sets are constructed to represent the distribution of uncertainties in the last one. The SO based approaches can generate a high-quality solution as long as the prior distribution of the uncertainties is close enough to the actual one. However, they may not be suitable for determining gas contracts, as the high-loss rare events are hardly to be captured and purchase gas in real-time operation could be costly [48]. The solutions from RO based approaches could cover all the possible realizations of uncertainties within the prescribed uncertainty set, which is promising in security or reliability oriented applications, nevertheless, it may lead to over-conservative gas contracts as they care about the performance under the worst-case scenario. Besides the modeling of gas contracts, the nonlinear Weymouth equations of the gas network also admit computational challenges, due to their non-convexity in the day-ahead and real-time stages. The quadrable-loop procedure proposed in Section 5.2.2 that is based on the S-MISOCP algorithm and the NC&CG algorithm, is the most efficient method to tackle a two-stage RO model for power system operation with gas system constraints. Consequently, reformulating the two-stage DRO into RO model is main task in the proposed methodology of this section. To bridge the gap between industrial practice and academic research on power-gas coordination, a DRO-based power system operation model is proposed, to determine the two-stage bidirectional contracting with gas systems. 
Compared with the literature, the salient feature of this work is that a two-stage distributionally robust model is proposed for signing bidirectional energy contracts with gas systems from the perspective of power system operator (PSO). To the best knowledge of the authors, this work is the first attempt to incorporate both the day-ahead and real-time gas contracts in power system operation. #### Mathematical Formulation ##### The Two-stage Uncertainty Mitigation Procedure In this work, a two-stage procedure, including the day-ahead stage and the real-time stage, is employed to mitigate the uncertainties originated from wind generation, where the uncertainty mitigation capability from controllable resources is committed and the committed capability in the day-ahead stage or additional regulation capability purchased in the real-time stage, usually costly, is utilized, respectively. It is a conventional practice in power industry as well as academic research [71], [223]. The operation goal of the PSO is to minimize the total costs and simultaneously meet its operation requirements. Meanwhile, the operation of power systems incorporates the bidirectional interactions with natural gas systems in both physical, say the gas demands of GPUs and generated gas by P2G units would influence the operating status of gas systems, and economic perspectives, usually through signing energy contracts. As fast-response controllable resources, the actual outputs (inputs) of GPUs (P2G units) cannot be known beforehand, considering the uncertainty of wind power generation, which indicates the gas demands (outputs) are also uncertain. The two-stage decision-making process of the PSO considering the interactions with the gas system operator (GSO) is demonstrated in Figure 5.3, which includes the following three steps: (i) the PSO constructs the reference distribution of wind power outputs based on historical data and the predicted outputs received from the wind management sector (WMS), and receives the gas prices and contract avoidance penalties from the GSO; (ii) in the day-ahead stage, the PSO commits the outputs of controllable resources according to the strategy obtained from the proposed DRO model, and communicates with the GSO for the day-ahead gas contracts agreement; (iii) in the real-time stage, the PSO adjusts the outputs of controllable resources, contacts with the GSO for real-time gas contracts agreement, and sends wind curtailment signals to the WMS. Figure 5.3: Decision-making process for the PSO. The overall objective of the proposed model is expressed in (5.69), where the probability distribution \(\mathbf{\mu}\) is uncertain and subject to a pre-defined ambiguous set \(\mathcal{M}\). The operating costs (OC) in the day-ahead stage are defined in (5.70), including the generation costs of non-GPUs, the day-ahead G2P contract costs, and the revenue from day-ahead P2G contracts. The regulation costs (RC) in the real-time stage is expressed by (5.71), which incorporates the upward and downward adjustments costs of non-GPUs, penalties of non-served power loads, real-time G2P contract costs, penalties of wind curtailment, penalties/revenue of the shortage/surplus of real-time P2G contracts, respectively. From (5.69), it can be observed that the goal of the PSO is to minimize the sum of the day-ahead dispatch costs and the expectation of the real-time regulation costs under the worst-case distribution. 
In what follows, the introductions of the constraints would be presented according to interaction types. \[\min\ OC\ +\ \max_{\mathbf{\mu}\in\mathcal{M}}\ E_{\mathbf{\mu}}[\min\ RC] \tag{5.69}\] \[OC=\sum_{\forall t}\Big{[}\sum_{\forall u\in\mathcal{U}_{n}}C _{u}(\hat{p}_{u,t})-\sum_{\forall j}C_{j}g_{j,t}+\sum_{\forall h}(\mu_{h}\rho _{h,t}+\mu_{h}^{+}\rho_{h,t}^{+}+\mu_{h}^{-}\rho_{h,t}^{-})\Big{]}\] (5.70) \[RC=\sum_{\forall t}\Big{[}\sum_{\forall u\in\mathcal{U}_{n}}(C _{u}^{+}\triangle p_{u,t}^{+}+C_{u}^{-}\triangle p_{u,t}^{-})+\sum_{\forall d} C_{d}\triangle p_{d,t}+\sum_{\forall h}(\mu_{h}^{2+}\rho_{h,t}^{2+}+\mu_{h}^{2-} \rho_{h,t}^{2-})\] \[+\sum_{\forall e}C_{e}\triangle w_{e,t}+\sum_{\forall j}C_{j}^{2 -}\triangle g_{j,t}^{-}-\sum_{j}C_{j}^{2+}\triangle g_{j,t}^{+}\Big{]} \tag{5.71}\] **Economic Interactions: Two-stage Bidirectional Contracts Modeling** 1. G2P Contracts Modeling: As aforementioned, the G2P contracts are signed in two different time-scales, which are day-ahead and real-time, respectively. The firm gas fuel demands of GPUs, which can be calculated by (5.72), are covered by the day-ahead G2P contracts. Meanwhile, the gas fuel for the outputs adjustments of GPUs in real-time operation are supported by both the day-ahead and the real-time G2P contracts. Specifically, (5.73) and (5.74) set non-negative limits for reserved gas contracts in day-ahead and real-time stages; (5.75) and (5.76) give upper and lower boundaries of the real-time bidirectional gas fuel variation of GPUs, respectively. \[\rho_{h,t}^{1}=\sum_{u\in\mathcal{U}_{g}(n)}\Phi\hat{p}_{u,t}/ \eta_{u},\ \forall h,t,\] (5.72) \[\rho_{h,t}^{1-},\rho_{h,t}^{1+}\geq 0,\forall h,t,\] (5.73) \[\rho_{h,t}^{2-},\rho_{h,t}^{2+}\geq 0,\forall h,t,\] (5.74) \[\sum_{u\in\mathcal{U}_{g}(n)}\Phi(\hat{p}_{u,t}-p_{u,t})/\eta_{u} \leq\rho_{h,t}^{1-}+\rho_{h,t}^{2-},\forall h,t,\] (5.75) \[\sum_{u\in\mathcal{U}_{g}(n)}\Phi(p_{u,t}-\hat{p}_{u,t})/\eta_{u} \leq\rho_{n,t}^{1+}+\rho_{n,t}^{2+},\forall h,t.\] (5.76) 2. P2G Contracts Modeling: Similar with GPUs, P2G facilities \(z\in\mathcal{Z}\) are also controllable and can be adopted to mitigate the uncertainties of wind power generation. In the day-ahead stage, the operating points of P2G facilities are tuned according to the predicted value of wind generation outputs, and the amount of gas sold to the GSO can be calculated by (5.77), which are the day-ahead P2G contracts. In the real-time stage, the gas shortage in the day-ahead P2G contracts due to inadequate wind power outputs would be penalized, and excessive power from wind farms can be converted into gas if the operation feasibility of the gas systems holds. Therefore, the real-time P2G contracts can be signed based on (5.78)-(5.80). \[g_{j,t}=\sum_{z\in\mathcal{Z}(j)}\Phi\eta_{z}\hat{p}_{z,t}, \forall j,t,\] (5.77) \[\triangle g_{j,t}^{-}\geq\sum_{z\in\mathcal{Z}(j)}\Phi\eta_{z}( \hat{p}_{z,t}-p_{z,t}),\forall z,t,\] (5.78) \[\triangle g_{j,t}^{+}\leq\sum_{z\in\mathcal{Z}(j)}\Phi\eta_{z}(p_{ z,t}-\hat{p}_{z,t}),\forall z,t,\] (5.79) \[\triangle g_{j,t}^{-},\triangle g_{j,t}^{+}\geq 0,\forall j,t.\] (5.80) ##### Physical Interactions: Operation Constraints In the power network, the operational constraints are derived from Sections 2.2.3, where UC of all the generators are assumed to be predetermined, the generation capacities and physical limits are considered and the hat symbol is used for day-ahead decision variables. 
They are composed by \[\underline{P}_{u}c_{u,t}\leq\hat{p}_{u,t}\leq\overline{P}_{u}c_{u,t},\ \forall u,t, \tag{5.81}\] \[-\overline{R}_{u}^{-}\leq\hat{p}_{u,t}-\hat{p}_{u,t-1}\leq \overline{R}_{u}^{+},\ \forall u,t,\] (5.82) \[\sum_{u\in\mathcal{U}(n)}\hat{p}_{u,t}+\sum_{e\in\mathcal{E}(n)} \hat{W}_{e,t}+\sum_{l\in\mathcal{L}_{1}(n)}\hat{p}_{l,t}-\sum_{l\in\mathcal{L }_{2}(n)}\hat{p}_{l,t}=\sum_{z\in\mathcal{Z}(n)}\hat{p}_{z,t}+\sum_{d\in \mathcal{D}_{p}(n)}P_{d,t},\ \forall n,t.\] (5.83) \[-\tilde{\pi}\leq\hat{\theta}_{n,t}\leq\tilde{\pi},\ \forall n\in\mathcal{N}-1,t,\ \ \hat{\theta}_{1,t}=0,\ \forall t,\] (5.84) \[-\overline{p}_{l,t}\leq\hat{p}_{l,t}\leq\overline{p}_{l,t},\ \forall l,t.\] (5.85) \[\hat{p}_{l,t}=\frac{\hat{\theta}_{m,t}-\hat{\theta}_{n,t}}{x_{l}},\ \forall l,t,(m,n)\in l. \tag{5.86}\] The operation constraints of the gas network are represented by (5.87)-(5.96) that is derived from the dynamic-state gas flow model presented in Section 2.1.2. Gas production capacities: (2.1), Gas nodal balancing equation: \[\sum_{p\in\mathcal{P}_{1}(i)}\hat{f}_{p,t}^{out} -\sum_{p\in\mathcal{P}_{2}(i)}\hat{f}_{p,t}^{in}+\sum_{c\in\mathcal{ C}_{1}(i)}\hat{f}_{c,t}^{out}-\sum_{c\in\mathcal{C}_{2}(i)}\hat{f}_{c,t}^{in}+ \sum_{z\in\mathcal{Z}(i)}\hat{\varrho}_{z,t}+\sum_{w\in\mathcal{W}(i)}\hat{f}_ {w,t}\] \[=\sum_{u\in\mathcal{U}_{g}(i)}\rho_{u,t}+\sum_{d\in\mathcal{D}_{g} (i)}F_{d,t},\;\forall i,t\] (5.88) Gas compressors constraints: (2.5)-(2.6), (5.89) Nodal pressure bounds: (2.8), (5.90) Average flow rate equation: (2.11), (5.91) Mass flow equation: (2.13), (5.92) Continuity equation: (2.14), (5.93) GPU gas consumption: (3.15), (5.94) P2G gas production: (2.41), (5.95) Weymouth equation: (2.10). where \(\mathcal{Z}(i)\) is a subset of P2G units connected with node \(i\), and \(\mathcal{Z}(i)/\mathcal{H}(i)\) is a subset of P2G units/gas contracts, whose GPUs are supplied from node \(i\). It should be noted that (5.81)-(5.96) are day-ahead constraints for the coupled energy system. Further, most of the operation constraints in the real-time stage can be obtained by replacing the day-ahead decision variables with real-time ones in (5.81)-(5.96) except (5.83), namely removing the hat symbols of the decision variables in those constraints. Such overlapped constraints are not listed. In real-time operation of the power network, wind generation curtailment \(\triangle w_{e,t}\) and electrical load shedding \(\triangle p_{d,t}\) are also practical means to recover the power balancing condition, whose adjustment ranges are shown in (5.97). Meanwhile, the nodal power balancing condition should be modified by adding the wind generation curtailment and electrical load shedding terms, resulted in (5.98). To quantify the regulation costs of non-GPUs in the real-time stage, (5.99) is added to describe the outputs adjustment of non-GPUs. 
\[0\leq\triangle w_{e,t}\leq W_{e,t},\forall e,t;\quad 0\leq \triangle p_{d,t}\leq P_{d,t},\forall d,t, \tag{5.97}\] \[\sum_{u\in\mathcal{U}(n)}p_{u,t}+\sum_{e\in\mathcal{E}(n)}(W_{e,t}-\triangle w_{e,t})+\sum_{l\in\mathcal{L}_{1}(n)}p_{l,t}-\sum_{l\in \mathcal{L}_{2}(n)}p_{l,t}\] \[=\sum_{z\in\mathcal{Z}(n)}p_{z,t}+\sum_{d\in\mathcal{D}_{p}(n)}( P_{d,t}-\triangle p_{d,t}),\;\forall n,t.\] (5.98) \[-\triangle p_{u,t}^{-}\leq p_{u,t}-\hat{p}_{u,t}\leq\triangle p_{u,t}^{+},\triangle p_{u,t}^{-}\geq 0,\triangle p_{u,t}^{+}\geq 0,\forall t,u \in\mathcal{U}_{n} \tag{5.99}\] ##### Ambiguity Set Construction Given a set of historical data of wind generation outputs, they can be clustered as \(K\) scenarios \(\{\mathbf{W}_{1},\mathbf{W}_{2},...,\mathbf{W}_{K}\}\) by sample clustering or scenario reduction methods, and the probability weight coefficient \(\mu_{k}^{0}\) for scenario \(\mathbf{W}_{k}\) can be obtained as well. Then, the empirical distribution can be established as \(\mathbf{\mu}^{0}=\{\mu_{1}^{0},\mu_{2}^{0},...,\mu_{k}^{0}\}\). However, the true distribution \(\mathbf{\mu}=\{\mu_{1},\mu_{2},...,\mu_{K}\}\) may be different from \(\mathbf{\mu}^{0}\), due to the lack of historical data in practical situations. Therefore, the ambiguity set is constructed using \(L_{\infty}\) norm [227], as shown in (5.100), which suggests the statistical distance between the true distribution and the empirical one would always be smaller than the tolerance along with change of the size of the data sample set adaptively. \[\mathcal{M} =\left\{\mathbf{\mu}\in\mathbb{R}_{+}^{K}|\ \|\mathbf{\mu}-\mathbf{\mu}^{0}\|_{ \infty}\leq\sigma,\ \sum_{\forall k}\mu_{k}=1\right\}\] \[=\left\{\mu_{k}|\ \mu_{k}\geq 0,\ \max_{1\leq k\leq K}|\mu_{k}-\mu_{k}^{0}| \leq\sigma,\ \sum_{\forall k}\mu_{k}=1\right\}\] \[=\left\{\mu_{k}|\ \mu_{k}\geq 0,\ -\sigma\leq\mu_{k}-\mu_{k}^{0} \leq\sigma,\ \sum_{\forall k}\mu_{k}=1\right\} \tag{5.100}\] In (5.100), the tolerance value \(\sigma\) depends on the amount of historical data \(S\) and confidence level \(\beta\). According to proposition \(2\) in [227], \(\sigma\) can be calculated by \[\sigma=\frac{1}{2S}\log\frac{2K}{1-\beta} \tag{5.101}\] As a matter of fact, the scenario set used in SO based works [125], [225] can be viewed as a special case of (5.100) with \(\sigma=0\). In addition, (5.100) incorporates the finite uncertainty set employed in RO by tuning \(\sigma=1\). ##### The Holistic Model The completed form of the proposed model is shown as (5.102), where \(\Omega_{1}\) and \(\Omega_{2}\) define the sets of decision variables for the day-ahead and real-time stages, respectively. 
\[\min_{\Omega_{1}}\ OC\ +\ \max_{\mathbf{\mu}\in\mathcal{M}}\ E_{\mathbf{\mu}}[ \min_{\Omega_{2}}\ RC] \tag{5.102a}\] \[s.t.\] Day-ahead constraints: ( 5.81 )-( 5.96 ), ( 5.102b) \[\text{Real-time constraints: (\ref{eq:10000000})--(\ref{eq:10000000}), (\ref{eq:10000000})--(\ref{eq:10000000}),}\] (5.102c) \[\text{Gas contracts: (\ref{eq:10000000})--(\ref{eq:10000000}),}\] (5.102d) \[\text{Ambiguity set: (\ref{eq:100000}),}\] (5.102e) \[\Omega_{1}=\{\rho_{h,t}^{1},\rho_{h,t}^{1+},\rho_{h,t}^{1-},g_{j, t},\hat{p}_{u,t},\hat{p}_{z,t},\hat{f}_{l,t},\hat{\theta}_{n,t},\hat{f}_{w,t}, \hat{\pi}_{i,t},\hat{f}_{c,t}^{in},\hat{f}_{c,t}^{out},\hat{f}_{p,t}^{in},\hat {f}_{p,t}^{out},\hat{f}_{p,t},\hat{m}_{p,t}\},\] (5.102f) \[\Omega_{2}=\{\triangle p_{u,t}^{+},\triangle p_{u,t}^{-},\rho_{h,t }^{2+},\rho_{h,t}^{2-},\triangle g_{j,t}^{+}\triangle g_{j,t}^{-},\triangle w_{e,t},\triangle p_{d,t},p_{u,t},p_{z,t},f_{l,t},\theta_{n,t},f_{w,t},\pi_{i,t},\] \[\qquad\quad f_{c,t}^{in},f_{c,t}^{out},f_{p,t}^{in},f_{p,t}^{out},f_{p,t},m_{p,t}\}. \tag{5.102g}\] The above model is not readily solvable by commercial solvers due to the presence of non-convex constraints caused by Weymouth equations (5.96) and intractable objective function (5.102a). Therefore, tractable reformulations for the Weymouth equations and the model objective are derived, before introducing the solution approach for the proposed model. #### Solution Methodology **Tractable Reformulations of the Proposed Model** 1. MISOCP-based Approximation for the Weymouth Equations: Weymouth equations can be formulated as DCP constraints, which further are reformulated as MISOCP constraints. By introducing a binary variable \(\hat{z}_{p,t}\) and a pair of auxiliary variables \(\hat{\pi}^{+}_{p,t},\hat{\pi}^{-}_{p,t}\), the sign function in (5.96) can be removed, resulting the equivalent form as follows. \[\left\{\forall t,p\in\mathcal{P}^{\pm}\mid(1-\hat{z}_{p,t}) \overline{M}\leq(\hat{\pi}_{i,t}-\hat{\pi}_{o,t})\|\hat{f}_{p,t}\leq\hat{z}_{p, t}\overline{M},\right.\] (5.103) \[(1-\hat{z}_{p,t})\overline{M}\leq(\hat{\pi}^{+}_{p,t}-\hat{\pi}_ {i,t})\|(\hat{\pi}^{-}_{p,t}-\hat{\pi}_{o,t})\leq(1-\hat{z}_{p,t})\overline{M},\] (5.104) \[\hat{z}_{p,t}\overline{M}\leq(\hat{\pi}^{+}_{p,t}-\hat{\pi}_{o,t })\|(\hat{\pi}^{-}_{p,t}-\hat{\pi}_{i,t})\leq\hat{z}_{p,t}\overline{M}\right\},\] (5.105) \[\hat{\pi}^{+}_{p,t}=\hat{\pi}_{i,t},\;\;\hat{\pi}^{-}_{p,t}=\hat {\pi}_{o,t},\;\;\forall t,p\in\mathcal{P}/\mathcal{P}^{\pm},\] (5.106) \[\hat{f}^{2}_{p,t}=\chi^{f}_{p}(\hat{\pi}^{+^{2}}_{p,t}-\hat{\pi}^ {-^{2}}_{p,t}),\;\forall p,t.\] (5.107) In (5.103), \(\mathcal{P}^{\pm}\) is the subset of pipelines that have bidirectional gas flow, \(\overline{M}\) is a sufficient large positive number, please refer to Section 3.3.2 for a detailed formulation, notation \(\underline{C}\leq a\|b\leq\overline{C}\) represents that \(a\) and \(b\) have the same boundaries, i.e., \(\underline{C}\leq a\leq\overline{C}\) and \(\underline{C}\leq b\leq\overline{C}\). For fixed flow pipelines, \(\hat{\pi}^{+}_{p,t}\) and \(\hat{\pi}^{-}_{p,t}\) are assigned directly by (5.106) to decrease binary variables \(\hat{z}_{p,t}\). 
Further, (5.107) can be converted into two opposite inequalities, where the canonical form of the first inequality, namely \(\hat{f}^{2}_{p,t}+\chi^{f}_{p}\hat{\pi}^{-^{2}}_{p,t}\leq\chi^{f}_{p}\hat{\pi} ^{+^{2}}_{p,t}\), is \[\left\|\frac{\hat{f}_{p,t}}{\sqrt{\chi^{f}_{p}\hat{\pi}^{-}_{p,t}}}\right\|_{2 }\leq\sqrt{\chi^{f}_{p}\hat{\pi}^{+}_{p,t}},\;\forall p,t\] (5.108) Given an initial vector \([\bar{\hat{f}}_{p,t}\;\;\bar{\hat{\pi}}^{-}_{p,t}]^{\top}\), the right-hand side of the second inequality, namely \(\chi^{f}_{p}\hat{\pi}^{+^{2}}_{p,t}\leq\hat{f}^{2}_{p,t}+\chi^{f}_{p}\hat{\pi} ^{-^{2}}_{p,t}\), can be approximated as \(\hat{\Lambda}_{p,t}\), hence its canonical form would be \[\hat{\Lambda}_{p,t}=2\bar{\hat{f}}_{p,t}\hat{f}_{p,t}+2\chi^{f}_{p}\bar{\hat{ \pi}}^{-}_{p,t}\hat{\pi}^{-}_{p,t}-\bar{\hat{f}}^{2}_{p,t}-\chi^{f}_{p}\bar{ \hat{\pi}}^{-}_{p,t},\;\;\left\|\frac{2\sqrt{\chi^{f}_{p}\hat{\pi}^{+}_{p,t}}}{ \hat{\Lambda}_{p,t}-1}\right\|_{2}\leq\hat{\Lambda}_{p,t}+1,\;\forall p,t.\] (5.109) Till now, an MISOCP based approximation for (5.96) is derived, which consists of (5.103)-(5.107) and (5.108)-(5.109). 2. Equivalent Reformulation for the Objective Function: As discussed in Section 5.3.1, the distribution of wind generation outputs can be approximated by \(K\) clustered scenarios, therefore, the objective of the proposed model can be written as \[\min_{\Omega_{1}}~{}OC~{}+~{}\max_{\mathbf{\mu}\in\mathcal{M}}\sum_{k}\mu_{k}\min_{ \Omega_{2}}~{}RC(\mathbf{W}_{k})\] (5.110) where \(RC(\mathbf{W}_{k})\) denotes the regulation costs under scenario \(\mathbf{W}_{k}\). As all the wind generation output scenarios are independent, the summation and minimization operators can be interchanged as \[\min_{\Omega_{1}}~{}OC~{}+~{}\max_{\mathbf{\mu}\in\mathcal{M}}\min_{\Omega_{2}}~{} \sum_{k}\mu_{k}RC(\mathbf{W}_{k})\] (5.111) making the proposed model a standard two-stage robust program. ##### The Compact Form For ease of exposition, the compact form of the proposed model after reformulation is written as follows. 
\[\min_{\mathbf{u},\mathbf{y},\overline{\mathbf{u}}}f(\mathbf{u},\mathbf{y})+\max_{\mathbf{ \mu}\in\mathcal{M}}~{}\min_{\mathbf{x}_{k},\mathbf{z}_{k},\overline{\mathbf{x}}_{k}}\sum_ {k}\mu_{k}\mathbf{c}^{\top}\mathbf{x}_{k} \tag{5.112a}\] \[s.t.~{}\mathbf{A}\mathbf{u}+\mathbf{B}\mathbf{y}\leq\mathbf{C},\] (5.112b) \[\|\mathbf{D}_{p,t}\mathbf{y}\|_{2}\leq\mathbf{d}_{p,t}\mathbf{u},\forall p,t,\] (5.112c) \[\|\mathbf{E}_{p,t}^{\overline{\mathbf{u}}}\mathbf{u}+\mathbf{F}_{p,t}^{\overline{ \mathbf{u}}}\|_{2}\leq\mathbf{e}_{p,t}^{\overline{\mathbf{u}}}\mathbf{u}+\mathbf{f}_{p,t}^{ \overline{\mathbf{u}}},\forall p,t,\] (5.112d) \[\mathbf{G}\mathbf{x}_{k}+\mathbf{H}\mathbf{z}_{k}\geq\mathbf{I}-\mathbf{J}\mathbf{W}_{k}-\bm {K}\mathbf{y},\forall k,\] (5.112e) \[\|\mathbf{L}_{p,t}\mathbf{x}_{k}\|_{2}\leq\mathbf{l}_{p,t}\mathbf{x}_{k},\forall p,t,k,\] (5.112f) \[\|\mathbf{P}_{p,t}^{\overline{\mathbf{x}}_{k}}\mathbf{x}_{k}+\mathbf{Q}_{p,t}^{ \overline{\mathbf{x}}_{k}}\|_{2}\leq\mathbf{p}_{p,t}^{\overline{\mathbf{x}}_{k}}\mathbf{x}_{k }+\mathbf{q}_{p,t}^{\overline{\mathbf{x}}_{k}},\forall p,t,k, \tag{5.112g}\] where, \(\mathbf{y}=\{\rho_{h,t},\rho_{h,t}^{1+},\rho_{h,t}^{1-},\hat{p}_{u,t},\hat{p}_{z, t}\}\) ; \(\mathbf{u}\) collects the rest of day-ahead decision variables in \(\Omega_{1}\); \(\mathbf{x}_{k}\) and \(\mathbf{z}_{k}\) are the continuous and binary variables in the real-time stage at wind output scenario \(k\), respectively; to find the optimal initial vector for the approximated cone (5.109), \(\overline{\mathbf{u}}\) and \(\overline{\mathbf{x}}_{k}\) are considered as decision variables in day-ahead and real-time stages, respectively; \(\mathbf{A},\mathbf{B}\), and \(\mathbf{C}\) are coefficients of the first stage linear constraints (5.72)-(5.73), (5.77), (5.81)-(5.95) and (5.103)-(5.106). \(\mathbf{G},\mathbf{H},\mathbf{I},\mathbf{J}\), and \(\mathbf{K}\) are coefficients of the second stage linear constraints (5.74)-(5.76), (5.78)-(5.80), (5.81)-(5.82), (5.84)-(5.95), (5.97)-(5.99) and (5.103)-(5.106); SOC constraints (5.112c) and (5.112d) ((5.112f) and (5.112g)) are driven from the proper cones (5.108) and (5.109) for the day-ahead (real-time) Weymouth equations, respectively. ##### The Overall Solution Procedure The quadruple-loop procedure developed in Section 5.2.2 is employed to tackle the proposed DRO-based model with \(K\) clusters of wind power outputs. Tough the compact form of the proposed model admits a standard two-stage robust program with binary variables and SOC constraints in both stages, which can be solved by the NC&CG algorithm. However, the quality of the MISOCP-based approximation for the Weymouth equation, which appears in both decision-making stages, depends on the linearization point, i.e. \(\overline{\mathbf{u}}\) and \(\overline{\mathbf{x}}_{k}\), not only influencing the optimality and feasibility of the day-ahead operation strategy, but also affecting the conservativeness of the strategy through the robust counterpart of the real-time decision-making model. Thus, the subsection begins with developing a method for a high-quality linearization point. 1. Two loops based S-MISOCP Algorithm: From (5.102), the proposed model is a two-stage program, which can be solved by firstly decomposing it into a master-slave structure and then calling the cutting plane based iteration methods. 
If the binary variables in (5.102) are fixed, the major obstacle that hinders the solution efficiency is the quadratic equalities, namely the simplified Weymouth equation (5.107), which means both the master and slave subproblems of the proposed model would be degenerated into DCP functions [163]. An efficient algorithm, which is called P-CCP, is devised in [163] to find a high-quality local optimum for DCPs. However, due to the existence of the binary variables, the convergence of the P-CCP in the proposed model cannot be guaranteed, as the convergence proof in [163] is merely valid for continuous problems. Therefore, the S-MISOCP algorithm presented in Chapter 3 (see Algorithm 2), is employed to generate a high-quality initial point for the subproblems of the DRO-based model. Before introducing the details of the S-MISOCP algorithm, the tractable approximations for the master and slave subproblems of (5.102) are given as (5.113)-(5.113d), denoted as **F1**, and (5.114)-(5.114g), denoted as **F2**, respectively. \[\min_{\mathbf{x}_{k},\mathbf{z}_{k},\mathbf{s}\geq 0}\,\sum_{k}(\mu_{k}^{*} \mathbf{c}^{\top}\mathbf{x}_{k}+\sum_{\forall t}\sum_{\forall p}\tau_{p,t,k}s_{p,t,k})\] (5.113a) \[s.t. (\ref{eq:112f}),\] (5.113b) \[\mathbf{G}\mathbf{x}_{k}+\mathbf{H}\mathbf{z}_{k}\geq\mathbf{I}-\mathbf{J}\mathbf{W}_{k}-\mathbf{ K}\mathbf{y}^{*},\forall k,\] (5.113c) \[\|\mathbf{P}_{p,t}^{\overline{\mathbf{x}}_{k}}\mathbf{x}_{k}+\mathbf{Q}_{p,t}^{ \overline{\mathbf{x}}_{k}}+s_{p,t,k}\|_{2}\leq\mathbf{p}_{p,t}^{\overline{\mathbf{x}}_{k} }\mathbf{x}_{k}+\mathbf{q}_{p,t}^{\overline{\mathbf{x}}_{k}}+s_{p,t,k},\forall p,t,k,\] (5.113d) 2. \[\min_{\begin{subarray}{c}\mathbf{u},\mathbf{y},p,\mathbf{x}_{k}^{\top},\\ \mathbf{z}_{k},\mathbf{\xi}^{0},\mathbf{\xi}^{r}\end{subarray}}f(\mathbf{u},\mathbf{y})+\vartheta +\sum_{t}\sum_{p}(\tau_{p,t}^{0}s_{p,t}^{0}+\sum_{r}\sum_{k}\tau_{p,t,k}^{r}s_ {p,t,k}^{r})\] (5.114a) \[s.t. (\ref{eq:112b})-(\ref{eq:112c}),\] (5.114b) \[\mathbf{G}\mathbf{x}_{k}^{r}+\mathbf{H}\mathbf{z}_{k}^{r}\geq\mathbf{I}-\mathbf{J}\mathbf{W}_{ k}-\mathbf{K}\mathbf{y},\ \forall k,r,\] (5.114c) \[\vartheta\geq\sum_{k}\mu_{k}^{r*}\mathbf{c}^{\top}\mathbf{x}_{k}^{r},\ \forall r,\ \ s_{p,t}^{0}\geq 0, \forall p,t,\] (5.114d) \[\|\mathbf{L}_{p,t}\mathbf{x}_{k}^{r}\|_{2}\leq\mathbf{l}_{p,t}\mathbf{x}_{k}^{r}, \ \forall p,t,k,r,\ \ s_{p,t,k}^{r}\geq 0,\forall p,t,k,r,\] (5.114e) \[\|\mathbf{E}_{p,t}^{\overline{\mathbf{u}}}\mathbf{u}+\mathbf{F}_{p,t}^{\overline{ \mathbf{v}}}+s_{p,t}^{0}\|_{2}\leq\mathbf{e}_{p,t}^{\overline{\mathbf{u}}}\mathbf{u}+\mathbf{f}_{ p,t}^{\overline{\mathbf{v}}}+s_{p,t}^{0},\forall p,t,\] (5.114f) \[\|\mathbf{P}_{p,t}^{\overline{\mathbf{x}}_{k}}\mathbf{x}_{k}^{r}+\mathbf{Q}_{p,t}^ {\overline{\mathbf{x}}_{k}}+s_{p,t,k}^{r}\|_{2}\leq\mathbf{p}_{p,t}^{\overline{\mathbf{x} }_{k}}\mathbf{x}_{k}^{r}+\mathbf{q}_{p,t}^{\overline{\mathbf{x}}_{k}}+s_{p,t,k}^{r}, \forall p,t,k,r.\] (5.114g) In **F1**, \(s_{p,t,k}\) is the non-negative slack variable and \(\tau_{p,t,k}\) is the penalty coefficient; the objective is to minimize the sum of the expected real-time RC with a given distribution (\(\mu^{*}_{k},\ \forall k\)) and the penalized violation for (5.112g); (5.113c) is the real-time operation constraints with fixed day-ahead stage variables; the last set of constraints is the relaxed counterpart of (5.112g) to detect the constraint violation. 
Similarly, in **F2**, \(s^{0}_{p,t}\) and \(s^{r}_{p,t,k}\) are the non-negative slack variable added to (5.112d) and (5.112g), which are parameterized with the candidate worst-case distribution set \(\{\boldsymbol{\mu}^{r},\ \forall r\}\), respectively, and \(\tau^{0}_{p,t}\) and \(\tau^{r}_{p,t,k}\) are the corresponding penalty coefficients; \(\vartheta\) is the auxiliary variable estimating the lower bound of the real-time RC (5.71), which is constrained by (5.114d); \(\boldsymbol{x}^{r}_{k}\) and \(\boldsymbol{z}^{r}_{k}\) are the real-time decision variables under distribution \(\boldsymbol{\mu}^{r}\); (5.114f) and (5.114g) are the relaxed counterparts of (5.112d) and (5.112g), respectively; the real-time linear constraints (5.112e) and (5.112f) under distribution \(\boldsymbol{\mu}^{r}\) are incorporated in (5.114c) and (5.114e), respectively. The details of the proposed S-MISOCP algorithm are as follows. Compared with the P-CCP, the parameter \(J^{max}_{int}\) controlling the iteration process is added in the proposed S-MISOCP algorithm, beyond which the binary variables would be fixed to their solutions in iteration \(J^{max}_{int}\), to enhance the algorithmic convergence. In other words, **F1** and **F2** would degenerate to standard DCPs after iteration \(J^{max}_{int}\), and the S-MISOCP algorithm would become P-CCP accordingly, indicating the guaranteed convergence of the proposed algorithm. ``` 1: Set \(\overline{\mu},\underline{\mu},\sigma,J^{max},J^{max}_{int},\epsilon,\varepsilon,j=1\) and \({}^{a}\tau_{p,t,k}\) / \({}^{b}\tau^{0}_{p,t},\tau^{r}_{p,t,k}\). 2: Solve \({}^{a}\)**F1** (5.113) without (5.113d) / \({}^{b}\)**F2** (5.114) without (5.114f)-(5.114g). 3: Set \({}^{a}\)\(\overline{\boldsymbol{x}}_{k}=\boldsymbol{x}_{k}\)/ \({}^{b}\)\(\overline{\boldsymbol{u}}=\boldsymbol{u},\overline{\boldsymbol{x}}^{r}_{k}= \boldsymbol{x}^{r}_{k}\). 4: If \(j>J^{max}_{int}\), parameterize the binary variables in \({}^{a}\)**F1** (5.113)/ \({}^{b}\)**F2** (5.114) with the solutions in iteration \(J^{max}_{int}\). 5: Solve \({}^{a}\)**F1**\({}^{b}\)**F2** to update: \({}^{a}\)\(\boldsymbol{x}_{k},\boldsymbol{z}_{k},\boldsymbol{s}\) / \({}^{b}\)\(\boldsymbol{u},\boldsymbol{x}^{r}_{k},\boldsymbol{z}^{r}_{k},\boldsymbol{s}^ {0},\boldsymbol{s}^{r}\). 6: If \({}^{a}\) (5.115) / \({}^{b}\) (5.116) is satisfied, or \(j>J^{max}\), terminate; Else, go to Step 7. \[\textbf{F1}^{(j-1)}-\textbf{F1}^{(j)}\leq\epsilon,\ \ s_{p,t,k}\leq \varepsilon,\ \ \forall p,t,k.\] (5.115) \[\textbf{P2}^{(j-1)}-\textbf{P2}^{(j)}\leq\epsilon,\ \ s^{0}_{p,t}\leq \varepsilon,\ \forall p,t,\ \ s^{r}_{p,t,k}\leq\varepsilon,\ \forall p,t,k,r.\] (5.116) 7: Update \({}^{a}\)\(\tau_{p,t,k}\) / \({}^{b}\)\(\tau^{0}_{p,t},\tau^{r}_{p,t,k}\) using the adaptive penalty rate equation (3.77), then go to Step 3. ``` **Algorithm 7** The S-MISOCP Algorithm for DRO Model * Two loops based NC&CG Algorithm: Prior to calling the NC&CG algorithm, the master and slave problems in each C&CG loop need to be identified. The inner-loop C&CG is to solve the max-min problem **F3**: \[\mathbf{F3}:\max_{\mathbf{\mu}\in\mathcal{M}}\;\min_{\mathbf{x}_{k},\mathbf{z}_{k },\mathbf{\overline{x}}_{k}}\sum_{k}\mu_{k}\mathbf{c}^{\top}\mathbf{x}_{k}\] (5.117a) \[s.t. 
(\ref{eq:112g})-(\ref{eq:112g}),\] (5.117b) \[\mathbf{G}\mathbf{x}_{k}+\mathbf{H}\mathbf{z}_{k}\geq\mathbf{I}-\mathbf{J}\mathbf{W}_{k}-\bm {K}\mathbf{y}^{*},\ \forall k.\] (5.117c) To drive the master problem for **F3**, (i) slack variables \(s_{p,t,k}\) are included to relax the approximated cones (5.112g), and the inner level of **F3** becomes (5.118)-(5.118d), which can be more compact by introducing a new vector, denoted as \(\mathbf{\alpha}_{k}=[\mathbf{x}_{k}^{\top}\;\mathbf{s}_{k}^{\top}]^{\top}\), then (ii) the problem **F3** is expressed in its tri-level form, after creating the new matrices, as follows \[\max_{\mathbf{\mu}\in\mathcal{M}}\;\min_{\mathbf{z}_{k},\mathbf{\overline{x}} _{k}}\;\min_{\mathbf{\alpha}_{k}}\;\sum_{\forall k}\mu_{k}\mathbf{\widetilde{c}}_{k}^ {\top}\mathbf{\alpha}_{k} \tag{5.118a}\] \[s.t. \mathbf{\widetilde{G}}\mathbf{\alpha}_{k}+\mathbf{\widetilde{H}}\mathbf{z}_{k} \geq\mathbf{\widetilde{I}}_{k}\;:\mathbf{\lambda}_{k},\ \forall k,\] (5.118b) \[\|\mathbf{\widetilde{L}}_{p,t}\mathbf{\alpha}_{k}\|_{2}\leq\mathbf{ \widetilde{l}}_{p,t}\mathbf{\alpha}_{k}\;:\mathbf{\beta}_{p,t,k},\mathbf{\gamma}_{p,t,k}, \ \forall p,t,k,\] (5.118c) \[\|\mathbf{\widetilde{P}}_{p,t}^{k}\mathbf{\alpha}_{k}+\mathbf{\widetilde{Q}}_ {p,t}^{k}\|_{2}\leq\mathbf{\widetilde{p}}_{p,t}^{k}\mathbf{\alpha}_{k}+\mathbf{\widetilde{ q}}_{p,t}^{k}\;:\mathbf{\vartheta}_{p,t,k},\mathbf{\omega}_{p,t,k},\forall p,t,k. \tag{5.118d}\] Consequently, the inner-level problem of (5.118) can be directly dualized with the primal cut (\(\mathbf{\overline{x}}_{k}^{**}\) and \(\mathbf{z}_{k}^{**}\)), rendering the master problem of inner C&CG at \(\text{R}^{\text{th}}\) iteration, denoted as **F4**. **F4**: \[\max_{\mathbf{\mu},\mathbf{\varphi},\mathbf{\lambda},\mathbf{\beta},\mathbf{\gamma}, \mathbf{\vartheta},\mathbf{\omega}}\;\varphi\] (5.119a) \[s.t. \mu_{k}\mathbf{\widetilde{c}}_{k}^{\top}=\mathbf{\widetilde{G}}^{\top}\bm {\lambda}_{k}^{r}+\] (5.119b) \[\sum_{t}\sum_{p}\left(\mathbf{\widetilde{L}}_{p,t}^{\top}\mathbf{\theta}_ {p,t,k}^{r}+\mathbf{\widetilde{l}}_{p,t}^{\top}\mathbf{\gamma}_{p,t,k}^{r}+\mathbf{ \widetilde{P}}_{p,t}^{k,r\top}\mathbf{\vartheta}_{p,t,k}^{r}+\mathbf{\widetilde{p}}_{ p,t}^{k,r\top}\mathbf{\omega}_{p,t,k}^{r}\right),\ \forall k,r,\] (5.119c) \[\varphi\leq\mathbf{\lambda}_{k}^{r\top}\left(\mathbf{\widetilde{I}}_{k}^ {r}-\mathbf{\widetilde{H}}\mathbf{z}_{k}^{r^{*}}\right)-\sum_{t}\sum_{p}\left(\mathbf{ \widetilde{Q}}_{p,t}^{k,r\top}\mathbf{\vartheta}_{p,t,k}^{r}+\mathbf{\widetilde{q}}_{ p,t}^{k,r\top}\mathbf{\omega}_{p,t,k}^{r}\right),\ \forall k,r,\] (5.119d) \[\|\mathbf{\beta}_{p,t,k}^{r}\|_{2}\leq\mathbf{\gamma}_{p,t,k}^{r},\ \| \mathbf{\vartheta}_{p,t,k}^{r}\|_{2}\leq\mathbf{\omega}_{p,t,k}^{r},\ \forall p,t,k,r,\] (5.119e) \[\mathbf{\lambda}_{k}^{r}\geq 0\,\forall k,r,\quad\mathbf{\mu}\in \mathcal{M}.\] (5.119f) It should be noted that the master problem of the inner-loop C&CG, which is **F4**, also serves as the slave problem of the outer-loop C&CG. Meanwhile, the master problem of the outer-loop C&CG has already been given in **F2**. By far, (5.112) is readily solvable by calling the C&CG algorithm twice. The flowchart of the overall solution procedure is presented in Figure 5.4, which contains four iteration loops. 
In the first loop, starting with an arbitrary feasible decision (\(\mathbf{u}^{*},\mathbf{y}^{*}\) and \(\mathbf{\mu}^{*}\)), Algorithm 8 is called to solve the lower-level problem **F1** and it provides a primal cut (\(\mathbf{x}_{k}^{**}\) and \(\mathbf{z}_{k}^{**}\)) to the master problem of the inner C&CG, i.e., the second loop. Then, the second loop is executed also by Algorithm 8, which provides a primal cut (\(\mathbf{\mu}^{*}\)) to the outer C&CG. The master problem of outer C&CG is solved by Algorithm 7 to find optimal \(\overline{\mathbf{u}}\) and \(\overline{\mathbf{x}}_{k}^{r}\) in the third loop. Finally, outer C&CG, i.e., the fourth loop, is terminated with optimal day-ahead decision \((\mathbf{u}^{*},\mathbf{y}^{*})\), and real-time decisions \((\mathbf{x}_{k}^{*},\mathbf{z}_{k}^{*},\ \forall k)\) under the worst-case distribution \(\mathbf{\mu}^{*}\). ### 5.4 Simulation Results In this section, the effectiveness of the proposed two models and the performance of the quadruple-loop algorithm are illustrated by examining four different test systems. Two of them are distribution level and the remaining are transmission level. It should be noted that the robust day-ahead operation model with bidirectional gas contracting is applied for the Figure 5.4: The schematic diagram of the overall solution procedure. first two systems in subsections 5.4.2-5.4.6, while the last two are employed by the two-stage distributionally robust gas contracting in subsections 5.4.7-5.4.10. #### Test Systems Description The tests systems are as follows. 1. A \(13\)-Node PDN interacted with an \(8\)-Node gas system, demoted as **TS-I**, is employed to study the robust day-ahead operation model with bidirectional gas contracting. Figure 5.5 displays the test system topology. It has eleven power lines, one wind farm, one non-GPU, one GPU, seven power loads, four gas loads, one compressor, one P2G facility, and seven passive pipelines. We have one G2P contract for \(G2\), and one P2G contract for the P2G facility. From this topology, we have four fixed direction pipelines _p1_, _p2_, _p5_, and _p6_, and three bidirectional pipelines _p3_, _p4_, and _p7_. More details of the system, wind generation curves, parameters for the two algorithms, prices of gas, and penalties of non-served power loads, wind curtailment, and avoidance of P2G contracts can be found in Appendix B.2.1 and B.3.2. We consider that the time window is \(6\)h; therefore, the wind budget ranges from \(1\) to \(6\). In the following results, we consider the wind budget to be \(4\), which provides a feasible solution with a probability of greater than \(95\%\) against uncertainties [102]. Moreover, we represent cases based on the wind variation levels (WVLs), which represents the limits of the maximum relative deviations of wind generation uncertainties w.r.t. their predicted values. 2. A large-scale test system (\(123\)-Node PDN interacted with \(20\)-Node gas system), denoted as **TS-II**, is employed to study the scalability of the proposed algorithm at distribution level. Figure 5.5: Topology of the test system. 3. A \(5\)-Bus-\(7\)-Node system, denoted as **TS-III**, is employed to study the two-stage distributionally robust gas contracting. The topology of **TS-III** is shown in Figure 5.6. To construct the reference distribution \(\mathbf{\mu}^{0}\), wind outputs are assumed to follow a multivariate Gaussian distribution [208], [227], where their mean values can be found in B.1.1 and the standard deviations equal half of their means. 
The distribution is used to generate a set of historical data samples, which are consequently utilized to create a histogram with \(5\) pins [227]. Moreover, all data samples are checked to satisfy the wind farm power capacity, i.e. \[0\leq W_{e,t}^{s}\leq\overline{W}_{e},\ \forall e,t,s\] (5.120) 4. A \(118\)-Bus-\(20\)-Node system, denoted as **TS-IV**, is employed to study the scalability of the proposed algorithm at transmission level. Due to space limitation, please refer to Appendix B for detailed description of the selected test systems as well as the cost and algorithmic parameters. The numerical results are performed using MATLAB R\(2018\)a with Gurobi \(8.1.0\) and YALMIP toolbox [209] on a personal laptop with Intel(R) Core(TM) i\(5-3320\)M CPU and \(8.00\)GB RAM. #### Comparison with the IPS Model Physical and economic-based comparisons are performed between the IPS model and the proposed model to reveal the effectiveness of considering the GDN constraints in the EM of the PDN. **TS-I** is selected to be the coupled system. The IPS model optimizes the economic operation of the PDN without taking into account the effect on the gas system feasibility. Therefore, the gas system constraints in (5.14)-(5.88) for day-ahead and real-time stages are dropped in this model. The objective function is the same for both models by including all gas contracts to provide a fair comparison. The feasibility of the gas system is checked after Figure 5.6: The test system topology. identifying the day-ahead contracts by considering the firm gas required in the G2P contract and the scheduled gas in the P2G contract as a gas load and gas source in the gas system, respectively. In this comparison, we present four cases; Case \(1\) (normal WVL, normal gas load); Case \(2\) (\(\pm\)10% WVL, normal gas load); Case \(3\) (\(\pm\)20% WVL, normal gas load); and Case \(4\) (normal WPL, high gas load). Table 5.1 displays the firm and reserved gas for the G2P contracts and scheduled gas for the P2G contract as cumulative hourly values in the time window. The gas system feasibility is denoted by F, which could be feasible (Y) or infeasible (N). Based on the signed contracts, the gas system should be feasible under any wind uncertainty. In Case \(1\), the gas system is competent to supply/sink gas according to the requirements; therefore, the two models are feasible and provide the same total cost. The results from the two models are very close, in Case \(2\), however, the gas system is infeasible with the contracts generated from the IPS model. In Case \(3\), the increase of the WVL, the IPS model flunks to identify the suitable gas contracts to fulfill the requirements of the PDN. In Case \(4\), increasing the gas load leads to a stressed gas system, which has a priority for supplying gas load [56]. Therefore, the proposed model provides a high operation cost. Unlike the IPS model, the proposed model selects the best contracts considering the gas system feasibility and priority. Moreover, multilevel pricing or bidding structure could be applied to consider the G2P contracts in the gas system priority. #### Comparison Between the One-stage Contracting and IEGS Models The importance of considering gas contracts in EM is discussed in this section. An economic comparison with the IEGS models, co-optimizing power and gas systems irrespectively of gas contracts, is performed for different cases based on WVLs. 
To achieve a fair comparison, the objective of the IEGS model is focused on the power system operation only while neglecting the gas system production cost. Its objective also does not include the cost of day-ahead gas contracts. Therefore, (5.24)-(5.29) are dropped for this model. However, the PDN operator \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Case} & \multirow{2}{*}{Model} & \multirow{2}{*}{OC (\$)} & \multicolumn{2}{c}{G2P Contract (km\({}^{3}\))} & \multicolumn{2}{c}{P2G} & \multirow{2}{*}{Total cost} \\ & & & \multicolumn{1}{c}{Firm} & \multicolumn{2}{c}{Reserved} & & \multicolumn{1}{c}{Contract} & \multicolumn{1}{c}{F} \\ \cline{5-8} & & & +ve & -ve & (m\({}^{3}\)) & & (S) \\ \hline \multirow{2}{*}{1} & PSO & 391.36 & 2.507 & 0.00 & 0.00 & 64.982 & Y & 422.58 \\ \cline{2-8} & Proposed & 395.46 & 2.429 & 0.00 & 0.00 & 64.903 & Y & 427.02 \\ \hline \multirow{2}{*}{2} & PSO & 427.14 & 2.772 & 0.00 & 0.00 & 111.532 & N & 479.91 \\ \cline{2-8} & Proposed & 436.04 & 2.602 & 0.00 & 0.00 & 111.315 & Y & 489.65 \\ \hline \multirow{2}{*}{3} & PSO & 500.97 & 3.262 & 0.00 & 0.00 & 204.539 & N & 597.01 \\ \cline{2-8} & Proposed & 523.65 & 2.828 & 0.00 & 0.00 & 204.126 & Y & 621.36 \\ \hline \multirow{2}{*}{4} & PSO & 391.36 & 2.505 & 0.00 & 0.00 & 64.982 & N & 422.56 \\ \cline{2-8} & Proposed & 474.53 & 0.952 & 0.00 & 0.00 & 64.816 & Y & 506.62 \\ \hline \hline \end{tabular} \end{table} Table 5.1: Physical comparison with the PSO model can sign costly real-time contracts based on the GPUs adjustment and P2G output deviations under uncertainties. Table 5.2 presents three different cases based on the WVLs to analyze the two models in a cost-effective manner. Each model provides the optimal dispatch to minimize the total operating cost. It is clear that the cost of day-ahead contracts in the IEGS model is zero because it does not consider them. Therefore, its total cost is lower. However, any change in the firm gas in G2P contracts is considered as a real-time contract. In addition, any variations in the P2G outputs will be penalized to mitigate any disturbance in the interacted gas systems. The table shows the real-time contracts under the worst uncertainty set, which is obtained by applying the inner C&CG algorithm with the optimal solution \(\mathbf{w}^{*}\). In contrast, the proposed model considers all the aforementioned problems in the day-ahead stage, so it provides a more economical operation than the IEGS model. #### Impacts of the Penalty Coefficients The proposed model provides the ability for the PDN operator to control and identify the optimal scenario for wind generation management, i.e., curtailment or conversion to gas. In practice, gas prices and penalties of contract avoidance are driven from the gas system operator, and other penalties, in the proposed model, can be adjusted by the PDN operator. In this subsection, we present the effect of wind curtailment penalty \(C_{e}\) on the gas contracts and OCs. Whereas the non-served load penalty \(C_{d}\), prices of the adjustment in non-GPUs redispatch \(C_{u}^{+},C_{u}^{-}\), and gas prices \(C_{h},C_{h}^{+},C_{h}^{-},C_{j},C_{j}^{+},C_{j}^{-}\) are not changed in this comparison. Table 5.3 displays the influence of \(C_{e}\) on the P2G contracts and curtailed wind energy. In Case I, \(C_{e}\) is the same as the results above, i.e., $\(100\)/MW, which is greater than the penalties of the P2G contract ($\(0.4\)/\(0.8\)/m\({}^{3}\) for up/down deviations). 
With decreasing \(C_{e}\), the wind curtailment increases, whereas the P2G contract cost decreases, as shown in the table. The cost variation is not high (small system with the \(6\) h operation); however, it should be considered for a large PDN. Therefore, by the PDN operator experience, \(C_{e}\) can be adjusted to optimize and utilize the surplus wind energy. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{WVL\% Model} & \multirow{2}{*}{Model} & \multicolumn{3}{c}{Day-ahead (\$)} & \multirow{2}{*}{Total} & \multirow{2}{*}{Real-time (\$)} & \multirow{2}{*}{Actual} \\ \cline{3-4} \cline{6-9} & & \multicolumn{2}{c}{G2P} & & & & & \\ \cline{3-4} \cline{6-9} & & \multicolumn{2}{c}{Firm} & \multicolumn{2}{c}{Reserved} & & \multicolumn{1}{c}{(\$)} & & \\ \hline \multirow{2}{*}{0\%} & IEGS & 319.96 & - & - & 345.21 & 753.32 & 50.07 & 1098.5 \\ \cline{2-9} & Proposed & 364.38 & 0 & -9.73 & 427.02 & 0 & 27.27 & 427.02 \\ \hline \multirow{2}{*}{10\%} & IEGS & 378.66 & - & 0 & 409.98 & 478.62 & 95.01 & 888.6 \\ \cline{2-9} & Proposed & 390.31 & 0 & -16.69 & 489.65 & 0 & 88.81 & 489.65 \\ \hline \multirow{2}{*}{20\%} & IEGS & 380.54 & - & 0 & 503.76 & 737.19 & 40.97 & 1241.0 \\ \cline{2-9} & Proposed & 424.28 & 20.48 & -30.62 & 621.36 & 0 & 35.49 & 621.36 \\ \hline \hline \end{tabular} \end{table} Table 5.2: Economic comparison with the IEGS model under different WPLs #### Performance of the S-MISOCP Algorithm in RO Model The S-MISOCP algorithm is proposed in Chapter 3, where a detailed discussion is introduced, indicating its convergence and solution quality for deterministic IEGS optimization problems. In this subsection, it is compared with two widely used methods in the literature, namely, mixed-integer linear programming (MILP) formulation [66], [188] and MISOCP relaxation [40], [43], [204], [219], [228]. In the MILP model, the piece-wise linear approximation method (PLA) with optimal breakpoints is adopted for gas flow equation (5.43), i.e., \(f|f|\), and \(\pi^{2}\). In (5.40), the product \(vi\) is equivalent to \(0.25(U^{2}-L^{2})\), \(U=v+i\), and \(L=v-i\), so the PLA is adopted for \(p^{2},q^{2},U^{2}\), and \(L^{2}\). This method is highly influenced by the number of segments; therefore, two MILP models are applied with \(20\) (MILP_20) and \(40\) (MILP_40) segments. In the MISOCP relaxation method, quadratic equality constraints (power and gas flow equations) are reformulated as SOC constraints, and the resulted model will be as (5.59) for **P1** and (5.64) for **P4**, without using the proposed S-MISOCP algorithm. Table 5.4 displays the results for the discussed models under different four cases based on solving problems **P1** (5.58) and **P4** (5.63). With the optimal solution \(\mathbf{w}^{*}\), problem **P4** is optimized under the worst-case uncertainty set (\(\mathbf{u}=\mathbf{u}^{*}\)) and zero uncertainty sets (\(\mathbf{u}=\mathbf{0}\)) in Cases I and II, respectively. In Case I, compared with other models, the proposed algorithm provides the lowest maximum constraint violation (MCV) with the optimal objective cost. Note that MCV is calculated by (5.121) and (5.122) for problems **P1** and **P4**, respectively, and these values represent the relaxation gaps, please refer to the compact model (5.51) to drive MCV expressions. The relaxed MISOCP model usually introduces high maximum MCV in the power and gas flow equations. The high MCV from the latter might be originated from the ignorance of the gas system operation costs. 
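To make the notion of the relaxation gap concrete, the following minimal MATLAB sketch evaluates an MCV-type quantity for a single relaxed Weymouth cone over a few periods, in the spirit of the expressions (5.121)-(5.122) shown shortly below; the pipeline constant, flows, and pressures are illustrative placeholder values rather than data from the test systems.

```matlab
% Minimal sketch (illustrative placeholder values): relative relaxation gap
% of a relaxed Weymouth-type cone  f^2 + chi*pi_out^2 <= chi*pi_in^2,
% in the spirit of the MCV expressions (5.121)-(5.122).
chi    = 0.8;                 % pipeline constant (assumed)
f      = [3.1 2.4 1.7];       % pipeline flows over three periods (assumed)
pi_in  = [4.0 3.6 3.2];       % upstream pressures (assumed)
pi_out = [1.9 2.1 2.5];       % downstream pressures (assumed)

gap = zeros(size(f));
for t = 1:numel(f)
    lhs    = norm([f(t); sqrt(chi)*pi_out(t)]);  % ||H*x||_2 side of the cone
    rhs    = sqrt(chi)*pi_in(t);                 % h*x side of the cone
    gap(t) = rhs/lhs - 1;                        % relative slack of the cone
end
MCV = max(gap);   % close to zero means the relaxation is (numerically) tight
fprintf('Maximum constraint violation (relaxation gap): %.4f\n', MCV);
```

A value close to zero indicates that the second-order cone is active, i.e., the relaxation is exact for that constraint; the MCV figures reported below are the maxima of such quantities over all relaxed power and gas flow constraints.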
With the worst \(\mathbf{u}^{*}\) in Case II, the MILP_20 model yields an infeasible solution; however, MILP_40 achieves a reasonable cost with a \(0.3\)% MCV. Problem **P1** is solved with a deterministic uncertainty set in Cases III and IV, where \(\mathbf{u}^{*}\) equals zero and the worst-case set, respectively. \[\max\big{(}\frac{\mathbf{a}_{v,t}\mathbf{y}}{\parallel\mathbf{A}_{v,t}\mathbf{y}\parallel_{2}}-1,\forall t,v,\quad\frac{\mathbf{h}_{v,t}\mathbf{x}^{r}}{\parallel\mathbf{H}_{v,t}\mathbf{x}^{r}\parallel_{2}}-1,\forall v,t,r\big{)} \tag{5.121}\] \[\max\big{(}\frac{\mathbf{h}_{v,t}\mathbf{x}}{\parallel\mathbf{H}_{v,t}\mathbf{x}\parallel_{2}}-1,\forall v,t\big{)} \tag{5.122}\] \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{ \begin{tabular}{c} \(C_{e}\) \\ (\$/MWh) \\ \end{tabular} } & \multicolumn{2}{c}{P2G} & \multicolumn{2}{c}{Day-ahead (m\({}^{3}\))} & \multicolumn{2}{c}{P2G} & \multicolumn{2}{c}{Total} \\ & penalties & \multicolumn{2}{c}{(\$/m\({}^{3}\))} & \multicolumn{2}{c}{(\$)} & G2P & \multicolumn{2}{c}{P2G} & \multicolumn{2}{c}{(KWh)} & \multicolumn{2}{c}{(S)} \\ \hline 100 & 0.4/0.8 & 395.46 & 2429.2 & 64.90 & 0 & 27.2793 & 427.02 \\ \hline 40 & 2/4 & 394.50 & 2428.2 & 63.73 & 28 & 26.4180 & 536.74 \\ \hline 20 & 2/4 & 395.23 & 2428.4 & 60.22 & 150 & 12.2607 & 535.34 \\ \hline 10 & 2/4 & 422.23 & 2440.4 & 30.66 & 625 & 2.3358 & 525.34 \\ \hline \hline \end{tabular} \end{table} Table 5.3: Impact of the wind curtailment penalty \(C_{e}\) on the surplus wind energy The adaptive penalty rate method is applied to the simulations of Table 5.4. To show the effectiveness of the proposed adaptive penalty rate method, computational comparisons with the traditional fixed penalty rate are conducted. Note that the adaptive rate ranges from \(1.1\) to \(2\) based on the associated constraint violation (3.77). The traditional rate is fixed at \(1.5\), which is the average value of the range for the adaptive rate method. The penalties of Algorithm 5 are updated by (3.77) and (5.123) for the adaptive and fixed rates, respectively. Algorithm 5 is applied to solve problem **P1** with the worst uncertainty set (\(\mathbf{u}=\mathbf{u}^{*}\)), which is identified under different uncertainty budgets, similar to Case IV of Table 5.4. Table 5.5 shows the performance of Algorithm 5 with the adaptive and fixed penalty rates under different uncertainty budgets. From Table 5.5, it can be observed that the adaptive rate method outperforms the traditional one in all cases in terms of both solution quality and convergence speed, as it controls the penalties according to the violation of each constraint individually. In the traditional fixed penalty rate method, the penalty coefficient grows equally for each constraint in each iteration of Algorithm 5, which means the coefficient becomes relatively large within a couple of iterations and may make the algorithm overemphasize the weights of the penalty terms, resulting in poorer solution quality. From the last two columns of Table 5.5, the penalty coefficients in the adaptive penalty rate method are much lower than those in the traditional method; the major reason is that the penalty coefficient of a constraint stops increasing once its violation falls below the given threshold. \[\tau=\min\Big{[}\;\mu\tau,\;\;\tau^{max}\Big{]} \tag{5.123}\] where \(\mu\) is the fixed penalty growth rate.
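For illustration, the MATLAB sketch below contrasts the fixed-rate update (5.123) with an assumed form of the adaptive rule: based on the description above, the adaptive growth rate is taken to vary between \(1.1\) and \(2\) with the normalized constraint violation, and a coefficient is frozen once its violation drops below a threshold; the exact adaptive expression is (3.77) in Chapter 3, and all numbers here are placeholders.

```matlab
% Sketch of the two penalty-coefficient updates used in Algorithm 5.
% The adaptive rule below is an assumption based on the description in the
% text (rate in [1.1, 2], frozen once the violation is small enough); the
% exact expression is (3.77) in Chapter 3.  All values are placeholders.
tau_max  = 1e6;      % upper cap on the penalty coefficient
mu_fixed = 1.5;      % fixed growth rate (average of the adaptive range)
eps_viol = 1e-4;     % violation threshold below which a penalty is frozen

tau  = [0.1; 0.1; 0.1];      % current penalty per penalized constraint
viol = [0.20; 0.02; 5e-5];   % current relative constraint violations

% Fixed-rate update (5.123): every coefficient grows by the same factor.
tau_fixed = min(mu_fixed*tau, tau_max);

% Assumed adaptive update: rate grows with the violation, capped at 2, and
% a coefficient stays unchanged once its violation is below the threshold.
rate   = min(1.1 + 0.9*viol/max(viol), 2);       % growth rate in [1.1, 2]
tau_adaptive = tau;
active = viol > eps_viol;
tau_adaptive(active) = min(rate(active).*tau(active), tau_max);

disp(table(tau, viol, tau_fixed, tau_adaptive))
```

This mirrors the behaviour reported in Table 5.5: constraints whose violations are already small keep low coefficients under the adaptive rule, whereas the fixed rule inflates all coefficients uniformly.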
Rest parameters of the system, wind generation forecast curves, algorithms parameters, and contracts prices can be found in Appendix B. We consider a problem with \(6\) periods, where \(t=1\) to \(t=6\) are selected as the target slot. Figure 5.7 displays the iterations of algorithms and sequence of problems solved in the proposed quadruple-loop algorithm for the large-scale test system with uncertainty budget being \(4\). Starting with solving **P4** by Algorithm 5, which converges after \(6\) iterations, the inner C&CG algorithm takes \(2\) iterations to solve **P2**. The outer C&CG algorithm, which calls Algorithm 5 and inner C&CG algorithm four and five times, respectively, terminates after \(5\) iterations. Meanwhile, the execution time of problem **P1** increases along with the iteration index of outer C&CG due to the additional primal cuts of columns and constraints generated from previous iterations, which can be observed in Figure 5.7 as the execution time of each iteration of the second loop is approximated by the length of the red blocks. It should be noted that inner C&CG algorithm usually terminates after a few iterations (no more than \(2\)) in these simulations, owing to the relatively small number of binary variables used in the lower-level recourse problem (directional gas flow). Table 5.6 summarizes the simulation results after applying the proposed algorithm on the above system with different wind uncertainty budgets, namely \(0,2,4\), and \(6\). It can be observed that the quadruple-loop algorithm always converges in a relatively reasonable number of iterations for each loop, and the total execution time is agreeable, where the simulation platform is a personal laptop rather than a high-performance work-station. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{\(K^{T}\)} & Obj. & Time & \multicolumn{3}{c}{Algorithms Call} & Number \& [Iterations per call] \\ \cline{3-7} & (\(10^{2}\)S) & (s) & Outer C\& Algorithm 2 & Inner & Algorithm 2 \\ \cline{2-7} & 4.146 & 56.34 & 1\&[1] & 1\&[50] & 1\&[4] \\ \hline 2 & 4.494 & 841.56 & 1\&[4] & 3\&[55,66,30] & 4\&[2,2,2,2] & 8\&[5,4,4,4,10,5,6,3] \\ \hline 4 & 4.861 & 2249.3 & 1\&[5] & 4\&[50,63,50,45] & 5\&[2,2,2,2,1] & 9\&[6,8,5,8,6,10,7,9,6] \\ \hline 6 & 4.992 & 4114.8 & 1\&[7] & 7\&[48,60,14,8, & 7\&[2,2,2,2,2, 14\&[8,14,15,9,25,20, \\ & & & & & 24,15,39] & 2,2,2] & 17,23,26,18,9,10,5,9] \\ \hline \hline \end{tabular} \end{table} Table 5.6: Computation times for the large test system under four different uncertainty budgets \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Budget & Penalty & Obj. (S)\({}^{*}\) & Time (s) & Iter.\({}^{**}\) & \(\tau_{\min}\)\({}^{***}\) & \(\tau_{\max}\)\({}^{***}\) \\ \hline \multirow{2}{*}{2} & Adaptive & 391.434 & 20.2 & 15 & 3.24 & 143.30 \\ \cline{2-7} & Fixed & 391.934 & 29.3 & 18 & 26214.4 & 26214.4 \\ \hline \multirow{2}{*}{4} & Adaptive & 407.557 & 48.2 & 23 & 2.55 & 421.35 \\ \cline{2-7} & Fixed & 407.869 & 53.6 & 29 & 10\({}^{6}\) & 10\({}^{6}\) \\ \hline \multirow{2}{*}{6} & Adaptive & 435.295 & 30.2 & 20 & 2.51 & 233.43 \\ \cline{2-7} & Fixed & 438.541 & 34.5 & 22 & 10\({}^{6}\) & 10\({}^{6}\) \\ \hline \hline \end{tabular} \({}^{*}\)Objective value; \({}^{*}\) Iteration no.; \({}^{**}\) Minimum and maximum penalties at final iteration. 
\end{table} Table 5.5: Computational comparisons between the traditional and adaptive penalty rate methods Besides, based on the aforementioned analysis and simulation results, the following suggestions are made to reduce the computational costs: 1. For Algorithm 5: as the violations of the power or gas flow equations may still be relatively large after quite a few iterations due to the infeasibility of the initial point, we suggest (i) a high-quality initial point, obtained by adding the right-hand side of the exact MISOCP constraints, i.e., \(\lambda_{p}\sum_{t}\sum_{l}\hat{i}_{l,t}+\lambda_{g}\sum_{t}\max_{p}\hat{\pi}_{p,t}^{+}\), to the objective of the relaxed **P4** (5.64) and \(\lambda_{p}\sum_{t}\sum_{l}\left(\hat{i}_{l,t}+\sum_{r}i_{l,t}\right)+\lambda_{g}\sum_{t}\max_{p}\left(\hat{\pi}_{p,t}^{+}+\sum_{r}\pi_{p,t}^{+}\right)\) to the objective of the relaxed **P1** (5.59), where \(\lambda_{p}\) and \(\lambda_{g}\) are small factors. The same treatment has been applied in Algorithm 2 using equation (3.75a); (ii) a relatively high initial penalty coefficient, such as \(0.1\), to force the solution into a feasible region close to the one obtained from the relaxed problem; (iii) a lower solution quality requirement for the solver in the first few iterations of the MISOCP problems (before fixing the binary variables), achieved by increasing the relative and absolute optimality gaps and selecting a suitable time limit. The reason is that, in the first iterations, the initial point still needs to be adjusted, so it is not necessary to find the optimal solutions. A warm start is also recommended at these iterations by providing the solver with an initial guess obtained from the previous iterations. 2. For the nested C&CG: (i) reduce the uncertainty budget to decrease the number of algorithm iterations (e.g., the cases of Table 5.6) by analyzing the periods in which the outputs of wind generation are more likely to deviate significantly from their forecasted values based on historical data; (ii) select a suitable value of \(\overline{M}\) used in problem **P3** (5.68), as the algorithm is strongly influenced by this value, as discussed in Section 4.4.6, which provides additional suggestions for improving the NC&CG algorithm performance. Figure 5.7: Sequences of solving problems and algorithm iterations in the proposed quadruple-loop algorithm for the large test system with the wind uncertainty budget being \(4\). #### Comparison with SO and RO Models In this subsection, \(2000\) samples are generated to construct the reference distribution \(\mathbf{\mu}^{0}\). Then, a total of six sets of strategies (day-ahead decisions \(\mathbf{u}^{*}\) and \(\mathbf{y}^{*}\)) can be obtained from the SO (\(\sigma=0\)), the RO (\(\sigma=1\)), and the proposed DRO (\(\sigma=0.2,0.4,0.6,0.8\)) models. The total costs of the strategies from the SO and the RO models are \(\$1.113\times 10^{6}\) and \(\$1.310\times 10^{6}\), respectively. \(100\) random distributions are created under each ambiguity set (\(\sigma=0.2,0.4,0.6,0.8\)) to serve as the validation data. The performances of the strategies under the four validation distribution sets are summarized in Table 5.7, where the maximum and the average total costs under each validation set are listed. The proposed DRO outperforms both the SO and RO models under all sets of validation distributions. Compared with the SO model, the proposed model accounts for all distributions in the ambiguity set; therefore, it provides the optimal OC with the best \(\boldsymbol{y}^{*}\) to tackle any distribution of wind uncertainty.
In addition, compared with the RO model, which identifies a high OC to satisfy the \(100\%\) confidence level, the proposed model provides a less conservative decision that is optimal for each ambiguity set. Based on the above discussion, the proposed model has better performance on balancing the robustness and conservativeness of the dispatch strategy than the SO and RO models. #### Comparison Between the Two-stage Contracting and IEGS Models To highlight the necessity and effectiveness of gas contracts modeling, economic comparisons between the proposed model and the IEGS model are conducted. Specially, the DRO based IEGS model can be obtained by removing the contract related terms in the objective and the constraint set from the proposed model, where the production costs of the gas system are not included to provide a fair comparison by focusing only on the power system operational costs. It should be noted that the PSO is still able to sign a costly real-time purchase G2P gas contracts as well as a cheap real-time sale P2G gas contracts with the operation strategy from the IEGS model to control the regulation costs under the worst-case distribution. The costs of real-time G2P contracts can be calculated according to (5.124), while the income of real-time P2G contracts is defined in (5.125). \[\sum_{h}\sum_{t}\sum_{u\in\mathcal{U}_{n}(h)}\frac{\Phi}{\eta_{u} }\left(C_{h}^{2+}\max\{p_{u,t}-\hat{p}_{u,t},0\}+C_{h}^{2-}\max\{\hat{p}_{u,t} -p_{u,t},0\}\right), \tag{5.124}\] \[-\sum_{j}\sum_{t}\sum_{z\in\mathcal{Z}(j)}C_{j}^{2+}\Phi\eta_{z} p_{z,t}. \tag{5.125}\] Table 5.8 lists the results of four cases under different confidence levels and P2G capacities, where \(50\) samples are generated to construct the \(5\)-pin reference distribution. From Table 5.8, it can be observed that the proposed model outperforms the IEGS model in all four cases in terms of the out-of-pocket costs, as the PSO have to sign more expensive real-time G2P contracts, resulting in the increment of total costs, as well as cheaper real-time P2G contracts, leading to the decrement of the revenue, to mitigate the uncertainty of wind generation outputs, while it can have more reasonable contracts in the proposed model. In addition, the larger \(\beta\) or the small capacity of P2G facilities leads to higher total costs. From (5.101), the \(\sigma\) would increase along with \(\beta\), which suggests the ambiguity set would become more conservative if \(\beta\) increases, giving rise to higher total costs. Meanwhile, the impacts of confidence level \(\beta\) on the total costs are more significant than the capacity of P2G facilities, due to the fact that the PSO has an alternative means to deal with the excessive wind generation, which is to curtail it and pay the fine. #### Comparison Between Two-stage and One-stage Contracting Mechanisms In the sequel, the performance of the proposed two-stage contracting mechanism is compared with the one-stage contracting modeling (e.g. see [223]). In the one-stage mechanism, the PSO can only sign day-ahead gas contracts, which means the real-time contracts related-costs are removed from the objective and \(\rho_{h,t}^{2-},\rho_{h,t}^{2+},\triangle g_{j,t}^{+}\) and \(\triangle g_{j,t}^{-}\) are forced to zero. \(50\) samples are generated to construct the reference distribution \(\boldsymbol{\mu}^{0}\) and the confidence level \(\beta\) is set as \(0.9\). The results are gathered in Table 5.9, where different wind generation curtailment penalty coefficients \(C_{e}\) are tuned. 
The total wind curtailment over the day (\(\triangle w_{e}\)), the costs of gas contracts in the day-ahead (DA) and real-time (RT) stages and the expected total costs for the two mechanisms are listed in the table. Note that penalties \(C_{v}\), which are regulated by the PSO, control the excessive amount of wind power outputs to be curtailed or converted into gas. The curtailed wind generation would increase if the \(C_{e}\) decreases, as signing gas contracts would be less cost-effective than curtailing the wind generation. Moreover, the expected contracted values of gas in both day-ahead and real-time stages are decreasing along with \(C_{e}\) due to the fact that low penalty value would weaken the influence of wind uncertainties. Consequently, the expected total costs decline for both mechanisms with the \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{\(\beta\)\(\&\)} & \multicolumn{3}{c}{Day-ahead (\(10^{3}\$\))} & \multirow{2}{*}{Total cost} & \multicolumn{2}{c}{Real-time\({}^{*}\) (\(10^{3}\$\))} & \multirow{2}{*}{Actual cost} \\ \cline{3-3} \cline{5-10} & & & G2P & & & & & \\ \cline{3-3} \cline{5-10} & & Firm & Res.\({}^{**}\) & P2G & (\(10^{6}\$\)) & G2P & P2G & (\(10^{6}\$\)) \\ \hline 0.95 \& \multirow{2}{*}{\(\&\)} & IEGS & 381.53 & – & – & 0.97569 & 315.8 & -0.9 & 1.35052 \\ 100MW & & Proposed & 381.64 & 300.5 & -2.1 & 1.29891 & 0.002 & 0 & 1.29891 \\ \hline 0.99 \& \multirow{2}{*}{\(\&\)} & IEGS & 381.29 & – & – & 0.99794 & 435.2 & -0.9 & 1.43224 \\ 100MW & & Proposed & 381.27 & 321.8 & -2.2 & 1.32129 & 0 & 0 & 1.32129 \\ \hline 0.95\& \multirow{2}{*}{\(\&\)} & IEGS & 381.51 & – & – & 0.97569 & 320.1 & -1.2 & 1.29490 \\ 200MW & & Proposed & 398.39 & 55.9 & -3.1 & 1.28946 & 215.6 & 0 & 1.28946 \\ \hline 0.99 \& \multirow{2}{*}{\(\&\)} & IEGS & 372.01 & – & – & 0.98976 & 507.1 & -2.2 & 1.49470 \\ 200MW & & Proposed & 374.51 & 321.4 & -3.9 & 1.30911 & 0 & 0 & 1.30911 \\ \hline \hline \end{tabular} * Expected costs of the real-time contracts; \({}^{**}\) Res. is the reserved gas \end{table} Table 5.8: Comparison between the proposed two-stage contracting model and the IEGS model penalty reduction. The one-stage mechanism, which signs only day-ahead contracts and the real-time contracts are prohibited, provides larger expected total costs compared with the proposed two-stage mechanism as the latter has more flexibility to sign both day-ahead and real-time gas contacts and identifies the optimal decisions for the PSO. #### Scalability Tests of the Procedure with DRO Models In this subsection, the proposed methodology is applied on the large-scale test system **TS-IV** to evaluate its performance and scalability. Economic interactions between power and gas systems are formulated as ten G2P and four P2G gas contracts for the \(18\) GPUs and \(4\) P2G units, respectively. \(1000\) samples are generated to construct the reference distribution, and the confidence level is set at \(0.95\). The problem is considered with \(6\) periods and the target slots are from \(1\) to \(6\). Figure 5.8 displays the optimality and feasibility of the proposed quadruple-loop procedure. In Figure 5.8(a), the outer C&CG algorithm converges after only four iterations, where the inner C&CG and the S-MISOCP algorithm are called four and three times to provide the UB and LB of RC, respectively. The **F2** feasibility is guaranteed by the S-MISOCP, which decreases the maximum relative constraints violation (MRCV) to \(10^{-5}\). 
The first call of inner C&CG is under an arbitrary day-ahead decision and the remaining three calls are organized in Figure 5.8(b)-(d), respectively, where each call takes two iterations to terminate as the number of binary variables (bidirectional gas flow) used in the recourse problem is relatively small (according to the topology of the gas network, we have only four pipelines with bidirectional gas flow and the remaining \(17\) can be with fixed flow). The S-MISOCP is applied twice at each call to find a feasible solution of **F1**, and the first call is under an arbitrary \(\boldsymbol{\mu}\), which explains the iteration curve of S-MISOCP before the first iteration of inner C&CG in Figure 5.8(b)-(d). The MRCV of **F2** and **F1** are calculated by (5.126) and (5.127), respectively. \[\max\big{(}\frac{\boldsymbol{d}_{p,t}\boldsymbol{u}}{\|\boldsymbol{D}_{p,t} \boldsymbol{y}\|_{2}}\forall p,t,\frac{\boldsymbol{l}_{p,t}\boldsymbol{x}_{k }^{r}}{\|\boldsymbol{L}_{p,t}\boldsymbol{x}_{k}^{r}\|_{2}}\forall p,t,k,r \big{)}-1, \tag{5.126}\] \[\max\big{(}\frac{\boldsymbol{l}_{p,t}\boldsymbol{x}_{k}}{\| \boldsymbol{L}_{p,t}\boldsymbol{x}_{k}\|_{2}}\forall p,t,k\big{)}-1. \tag{5.127}\] Table 5.10 lists the simulation results after applying the proposed procedure on the **TS-IV** with different values of confidence level \(\beta\), namely \(0.95,0.97,0.98,\) and \(0.99\). It should note \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{\(C_{e}\)} & \multicolumn{4}{c}{One-step mechanism} & \multicolumn{4}{c}{Proposed two-step mechanism} \\ \cline{2-9} & \multirow{2}{*}{\(\Delta w_{e}^{**}\)} & \multicolumn{2}{c}{Contracts (\(\$10^{5}\))} & \multicolumn{2}{c}{Total\({}^{***}\)} & \multirow{2}{*}{\(\Delta w_{e}^{**}\)} & \multicolumn{2}{c}{Contracts (\(\$10^{5}\))} & \multicolumn{2}{c}{Total\({}^{***}\)} \\ \cline{2-9} & & DA & RT\({}^{***}\) & (\(\$10^{6}\)) & & DA & RT\({}^{***}\) & (\(\$10^{6}\)) \\ \hline \(50^{*}\) & 0 & 5.683 & 0 & 1.2028 & 1.33 & 3.989 & 1.636 & 1.1162 \\ \hline \(25\) & 82.72 & 5.716 & 0 & 1.1856 & 63.51 & 3.856 & 1.628 & 1.1083 \\ \hline \(10\) & 677.2 & 5.380 & 0 & 1.1148 & 677.2 & 3.673 & 1.050 & 1.0897 \\ \hline \(5\) & 677.2 & 5.380 & 0 & 1.0809 & 677.2 & 3.673 & 1.050 & 1.0558 \\ \hline \hline \end{tabular} * Base cost; * Cumulative curtailed wind power (MW-day); * * Expected value under worst-case \end{table} Table 5.9: Comparisons with between the proposed tow-stage and one-stage contracting mechanisms. that NC&CG algorithm usually terminates after a few iterations (no more than \(5\) for outer C&CG and \(3\) for inner C&CG), and the solution time for each algorithm is acceptable. It can be observed that the total execution time is mainly influenced by S-MISOCP algorithm, which appears in outer and inner iterations for **F2** and **F1**, respectively. Based on our experiences, the following recommendations, are pointed out to enhance the algorithm performance: 1. Select suitable coefficients in the penalty equation (C). As these coefficients may force the solver to focus on the violations rather than the main objective or drive the algorithm to execute more iterations to decrease MCRV, if their values are over-high or over-low, respectively. For example, see Table 5.11 in Cases #1. 2. Select a proper initial penalty. Poor choices of the initial penalty coefficient may lead to infeasibility issue or sub-optimal solutions. For example, see Table 5.11 in Cases #2. 
Figure 5.8: (a) RC obtained by the inner and outer C&CG and the MRCV of **F2** by S-MISOCP; (b)-(d) RC obtained by the inner C&CG (LB and UB) and the MRCV of **F1** by S-MISOCP for outer iterations (1)-(3), respectively. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{\(\beta\)} & \multirow{2}{*}{Obj. (\(10^{5}\) S)} & \multicolumn{3}{c}{Algorithms call number \(|\) Average time per call (s)} & \multirow{2}{*}{Total time} \\ \cline{3-3} \cline{5-7} & & Outer & S-MISOCP & Inner & S-MISOCP \\ & & C\&CG & for **F2** & C\&CG & for **F1** \\ \hline 0.95 & 8.053 & 1 \(|\) 1527.6 & 3 \(|\) 222.01 & 4 \(|\) 215.39 & 8 \(|\) 106.42 & 1527.6 \\ \hline 0.97 & 8.221 & 1 \(|\) 2018.9 & 3 \(|\) 238.48 & 4 \(|\) 325.86 & 8 \(|\) 65.93 & 2018.9 \\ \hline 0.98 & 8.684 & 1 \(|\) 1509.0 & 3 \(|\) 199.55 & 4 \(|\) 227.58 & 9 \(|\) 96.52 & 1509.0 \\ \hline 0.99 & 9.188 & 1 \(|\) 2415.1 & 4 \(|\) 109.52 & 5 \(|\) 395.39 & 13 \(|\) 116.18 & 2415.1 \\ \hline \hline \end{tabular} \end{table} Table 5.10: Computation times for the large-scale test system with different confidence levels. 3. Divide **F1** into K sub-problems for each wind output scenario, as defined in (5.128), instead of solving the overall problem directly. Therefore, each sub-problem has fewer numbers of variables and constraints. Note that the solver will be called \(K\) times to solve (5.128). In Cases #3 of Table 5.11, problem **F1** is solved with optimal values of \(\mathbf{u}^{*},\mathbf{y}^{*}\) and \(\mathbf{\mu}^{*}\) two times: (i) one-shot using formula (5.113), and (ii) k-shot using formula (5.128). \[\sum_{\forall k}\mu_{k}^{*}\min_{\mathbf{x}_{k},\mathbf{z}_{k},\mathbf{\overline{x}}_{k}} \mathbf{c}^{\top}\mathbf{x}_{k}+\sum_{\forall t}\sum_{\forall p}\tau_{p,t,k}\xi_{p,t,k}.\] (5.128) ### 5.5 Conclusions and Discussions With the increasing interactions between power and gas systems, energy contracts are desired to guarantee secure and reliable operations, as these systems are, in most cases, controlled by different operators. The integration of variable and uncertain wind power generation into power systems makes the contracting even more challenging. This chapter proposes two different operation models for the integrated electric-gas systems from the perspective of the PSO, where bidirectional contracts, including P2G and G2P, are mathematically formulated. The first model is a robust energy management (EM) model for the power distribution network against wind generation uncertainty, where both the physical, through the modeling of the gas system operation constraints, and the economic, by the modeling of bidirectional energy trading contracts, interactions with the gas system are considered. Mathematically, the proposed robust EM problem suggests a two-stage programming, where the summation of the day-ahead operation costs and the worst-case real-time regulation costs is minimized. To guarantee the robustness of the EM strategy, the contract for the reserved gas, which would be utilized for mitigating wind generation outputs deviation, is determined day-ahead along with the contract for firm energy. To tackle the computational challenge brought by the nonconvex Weymouth equations in the two decision-making stages, a quadruple-loop solution procedure is devised, including two C&CG loops and two S-MISOCP loops, through which a robust, feasible and nearly optimal solution can be obtained. 
Numerical simulations \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Cases & \(\overset{\rightarrow}{\mu}\), \(\underline{\mu}\), \(\sigma\) & \(t^{(0)}\) & Case name & Objective & Iterations & Solver time (s) \\ \hline \multirow{4}{*}{\#1} & \(10,5,10^{4}\) & – & – & 805281.42 & 16 & 625.14 \\ \cline{2-7} & \(\overset{\ast}{2}\) & \(3,1.5,10^{4}\) & – & – & 805277.34 & 10 & \(\overset{\ast}{543.52}\) \\ \cline{2-7} & \(2,1.2,10^{3}\) & – & – & 805289.89 & 21 & 692.19 \\ \hline \multirow{4}{*}{\#2} & – & \(0.01\) & – & \multicolumn{3}{c}{Does not converge} \\ \cline{2-7} & – & \(\overset{\ast}{0.1}\) & – & 805277.34 & 10 & \(\overset{\ast}{543.52}\) \\ \cline{2-7} & – & \(10\) & – & 805279.55 & 25 & 727.55 \\ \hline \multirow{2}{*}{\#3} & – & – & One-shoot & 140361.69 & 31 & 196.41 \\ \cline{2-7} & – & – & \(k\)-shoot & 140361.69 & 31 & 114.39 \\ \hline \multicolumn{7}{l}{* default values; Cases \#1 and \#2 are obtained by **P2**, and Cases \#3 by **P1**} \\ \end{tabular} \end{table} Table 5.11: Computational performance of S-MISOCP algorithm under different parameters with \(\beta=0.95\) and \(S=1000\) confirm that the proposed model outperforms both the IPS and IEGS models. The impact of gas prices and contract avoidance penalties on the wind energy control (curtailment or methanation) are investigated. The effectiveness of the proposed methods is validated by the comparisons with other solution methods in terms of the computation performance as well as the solution quality. The second model is a distributionally robust two-stage contracting model, where bidirectional contracts, including P2G and G2P, can be signed in both day-ahead and real-time decision-making stages. The physical interactions between the power and gas systems, such as gas consumed by GPUs and electricity used by P2G facilities, would follow the signed contracts. The quadruple-loop solution procedure, which is used above for the RO model, is designed for this model to solve the DRO model with \(K\) clusters of wind power outputs. Simulation results validate: i. the effectiveness of proposed DRO model over the SO and RO based ones; ii. the advantage of the two-stage contracting model over the one-stage one; iii. the scalability of the proposed solution methodology in transmission/distribution level IEGS. Future works include identifying the proper prices of gas contracts to guarantee the optimal revenue for gas systems instead of using predetermined ones, adopting extensive distributional uncertainty sets such as Kullback-Leibler divergence and Wasserstein distance based ones, and incorporating more types of uncertainties. ## Chapter 6 Robust Operational Equilibrium for Coupled Electricity and Gas Markets Chapters 4-5 proposed resilient-economic coordinated robust operation models for the interdependent power and gas systems, while formulating the physical and economic interactions between the two systems. These models focus on modeling the challenges in decision-making framework as well as developing solution methodologies to overcome the computational difficulty in the solving the non-convex nonlinear two-stage optimization problems, where energy and reserve prices are predetermined by system operators. Currently, this chapter seeks to derive the optimal values of these prices to complete the proposed models' compatibility to be applied in the existing industrial practice. 
The increasing integration of uncertain and volatile renewable power generation (RPG) poses challenges not only to the operation of the interdependent electricity and gas systems but also their coupled markets. In this Chapter, a robust operational equilibrium solution method for the interactive markets of power and gas systems is proposed, where the bidirectional interactions include both firm energy and reserve contracting, and the impacts of the uncertainties of wind generation outputs on the two markets are characterized. The line pack as well as bidirectional flow characteristics of gas pipelines are depicted in the gas system model so as to improve its operational flexibility. To guarantee the robustness of market equilibrium against uncertainties, the power and gas market clearing models become two-stage robust ones. Column-and-constraint generation (C&CG) based and the best response algorithms are devised to clear the two markets as well as to coordinate them, respectively. Simulation results validate the effectiveness of the proposed robust operational equilibrium model and the performance of the devised solution methodology. This work is submitted for publication as * Ahmed R. Sayed, Cheng Wang, Wei Wei, Tianshu Bi, and Mohammad Shahidehpour. "Robust Operational Equilibrium for Electricity and Gas Markets Considering Bilateral Energy and Reserve Contracts." Submitted for publication to IEEE Transactions on Power Systems. ## Chapter 6 Robust Operational Equilibrium for Coupled Electricity and Gas Markets ### 6.1 Introduction The extensively strengthened interdependencies between the power and gas systems not only suggest potential economic and environmental benefits for modern society but also can enhance the operational flexibility of both systems1-2, which is crucial to accommodate the uncertain renewable power generation (RPG)3. Despite these interdependencies, as discussed earlier, power and gas systems are operated independently in most occasions [45]. Therefore, the co-optimization manner in the literature [66], [195] may not be a realistic way to guide the operation of the two systems. Moreover, new challenges have appeared on pricing the resources which are used to mitigate the uncertainties of RPG in both electricity and gas markets [54]. In this regard, the clearing mechanisms of power and gas markets in the uncertain and interactive environment need to be revisited. Footnote 1: [https://www.pjm.com/markets-and-operations.aspx](https://www.pjm.com/markets-and-operations.aspx) Footnote 2: [https://www.eia.gov/todayinenergy/detail.php?id=34612](https://www.eia.gov/todayinenergy/detail.php?id=34612) Footnote 3: [https://www.eia.gov/todayinenergy/detail.php?id=41533](https://www.eia.gov/todayinenergy/detail.php?id=41533) Recently, some inspiring studies investigated the interdependency of power and gas systems from the market perspective. Two market coordination methodologies for power and gas systems were proposed in [229] to minimize the operational costs. A bi-level optimization model was proposed in [123] to maximize the profits of the offering companies considering the demand response mechanism, where the steady-state model of the gas system was adopted. This model could be extended to identify the optimal strategies of multiple offering companies, as shown in [154] and [49]. In [108] a coordinated market mechanism considering gas dynamics is proposed and solved by a sequential convex optimization. Ref. 
[44] proposed a methodology to identify the operational equilibrium of the integrated power-gas market with limited information exchange, where the Karush-Kuhn-Tucker (KKT) system of the model was derived. A price-based coordinated market mechanism was developed in [230] to clear the day-ahead markets of the two independent systems. It should be highlighted that the aforementioned works neglect the uncertainties of RPG during the decision-making process. To handle the uncertainties in the integrated power and gas systems, different optimization approaches have been adopted by the pertinent literature, such as stochastic optimization (SO) and robust optimization (RO), especially in operation and planning problems. However, limited work has been conducted to address the issue of operating the non-deterministic interactive markets, where SO is the common choice of the relevant ones [223], [231], [232]. Concretely, the focuses of [223], [231], [232] are to minimize the total expected costs of the integrated market of power and gas systems, to allocate the reserves considering the gas system transmission capacities, and to demonstrate the value of P2G in practical cases, respectively. It should be noted that the market clearing results of those works depend on the predetermined scenarios of RPG, which might be over-optimistic if the scenarios are insufficient, yet a large number of scenarios would lead to huge computation burden. The greatest merit of the RO approach is that it guarantees the feasibility of the solution against all the realizations of uncertainties within the prescribed set. Therefore, it has driven more attention in electricity market problems with uncertainties [54], [55], [233]. Based on the above discussion, there are significant research gaps between the existing works and the practical application, that need to be fulfilled. First, the core assumption in the non-deterministic market models is that one utility has full control and operation authority over the power and gas systems. However, in industrial practice, there are significant institutional and administrative barriers to operate the two systems in a holistic manner [44]. Second, no attempt has been found in the literature that adopts the RO approach to analyze the equilibrium between the two markets, where the main difficulty is how to reflect the impacts of power system uncertainties on the gas system, and vice versa. Third, similar to the up-to-date researches in the electricity market [54], [55], the gas system uncertainties introduced by GPUs demands need to be considered in its pricing scheme. In the following sections, a robust operational equilibrium seeking framework for electricity and gas markets considering bilateral energy trading is proposed. The proposed framework considers that the two markets are independently operated and allows limited information exchange, including only the prices and demands of both systems for contract agreements. In the electricity market, robust operation strategies including the gas consumption of GPUs during day-ahead and real-time stages against uncertainties of RPG as well as the electricity prices are determined. In the gas market, a robust production schedule is devised against the deviation of gas demands caused by the reserve utilization of GPUs, where the gas prices for firm energy and reserves can be obtained as well. 
The electricity and gas markets are cleared by column-and-constraint generation (C&CG) and nested column-and-constraint generation (NC&CG) algorithms, respectively. To deal with the nonconvex Weymouth equation in the gas market, the relaxation and the sequential penalty procedure are applied to guarantee the solution feasibility. Finally, the operational equilibrium between the two markets is tackled by the best-response decomposition (BRD) algorithm. In light of this discussion, the innovations are multi-fold: 1. A robust operational equilibrium for the coupled electricity and gas markets is characterized considering the uncertainties of RPG as well as bidirectional energy and reserve contracts. 2. Inspired by [54] and [55], the marginal energy and reserve prices for the electricity and gas markets are derived based on the cost causation principle to reflect the impacts of uncertainties. 3. The BRD algorithm is proposed to identify the characterized operational equilibrium, where the electricity and gas markets are separately cleared by the C&CG and NC&CG algorithms, respectively. 4. The superiority of the robust operational equilibrium over the deterministic one, its effectiveness under limited data exchange, the importance of considering the gas dynamics, and the performance of the solution procedure have been verified by numerical results. ### 6.2 Mathematical Formulation #### Pool-based Market Mechanism Figure 6.1 displays the overall schematic diagram for the coupled electricity-gas operation. Figure 6.1: Market mechanism for the coupled power and gas systems. In the electricity market, the electricity market operator (EMO) decides the optimal robust dispatch strategy for all generators and signs the best gas contracts (GCs) for GPUs, considering the electricity consumed by P2G units and the uncertainties of RPG. It should be noted that GCs are signed as two sub-contracts, as discussed in Chapters \(4\) and \(5\): i) the firm gas contract, which provides the required gas amounts for GPUs under the forecasted outputs of RPG in the day-ahead stage; ii) the reserved gas contract, which defines the real-time gas consumption considering the utilization of the upward and downward reserves of GPUs. The prices of GCs are obtained from the gas market clearing results, namely, the locational marginal firm gas prices (LMFGPs) and the locational marginal reserved gas prices (LMRGPs). In the gas market, the gas market operator (GMO) aims to find the robust gas production schedule against the uncertainties from the gas demands of GPUs. The GMO also identifies the best electricity contracts (ECs) to supply the P2G facilities according to the locational marginal electricity prices (LMEPs), which are received from the EMO. Some assumptions are listed as follows to simplify the mathematical formulation. 1. In general, (i) uncertainties originate only from RPG, and energy demands are inelastic; (ii) the electricity and gas markets are cleared at the same time [44], [49], [108], [223]. 2. In the electricity market, (i) the lossless DC power flow model is adopted [44], [45], [54], [123], [229], [230]; (ii) unit commitment (UC) decisions are known [49]. 3. In the gas market, (i) the approximated gas line pack model is adopted; (ii) the simplified compressor and P2G models are employed [44], [45], [49], [108], [123], [154], [229]; (iii) gas storages are non-strategic components (in a closed state) [49], [108]; (iv) P2G units are not reserve providers in the electricity market.
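Before the contracting and market clearing models are detailed, the coordination implied by Figure 6.1 can be summarized by the structural MATLAB sketch below; the two clearing functions are trivial stand-ins used only to make the best-response loop executable (the actual clearing problems are formulated in Sections 6.2-6.3), and all numbers are assumed for illustration.

```matlab
% Structural sketch of the best-response coordination between the EMO and
% the GMO suggested by Figure 6.1.  The two "clearing" functions below are
% toy stand-ins for the robust market clearing problems, not the models
% themselves; H, T, the tolerance, and all coefficients are assumed values.
H = 2; T = 4;                    % number of gas contracts and time periods
mu = ones(H, T);                 % initial guess of the gas prices
tol = 1e-3; maxIter = 50;

for k = 1:maxIter
    [rho, beta] = clearElectricityMarket(mu);   % EMO step: GC amounts, LMEPs
    mu_new      = clearGasMarket(rho, beta);    % GMO step: updated gas prices
    if max(abs(mu_new(:) - mu(:))) < tol
        mu = mu_new;  break;                    % price equilibrium reached
    end
    mu = mu_new;
end
fprintf('Converged after %d best-response iterations.\n', k);

% --- toy stand-ins (placeholders, not the actual clearing models) ---------
function [rho, beta] = clearElectricityMarket(mu)
    rho  = 10./(1 + mu);           % contracted gas decreases with its price
    beta = 20 + 0.1*sum(rho, 1);   % toy locational marginal electricity price
end
function mu_new = clearGasMarket(rho, beta)
    mu_new = 0.5 + 0.05*rho + 0.001*beta;   % toy gas price response
end
```

At a fixed point of this loop neither operator changes its prices given the other's decisions, which is the operational equilibrium sought in this chapter.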
#### Bilateral Energy and Reserve Contracting Economical interactions between the two markets are modeled as bidirectional energy transactions. According to [223], day-ahead energy contracts are more convenient and cheaper than real-time ones, therefore, only day-ahead contracts are considered in this work. ##### Gas Contrcts modeling In the electricity market, GCs are determined based on the gas prices received from the GMO. Under the forecasted outputs of RPG, the EMO signs firm GCs on the basis of base-case outputs of GPUs, which are defined in (6.1). Besides, GCs for reserved gas should satisfy the fluctuations of gas demands of GPUs in the real-time stage, as described by (6.2) and (6.3). \[\rho_{h,t}=\sum_{u\in\mathcal{U}_{g}(h)}\Phi\hat{p}_{u,t}/\eta_{u}, \forall h,t, \tag{6.1}\] \[0\leq\rho_{h,t}^{-},\rho_{h,t}^{+},\forall h,t,\] (6.2) \[-\rho_{h,t}^{-}\leq\sum_{u\in\mathcal{U}_{g}(h)}\Phi(p_{u,t}- \hat{p}_{u,t})/\eta_{u}\leq\rho_{h,t}^{+},\forall h,t, \tag{6.3}\] where \(u\), \(h\) and \(t\) are indices of generators, GCs and time periods, respectively; \(\rho_{h,t}\) and \(\rho_{h,t}^{-}/\rho_{h,t}^{+}\) are the contracted firm and upward/downward reserved gas amounts of GC, respectively; \(\Phi\) is the power-to-gas conversion factor; \(\eta_{u}\) is the generation efficiency of GPU; \(\mathcal{U}_{g}(h)\) is a subset of GPUs listed in contract \(h\); \(\hat{p}_{u,t}/p_{u,t}\) is the outputs of GPUs in day-ahead/real-time stages. ##### Electricity Contracts Modeling In the gas market, ECs are optimized based on the electricity prices received from the EMO. In this study, two choices are provided for the excessive outputs of RPG: curtailed by the management sector of RPG; consumed by the P2G units. The contracted electricity consumed by P2G units, denoted as \(p_{j,t}\), is defined in (6.4), where \(j\) and \(z\) are the indices for ECs and P2G units, respectively; \(\mathcal{Z}(j)\) is a subset of P2G units; \(\varrho_{z,t}\) denotes the produced gas from P2G; \(\eta_{z}\) is the production efficiency. Gas production from P2G units is limited by their capacities in (6.5), where \(\underline{\varrho}_{z}/\overline{\varrho}_{z}\) is the lower/upper production capacity. \[p_{j,t}=\sum_{z\in\mathcal{Z}(j)}\varrho_{z,t}/(\Phi\eta_{z}), \forall j,t, \tag{6.4}\] \[\underline{\varrho}_{z}\leq\varrho_{z,t}\leq\overline{\varrho}_{ z},\forall z,t \tag{6.5}\] #### Robust Clearing Model of the Electricity Market The operation goal of the EMO is expressed in (6.6), which is to minimize the sum of total day-ahead operational costs and the worst-case real-time regulation costs. In (6.6), the first five terms are the generation cost of non-GPUs, the cost of GCs and penalties of power load shedding, respectively, and the last four terms express the costs of non-GPUs re-dispatching as well as penalties of wind curtailment and power load shedding. It should be noted that the real-time adjustment costs of GPUs are included in the day-ahead GC costs. 
\[\min_{\mathbf{y}}\ \sum_{t}\Big{[}\sum_{u\in\mathcal{U}_{n}}C_{u}(\hat{p }_{u,t})+\sum_{h\in\mathcal{H}}(\mu_{h,t}\rho_{h,t}+\mu_{h,t}^{+}\rho_{h,t}^{+ }+\mu_{h,t}^{-}\rho_{h,t}^{-})+\sum_{d\in\mathcal{D}_{P}}C_{n}\triangle\hat{p}_{ n,t}\Big{]}\] \[+\ \max_{\mathbf{\xi}\in\Upsilon}\ \min_{\mathbf{x}}\ \sum_{t}\Big{[}\sum_{u\in \mathcal{U}_{n}}(C_{u}^{+}\triangle p_{u,t}^{+}+C_{u}^{-}\triangle p_{u,t}^{- })+\sum_{e\in\mathcal{E}}C_{e}\triangle p_{e,t}+\sum_{d\in\mathcal{D}_{p}}C_{n }\triangle p_{n,t}\Big{]} \tag{6.6}\] In (6.6), \(\mathbf{y}\) and \(\mathbf{x}\) are the day-ahead and real-time decision vectors, respectively, and the uncertainty \(\mathbf{\xi}\) follows a predefined uncertainty set \(\Upsilon\); \(C_{u}(.)\) is the convex cost function of non-GPUs. \(\mu_{h,t}\) and \(\mu_{h,t}^{+}/\mu_{h,t}^{-}\) are the contracted prices for firm and reserved gas, respectively; \(\triangle p_{u,t}^{-}/\triangle p_{u,t}^{+}\) is the downward/upward adjustments of non-GPU outputs and its penalty is \(C_{u}^{-}/C_{u}^{+}\); \(\triangle\hat{p}_{n,t}/\triangle p_{n,t}\) is the load shedding in the day-ahead/real-time stage and its penalty is \(C_{n}\); \(\triangle p_{e,t}\) is the curtailment of RPG and its penalty is \(C_{e}\). The operational constraints of the power system are defined in (6.7)-(6.13). Generation and ramping up/down capacities for both GPUs and non-GPUs are presented in (6.7)-(6.9). In (6.10)-(6.12), bus angle \(\hat{\theta}_{n,t}\), power flow \(\hat{p}_{l,t}\), and load shedding limits are defined, respectively. Considering the potential infeasibility caused by aggressive ECs, the bus balancing equation (6.13) is relaxed by adding the load shedding term, where \(\hat{W}_{e,t}\) is the predicted outputs of wind generation and \(p_{n,t}=\sum_{d\in\mathcal{D}_{p}(n)}P_{d,t}+\sum_{j\in\mathcal{J}(n)}p_{j,t}, \ \forall n,t\) is the aggregated electricity demand. \[U_{u,t}\underline{P}_{u}\leq\hat{p}_{u,t}\leq U_{u,t}\overline{P} _{u},\forall t,u\in\mathcal{U},\mathcal{U}=\mathcal{U}_{n}\cup\mathcal{U}_{g}, \tag{6.7}\] \[\hat{p}_{u,t}-\hat{p}_{u,t-1}\leq U_{u,t}R_{u}^{+}+(1-U_{u,t+1}) \overline{P}_{u},\ \forall t,u\in\mathcal{U},\] (6.8) \[\hat{p}_{u,t-1}-\hat{p}_{u,t}\leq U_{u,t+1}R_{u}^{-}+(1-U_{u,t}) \overline{P}_{u},\ \forall t,u\in\mathcal{U},\] (6.9) \[-\pi\leq\hat{\theta}_{n,t}\leq\pi,\ \forall t,n,\hat{\theta}_{1,t}=0, \ \forall t,\] (6.10) \[|\hat{p}_{l,t}=(\hat{\theta}_{m,t}-\hat{\theta}_{n,t})/x_{l}|\leq \overline{P}_{l},\ \forall l,t,(m,n)\in l.\] (6.11) \[0\leq\triangle\hat{p}_{n,t}\leq p_{n,t},\ \forall n,t,\] (6.12) \[\sum_{u\in\mathcal{U}(n)}\hat{p}_{u,t}+\sum_{e\in\mathcal{E}(n)} \hat{P}_{e,t}+\sum_{l\in\mathcal{L}_{1}(n)}\hat{p}_{l,t}-\sum_{l\in\mathcal{L}_ {2}(n)}\hat{p}_{l,t}=P_{n,t}-\triangle\hat{p}_{n,t},\ \forall n,t. 
\tag{6.13}\] In (6.7)-(6.13), \(U_{u,t}\) is a predetermined unit commitment decision; \(\underline{P}_{u}/\overline{P}_{u}\) and \(R_{u}^{-}/R_{u}^{+}\) are the minimum/maximum generation limits and ramping down/up limits of generators, respectively; \(\overline{P}_{l}\) and \(x_{l}\) are the power flow capacity and the reactance of power line \(l\); \(n\) and \(m\) are indices of power buses; \(\mathcal{U}(n),\mathcal{E}(n)\), and \(\mathcal{D}_{p}(n)\) are subsets of generators, wind farms, power lines and power loads connected to bus \(n\), and \(\mathcal{J}(n)\) is a subset of ECs, in which P2G units are supplied from bus \(n\); and \(\mathcal{L}_{1}(n)/\mathcal{L}_{2}(n)\) is a subset of power lines connected with the end or start terminals. In this work, a box-like uncertainty set is employed [54], as shown in (6.14). \[\Upsilon:=\left\{\begin{array}{c}\boldsymbol{\xi}=(\overline{\xi}_{e,t}, \underline{\xi}_{e,t})_{\forall e,t}\left|\sum_{e}(\overline{\xi}_{e,t}+ \underline{\xi}_{e,t})\leq\Gamma^{e},\forall t,\\ \left|\sum_{t}(\overline{\xi}_{e,t}+\underline{\xi}_{e,t})\leq\Gamma^{t}, \forall e,\\ \overline{\xi}_{e,t}+\underline{\xi}_{e,t}\leq 1,\\ \left|\overline{\xi}_{e,t},\underline{\xi}_{e,t}\in\{0,1\},\forall e,t \end{array}\right.\right\}\right. \tag{6.14}\] where \(\overline{\xi}_{e,t}\) and \(\underline{\xi}_{e,t}\) are uncertainty variables; \(\Gamma_{1}\) and \(\Gamma_{2}\) are the spatial and temporal uncertainty budgets, respectively. The actual outputs of wind generation is defined by (6.15), where \(\overline{P}_{e,t}\) and \(\underline{P}_{e,t}\) are its upper and lower bounds, respectively. \[p_{e,t}=\hat{P}_{e,t}(1-\overline{\xi}_{e,t}-\underline{\xi}_{e,t})+ \overline{P}_{e,t}\overline{\xi}_{e,t}+\underline{P}_{e,t}\underline{\xi}_{e,t},\forall e,t \tag{6.15}\] In the real-time stage, the operational constraints are similar to those in the day-ahead stage. Specifically, some of them can be directly obtained by replacing the day-ahead variables in (6.7)-(6.13) with the real-time ones, i.e., by removing the hat symbols. Here, the overlapped constraints are not shown. Further, the real-time power balancing equation allows wind generation curtailment, as shown in (6.16). Accordingly, the upper and lower boundaries for wind generation curtailment is added as (6.17). The outputs adjustment of non-GPUs can be measured by (6.18). Note that the real-time GPUs outputs has been restricted in (6.3). \[\sum_{u\in\mathcal{U}(n)}p_{u,t}+\sum_{e\in\mathcal{E}(n)}(p_{e,t} -\triangle p_{e,t})+\sum_{l\in\mathcal{L}_{1}(n)}p_{l,t}-\sum_{l\in\mathcal{L} _{2}(n)}p_{l,t}=p_{d,t}-\triangle p_{d,t},\ \forall n,t, \tag{6.16}\] \[0\leq\triangle p_{e,t}\leq p_{e,t},\forall e,t,\] (6.17) \[-\triangle p_{u,t}^{-}\leq p_{u,t}-\hat{p}_{u,t}\leq\triangle p_{ u,t}^{+},\forall u,t. \tag{6.18}\] #### Robust Clearing Model of the Gas Market Similarly, the goal of the GMO is to minimize the energy supply costs of the gas system, as shown in (6.19), which includes the firm production and reserved gas costs, costs of ECs as well as penalties of GCs avoidance, gas load shedding and real-time gas imbalance. In this work, both the gas wells and gas loads can be gas reserve providers. 
\[\min_{\Psi_{1}}\ \sum_{t}\Big{[}\sum_{w\in\mathcal{W}}(C_{w}\hat{f}_{w, t}+C_{w}^{+}R_{w,t}^{+}+C_{w}^{-}R_{w,t}^{-})+\sum_{j\in\mathcal{J},n\in j} \beta_{n}p_{j,t}+\sum_{h\in\mathcal{H}}C_{h}\triangle\rho_{h,t}\] \[+\sum_{i\in\mathcal{I}}(C_{i}\triangle f_{i,t}+C_{i}^{+}f_{i,t}^{ +}+C_{i}^{-}f_{i,t}^{-})\Big{]}+\max_{\boldsymbol{g}}\min_{\Psi_{2}}\sum_{t} \sum_{i\in\mathcal{I}}\overline{C}_{i}(V_{i,t}^{+}+V_{i,t}^{-}) \tag{6.19}\] In (6.19), \(w\) and \(i\) are the indices of gas wells and nodes, respectively; \(\Psi_{1}\) and \(\Psi_{2}\) are the day-ahead and real-time decision variable sets, respectively; \(\boldsymbol{g}=\{g_{h,t},\ \forall h,t\}\) represents the uncertain gas consumption of the power system and \(g_{h,t}\) is limited by the GCs of reserved gas, as described by (6.20); \(\hat{f}_{w,t}\) and \(R^{+}_{w,t}/R^{-}_{w,t}\) are the gas production and the up/down reserves of gas well, and their prices are \(C_{w}\) and \(C^{+}_{w}/C^{-}_{w}\), respectively; \(\beta_{n}\) is the LMEP at bus \(n\); \(\triangle\rho_{h,t}\) denotes the violation of GCs and its penalty is \(C_{h}\); \(\triangle\hat{f}_{i,t}\) and \(f^{+}_{i,t}/f^{-}_{i,t}\) are the gas load shedding and the provided up/down reserves, and their prices are \(C_{i}\) and \(C^{+}_{i}/C^{-}_{i}\), respectively; the real-time nodal gas imbalance is \(V^{+}_{i,t}/V^{-}_{i,t}\), which is penalized with \(\overline{C}_{i}\). \[-\rho^{-}_{h,t}\leq g_{h,t}\leq\rho^{+}_{h,t},\ \forall h,t. \tag{6.20}\] The operational constraints of the gas system in the day-ahead stage are defined in (6.21)-(6.33). The gas production capacities and nodal pressure boundaries are described by (6.21) and (6.22), respectively. Terminal pressures and gas flow of compressors are expressed in (6.23)-(6.24). The line pack can be calculated by (6.25) and its continuity equation is depicted by (6.26). Note that the gas nodal balancing equation (6.27) is relaxed by adding unserved gas amounts \(\triangle\rho_{h,t}\) of the signed GCs and replacing the total gas loads \(F_{i,t}\) with the bided one \(\hat{f}_{i,t}\), to recover the operational feasibility. The served gas load is bounded by (6.28). Weymouth equation is defined in (6.29)-(6.30). Unserved gas amounts of GCs are limited in (6.31), and lower boundaries of unserved gas loads and reserves are defined in (6.32) and (6.33), respectively. 
\[\underline{Q}_{w}\leq\hat{f}_{w,t}\leq\overline{F}_{w},\ \forall w,t, \tag{6.21}\] \[\underline{\Pi}_{i}\leq\hat{\pi}_{i,t}\leq\overline{\Pi}_{i},\ \forall i,t,\] (6.22) \[\hat{\pi}_{i,t}\leq\hat{\pi}_{o,t}\leq\gamma_{c}\hat{\pi}_{i,t}, \forall c,t,(i,o)\in c,\] (6.23) \[0\leq\hat{q}^{out}_{c,t}=(1-\alpha_{c})\hat{q}^{in}_{c,t},\ \forall c \in\mathcal{C},t,\] (6.24) \[\hat{m}_{p,t}=K^{m}_{p}(\hat{\pi}_{i,t}+\hat{\pi}_{o,t}),\ \forall p,t,(i,o)\in p,\] (6.25) \[\hat{q}^{in}_{p,t}-\hat{q}^{out}_{p,t}=\hat{m}_{p,t}-\hat{m}_{p,t- 1},\ \forall p,t\] (6.26) \[\sum_{w\in\mathcal{W}(i)}\hat{f}_{w,t}+\sum_{p\in\mathcal{P}_{1} (i)}\hat{f}^{out}_{p,t}-\sum_{p\in\mathcal{P}_{2}(i)}\hat{f}^{in}_{p,t}+\sum_ {c\in\mathcal{C}_{1}(i)}\hat{f}^{out}_{c,t}-\sum_{c\in\mathcal{C}_{2}(i)}\hat {f}^{in}_{c,t}\] \[+\sum_{z\in\mathcal{Z}(i)}\varrho_{z,t}=\sum_{h\in\mathcal{H}( \rho_{h,t}-\triangle\rho_{h,t})+\hat{q}_{i,t},\ \forall i,t,\] (6.27) \[0\leq\hat{f}_{i,t}\leq F_{i,t},\ \forall i,t.\] (6.28) \[\hat{f}_{p,t}=0.5(\hat{f}^{in}_{p,t}+\hat{f}^{out}_{p,t}),\ \forall p,t,\] (6.29) \[\hat{f}_{p,t}|\hat{f}_{p,t}|=\chi^{f}_{p}(\hat{\pi}^{2}_{i,t}-\hat {\pi}^{2}_{o,t}),\ \forall p,t,(i,o)\in p,\] (6.30) \[0\leq\triangle\rho_{h,t}\leq\rho_{h,t},\ \forall h,t,\] (6.31) \[F_{hi,t}-\hat{f}_{i,t}\leq\triangle f_{i,t},\ \forall i,t,\] (6.32) \[0\leq f^{+}_{i,t},f^{-}_{i,t},\ \forall i,t,\ \ 0\leq R^{+}_{w,t},R^{-}_{w,t},\ \forall w,t. \tag{6.33}\] In (6.21)-(6.33), \(\underline{F}_{w}/\overline{F}_{w}\) and \(\underline{\Pi}_{i}/\overline{\Pi}_{i}\) are the lower/upper production and pressure limits, respectively; \(\gamma_{c}\) and \(\alpha_{c}\) are the compression and fuel consumption factors of the compressor; \(\hat{f}^{out}_{c,t}\) and \(\hat{f}^{in}_{p,t}\) and \(\hat{f}^{out}_{p,t}\)/\(\hat{f}^{in}_{p,t}\)/\(\hat{f}_{p,t}\) are the out-/in-flow of the compressor and out-/in-/average-flow of the pipeline, respectively; \(K_{p}^{m}/K_{p}^{f}\) is mass flow/Weymouth equation coefficient; \(\mathcal{W}(i)\), \(\mathcal{Z}(i)\), and \(\mathcal{D}_{g}(i)\) are subsets of gas wells, P2G units and gas demands connected to node \(i\), and \(\mathcal{H}(i)\) is a subset of GCs, in which GPUs are supplied from node \(i\); \(\mathcal{P}_{1}(i)/\mathcal{P}_{2}(i)\) and \(\mathcal{C}_{1}(i)/\mathcal{C}_{2}(i)\) are subsets of pipelines and compressors, whose ending/beginning terminals are node \(i\), respectively. Similarly, most of the real-time operation constraints of the gas system can be derived from the day-ahead ones (6.21)-(6.30), by removing the hat symbols and including the reserved gas amounts of GCs \(g_{h,t}\) as well as nodal violations \(V_{i,t}^{+}/V_{i,t}^{-}\) in the nodal balancing equation (6.27), as expressed in (6.34). Besides, gas well production and the gas loads follow (6.35) and (6.36), respectively. (6.37) sets the non-negative restriction on the nodal gas imbalance variables. \[\sum_{w\in\mathcal{W}(i)}f_{w,t}+\sum_{p\in\mathcal{P}_{1}(i)}f_{p,t}^{out}-\sum_{p\in\mathcal{P}_{2}(i)}f_{p,t}^{in}+\sum_{c\in\mathcal{C}_{1} (i)}f_{c,t}^{out}-\sum_{c\in\mathcal{C}_{2}(i)}f_{c,t}^{in}\] \[+\sum_{z\in\mathcal{Z}(i)}\varrho_{z,t}=f_{i,t}+V_{i,t}^{-}+V_{i, t}^{+}+\sum_{h\in\mathcal{H}(i)}(\rho_{h,t}-\triangle\rho_{h,t}+g_{h,t}),\; \forall i,t, \tag{6.34}\] \[-R_{s,t}^{-}\leq f_{s,t}-\hat{f}_{s,t}\leq R_{s,t}^{+},\;\forall s,t,\] (6.35) \[-R_{i,t}^{-}\leq f_{i,t}-\hat{f}_{i,t}\leq R_{i,t}^{+},\;\forall i,t,\] (6.36) \[V_{i,t}^{-},V_{i,t}^{+}\geq 0,\;\forall i,t. 
\tag{6.37}\] ### 6.3 Solution Methodology #### Clearing the Electricity Market with Uncertainties For ease of analysis, the electricity market robust clearing model is compacted as \[\mathcal{E}(\mathbf{q},\mathbf{\mu})=\min_{\mathbf{y}} \mathbf{c}^{\top}\mathbf{y}+\mathbf{\mu}^{\top}\mathbf{C}\mathbf{y}+\max_{\mathbf{ \xi}\in\Upsilon}\min_{\mathbf{x}}\mathbf{d}^{\top}\mathbf{x} \tag{6.38a}\] \[s.t.\;\mathbf{A}_{1}\mathbf{y}\geq\mathbf{A}_{2}-\mathbf{A}_{3}\mathbf{q},\] (6.38b) \[\mathbf{D}_{1}\mathbf{y}+\mathbf{D}_{2}\mathbf{x}\geq\mathbf{D}_{3}- \mathbf{D}_{4}\mathbf{\xi}-\mathbf{D}_{5}\mathbf{q}. \tag{6.38c}\] where \(\mathbf{y}=\{\rho_{h,t},\rho_{h,t}^{-},\rho_{h,t}^{+},\hat{p}_{u,t},\hat{p}_{l,t}, \triangle\hat{p}_{d,t},\hat{\theta}_{i,t}\}\) and \(\mathbf{x}=\{p_{u,t},p_{l,t},\triangle p_{d,t},\theta_{i,t},\triangle p_{u,t}^{- },\triangle p_{u,t}^{+}\}\) are the day-ahead and real-time decision vectors; \(\mathbf{q}\) is the day-ahead decision vector of the gas system, and it aggregates the power demands listed in ECs, i.e., \(p_{j,t},\forall j,t\). The gas prices is \(\mathbf{\mu}=\{\mu_{h,t};\mu_{h,t}^{+};\mu_{h,t}^{-}\}\), which is a coefficient vector. (6.38b) represents the first-stage constraints (6.1)-(6.2) and (6.7)-(6.13). (6.38c) expresses the second-stage constraints (6.3), (6.7)-(6.12) (without the hat symbols) and (6.15)-(6.18). Model (6.38)-(6.38c) admits a standard two-stage robust program and can be solved by the C&CG algorithm and its details are presented in Algorithm 9, where the electricity market subproblem (EM-SP) is defined as \[\text{EM-SP}: \max_{\mathbf{\xi}\in\Upsilon}\min_{\mathbf{x}}\mathbf{d}^{\top}\mathbf{x} \tag{6.39a}\] \[s.t.\;\mathbf{D}_{2}\mathbf{x}\geq\mathbf{D}_{3}-\mathbf{D}_{4}\mathbf{ \xi}-\mathbf{D}_{5}\mathbf{q}-\mathbf{D}_{1}\mathbf{y}^{*} \tag{6.39b}\] In (6.39), \(\mathbf{y}^{*}\) is updated by the electricity market master problem (EM-MP), which can be expressed as a single-level optimization problem in \[\text{EM-SP}: \max_{\mathbf{\xi}\in\Upsilon,\mathbf{\omega},\mathbf{u}}\mathbf{\omega}^{\top}( \mathbf{D}_{3}-\mathbf{D}_{5}\mathbf{q}-\mathbf{D}_{1}\mathbf{y}^{*})-\sum\mathbf{u} \tag{6.40a}\] \[s.t. \mathbf{D}_{2}^{\top}\mathbf{\omega}=\mathbf{d},\ \ \mathbf{\omega}\geq 0,\] (6.40b) \[-\overline{M}(1-\mathbf{\xi})\leq\mathbf{D}_{4}^{\top}\mathbf{\omega}- \mathbf{u}\leq\overline{M}(1-\mathbf{\xi})\] (6.40c) \[-\overline{M}\mathbf{\xi}\leq\mathbf{u}\leq\overline{M}\mathbf{\xi} \tag{6.40d}\] where \(\mathbf{u}\) is an axillary variable equivalent to \(\mathbf{\omega}^{\top}\mathbf{D}_{4}\mathbf{\xi}\) and \(\overline{M}\) is a sufficient large positive number. The formulation of the electricity market-master problem EM-MP in the \(r^{\text{th}}\) iteration is expressed in (6.41), where the uncertainty vector (\(\mathbf{\xi}^{*r},\forall r\)) is dynamically generated by the EM-SP at each iteration, and \(\varphi\) is the worst-case regulation costs. \[\text{EM-MP}: \min_{\mathbf{y},\varphi,\mathbf{x}^{r}}\mathbf{c}^{\top}\mathbf{y}+\mathbf{\mu}^ {\top}\mathbf{C}\mathbf{y}+\varphi \tag{6.41a}\] \[s.t. \mathbf{A}_{1}\mathbf{y}\geq\mathbf{A}_{2}-\mathbf{A}_{3}\mathbf{q},\] (6.41b) \[\varphi\geq\mathbf{d}^{\top}\mathbf{x}^{r},\forall r,\] (6.41c) \[\mathbf{D}_{1}\mathbf{y}+\mathbf{D}_{2}\mathbf{x}^{r}\geq\mathbf{D}_{3}- \mathbf{D}_{4}\mathbf{\xi}^{*r}-\mathbf{D}_{5}\mathbf{q},\forall r. 
Then, the LMEP can be derived from the Lagrangian function \(\mathcal{L}(\mathbf{y},\mathbf{x}^{r},\varphi,\mathbf{\lambda},\mathbf{\pi}^{r},\mathbf{\alpha}^{r})\) of EM-MP after identifying the worst-case uncertainty vectors (\(\mathbf{\xi}^{*r},\forall r\)) by Algorithm 9, where \(\mathbf{\lambda}\), \(\mathbf{\pi}^{r}\) and \(\mathbf{\alpha}^{r}\) are the Lagrangian multipliers of (6.41b), (6.41c) and (6.41d), respectively. According to [54] and [234], the LMEP is calculated by
\[\beta_{n,t}=\frac{\partial\mathcal{L}(\mathbf{y},\mathbf{x}^{r},\varphi,\mathbf{\lambda},\mathbf{\pi}^{r},\mathbf{\alpha}^{r})}{\partial p_{n,t}}=\underline{\lambda}_{n,t}-\overline{\lambda}_{n,t}-\overline{\lambda}_{n,t}^{\triangle}+\sum_{\forall r}(\underline{\alpha}_{n,t}^{r}-\overline{\alpha}_{n,t}^{r}-\overline{\alpha}_{n,t}^{\triangle,r}),\ \forall n,t, \tag{6.42}\]
where \(\underline{\lambda}_{n,t}/\overline{\lambda}_{n,t}\) and \(\overline{\lambda}_{n,t}^{\triangle}\) are the dual variables of constraints (6.13) and (6.12) (upper bound), respectively, and \(\underline{\alpha}_{n,t}^{r}/\overline{\alpha}_{n,t}^{r}\) and \(\overline{\alpha}_{n,t}^{\triangle,r}\) are the dual variables of the real-time operation constraints (6.16) and (6.12) (upper bound) under the worst-case scenario of the \(r^{\text{th}}\) iteration. It should be noted that the impacts of uncertainties on the LMEP have been considered in (6.42).

#### Clearing the Gas Market with Uncertainties

Besides the two-stage robust optimization based market clearing framework, the nonconvex Weymouth equations in both the day-ahead and real-time stages increase the solution difficulty. Existing methods adopted to approximate the Weymouth equation, such as the second-order-cone (SOC) relaxation [44], [154] and the linearization method [188], cannot guarantee the solution feasibility. In what follows, the solution procedure for the robust clearing of the gas market is presented. The Weymouth equation is formulated as MISOCP constraints as discussed in Section 3.3.2. This is achieved by writing the sign-function-free form of the Weymouth equation, as shown in (6.43), together with the indicator constraints (6.44)-(6.45),
\[\hat{f}_{p,t}^{2}=\chi_{p}^{f}(\hat{\pi}_{p,t}^{+,2}-\hat{\pi}_{p,t}^{-,2}),\ \forall p,t, \tag{6.43}\]
\[z_{p,t}=0\iff\hat{f}_{p,t}\geq 0,\ \hat{\pi}_{p,t}^{+}=\hat{\pi}_{i,t},\ \hat{\pi}_{p,t}^{-}=\hat{\pi}_{o,t},\ \forall p,t, \tag{6.44}\]
\[z_{p,t}=1\iff\hat{f}_{p,t}\leq 0,\ \hat{\pi}_{p,t}^{+}=\hat{\pi}_{o,t},\ \hat{\pi}_{p,t}^{-}=\hat{\pi}_{i,t},\ \forall p,t, \tag{6.45}\]
where \(z_{p,t}\in\{0,1\}\) is the gas flow direction indicator. Note that (6.44)-(6.45) can be further represented as linear logic constraints using the big-M method; please refer to Section 3.3.2 for narrow bounds on the big-M constants. The quadratic constraint (6.43) can be written as two opposite inequalities
\[\hat{f}_{p,t}^{2}+(\sqrt{\chi_{p}^{f}}\,\hat{\pi}_{p,t}^{-})^{2}\leq(\sqrt{\chi_{p}^{f}}\,\hat{\pi}_{p,t}^{+})^{2},\ \forall p,t, \tag{6.46}\]
\[(\sqrt{\chi_{p}^{f}}\,\hat{\pi}_{p,t}^{+})^{2}-\big[\hat{f}_{p,t}^{2}+(\sqrt{\chi_{p}^{f}}\,\hat{\pi}_{p,t}^{-})^{2}\big]\leq 0,\ \forall p,t, \tag{6.47}\]
where the first inequality (6.46) admits a convex (second-order cone) representation, while the latter one (6.47) is nonconvex.
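A common way to handle the nonconvex side (6.47) in a sequential MISOCP scheme, and plausibly what the penalty terms of the form \(w_{p,t}(\cdot)-\bar{g}_{p,t}(\cdot,\cdot^{0})\leq s_{p,t}\) used below encode, is to keep the cone (6.46) and replace the convex right-hand side of (6.47) by its first-order underestimator around the previous iterate, penalizing any remaining violation. The sketch below illustrates this reading on made-up numbers; the function names and values are assumptions, not the thesis' implementation.

```python
import numpy as np

def g(f, pi_m, chi):
    """Convex right-hand side of the reversed Weymouth inequality (6.47): f^2 + chi*pi_minus^2."""
    return f**2 + chi * pi_m**2

def g_lin(f, pi_m, f0, pi_m0, chi):
    """First-order expansion of g around (f0, pi_m0); since g is convex it underestimates g."""
    return g(f0, pi_m0, chi) + 2 * f0 * (f - f0) + 2 * chi * pi_m0 * (pi_m - pi_m0)

chi = 0.05
f0, pi_m0 = 8.0, 50.0              # previous iterate (illustrative)
f, pi_p, pi_m = 9.0, 60.0, 50.5    # current candidate point

w = chi * pi_p**2                                   # concave-side term, analogous to w_{p,t}
s = max(0.0, w - g_lin(f, pi_m, f0, pi_m0, chi))    # violation slack, analogous to s_{p,t}
print(f"w = {w:.2f}, linearized rhs = {g_lin(f, pi_m, f0, pi_m0, chi):.2f}, slack s = {s:.2f}")

# The linearization never overestimates g, so a zero slack certifies feasibility of (6.47).
F = np.random.default_rng(1).uniform(0, 20, 200)
PM = np.random.default_rng(2).uniform(30, 70, 200)
assert np.all(g_lin(F, PM, f0, pi_m0, chi) <= g(F, PM, chi) + 1e-9)
```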
The compact form of the gas market robust clearing model with the relaxed Weymouth equation is presented as follows.
\[\mathcal{Q}(\boldsymbol{y},\boldsymbol{\beta})=\min_{\boldsymbol{q},\boldsymbol{z}}\ F(\boldsymbol{q})+\max_{\boldsymbol{g}}\min_{\boldsymbol{j},\boldsymbol{v}}\ d(\boldsymbol{v}) \tag{6.48a}\]
subject to the day-ahead feasible set \((\boldsymbol{q},\boldsymbol{z})\in\mathcal{A}\), the real-time feasible set \((\boldsymbol{q},\boldsymbol{g},\boldsymbol{v},\boldsymbol{j})\in\mathcal{B}\), and the nonconvex inequality (6.47) imposed in both stages (constraints (6.48b)-(6.48e)). In (6.48), \(\boldsymbol{q}\) and \(\boldsymbol{z}\) are the day-ahead continuous and binary (flow direction) decisions of the gas system, \(\boldsymbol{g}\) collects the uncertain reserved gas utilizations, and \(\boldsymbol{v}\)/\(\boldsymbol{j}\) are the real-time continuous/integer recourse decisions; \(\boldsymbol{y}\) is the day-ahead decision vector of the power system, and it aggregates the firm and reserved gas amounts listed in GCs, i.e., \(\{\rho_{h,t},\rho_{h,t}^{+},\rho_{h,t}^{-},\forall h,t\}\); \(\boldsymbol{\beta}\) is the vector of LMEP; constraints (6.48c) and (6.48d) gather the nonconvex inequality (6.47) in the day-ahead and real-time stages, respectively; \(\mathcal{A}\) and \(\mathcal{B}\) are constraint sets collecting the remaining day-ahead and real-time operational constraints of the gas system, respectively.
Model (6.48) is a two-stage robust program with nonconvex constraints and integer recourse. It is handled by a nested C&CG (NC&CG) procedure, whose outer loop and subproblem GM-O-SP are solved by Algorithm 9 and the inner C&CG algorithm, respectively.
\[\text{GM-O-MP}:\ \min_{\mathbf{q},\mathbf{z},\mathbf{j}^{r},\mathbf{v}^{r}}\ F(\mathbf{q})+\psi+\sum_{t}\sum_{p}\Big(\hat{\tau}_{p,t}\hat{s}_{p,t}+\sum_{r}\tau_{p,t}^{r}s_{p,t}^{r}\Big) \tag{6.53a}\]
\[s.t.\ (\mathbf{q},\mathbf{z})\in\mathcal{A};\ (\mathbf{q},\mathbf{g}^{*r},\mathbf{v}^{r},\mathbf{j}^{r})\in\mathcal{B},\forall r;\ \psi\geq d(\mathbf{v}^{r}),\forall r, \tag{6.53b}\]
\[\hat{w}_{p,t}(\mathbf{q})-\bar{\bar{g}}_{p,t}(\mathbf{q},\mathbf{q}^{0})\leq\hat{s}_{p,t},\ \forall p,t, \tag{6.53c}\]
\[w_{p,t}(\mathbf{v}^{r})-\bar{g}_{p,t}(\mathbf{v}^{r},\mathbf{v}^{0,r})\leq s_{p,t}^{r},\ \forall p,t,r, \tag{6.53d}\]
\[\hat{s}_{p,t}\geq 0,\forall p,t,\ \ \ s_{p,t}^{r}\geq 0,\forall p,t,r. \tag{6.53e}\]
\[\text{GM-O-SP}:\ \max_{\mathbf{g}}\min_{\mathbf{j},\mathbf{v}}\ \{\,d(\mathbf{v})\ :\ (\mathbf{q}^{*},\mathbf{g},\mathbf{v},\mathbf{j})\in\mathcal{B},\ \text{(6.48d)}\,\} \tag{6.54}\]
In (6.53), \(\psi\) is the worst-case regulation cost under the reserved gas amounts \(\mathbf{g}^{*r}\), which are dynamically generated by GM-O-SP and provided as primal cuts to the GM-O-MP; \(\hat{s}_{p,t}/s_{p,t}^{r}\) and \(\hat{\tau}_{p,t}/\tau_{p,t}^{r}\) are the violations and the associated penalties, respectively. In (6.54), \(\mathbf{q}^{*}\) is the optimal day-ahead decision from GM-O-MP.
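The role of the penalized slacks in (6.53a) and (6.53c)-(6.53e) can be illustrated with a deliberately infeasible toy LP: a hard requirement that cannot be met is relaxed by a non-negative slack that is charged at a penalty rate, so the solver returns the least-violating solution together with an explicit measure of the violation. The numbers and the penalty rate below are assumptions chosen only for illustration.

```python
from scipy.optimize import linprog

# Toy analogue of the penalty mechanism: x is capped at 3 but must reach 5,
# so a slack s >= 0 absorbs the shortfall and is charged at rate tau,
# mirroring how the violations s_{p,t} are priced in the GM-O-MP objective.
tau = 10.0
c = [1.0, tau]                       # minimize  x + tau * s
A_ub = [[-1.0, -1.0]]                # -x - s <= -5   <=>   x + s >= 5
b_ub = [-5.0]
bounds = [(0.0, 3.0), (0.0, None)]   # 0 <= x <= 3,  s >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)                         # -> [3., 2.]: minimal violation, reported explicitly
```

Increasing the penalty rate, as the adaptive rule in Algorithm 10 does, pushes the slack toward zero whenever a feasible point exists.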
In the inner-loop C&CG, the subproblem of GM-O-SP, defined in (6.55), can be solved by Algorithm 10.
\[\text{GM-I-SP}:\ \min_{\mathbf{j},\mathbf{v}}\ d(\mathbf{v})+\sum_{t}\sum_{p}\tau_{p,t}s_{p,t} \tag{6.55a}\]
\[s.t.\ (\mathbf{q}^{*},\mathbf{g}^{*},\mathbf{v},\mathbf{j})\in\mathcal{B},\ \ s_{p,t}\geq 0,\ \forall p,t, \tag{6.55b}\]
\[w_{p,t}(\mathbf{v})-\bar{g}_{p,t}(\mathbf{v},\mathbf{v}^{0})\leq s_{p,t},\ \forall p,t, \tag{6.55c}\]
where \(\mathbf{g}^{*}\) is the worst-case uncertainty obtained from the master problem of GM-O-SP, and \(s_{p,t}\) denotes the constraint violation, which is penalized in the objective function with penalty \(\tau_{p,t}\). The tractable formulation of the master problem of GM-O-SP is given below.
\[\text{GM-I-MP}:\ \max_{\mathbf{g},\mathbf{\sigma}^{r},\mathbf{\pi}_{k}^{r},\mathbf{\theta}_{k}^{r}}\ \vartheta \tag{6.56a}\]
\[s.t.\ \mathbf{g}\ \text{in its uncertainty set},\ \ \mathbf{\sigma}^{r}\geq 0,\forall r,\ \ \big\|\mathbf{\pi}_{k}^{r}\big\|_{2}\leq\mathbf{\theta}_{k}^{r},\ \forall k,r, \tag{6.56b}\]
\[\mathbf{H}_{3}^{\top}\mathbf{\sigma}^{r}+\sum_{k}(\mathbf{M}_{k}^{r\top}\mathbf{\pi}_{k}^{r}+\mathbf{m}_{k}^{r\top}\mathbf{\theta}_{k}^{r})=\mathbf{D}^{\top},\ \forall r, \tag{6.56c}\]
\[\vartheta\leq(\mathbf{H}_{4}-\mathbf{H}_{1}\mathbf{g}-\mathbf{H}_{2}\mathbf{j}^{*r})^{\top}\mathbf{\sigma}^{r}-\sum_{k}(\mathbf{N}_{k}^{r\top}\mathbf{\pi}_{k}^{r}+\mathbf{n}_{k}^{r\top}\mathbf{\theta}_{k}^{r}),\ \forall r, \tag{6.56d}\]
where \(\vartheta\) is the regulation cost under the integer recourse actions \(\mathbf{j}^{*r}\); \(\mathbf{\sigma}^{r}/\mathbf{\pi}_{k}^{r}/\mathbf{\theta}_{k}^{r}\) are the dual variables of (6.57) at the \(r^{\text{th}}\) iteration; \(k\in\{1,2,\ldots,2|\mathcal{P}||\mathcal{T}|\}\) is the cone index. The bilinear terms in (6.56d), namely \((\mathbf{H}_{1}\mathbf{g})^{\top}\mathbf{\sigma}^{r}\), can be linearized by the exact separation approach presented in Appendix A.5.
\[\min_{\mathbf{w}}\ D\mathbf{w} \tag{6.57a}\]
\[s.t.\ \mathbf{H}_{1}\mathbf{g}^{*}+\mathbf{H}_{2}\mathbf{j}^{*}+\mathbf{H}_{3}\mathbf{w}\geq\mathbf{H}_{4}:\ \mathbf{\sigma}, \tag{6.57b}\]
\[\big\|\mathbf{M}_{k}\mathbf{w}+\mathbf{N}_{k}\big\|_{2}\leq\mathbf{m}_{k}\mathbf{w}+\mathbf{n}_{k}:\ \mathbf{\pi}_{k},\mathbf{\theta}_{k},\ \forall k, \tag{6.57c}\]
where \(\mathbf{w}=[\mathbf{v}^{\top},\,s_{p,t},\forall p,t]^{\top}\); (6.57c) represents the compact form of the cones (6.46) and (6.55c), and (6.57b) collects the remaining constraints in (6.55b). The gas market is cleared once the NC&CG algorithm converges, and the Lagrangian function \(\mathcal{L}(\mathbf{q},\psi,\mathbf{v}^{r},\mathbf{\nu},\mathbf{\omega}^{r},\mathbf{\kappa}^{r})\) can be constructed based on the latest GM-O-MP, where \(\mathbf{\nu}/\mathbf{\kappa}^{r}\) and \(\mathbf{\omega}^{r}\) are the corresponding Lagrangian multipliers.
Then, the LMFGP \(\mu_{h,t}\) can be calculated by
\[\mu_{h,t}=\frac{\partial\mathcal{L}(\mathbf{q},\psi,\mathbf{v}^{r},\mathbf{\nu},\mathbf{\omega}^{r},\mathbf{\kappa}^{r})}{\partial\rho_{h,t}}=\underline{\nu}_{i,t}-\overline{\nu}_{i,t}-\overline{\nu}_{h,t}^{\triangle}+\sum_{\forall r}(\underline{\kappa}_{i,t}^{r}-\overline{\kappa}_{i,t}^{r}),\ \ \forall h,t,\ i\in\mathcal{H}^{-1}(h), \tag{6.58}\]
where \(\underline{\nu}_{i,t}/\overline{\nu}_{i,t}\) and \(\overline{\nu}_{h,t}^{\triangle}\) are the dual variables of constraints (6.27) and (6.31) (upper bound), respectively; \(\underline{\kappa}_{i,t}^{r}/\overline{\kappa}_{i,t}^{r}\) are the dual variables of the real-time operation constraint (6.34) associated with the uncertainty scenario \(\mathbf{g}^{*r}\); \(\mathcal{H}^{-1}(h)\) is the subset of gas nodes listed in contract \(h\). Similarly, the impacts of the fuel consumption uncertainties of the GPUs on the LMFGP have been considered, as the worst-case operation constraints of the gas system have been added in GM-O-MP.

```
1: Set \(I_{1}^{max},I_{2}^{max},\overline{\mu},\underline{\mu},\sigma,\epsilon,\varepsilon,i=1\) and \({}^{\mathrm{a}}\) \(\tau_{p,t}\) / \({}^{\mathrm{b}}\) \(\hat{\tau}_{p,t},\tau_{p,t}^{r}\).
2: Find the initial point: \({}^{\mathrm{a}}\) \((\mathbf{v}^{0})\) by solving GM-I-SP (6.55) without (6.55c) / \({}^{\mathrm{b}}\) \((\mathbf{q}^{0},\mathbf{v}^{0,r})\) by solving GM-O-MP (6.53) without (6.53c)-(6.53d).
3: If \(i>I_{1}^{max}\), fix the gas flow directions with the optimal ones of iteration \(I_{1}^{max}\).
4: Solve \({}^{\mathrm{a}}\) GM-I-SP (6.55) / \({}^{\mathrm{b}}\) GM-O-MP (6.53).
5: If \({}^{\mathrm{a}}\) (6.59) / \({}^{\mathrm{b}}\) (6.60) holds, or \(i>I_{2}^{max}\), terminate; else, go to Step 6.
   \[|\text{GM-I-SP}^{(i-1)}-\text{GM-I-SP}^{(i)}|\leq\epsilon,\ \ s_{p,t}\leq\varepsilon,\forall p,t\] (6.59)
   \[|\text{GM-O-MP}^{(i-1)}-\text{GM-O-MP}^{(i)}|\leq\epsilon,\ \ \hat{s}_{p,t},s_{p,t}^{r}\leq\varepsilon,\forall p,t,r\] (6.60)
6: Apply the adaptive penalty rate (3.77) to update \({}^{\mathrm{a}}\) \(\tau_{p,t}\) / \({}^{\mathrm{b}}\) \(\hat{\tau}_{p,t}\), \(\tau_{p,t}^{r}\), set \(i=i+1\), update \({}^{\mathrm{a}}\) \((\mathbf{v}^{0})\) / \({}^{\mathrm{b}}\) \((\mathbf{q}^{0},\mathbf{v}^{0,r})\), then go to Step 3.
```
**Algorithm 10** The S-MISOCP Algorithm for Gas Market Clearing. \({}^{\mathrm{a}}\) For problem GM-I-SP (6.55); \({}^{\mathrm{b}}\) For problem GM-O-MP (6.53).

The LMRGP is calculated according to the cost causation principle [54], [235]. Therefore, the LMRGP is equivalent to the cost caused by the uncertainties \(g_{h,t}^{r}\) from the viewpoint of the GMO. In other words, it is the marginal price of an additional unit of uncertainty, which equals
\[\mu_{h,t}^{r}=\frac{\partial\mathcal{L}(\mathbf{q},\psi,\mathbf{v}^{r},\mathbf{\nu},\mathbf{\omega}^{r},\mathbf{\kappa}^{r})}{\partial g_{h,t}^{r}}=\underline{\kappa}_{i,t}^{r}-\overline{\kappa}_{i,t}^{r},\ \ i\in\mathcal{H}^{-1}(h). \tag{6.61}\]
Based on _Lemma 1_ in [54], the upward and downward LMRGPs can be aggregated as
\[\mu_{h,t}^{+}=\sum_{\forall r}\max\{\underline{\kappa}_{i,t}^{r}-\overline{\kappa}_{i,t}^{r},\ 0\},\ \ \ \mu_{h,t}^{-}=\sum_{\forall r}\max\{\overline{\kappa}_{i,t}^{r}-\underline{\kappa}_{i,t}^{r},\ 0\},\ \ \ i\in\mathcal{H}^{-1}(h). \tag{6.62}\]
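As a quick numerical illustration of the aggregation rule (6.62), the following snippet computes the upward and downward LMRGPs from a handful of made-up dual values \(\underline{\kappa}_{i,t}^{r}\) and \(\overline{\kappa}_{i,t}^{r}\) for a single node and hour; the numbers are assumptions used only to show the mechanics.

```python
import numpy as np

# Made-up dual values of the real-time balance (6.34) across the NC&CG scenarios r.
kappa_under = np.array([12.0, 0.0, 7.5])
kappa_over  = np.array([ 3.0, 6.0, 7.5])

diff = kappa_under - kappa_over
mu_plus  = np.sum(np.maximum(diff, 0.0))    # upward LMRGP, first part of (6.62)
mu_minus = np.sum(np.maximum(-diff, 0.0))   # downward LMRGP, second part of (6.62)
print(mu_plus, mu_minus)                    # -> 9.0 6.0
```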
#### Seeking the Operational Equilibrium

To this end, seeking the equilibrium between the two markets boils down to the following fixed point problem.
\[[\boldsymbol{q},\boldsymbol{\mu}]=\mathcal{E}^{-1}(\boldsymbol{y},\boldsymbol{\beta})\ \ \&\ \ [\boldsymbol{y},\boldsymbol{\beta}]=\mathcal{Q}^{-1}(\boldsymbol{q},\boldsymbol{\mu}) \tag{6.63}\]
Similar to [108], [230] and [236], the BRD algorithm is devised to solve this fixed point problem: the algorithm starts with an initial guess of the gas prices and the power demands of the ECs to clear the electricity market, transmits the optimal GCs as well as the electricity prices to the gas market, which then computes its best ECs and gas prices, and a new iteration launches. A detailed flow chart of the overall solution procedure is given in Figure 6.2, and the details of the BRD algorithm are presented in Algorithm 11.

Figure 6.2: The proposed procedure for the interdependent market mechanism.

To enhance the convergence performance of the BRD algorithm, the following recommendations are made.

1. With the fixed integer variables obtained from Steps 2 and 4 of Algorithm 11, i.e., the worst-case scenarios \((\boldsymbol{\xi}^{*r},\boldsymbol{\delta}^{*r})\) and the gas flow directions \((\boldsymbol{z}^{*},\boldsymbol{j}^{*r})\), all continuous variables, including energy prices and demands, can be obtained by solving the KKT optimality conditions [44], [236] of both EM-MP and GM-O-MP, where the uncertain reserved gas \(\boldsymbol{g}^{r}\) is replaced with \(\mathbf{T}_{1}\boldsymbol{y}\mathbf{T}_{2}\boldsymbol{\delta}^{*r}\) as pointed out in Appendix A.5. Therefore, Step 5 of Algorithm 11 can be replaced with solving the KKT conditions, and the BRD algorithm only needs to update the integer variables between the two markets.

2. Pass a weighted combination of the most recent prices and those of the previous iteration, instead of the most recent prices only, to the electricity and gas market clearing models; this can be done before Steps 2 and 4 of Algorithm 11. A similar treatment is found in the Cobweb algorithm applied in the national energy model system of the US [237].

### 6.4 Simulation Results

In this section, the proposed model and solution methods are tested on two systems: the first one, denoted **TS-I**, includes a 5-bus power network and a 7-node gas network, and the second one, named **TS-II**, consists of a 118-bus power network and a 20-node gas network. The topology of **TS-I** is displayed in Figure 6.3. Due to space limitations, a detailed description of the two test systems, the parameters of the three algorithms and the wind forecast data are provided in Appendix B. The market models and algorithms are programmed in MATLAB with the Gurobi solver and the YALMIP toolbox [209] on a computer with 8 GB of RAM and a 2.6 GHz CPU.

Figure 6.3: The topology of **TS-I**.

#### Base-case Analysis

In this subsection, **TS-I** is examined to characterize the equilibrium features. As shown in Figure 6.3, there is one GC for the GPU and one EC for the P2G unit. The time periods are selected to be from 1 to 6, and according to Theorem 3 in [238], the wind budget \(\Gamma_{2}\) is set to 4 to provide feasible decisions with a probability of more than 97%. The BRD algorithm starts with zero \(\mathbf{q}\) and gas prices equal to the production costs. All algorithms terminate within a modest number of iterations: the BRD algorithm converges in three iterations, while the C&CG algorithm for the electricity (gas) market and the S-MISOCP algorithm converge in 6 (3) and 5 iterations on average, respectively.
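As a minimal illustration of the best-response iteration that Algorithm 11 performs on the two market clearing problems, the toy sketch below alternates between two hand-written response functions (an assumed gas price curve and an assumed electricity demand curve, not the actual \(\mathcal{E}\) and \(\mathcal{Q}\) models) until a fixed point in the style of (6.63) is reached.

```python
def gas_market_price(q):            # assumed toy response: price posted for an observed demand q
    return 10.0 + 0.5 * q

def electricity_market_demand(mu):  # assumed toy response: demand induced by a gas price mu
    return 40.0 - 1.0 * mu

q, mu = 0.0, 0.0
for it in range(50):
    mu_new = gas_market_price(q)
    q_new = electricity_market_demand(mu_new)
    if abs(q_new - q) < 1e-8 and abs(mu_new - mu) < 1e-8:
        break
    q, mu = q_new, mu_new
print(f"fixed point after {it} iterations: q = {q:.4f}, mu = {mu:.4f}")
```

Whether such an iteration converges depends on how strongly each market's response reacts to the other's decision, which is why the price-damping recommendation above helps in practice.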
The energy prices are depicted as a bone map in Figure 6.4. The y-axes represent the power buses or gas nodes, and the x-axis displays the time intervals. The color bar of each sub-figure maps the range of the corresponding energy price; for example, the LMEP ranges from 3.24 to 40 $/MWh. It can be observed from Figure 6.4(a) that the LMEP remains unchanged during some intervals, such as hours 2-5, while the first and last intervals show a large difference in LMEP due to the ramping generation constraints. The relatively high LMEPs at bus 5 are caused by the expensive thermal generator G1 and the limited capacities of the neighboring transmission lines connected to it. On the other hand, the LMFGPs and LMRGPs, as shown in Figure 6.4(b)-(d), are almost constant in all intervals, reflecting the effectiveness of considering the line pack in the gas market model. Based on the simulation results, investments in flexible resources, such as energy storage, might be attracted to reduce the energy prices at bus 5 and nodes 1, 5 and 6.

Figure 6.4: Bone map for energy prices at equilibrium: (a) LMEP ($/MWh); (b) LMFGP ($/kSm\({}^{3}\)h); (c) Upward LMRGP ($/kSm\({}^{3}\)h); (d) Downward LMRGP ($/kSm\({}^{3}\)h).

#### Effectiveness of Modeling the Gas Dynamics

The effectiveness of modeling the gas system dynamics is studied by comparison with other gas system models, namely the steady-state [223] and fixed gas flow direction [44], [108] models. In the steady-state model, the stored mass of gas inside the pipelines is neglected, and the inlet- and outlet-flow rates are equal. It can be modeled by dropping (6.25)-(6.26) and adding \(q_{p,t}^{out}=q_{p,t}^{in}=q_{p,t}\) to the proposed gas market model. In the fixed direction model, the indicator binary variables \(\mathbf{z}\) are predetermined, i.e., the day-ahead gas flow directions are known and these directions cannot be changed in the intra-day operation, namely \(\mathbf{j}=\mathbf{z}\). The operating costs and energy prices are listed in Table 6.1 for the three gas models. Neglecting the line pack increases the operating costs of the electricity and gas markets by 18.74% and 24.40%, respectively. Meanwhile, fixing the gas flow directions brings these markets additional costs of \(\$2.723\times 10^{4}\) and \(\$5.533\times 10^{4}\), respectively. It can be concluded that considering the gas line pack and bidirectional gas flow decreases the energy prices and operating costs.

#### Comparison with Deterministic Market Clearing Models

To reveal the effectiveness of considering wind generation uncertainties, the deterministic market clearing models [44], [49], [108], [123], [154], [229], [230] are compared with the proposed robust operational equilibrium under different wind penetration levels (WPLs). The operational equilibrium of the deterministic model can be obtained by setting the upper and lower boundaries of the wind power outputs to the forecasted values, i.e., \(\overline{P}_{e,t}=\underline{P}_{e,t}=\hat{W}_{e,t},\forall e,t\). The WPL represents the relative change in the boundary values of the wind outputs; for example, when WPL\(=+10\%\), the upper outputs are \(\overline{P}_{e,t}=1.1\times\overline{P}_{e,t}\), and the lower ones are \(\underline{P}_{e,t}=0.9\times\underline{P}_{e,t}\). The simulation results are listed in Table 6.2. In the deterministic model, there is no reserved gas requirement as the uncertainties are not considered.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{2}{c}{Electricity Market} & \multicolumn{3}{c}{Gas Market} \\ \cline{2-6} & Cost (\(\$10^{5}\)) & LMEP\({}^{*}\) & Cost (\(\$10^{5}\)) & LMFGP\({}^{**}\) & LMRGP\({}^{**}\) \\ \hline Without line pack & 2.0361 & 28.24 & 3.8504 & 300.0 & 210\({}^{\circ}\)/100\({}^{\circ}\) \\ \hline With fixed flow direction & 1.9871 & 27.78 & 3.2037 & 210.5 & 210\({}^{\circ}\)/100\({}^{\circ}\) \\ \hline The proposed model & 1.7148 & 25.94 & 2.6504 & 150.0 & 110\({}^{\circ}\)/100\({}^{\circ}\) \\ \hline \hline \end{tabular} \({}^{*}\) Average electricity prices at bus 3 (\(\$\)/MWh); \({}^{**}\) Average gas prices at node 4 (\(\$\)/kSm\({}^{3}\)h). \end{table} Table 6.1: Operating costs and energy prices at equilibrium with different gas system models.
With the increment of the WPL, the contracted reserved gas increases, while the firm amount decreases. The reason is that the LMFGPs increase and gas-fired generation becomes less cost-effective than non-gas generation. However, when the WPL is high, the need for the operational flexibility provided by the GPU is significant. Therefore, the required gas demands in the GC increase, from 632.9 to 635.4 kSm\({}^{3}\)h in this case. It is clear that the costs of the EC equal zero, as the LMEPs are higher than both the LMFGPs and LMRGPs once the energy conversion is taken into account, and the operation of the P2G unit is not cost-effective for the GMO. Due to the ramping constraints, the LMEP may be non-positive when the WPL is high, as shown in the last row of Table 6.2. Therefore, the GMO would take advantage of this opportunity and sign the EC as a source of revenue. In conclusion, it is important to make sure that the contracted gas amounts are deliverable for the GPUs to utilize their flexibility, and the proposed market clearing framework can effectively reflect the impacts of wind output uncertainties.

#### Comparison with the Centralized Clearing Model

In this study, the independent operation (IO) mode of the electricity and gas markets, which is in line with industrial practice, is compared with the central operation (CO) mode of the integrated market adopted in [223], [231], [232]. In the CO mode, the objective is to minimize the total operational costs of the two systems in both the day-ahead and real-time stages, and the operational constraints of the two systems are included. The robust clearing results of the CO-mode market can also be obtained by calling Algorithm 10 and Algorithm 9. The overall CO market model minimizes the sum of the two markets' day-ahead and real-time operational costs subject to the operational constraints of both systems, where the energy contract terms are removed from the objective function. To provide a fair comparison with the CO mode, the net pocket-of-money (NPM) of each market, which equals its operational costs minus its contract revenue, is calculated. The reserved gas volumes in the CO mode can be calculated by
\[\rho_{h,t}^{+}=\max\Big\{\sum_{u\in\mathcal{U}_{g}(h)}\Phi(p_{u,t}^{r}-\hat{p}_{u,t})/\eta_{u},\forall r,\ 0\Big\},\ \forall h,t, \tag{6.66}\]
\[\rho_{h,t}^{-}=\max\Big\{\sum_{u\in\mathcal{U}_{g}(h)}\Phi(\hat{p}_{u,t}-p_{u,t}^{r})/\eta_{u},\forall r,\ 0\Big\},\ \forall h,t. \tag{6.67}\]
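The computation behind (6.66)-(6.67) is a simple worst-over-scenarios aggregation; the snippet below reproduces it for one contract and hour, treating the conversion \(\Phi\) as a constant power-to-gas factor (an assumption made only for illustration) and using made-up schedules and scenario outputs.

```python
import numpy as np

def reserved_gas(p_hat, p_scen, eta, phi=1.0):
    """Upward/downward reserved gas per (6.66)-(6.67): the worst over the scenarios r
       of the aggregated extra/saved fuel of the units u in the contract."""
    up   = phi * np.sum((p_scen - p_hat) / eta, axis=1)   # one value per scenario r
    down = phi * np.sum((p_hat - p_scen) / eta, axis=1)
    return max(up.max(), 0.0), max(down.max(), 0.0)

p_hat  = np.array([100.0, 80.0])      # day-ahead schedules of two GPUs (MW)
p_scen = np.array([[120.0, 70.0],     # real-time outputs in scenario r = 1
                   [ 90.0, 85.0]])    # real-time outputs in scenario r = 2
eta    = np.array([0.45, 0.50])       # unit efficiencies
print(reserved_gas(p_hat, p_scen, eta))
```

Because the sum over units sits inside the maximum over scenarios, a unit that backs down can offset one that ramps up within the same contract.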
The simulation results under different loading (energy demand) levels are summarized in Table 6.3. In the CO mode, due to the high operational flexibility of the GPUs and the exclusion of the contract costs from the objective, the gas demands of the power system increase. Consequently, the NPM of the electricity market increases, as shown in the first two loading levels. When the loading level of the gas system increases, strong competition between the two systems is introduced, and the gas market needs to pay additional money. Besides, in the IO mode, the increment of gas (power) demands provides economic benefits to the electricity (gas) market due to the extra energy listed in the EC (GC); see the \(6^{\text{th}}\) and \(8^{\text{th}}\) columns of Table 6.3. Moreover, although the two markets are not cleared in an integrated manner, the total operational costs of the IO mode are very close to those of the CO mode. For example, in the first loading level, the total costs of the IO mode are \(\$3.3\times 10^{5}\), and those of the CO mode are \(\$3.299\times 10^{5}\). Therefore, the social welfare impact (SWI), which is defined as the deviation of the total operating costs of the two markets from the costs of the CO mode, is quite small, as shown in the last column of Table 6.3. The SWI can be viewed as the cost of preserving the data privacy of the two markets.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{3}{*}{WPL} & \multicolumn{3}{c}{Electricity Market} & \multicolumn{3}{c}{Gas Market} \\ \cline{2-7} & GC (kSm\({}^{3}\)h \(|\)\(\$10^{5}\)) & Total Cost & EC (MWh \(|\)\(\$10^{5}\)) & Total Cost \\ \cline{2-7} & Firm/Res.\({}^{*}\) & Cost & (\(\$10^{5}\)) & PtG\({}^{*}\) & Cost & (\(\$10^{5}\)) \\ \hline Deterministic & 659.8/00.0 & 0.9897 & 1.5900 & 00.0 & 00.0 & 2.5458 \\ \hline 0\% & 641.8/97.9 & 1.0644 & 1.7148 & 00.0 & 00.0 & 2.6504 \\ \hline \(+\)10\% & 632.9/126.1 & 1.0802 & 1.7551 & 00.0 & 00.0 & 2.6756 \\ \hline \(+\)20\% & 635.4/151.7 & 1.1103 & 1.9549 & 200.0 & -0.0324 & 2.6595 \\ \hline \hline \end{tabular} \({}^{\star}\) Cumulative energy amounts listed in contracts. \end{table} Table 6.2: Operational equilibria under different wind penetration levels.

#### Computational Efficiency Analysis

To demonstrate the computational efficiency of the proposed procedure, the devised algorithms are implemented on **TS-II**. The time intervals are considered from 1 to 6, and the uncertainty budgets are \(\Gamma_{1}=2,\Gamma_{2}=3\). The initial gas prices and the EC in the BRD algorithm are set to the average production costs of the gas wells and to zero, respectively. The operational equilibrium is reached at the \(7^{\text{th}}\) iteration of the BRD algorithm. The iterative operational costs and execution times of the electricity and gas market clearing are reported in Figure 6.5. It is observed that the optimal costs of each market oscillate and gradually settle to a fixed point, i.e., the electricity and gas market clearing costs start from \$895.8k and \$905.5k and end at \$947.9k and \$8806.2k, respectively. The total solution time is 2917.6 s, which is mainly spent on solving the EM-SP and GM-I-MP, as they contain large numbers of binary variables. Therefore, we recommend adopting the suggestions proposed in Chapters 3-5 to enhance the performance of Algorithm 9 and Algorithm 10. Moreover, the following suggestions are made to improve the computational efficiency of the BRD algorithm:

1. Initialize the master problems with all the scenarios obtained from the previous BRD iteration, i.e., \((\boldsymbol{\xi}^{*r},\boldsymbol{\delta}^{*r})\), which are likely to remain the worst-case scenarios; this decreases the solution time of EM-SP and GM-O-SP.

2. Use the solution of the previous iteration as an initial guess when solving each problem, which is supported by many commercial solvers.
With the above suggestions, although the iteration number of the BRD algorithm remains unchanged, the solution time decreases to 1162.7 s (about 40% of the original). This indicates the suitability of the proposed solution procedure for large-scale systems, considering that it is programmed on a PC rather than a high-performance workstation.

\begin{table} \begin{tabular}{l c|c c|c c c|c c} \hline \hline \multirow{2}{*}{\begin{tabular}{c} Loading \\ \end{tabular} } & \multicolumn{3}{c}{Central operation (\(\$10^{5}\))} & \multicolumn{3}{c}{Independent operation (\(\$10^{5}\))} & \multirow{2}{*}{ \begin{tabular}{c} SWI \\ (\%) \\ \end{tabular} } \\ \cline{2-3} \cline{5-8} & EM+GM & EM & GM & & \multicolumn{1}{c}{EM} & & \multicolumn{1}{c}{GM} & \\ \cline{2-3} \cline{5-8} & OC & NPM & NPM & OC & NPM & OC & NPM & \\ \hline 1.0G+1.0P & 3.299 & 1.835 & 1.464 & 1.714 & 1.714 & 2.650 & 1.586 & -0.03 \\ \hline 1.0G+1.5P & 4.433 & 2.892 & 1.541 & 2.874 & 2.874 & 2.927 & 1.585 & -0.58 \\ \hline 1.5G+1.0P & 4.584 & 2.560 & 2.024 & 1.949 & 1.637 & 4.184 & 2.974 & -0.59 \\ \hline 1.5G+1.5P & 5.813 & 2.822 & 2.991 & 2.854 & 2.853 & 4.319 & 2.968 & -0.02 \\ \hline \hline \end{tabular} * EM = electricity market; GM = gas market; OC = operational costs; NPM = net pocket-of-money. \end{table} Table 6.3: Economic comparison between the independent and central market operations under different loading levels.

Figure 6.5: The performance of the BRD algorithm.

### 6.5 Conclusions and Discussions

This chapter proposes a method for seeking the robust operational equilibrium of the coupled electricity and gas markets, considering wind generation output uncertainties and bidirectional energy transactions. In the gas market clearing model, the gas flow dynamics is considered through the modeling of the line pack, and the gas flow directions are allowed to change in both the day-ahead and real-time operation stages, so as to maximize the operational flexibility of the gas system. A six-loop solution procedure is devised to obtain the market equilibrium, including one C&CG loop to clear the electricity market, two C&CG loops for the robust clearing of the gas market, two S-MISOCP loops to recover the solution feasibility in the day-ahead and real-time operation stages of the gas system, and one BRD loop to seek the robust equilibrium. Several suggestions and recommendations are made to enhance the algorithmic performance. Simulation results reveal the benefits of endowing the gas system with more operational flexibility, the superiority of the robust operational equilibrium over the deterministic one, and the gap between the proposed independent clearing framework and the centralized clearing one. Incorporating contingencies into the uncertainty set, as well as constructing a distributionally robust optimization based market clearing framework, is left for future work.

## Chapter 7 Conclusions and Future Works

### 7.1 Conclusions and Discussions

The resilient-economic robust operation of the most critical energy infrastructure, i.e., the electric power system, is important to strengthen and support economic and social activities in modern society, because electricity plays an important role in the secure and continuous operation of other energy systems. However, existing electric power grids experience different forms of vulnerabilities and random failures, such as natural disasters and malicious attacks, which may result in widespread economic and social losses.
On the other hand, climate change and environmental concerns have been major driving forces for the integration of renewable power generation (RPG) into power systems around the globe. However, this integration at a large scale brings new challenges for power system operations because of the variable and uncertain output of RPG. Therefore, it is crucial to provide operation models for power systems against such uncertainties, i.e., contingencies and RPG fluctuations, to boost their resilience and reliability in a cost-effective manner. Moreover, due to their fast response, good regulation capacity, relatively high efficiency, and low generation costs, gas-fired power units (GPUs) have been playing increasingly larger roles in the resilient and economic operation of power systems, such as quick power flow adjustments in the pre-contingency stage, picking up important loads in the post-contingency stage, and accommodating RPG fluctuations in real-time operation. These actions have significantly strengthened the physical interdependency between power systems and gas systems. This interdependency has been further intensified not only by the wide deployment of GPUs but also by the advanced technologies of power-to-gas (P2G) facilities, which are a well-qualified solution for long-term energy storage in bulk power systems integrated with large-scale RPG. Because natural gas can be stored in large quantities in a cost-effective manner, P2G facilities have recently been employed to convert electricity into gas, which is then stored, transported and reutilized by gas networks. To this end, many efforts have been made on the resilient-economic operation of power systems, and they can be categorized into two classes of optimization models: (1) independent power system (IPS) optimization models, which determine the optimal operation strategies based on the requirements of the electricity utilities but neglect the bidirectional physical interactions with natural gas systems; therefore, they may not provide the optimal decisions for power system operators (PSOs) and may cause physical violations in the interconnected gas systems; (2) integrated electric-gas system (IEGS) co-optimization models, which overcome the above issue of neglecting physical interactions and provide a strong solution in terms of energy efficiency and cost effectiveness. However, in most cases, power and gas systems are operated by different utilities, implying inevitable economic interactions between the two energy systems. Because the utilization of the superior regulation capabilities of GPUs relies on a reliable gas supply, it is essential to model the physical and economic interactions between power systems and gas systems for resilient and economic decision-making. Therefore, this research has developed different operation models for integrated electric-gas systems from the perspective of the PSO, where the bidirectional interactions between power systems and gas systems are considered from both the physical perspective, i.e., the consideration of the operational and security constraints of the gas system, and the economic perspective, which is addressed by modeling the gas contracts, including the here-and-now gas demands and the wait-and-see fuel consumption utilized before and after uncertainty realization, respectively.
The developed operation models have proved their ability to provide the high level of reliability and flexibility required by the PSO, as well as secure and feasible optimal decisions for both systems. This research has filled existing gaps between academic research and industrial application by addressing the common neglect of the physical and/or economic interactions with gas systems. Initially, the question of how to model and solve the interdependent power and gas system has been addressed. Then, different power system dispatch models are developed and efficiently optimized against \(N-k\) contingencies and volatile wind power uncertainties, where energy contracts are modeled. Finally, a pool-based market mechanism is proposed to separately clear the interdependent electricity and gas markets under uncertainties. This study comprehensively discussed the accurate and efficient formulations of both the natural gas system and the power system that can be incorporated into the optimization models. The physical structure and operational constraints of each system, together with the models of their main components, have been presented. The dynamic-state gas flow model, which is adopted in the thesis to provide additional operating flexibility and a practical system representation, has been formulated along with the steady-state gas flow, AC power flow and DC power flow models. Different types of coordination between the two systems are listed, and their applicability to recent industrial practice is demonstrated. The work proposes two convex alternatives to solve the most fundamental problem in IEGS operation, i.e., the optimal power-gas flow (OPGF). Considering the gas dynamics and bidirectional gas flow inside pipelines, which are adopted in all studies of the thesis, poses additional computational challenges to the OPGF problem. The first alternative, called the gas flow correction (GFC) method, employs the multi-slack-node method and designs the Levenberg-Marquardt algorithm to solve the IEGS at the transmission level. The second alternative, named the S-MISOCP algorithm, finds the OPGF for the IEGS at the distribution level, considering the non-convex power and gas flow equations. The proposed algorithm is enhanced by (i) suggesting a high-quality initial point instead of traditional or random ones, and (ii) adopting an adaptive penalty growth rate to control the weights of the main objective and the violations in the penalized MISOCP problems. Thereafter, the resiliency of power systems against contingencies in terms of decision-making is revisited considering the interactions of power systems with gas systems. A two-stage robust day-ahead dispatch model for the electric power system against \(N-k\) contingencies is proposed. The model detects the worst-case attack against power systems and identifies the optimal gas contracts with preventive and corrective actions; this is accomplished by optimizing the economic generation dispatch in both the pre-contingency and post-contingency stages. Due to the linearization of the non-convex Weymouth equation used in the gas network and the modeling of the on/off grid operation of the generators, binary variables are used for the post-contingency stage decision-making. Therefore, the resultant tri-level framework is solved by the nested column-and-constraint generation (NC&CG) algorithm. These developments illustrate the following:
1. The proposed model provides a more economical and resilient operation for the PSO than the IEGS models in the literature because it incorporates the reserved gas contracts in the robust optimization problem.

2. The IPS models in the literature generally provide an incorrect protection strategy against \(N-k\) contingencies, particularly in large power systems, and infeasible decisions for the IEGS operation.

3. Considering the over-generation issue is essential for resilient optimization models to cover all possible malicious attacks.

4. The NC&CG algorithm can handle the proposed model, especially with the recommended suggestions, which improve its performance.

5. The dynamic-state gas flow model offers more flexibility because it handles bidirectional gas flows and the gas line pack.

In the above model, the day-ahead gas contracts are formulated as a combination of two sub-contracts, namely, the firm gas contract and the reserved gas contract. With the emergence of P2G facilities to mitigate surplus RPG outputs, bidirectional gas contracts become inevitable. This research develops two operational models for optimal power system operation with bidirectional gas contracts, including P2G and gas-to-power (G2P). The first model is a robust energy management (EM) model for the power distribution network (PDN) against wind generation uncertainty, where both the gas system operation constraints and bidirectional energy trading contracts are considered. The second model is a distributionally robust two-stage contracting model, where bidirectional contracts can be signed in both the day-ahead and real-time decision-making stages. To tackle the computational challenge brought by the nonconvex Weymouth equations in the two decision-making stages, a quadruple-loop solution procedure is devised for the first model, including two C&CG loops and two S-MISOCP loops, through which a robust, feasible and nearly optimal solution can be obtained.
The quadruple-loop solution procedure is also designed for the second model to solve the distributionally robust two-stage contracting model.
optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the 
#### Modeling the Integrated Electric-gas Systems

For gas system modeling, different simplifications are employed to obtain tractable formulations that can be incorporated into the operational optimization problems. However, further developments are required to control and systematize the final solution accuracy and the computational burden. For example, the simplified dynamic-state gas flow model is formulated under several assumptions applied to the PDEs, and the gas flow through compressors and their consumed energy are usually approximated by linear constraints. These simplifications can introduce errors in the integrated operation with the power system. The open questions are: 1) is the accuracy of the final decisions acceptable for the interacting utilities? 2) can the loss in solution quality caused by such assumptions be quantified? 3) is it possible to represent the gas system dynamics more accurately with a low computational burden? 4) what are the pipeline lengths and sizes, as well as the time intervals of the optimization problem, for which the simplified dynamic-state model remains valid?
In fact, IEGS research is still in its early stage, and modeling and solution methodologies are developing rapidly. Modern modeling techniques, such as [150, 151, 152], have been suggested to represent the energy systems and can be employed in IEGS decision-making frameworks. For power system modeling, adopting the accurate AC-OPF, instead of the widely employed and simplified DC-OPF, would be an interesting future research direction to develop robust operational models that consider the reactive power flow, voltage stability and power line losses. Considering the unit commitment decisions along with the resilient and economic dispatch provides more reliable and economical benefits to the PSO. Interesting studies that provide tight, accurate and efficient UC formulations are [93] and [239].

For the IEGS coordination, it is observed that integrating the electricity and natural gas systems reduces energy costs, decreases environmental impacts and improves the overall stability and security. However, this integration needs new technologies for energy conversion, energy storage, efficient energy transportation systems and end-user flexibilities, and it also requires removing a number of economic and regulatory barriers that restrict information exchange. New directions for future work can be deduced from these recent requirements, which would enhance the IEGS operation. Moreover, providing more accurate models for the coupling components, including GPUs, gas compressors and P2G facilities, as well as integrating communication systems into the IEGS, would introduce high performance for a reliable coordination mechanism. Furthermore, different uncertainties can be considered in the IEGS optimization models, such as demand response and the locations and sizes of the interconnected gas systems.

#### Solving the Integrated Electric-gas Systems

Future works in solving the optimization problems of IEGSs could be categorized into two main directions. The first direction is to improve the performance and the applicability of the proposed methods, which could be accomplished by:

1. Employing the S-MISOCP algorithm in solving a two-stage RO model. This work has been conducted in the thesis to consider power and gas system uncertainties, such as renewable outputs (see Chapter 5), and to be integrated with a bilateral gas-electricity marketing model (see Chapter 6). It is the first attempt to employ DCP algorithms in two-stage RO models. Additional work remains open to solve the IEGS operation models under contingencies and demand response uncertainties as well as with other optimization techniques, such as DRO and SO.

2. The GFC method is designed for transmission-level IEGS with DC-OPF. New research is to derive novel energy flow correction methods that are able to guarantee the solution feasibility of OPGF problems at distribution levels and to be incorporated with non-deterministic optimization models.

3. Proposing a better adaptive penalty rate for the S-MISOCP algorithm with adjustable parameters or with additional factors, which could be controlled and updated at each iteration. Moreover, suggesting a high-quality initial solution is challenging, especially with large-scale practical systems.

4. Considering the AC-OPF model with bidirectional power flow in the S-MISOCP algorithm for meshed power grids instead of the fixed directions in radial networks.

The second direction is to suggest new solution approaches. The future research could include:
1. During this research, novel approaches have been proposed to tackle the nonlinearity and non-convexity of the IEGS problems. For example, an iterative PLA algorithm is designed in [75] to dynamically update the breakpoints instead of using fixed ones. Comparing with and/or adopting such up-to-date studies are our subsequent works.

2. Other advanced approaches and strategies, such as resilience-oriented information gap decision theory [240] and geographical information system based models [241], have been implemented for IPS. New studies are encouraged to extend such approaches to the IEGS.

3. The proposed quadruple-loop procedure calls the C&CG algorithm twice to tackle the tri-level robust models with binary variables in the recourse problem. Other decomposition algorithms, such as [242] and [243], have recently been proposed to solve this type of problem. Future work could be to employ such algorithms in the quadruple-loop procedure.

Finally, there are several research directions to investigate the data-driven models. Alternative ambiguity sets have recently been proposed in power system distributionally robust studies, including moment-based, Kullback-Leibler divergence-based and Wasserstein distance-based ones. The required data, solution conservativeness and computational burden depend on the type of ambiguity set [208]. Tractable reformulations of the DRO models are of great interest to researchers seeking high-quality conservative and robust decisions with a low computational burden. It should be noted that, up to the completion of this research, no attempt has been found to adopt DRO-based models in the coupled electricity and gas markets.

## Appendix A Reference Formulations

### A.1 Nonlinear Gas Compressor Model

The gas flow inside the compressor \(f_{c,t}\) is defined as

\[f_{c,t}=sgn(\pi_{i,t}-\pi_{o,t})\frac{H_{c,t}}{\bar{k}_{c}-\dot{k}_{c}\left(\frac{\max\{\pi_{i,t},\pi_{o,t}\}}{\min\{\pi_{i,t},\pi_{o,t}\}}\right)^{\hat{\alpha}_{c}}},\ \forall c,t,(i,o)\in c.\] (A.1)

where the empirical parameters \(\bar{k}_{c},\ \dot{k}_{c}\) and \(\hat{\alpha}_{c}\) depend on the compressor design and gas physical properties [47], [67]; \(\pi_{i,t}\) and \(\pi_{o,t}\) are the terminal pressures of the compressor connected with nodes \(i\) and \(o\), respectively; and \(H_{c,t}\) is the controlled power of compressor \(c\) at time \(t\), which is restricted by a technical range as

\[\underline{H}_{c}\leq H_{c,t}\leq\overline{H}_{c},\ \forall c,t.\] (A.2)

where \(\underline{H}_{c}\) and \(\overline{H}_{c}\) are the minimum and maximum power limits of the compressor. The pressure ratio should be limited according to the compressor capacity as

\[\underline{\Gamma}_{c}\leq\frac{\max\{\pi_{i,t},\pi_{o,t}\}}{\min\{\pi_{i,t},\pi_{o,t}\}}\leq\overline{\Gamma}_{c},\ \forall c,t,(i,o)\in c.\] (A.3)

where \(\underline{\Gamma}_{c}\) and \(\overline{\Gamma}_{c}\) are the minimum and maximum pressure ratios of the compressor. For gas-driven compressors, the equivalent gas flow consumed by the compressor is a convex function of the required power \(H_{c,t}\) as

\[\Delta f_{c,t}=c_{c}+b_{c}H_{c,t}+a_{c}H_{c,t}^{2},\ \forall c,t.\] (A.4)

where \(c_{c},\ b_{c}\) and \(a_{c}\) are constant parameters.
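As a quick illustration of how (A.1)-(A.4) interact, the following minimal Python sketch evaluates the compressor flow and its consumed gas for a single compressor. All numeric values (coefficients, limits, pressures, power) are hypothetical and serve only to show the calculation; they are not taken from the test systems used in this work.

```python
# Minimal numeric illustration of the compressor relations (A.1)-(A.4).
# All parameter values below are hypothetical and chosen only for illustration.
k_bar, k_dot, alpha_hat = 0.50, 0.45, 0.24   # empirical coefficients in (A.1)
c_c, b_c, a_c = 0.10, 5e-3, 1e-4             # consumption coefficients in (A.4)
Gamma_min, Gamma_max = 1.0, 1.8              # pressure-ratio limits in (A.3)
H_min, H_max = 0.0, 50.0                     # compressor power limits in (A.2)

def compressor_flow(pi_i, pi_o, H):
    """Gas flow through the compressor, Eq. (A.1)."""
    ratio = max(pi_i, pi_o) / min(pi_i, pi_o)
    assert Gamma_min <= ratio <= Gamma_max, "pressure ratio violates (A.3)"
    assert H_min <= H <= H_max, "compressor power violates (A.2)"
    sign = 1.0 if pi_i >= pi_o else -1.0
    return sign * H / (k_bar - k_dot * ratio**alpha_hat)

def consumed_flow(H):
    """Equivalent gas flow consumed by a gas-driven compressor, Eq. (A.4)."""
    return c_c + b_c * H + a_c * H**2

H = 30.0
print("f_c       =", round(compressor_flow(pi_i=60.0, pi_o=45.0, H=H), 2))
print("Delta f_c =", round(consumed_flow(H), 3))
```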
### A.2 Formulation of the Exact Gas System Dynamics

To represent the gas system dynamics, a set of PDEs is derived from the physics of the gas particles. This set guarantees that the system is affected by the transportation process only, and that no energy or gas mass is lost or gained. According to [61], this set can be defined and summarized as follows.

* _Continuity equation_: it describes the mass balance and guarantees that the mass inside a pipeline remains constant over time unless a quantity of gas is withdrawn from or injected into the pipeline. The continuity equation is expressed in (A.5); please refer to [244] for its derivation. \[\partial(\lambda\upsilon)/\partial x+\partial\lambda/\partial t=0\] (A.5) where \(\upsilon\) and \(\lambda\) are the gas flow velocity and gas density, respectively.

* _Momentum equation_: according to Newton's second law, the momentum equation relates the rate of change of momentum of the gas particles to the resultant force acting on them. It is defined in (A.6); please refer to [199] for its derivation. \[\partial\pi/\partial x+G\lambda\partial h/\partial x+\upsilon\partial\lambda/\partial t+\partial(\lambda\upsilon^{2})/\partial x=\alpha\lambda\upsilon|\upsilon|/(2D)\] (A.6) where \(\pi\) is the average gas pressure, and \(G,\ h,\ \alpha\) and \(D\) are the gravity force, pipeline height, friction factor and diameter, respectively. The terms represent the pressure gradient, gravity force, gas flow rate variation, kinetic energy, and friction force, respectively.

* _Energy equation_: according to [167], the law of conservation of energy is \[\gamma\lambda=\frac{\partial}{\partial t}\left[\lambda(CT+\upsilon^{2}/2+Gh)\right]+\frac{\partial}{\partial x}\left[\lambda\upsilon(CT+\pi/\lambda+\upsilon^{2}/2+Gh)\right]\] (A.7) where \(\gamma\) is the heat transfer rate per unit time and mass, and \(C\) and \(T\) are the specific heat and temperature, respectively.

* _State equation_: it expresses the relationship between the state variables of the gas, including pressure, density and temperature. In [61], the thermodynamic equation is derived from the universal gas law considering the gas compressibility \(Z(\pi,T)\), as \[\pi=\lambda RTZ(\pi,T)\] (A.8) where \(R\) is the specific gas constant.
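To make the role of the dynamic-state model more concrete, the sketch below advances the continuity equation (A.5) on a single pipeline with a simple explicit upwind finite-difference scheme. It treats the velocity profile as known and uses hypothetical pipeline data; a full gas flow model would, of course, couple (A.5) with the momentum equation (A.6).

```python
import numpy as np

# Explicit upwind discretization of the continuity equation (A.5) on one pipeline:
#   d(lambda)/dt + d(lambda * v)/dx = 0
# Pipeline data, velocity profile and initial density are hypothetical.
L_pipe, N, dt = 50_000.0, 51, 1.0            # length [m], grid points, time step [s]
dx = L_pipe / (N - 1)
v = np.full(N, 10.0)                         # gas velocity [m/s], assumed known here
lam = np.linspace(45.0, 40.0, N)             # initial gas density [kg/m^3]

def step_continuity(lam, v, dt, dx):
    """One upwind Euler step of d(lambda)/dt = -d(lambda*v)/dx (flow in +x)."""
    flux = lam * v
    new = lam.copy()
    new[1:] -= dt / dx * (flux[1:] - flux[:-1])
    new[0] = lam[0]                          # inlet density fixed (boundary condition)
    return new

for _ in range(600):                         # simulate 10 minutes
    lam = step_continuity(lam, v, dt, dx)
print("outlet density after 10 min:", round(float(lam[-1]), 3), "kg/m^3")
```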
### A.3 Unit Commitment Problem

The UC problem identifies the optimal generating schedule for a set of generators subject to their technical constraints. It is a large-scale NP-hard problem that is adopted by the system operator to produce a weekly or day-ahead dispatch. Due to its importance in power system operation, it has attracted researchers' attention during the last four decades. In this section, the tight, efficient and compact UC problem formulated in [93] is presented. This formulation considers the generation limits, ramping up/down rates and minimum up/down times.

The objective of the system operator is to minimize the total operational costs. It is defined in (A.9) to include, respectively, the generation costs, startup and shutdown costs, and penalties for non-served power loads.

\[\min_{\Omega}\sum_{t\in\mathcal{T}}\left\{\sum_{u\in\mathcal{U}}\left(C_{u}(c_{u,t}p_{u,t})+C_{u}^{U}y_{u,t}+C_{u}^{D}z_{u,t}\right)+\sum_{d\in\mathcal{D}_{p}}C_{d}\triangle p_{d,t}\right\}\] (A.9)

where \(c_{u,t}\) is the commitment decision of unit \(u\) at time interval \(t\); \(y_{u,t}\) and \(z_{u,t}\) are the startup and shutdown variables, whose associated costs are \(C_{u}^{U}\) and \(C_{u}^{D}\), respectively; and \(\triangle p_{d,t}\) is the power load shedding at time \(t\) for demand \(d\), which is limited within practical feasible limits as

\[0\leq\triangle p_{d,t}\leq P_{d,t},\ \forall d\in\mathcal{D}_{p},t.\] (A.10)

The total generation must equal the total served power at all times, which is guaranteed by

\[\sum_{u\in\mathcal{U}}p_{u,t}=\sum_{d\in\mathcal{D}_{p}}(P_{d,t}-\triangle p_{d,t}),\ \forall t.\] (A.11)

Up and down spinning reserves (\(r_{u,t}^{+},\ r_{u,t}^{-}\)) are limited by predefined operational constraints as

\[r_{u,t}^{+}\geq\underline{R}_{u}^{+},\ \ r_{u,t}^{-}\geq\underline{R}_{u}^{-},\ \forall u,t.\] (A.12)

where \(\underline{R}_{u}^{+}\) and \(\underline{R}_{u}^{-}\) are the minimum limits for up and down spinning reserves, respectively. The generation limits are enforced by

\[p_{u,t}+r_{u,t}^{+}\leq\overline{P}_{u}c_{u,t}-(\overline{P}_{u}-P_{u}^{+})y_{u,t}-\max(P_{u}^{+}-P_{u}^{-},\ 0)z_{u,t+1},\ \forall u\in\mathcal{U}_{1},t,\] (A.13)
\[p_{u,t}+r_{u,t}^{+}\leq\overline{P}_{u}c_{u,t}-(\overline{P}_{u}-P_{u}^{-})z_{u,t+1}-\max(P_{u}^{-}-P_{u}^{+},\ 0)y_{u,t},\ \forall u\in\mathcal{U}_{1},t,\] (A.14)
\[p_{u,t}+r_{u,t}^{+}\leq\overline{P}_{u}c_{u,t}-(\overline{P}_{u}-P_{u}^{+})y_{u,t}-(\overline{P}_{u}-P_{u}^{-})z_{u,t+1},\ \forall u\in\mathcal{U}/\mathcal{U}_{1},t,\] (A.15)
\[p_{u,t}\geq\underline{P}_{u}c_{u,t}+r_{u,t}^{-},\ \forall u,t.\] (A.16)

where \(\overline{P}_{u}\) and \(\underline{P}_{u}\) are the maximum and minimum generation capacities, and \(P_{u}^{+}\) and \(P_{u}^{-}\) are the startup and shutdown capabilities. \(\mathcal{U}_{1}\) is the subset of power units whose minimum up time limit is \(T_{u}^{+}=1\). The ramping up and down constraints are defined as

\[p_{u,t}+r_{u,t}^{+}-p_{u,t-1}\leq\overline{R}_{u}^{+},\ \forall u,t,\] (A.17)
\[p_{u,t-1}-p_{u,t}+r_{u,t}^{-}\leq\overline{R}_{u}^{-},\ \forall u,t,\] (A.18)

where \(\overline{R}_{u}^{+}\) and \(\overline{R}_{u}^{-}\) are the maximum ramping up and down rates. The minimum up and down times are expressed as

\[\sum_{i=t-T_{u}^{+}+1}^{t}y_{u,i}\leq c_{u,t},\ \forall u,t\in[T_{u}^{+},T],\] (A.19)
\[\sum_{i=t-T_{u}^{-}+1}^{t}z_{u,i}\leq 1-c_{u,t},\ \forall u,t\in[T_{u}^{-},T],\] (A.20)
\[c_{u,t}-c_{u,t-1}=y_{u,t}-z_{u,t},\ \forall u,t.\] (A.21)

Note that (A.21) represents the startup and shutdown logical constraints. In order to guarantee that the UC decisions are feasible and secure in practical operation, the network constraints, i.e., the CP equations (2.20)-(2.21), are added to the above UC problem to formulate the network-constrained UC (NCUC).
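As a minimal sketch of how such a formulation can be implemented, the following toy two-unit, four-hour instance uses the open-source PuLP package (with its bundled CBC solver, assumed to be available). It keeps only the power balance (A.11), simplified generation limits, a linear generation cost, and the logical constraint (A.21); reserves, ramping and minimum up/down times are omitted, and all data are hypothetical.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Toy 2-unit, 4-hour UC instance (hypothetical data), covering (A.9), (A.11),
# simplified generation limits and the logical constraint (A.21).
T = range(4)
U = range(2)
Pmax = [100, 60];  Pmin = [20, 10]
Cgen = [30, 50]                        # linear generation cost [$/MWh]
Cup  = [200, 80];  Cdn = [100, 40]     # startup / shutdown costs [$]
load = [70, 120, 140, 90]              # hourly demand [MW]

m = LpProblem("toy_uc", LpMinimize)
p = {(u, t): LpVariable(f"p_{u}_{t}", lowBound=0) for u in U for t in T}
c = {(u, t): LpVariable(f"c_{u}_{t}", cat=LpBinary) for u in U for t in T}
y = {(u, t): LpVariable(f"y_{u}_{t}", cat=LpBinary) for u in U for t in T}
z = {(u, t): LpVariable(f"z_{u}_{t}", cat=LpBinary) for u in U for t in T}

# Objective (A.9) with a linear generation cost and no load shedding
m += lpSum(Cgen[u] * p[u, t] + Cup[u] * y[u, t] + Cdn[u] * z[u, t]
           for u in U for t in T)

for t in T:
    m += lpSum(p[u, t] for u in U) == load[t]          # power balance (A.11)
    for u in U:
        m += p[u, t] <= Pmax[u] * c[u, t]               # simplified upper limit
        m += p[u, t] >= Pmin[u] * c[u, t]               # lower limit, cf. (A.16)
        prev = c[u, t - 1] if t > 0 else 0               # units assumed off at t = -1
        m += c[u, t] - prev == y[u, t] - z[u, t]         # logical constraint (A.21)

m.solve()
print([[round(value(p[u, t])) for t in T] for u in U])   # dispatch per unit and hour
```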
### A.4 Bus Injection Power Flow Model

Before presenting the bus injection model, the general power flow equation is first discussed. It expresses the relationship between the bus voltages \(\mathbf{v}\) and branch currents \(\mathbf{i}\) as follows.

\[\vec{\mathbf{i}}_{t}=\vec{\mathbf{Y}}\vec{\mathbf{v}}_{t},\ \ \forall t.\] (A.22)

where \(\vec{\mathbf{v}}_{t}=\{\vec{v}_{1,t},\ldots,\vec{v}_{|\mathcal{N}|,t}\}\) is an \(|\mathcal{N}|\)-dimensional phasor vector of the voltages at each bus at time \(t\), \(|\mathcal{N}|\) is the number of buses, and \(\vec{\mathbf{i}}_{t}=\{\vec{i}_{1,t},\ldots,\vec{i}_{|\mathcal{N}|,t}\}\) is an \(|\mathcal{N}|\)-dimensional phasor vector of the currents injected at each system bus. \(\vec{\mathbf{Y}}\) is an \(|\mathcal{N}|\times|\mathcal{N}|\)-dimensional phasor matrix, known as the complex bus admittance matrix. Please refer to the power system textbook [245] for the derivation and the final expressions of the bus admittance matrix considering the accurate models of transmission lines, cables and transformers.

The power flow equation can be expressed in terms of power instead of currents as

\[\vec{\mathbf{s}}_{t}=\vec{\mathbf{v}}_{t}\bullet(\vec{\mathbf{Y}}\vec{\mathbf{v}}_{t})^{\star},\ \ \forall t.\] (A.23)

where \(\vec{\mathbf{s}}_{t}=\mathbf{p}_{t}+j\mathbf{q}_{t}\) is an \(|\mathcal{N}|\)-dimensional phasor vector of the power at each bus, \(\mathbf{p}_{t}\) and \(\mathbf{q}_{t}\) are the active and reactive power injections, "\(\bullet\)" denotes element-wise multiplication, and "\({}^{\star}\)" denotes complex conjugation. Note that working with (A.23) is more efficient and convenient than (A.22), as it is independent of currents and directly computes the energy flows. Note also that the following formulations provide the exact solution for the system power flows under the assumption that the system is in sinusoidal steady-state operation (no change in system frequency, phase shift or magnitude).

Equation (A.23) can be written as two algebraic equations by equating the active power with the real terms and the reactive power with the imaginary terms. The following equations, in order, are the most common formulations of the AC-OPF:

* Using polar coordinates for the voltage and rectangular coordinates for the admittance, i.e., \(\vec{v}_{n,t}=v_{n,t}\angle\theta_{n,t},\ \vec{Y}_{mn}=G_{mn}+jB_{mn}\), then \[f_{l_{p},t}(\mathbf{v}_{t},\mathbf{\theta}_{t})=v_{n,t}v_{m,t}\big{[}G_{mn}\cos(\theta_{n,t}-\theta_{m,t})+B_{mn}\sin(\theta_{n,t}-\theta_{m,t})\big{]},\ \forall n,t,\] (A.24) \[f_{l_{q},t}(\mathbf{v}_{t},\mathbf{\theta}_{t})=v_{n,t}v_{m,t}\big{[}G_{mn}\sin(\theta_{n,t}-\theta_{m,t})-B_{mn}\cos(\theta_{n,t}-\theta_{m,t})\big{]},\ \forall n,t.\] (A.25)

* Using polar coordinates for both the voltage and the admittance, i.e., \(\vec{v}_{n,t}=v_{n,t}\angle\theta_{n,t},\ \vec{Y}_{mn}=Y_{mn}\angle\delta_{mn}\), then \[f_{l_{p},t}(\mathbf{v}_{t},\mathbf{\theta}_{t})=v_{n,t}v_{m,t}Y_{mn}\cos(\theta_{n,t}-\theta_{m,t}-\delta_{mn}),\ \forall n,t,\] (A.26) \[f_{l_{q},t}(\mathbf{v}_{t},\mathbf{\theta}_{t})=v_{n,t}v_{m,t}Y_{mn}\sin(\theta_{n,t}-\theta_{m,t}-\delta_{mn}),\ \forall n,t.\] (A.27)

* Using rectangular coordinates for both the voltage and the admittance, i.e., \(\vec{v}_{n,t}=a_{n,t}+jb_{n,t},\ \vec{Y}_{mn}=G_{mn}+jB_{mn}\), then \[f_{l_{p},t}(\mathbf{v}_{t},\mathbf{\theta}_{t})=G_{mn}(a_{n,t}a_{m,t}+b_{n,t}b_{m,t})+B_{mn}(b_{n,t}a_{m,t}-a_{n,t}b_{m,t}),\ \forall n,t,\] (A.28) \[f_{l_{q},t}(\mathbf{v}_{t},\mathbf{\theta}_{t})=G_{mn}(b_{n,t}a_{m,t}-a_{n,t}b_{m,t})-B_{mn}(a_{n,t}a_{m,t}+b_{n,t}b_{m,t}),\ \forall n,t.\] (A.29)

Note that the fourth formulation, i.e., rectangular coordinates for the voltage and polar coordinates for the admittance, has no computational benefits or practical use.
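As an illustration, the following numpy sketch evaluates the bus-injection relation (A.23) for a hypothetical three-bus network with assumed complex bus voltages; it is only meant to show how the element-wise product recovers the active and reactive injections.

```python
import numpy as np

# Hypothetical 3-bus example of the bus-injection relation (A.23): s = v o (Y v)*
# Series line admittances (no shunts) for lines 1-2, 1-3 and 2-3 [p.u.]:
y12 = 1 / (0.01 + 0.10j)
y13 = 1 / (0.02 + 0.20j)
y23 = 1 / (0.01 + 0.15j)

Y = np.array([[ y12 + y13, -y12,       -y13      ],
              [-y12,        y12 + y23, -y23      ],
              [-y13,       -y23,        y13 + y23]])

# Assumed complex bus voltages (polar form v*exp(j*theta)), e.g. from a solved power flow
v = np.array([1.00 * np.exp(1j * 0.00),
              0.98 * np.exp(-1j * 0.03),
              0.97 * np.exp(-1j * 0.05)])

s = v * np.conj(Y @ v)          # element-wise product, Eq. (A.23)
print("P injections [p.u.]:", np.round(s.real, 4))
print("Q injections [p.u.]:", np.round(s.imag, 4))
print("total losses [p.u.]:", np.round(s.real.sum(), 4))
```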
A set of boundary constraints is required to define the upper and lower limits of the decision variables. They can be characterized as

\[-\tilde{\pi}\leq\theta_{n,t}\leq\tilde{\pi},\ \forall n,t,\ \ \underline{V}_{n}\leq v_{n,t}\leq\overline{V}_{n},\ \forall n,t\] (A.30)
\[\underline{A}_{n}\leq a_{n,t}\leq\overline{A}_{n},\ \ \underline{B}_{n}\leq b_{n,t}\leq\overline{B}_{n},\ \forall n,t\] (A.31)
\[f_{l_{p},t}(\mathbf{v}_{t},\mathbf{\theta}_{t})^{2}+f_{l_{q},t}(\mathbf{v}_{t},\mathbf{\theta}_{t})^{2}\leq\overline{S}_{l,t}^{2},\ \forall l,t.\] (A.32)

In the above expressions, the ranges of the bus voltage in polar and rectangular forms are defined in (A.30) and (A.31), respectively. Finally, the active and reactive power flows are limited by the power line capacity in (A.32).

### A.5 Exact Separation Approach for GM-I-MP

The exact separation approach is adopted to linearize the GM-I-MP (6.56). Based on _Lemma 1_ in [246], there exists an optimal solution \(\mathbf{g}^{*}\) such that \(\mathbf{g}^{*}_{h,t}=\rho_{h,t}^{+}\), \(\mathbf{g}^{*}_{h,t}=-\rho_{h,t}^{-}\), or \(\mathbf{g}^{*}_{h,t}=0,\ \forall h,t\). Therefore, the following constraints preserve the optimal solution.

\[g_{h,t}=\rho_{h,t}^{+}\delta_{h,t}^{+}-\rho_{h,t}^{-}\delta_{h,t}^{-},\ \ \forall h,t,\] (A.33)
\[\delta_{h,t}^{+}+\delta_{h,t}^{-}\leq 1,\ \ \delta_{h,t}^{+},\delta_{h,t}^{-}\in\{0,1\},\ \ \forall h,t.\] (A.34)

Then, (A.33) can be written as \(\mathbf{g}=\mathbf{T}_{1}\mathbf{y}\mathbf{T}_{2}\mathbf{\delta}\), and the nonlinear product \((\mathbf{H}_{1}\mathbf{g})^{\top}\mathbf{\sigma}^{r}\) can be replaced with \(\mathbf{\varpi}^{r}\), which is linearized as

\[-\overline{M}(1-\mathbf{\delta})\leq(\mathbf{H}_{1}\mathbf{T}_{1}\mathbf{y}\mathbf{T}_{2})^{\top}\mathbf{\sigma}^{r}-\mathbf{\varpi}^{r}\leq\overline{M}(1-\mathbf{\delta}),\ \forall r,\] (A.35)
\[-\overline{M}\mathbf{\delta}\leq\mathbf{\varpi}^{r}\leq\overline{M}\mathbf{\delta},\ \forall r\] (A.36)

where \(\overline{M}\) is a sufficiently large positive number.
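The logic behind (A.35)-(A.36) is the classical big-M linearization of the product of a binary variable and a bounded continuous quantity. The following scalar sketch, with hypothetical numbers, checks that the constraints indeed force the auxiliary variable to equal the continuous term when the binary equals one and to vanish otherwise.

```python
# Scalar illustration of the big-M linearization used in (A.35)-(A.36):
# the product pi = delta * w (binary delta, continuous w bounded by M) is replaced by
#   -M*(1 - delta) <= w - pi <= M*(1 - delta)   and   -M*delta <= pi <= M*delta
M = 100.0          # sufficiently large constant (hypothetical value)

def feasible(delta, w, pi):
    """Check whether (delta, w, pi) satisfies the big-M constraints."""
    return (-M * (1 - delta) <= w - pi <= M * (1 - delta)
            and -M * delta <= pi <= M * delta)

w = 37.5
# delta = 1 forces pi = w; delta = 0 forces pi = 0
print(feasible(1, w, w), feasible(1, w, 0.0))   # True  False
print(feasible(0, w, 0.0), feasible(0, w, w))   # True  False
```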
### A.6 Incremental Piecewise Linear Approximation Model

Solving non-convex functions, such as the Weymouth equation, poses difficulties in NP-hard optimization problems. Piecewise linear approximation (PLA) methods provide a suitable way to handle the nonlinearities of these functions. Different linearization models have been presented to solve gas system problems. These models include the special ordered set of type two [247], basic convex combination [248], logarithmic [249], disaggregated convex combination, disaggregated logarithmic, multiple choice [250], and incremental [251] models. In [188], all of these models are applied to the steady-state and dynamic-state gas flow models, and it was shown that the incremental model outperforms the others in terms of computational time and accuracy.

The Weymouth equation has two nonlinear terms, namely the squared nodal pressure \(\pi_{i,t}^{2}\) and the directional squared pipeline flow \(f_{p,t}|f_{p,t}|\). These two terms are linearized individually by applying the incremental PLA model (A.37)-(A.40), in which \(\Im(x)\) is the nonlinear function of the variable \(x\) and is defined by the breakpoints \(\{\Im(x_{1}),\Im(x_{2}),\ldots,\Im(x_{S+1})\}\). A continuous variable \(\lambda_{k}\) is introduced in (A.39) to represent the portion of each segment \(k\), and binary variables \(\zeta_{k}\) are used in (A.40) to select the active segment and to force the use of all continuous variables \(\lambda_{k}\) of the lower segments.

\[\Im(x)\simeq\Im(x_{1})+\sum_{k\in\{1,2,\ldots,S\}}\left[\Im(x_{k+1})-\Im(x_{k})\right]\lambda_{k}\] (A.37)
\[x=x_{1}+\sum_{k\in\{1,2,\ldots,S\}}\left[x_{k+1}-x_{k}\right]\lambda_{k}\] (A.38)
\[0\leq\lambda_{k}\leq 1,\ \ \forall k\in\{1,2,\ldots,S\},\] (A.39)
\[\lambda_{k+1}\leq\zeta_{k}\leq\lambda_{k},\ \zeta_{k}\in\{0,1\},\ \ \forall k\in\{1,2,\ldots,S-1\}\] (A.40)

The linearization error can be reduced by increasing the number of segments \(S\) and by adjusting the breakpoint values \(\Im(x_{k})\) [127]. For example, the breakpoints used for the incremental PLA model in the case of the 7Nodes gas system are displayed in Figure A.1. Node \(i=1\) is selected to display its normal and optimal breakpoints with two segments. The pressure range is 35 bar to 70 bar, and the squared pressure function \(\Im(x)=\pi_{i,t}^{2}\) is depicted on the vertical axis.
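For illustration, the sketch below evaluates the incremental PLA (A.37)-(A.40) for the squared-pressure term over the 35-70 bar range with two segments, mirroring the Figure A.1 example. The left-to-right "fill" of the segments reproduces what the ordering constraints (A.39)-(A.40) enforce inside the MILP; the evaluation points are hypothetical.

```python
# Incremental PLA (A.37)-(A.40) applied to the squared-pressure term f(x) = x^2
# over the range [35, 70] bar with two segments (evenly spaced breakpoints here).
breakpoints = [35.0, 52.5, 70.0]          # x_1 ... x_{S+1}
fvals = [x * x for x in breakpoints]      # f(x_k)

def incremental_pla(x):
    """Evaluate the incremental PLA of f at x: segments are filled from left to
    right, which is exactly what the ordering constraints (A.39)-(A.40) enforce."""
    approx, lambdas = fvals[0], []
    for k in range(len(breakpoints) - 1):
        seg = breakpoints[k + 1] - breakpoints[k]
        lam = min(max((x - breakpoints[k]) / seg, 0.0), 1.0)   # lambda_k in [0, 1]
        lambdas.append(lam)
        approx += (fvals[k + 1] - fvals[k]) * lam              # Eq. (A.37)
    return approx, lambdas

for x in (40.0, 52.5, 65.0):
    approx, lam = incremental_pla(x)
    print(f"x = {x:5.1f}  exact = {x*x:7.1f}  PLA = {approx:7.1f}  lambdas = {lam}")
```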
## Appendix B Energy Test Systems

### Power Transmission Systems

| Hour | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Load [GW] | 1.2486 | 1.1855 | 1.1483 | 1.1160 | 1.1160 | 1.1645 | 1.2292 | 1.3424 | 1.4233 | 1.5203 | 1.5689 | 1.6206 |
| Hour | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 |
| Load [GW] | 1.6740 | 1.6821 | 1.6982 | 1.6999 | 1.7468 | 1.7500 | 1.6902 | 1.6335 | 1.6335 | 1.5527 | 1.3262 | 1.3101 |
#### IEEE-39Bus Power Transmission System

We use the same power system parameters as those utilized in the paper "Robust Defense Strategy for Gas Electric Systems against Malicious Attacks", unless particularly mentioned otherwise. Please see the system parameters at [https://sites.google.com/site/chengwang0617/home/data-sheet](https://sites.google.com/site/chengwang0617/home/data-sheet). The additional parameters are defined as follows.

Table 8: Adjustment costs for non-GPUs and efficiencies of GPUs – IEEE–39Bus System

| No | Bus | Type | Efficiency | Real-time Upward Adjustment [$/MWh] | Real-time Downward Adjustment [$/MWh] |
|---|---|---|---|---|---|
| 1 | 39 | NGPP | 0.5 | – | – |
| 2 | 38 | Non-NGPP | – | 103.5600 | 51.7800 |
| 3 | 37 | Non-NGPP | – | 99.6000 | 49.8000 |
| 4 | 36 | Non-NGPP | – | 99.0000 | 49.5000 |
| 5 | 35 | Non-NGPP | – | 118.2000 | 59.1000 |
| 6 | 34 | Non-NGPP | – | 133.5600 | 66.7800 |
| 7 | 33 | Non-NGPP | – | 166.4400 | 83.2200 |
| 8 | 32 | NGPP | 0.5 | – | – |
| 9 | 31 | NGPP | 0.6 | – | – |
| 10 | 30 | Non-NGPP | – | 166.7400 | 83.3700 |

Figure 20: Topology of IEEE–39Bus Power Transmission System

### B.1.3 IEEE-118Bus Power Transmission System

Table 10: Parameters of P2G Facilities – IEEE–118Bus System
| No | Bus | Pmax [MW] | Pmin [MW] | a [$/MWh²] | b [$/MWh] | c [$/MWh] | Ramp Up [MW/h] | Ramp Down [MW/h] | Real-time Upward Adjustment [$/MWh] | Real-time Downward Adjustment [$/MWh] |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 4 | 30 | 0 | 0.00 | 40 | 0.00 | 60 | 60 | 240 | 160 |
| 2 | 6 | 30 | 0 | 0.00 | 40 | 0.00 | 6 | 6 | 240 | 160 |
| 3 | 8 | 30 | 0 | 0.00 | 41 | 0.00 | 6 | 6 | 246 | 164 |
| 4 | 12 | 300 | 0 | 0.00 | 45 | 0.00 | 6 | 6 | 270 | 180 |
| 5 | 15 | 30 | 0 | 0.00 | 43 | 0.00 | 20 | 20 | 258 | 172 |
| 6 | 19 | 30 | 0 | 0.00 | 35 | 0.00 | 20 | 20 | 210 | 140 |
| 7 | 24 | 30 | 0 | 0.00 | 35 | 0.00 | 20 | 20 | 210 | 140 |
| 8 | 25 | 300 | 0 | 0.0 | | | | | | |

Table 16: Parameters of Power Lines – IEEE–118Bus System
## Appendix B Energy Test Systems

### Power Distribution Networks

#### IEEE-13Bus Power Distribution Network

Table 20: Parameters of Power Feeders - IEEE-13Bus System

| No | Beginning Bus | Terminal Bus | X [p.u.] | R [p.u.] | G [p.u.] | B [p.u.] | Line Ampacity [A] |
|----|---------------|--------------|----------|----------|----------|----------|-------------------|
| 1 | 1 | 2 | 0.001927903 | 0.000656271 | 0.000000000 | 0.119318212 | 500 |
| 2 | 2 | 5 | 0.000559393 | 0.000356356 | 0.000000000 | 0.026984765 | 500 |
| 3 | 5 | 6 | 0.000385495 | 0.000376092 | 0.000000000 | 0.013255538 | 500 |
| 4 | 2 | 3 | 0.000559393 | 0.000356356 | 0.000000000 | 0.026984765 | 500 |
| 5 | 3 | 4 | 0.000895029 | 0.000570170 | 0.000000000 | 0.035348101 | 500 |
| 6 | 2 | 7 | 0.001927903 | 0.000656271 | 0.000000000 | 0.119318212 | 500 |
| 7 | 7 | 8 | 0.000385495 | 0.000376092 | 0.000000000 | 0.013255538 | 500 |
| 8 | 8 | 9 | 0.000385495 | 0.000376092 | 0.000000000 | 0.013255538 | 500 |
| 9 | 7 | 11 | 0.000895029 | 0.000570170 | 0.000000000 | 0.035348101 | 500 |
| 10 | 11 | 13 | 0.001927903 | 0.000656271 | 0.000000000 | 0.119318212 | 500 |
| 11 | 7 | 10 | 0.000385495 | 0.000376092 | 0.000000000 | 0.013255538 | 500 |
| 12 | 11 | 12 | 0.000385495 | 0.000376092 | 0.000000000 | 0.013255538 | 500 |

Table 21: Parameters of Power Bus - IEEE-13Bus System

| Bus No | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|--------|---|---|---|---|---|---|---|---|---|----|----|----|----|
| Vmin [p.u.] | 1 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 |
| Vmax [p.u.] | 1 | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 | 1.05 |

Table 23: Load Portion - IEEE-13Bus System

| No | Bus | Active Power | Reactive Power | Non-served power penalty [$/MWh] |
|----|-----|--------------|----------------|----------------------------------|
| 1 | 4 | 0.1218 | 0.146 | 1000 |
| 2 | 5 | 0.0518 | 0.0629 | 1000 |
| 3 | 6 | 0.07 | 0.0665 | 1000 |
| 4 | 9 | 0.0518 | 0.0403 | 1000 |
| 5 | 10 | 0.0518 | 0.076 | 1000 |
| 6 | 8 | 0.2567 | 0.3031 | 1000 |
| 7 | 12 | 0.3517 | 0.3323 | 1000 |
| 8 | 13 | 0.039 | 0.04336 | 1000 |

Hourly load - IEEE-13Bus System

| Hour | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|------|---|---|---|---|---|---|---|---|---|----|----|----|
| Active Load [MW] | 3.284 | 3.096 | 2.965 | 2.89 | 2.909 | 3.003 | 3.246 | 3.565 | 3.847 | 4.072 | 4.279 | 4.429 |
| Reactive Load [MVAR] | 1.986 | 1.873 | 1.793 | 1.748 | 1.759 | 1.816 | 1.963 | 2.156 | 2.326 | 2.463 | 2.587 | 2.678 |

| Hour | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 |
|------|----|----|----|----|----|----|----|----|----|----|----|----|
| Active Load [MW] | 4.541 | 4.56 | 4.654 | 4.785 | 4.804 | 4.616 | 4.598 | 4.447 | 4.447 | 4.26 | 3.772 | 3.678 |
| Reactive Load [MVAR] | 2.746 | 2.758 | 2.814 | 2.894 | 2.905 | 2.792 | 2.78 | 2.69 | 2.69 | 2.576 | 2.281 | 2.224 |

#### IEEE-123Bus Power Distribution Network

### Gas Systems

#### 7Nodes Gas System

Table 32: Parameters of Gas Pipelines - 7Nodes Gas System

Table 35: Parameters of Load Demand - 7Nodes Gas System

| No | Node | Maximum [MSm³/h] | Minimum [MSm³/h] | Price [k$/MSm³] | Upward reserves [k$/MSm³] | Downward reserves [k$/MSm³] |
|----|------|------------------|------------------|-----------------|---------------------------|-----------------------------|
| 1 | 5 | 0.7 | 0 | 150 | 210 | 180 |

Table 36: Parameters of Gas Nodes and their Load Portion - 7Nodes Gas System

Table 37: Connection Lines Between the PJM-5Bus System and the 7Nodes Gas System

#### 8Nodes Gas System

Table 40: Parameters of Gas Compressors - 8Nodes Gas System

| No | From Node | To Node | Compression Factor |
|----|-----------|---------|--------------------|
| 1 | 6 | 7 | 1.3 |

Table 41: Parameters of Load Demand - 8Nodes Gas System

| Hour | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|------|---|---|---|---|---|---|---|---|---|----|----|----|
| Load [KSm³/h] | 1.7025 | 1.7510 | 1.7754 | 1.7245 | 1.6518 | 1.6421 | 1.6945 | 1.7236 | 1.6781 | 1.6115 | 1.6087 | 1.6683 |

| Hour | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 |
|------|----|----|----|----|----|----|----|----|----|----|----|----|
| Load [KSm³/h] | 1.7050 | 1.6674 | 1.6090 | 1.6142 | 1.6819 | 1.7265 | 1.6964 | 1.6451 | 1.6569 | 1.7304 | 1.7802 | 1.7545 |

Table 43: Connection Lines Between the IEEE-13Bus System and the 8Nodes Gas System

Table 44: P2G Gas Contracts Between the IEEE-13Bus System and the 8Nodes Gas System

Table 45: P2G Gas Contracts Between the IEEE-13Bus System and the 8Nodes Gas System

| No | List of Generators | Day-ahead Upward Reserve [$/KSm³] | Day-ahead Downward Reserve [$/KSm³] | Real-time Upward Penalty [$/KSm³] | Real-time Downward Penalty [$/KSm³] |
|----|--------------------|-----------------------------------|-------------------------------------|-----------------------------------|-------------------------------------|
| 1 | G2 | 900 | 450 | 1800 | 900 |

#### 20Nodes Gas System

P2G gas contracts - 20Nodes Gas System

| No | List of Generators | Day-ahead Upward Reserve [$/KSm³] | Day-ahead Downward Reserve [$/KSm³] | Real-time Upward Penalty [$/KSm³] | Real-time Downward Penalty [$/KSm³] |
|----|--------------------|-----------------------------------|-------------------------------------|-----------------------------------|-------------------------------------|
| 1 | G2+G4 | 900 | 450 | 1800 | 900 |
| 2 | G6 | 900 | 450 | 1800 | 900 |
| 3 | G8 | 900 | 450 | 1800 | 900 |
| 4 | G10 | 900 | 450 | 1800 | 900 |

Table B.49: Parameters of Gas Pipelines - 20Nodes Gas System

Table B.51: Parameters of Load Demand - 20Nodes Gas System
2306.02912
Unsupervised haze removal from underwater images
Several supervised networks exist that remove haze information from underwater images using paired datasets and pixel-wise loss functions. However, training these networks requires large amounts of paired data which is cumbersome, complex and time-consuming. Also, directly using adversarial and cycle consistency loss functions for unsupervised learning is inaccurate as the underlying mapping from clean to underwater images is one-to-many, resulting in an inaccurate constraint on the cycle consistency loss. To address these issues, we propose a new method to remove haze from underwater images using unpaired data. Our model disentangles haze and content information from underwater images using a Haze Disentanglement Network (HDN). The disentangled content is used by a restoration network to generate a clean image using adversarial losses. The disentangled haze is then used as a guide for underwater image regeneration resulting in a strong constraint on cycle consistency loss and improved performance gains. Different ablation studies show that the haze and content from underwater images are effectively separated. Exhaustive experiments reveal that accurate cycle consistency constraint and the proposed network architecture play an important role in yielding enhanced results. Experiments on UFO-120, UWNet, UWScenes, and UIEB underwater datasets indicate that the results of our method outperform prior art both visually and quantitatively.
Praveen Kandula, A. N. Rajagopalan
2023-06-05T14:15:46Z
http://arxiv.org/abs/2306.02912v1
# Unsupervised haze removal from underwater images ###### Abstract Several supervised networks exist that remove haze information from underwater images using paired datasets and pixel-wise loss functions. However, training these networks requires large amounts of paired data which is cumbersome, complex and time-consuming. Also, directly using adversarial and cycle consistency loss functions for unsupervised learning is inaccurate as the underlying mapping from clean to underwater images is one-to-many, resulting in an inaccurate constraint on the cycle consistency loss. To address these issues, we propose a new method to remove haze from underwater images using unpaired data. Our model disentangles haze and content information from underwater images using a Haze Disentanglement Network (HDN). The disentangled content is used by a restoration network to generate a clean image using adversarial losses. The disentangled haze is then used as a guide for underwater image regeneration resulting in a strong constraint on cycle consistency loss and improved performance gains. Different ablation studies show that the haze and content from underwater images are effectively separated. Exhaustive experiments reveal that accurate cycle consistency constraint and the proposed network architecture play an important role in yielding enhanced results. Experiments on UFO-120, UWNet, UWScenes, and UIEB underwater datasets indicate that the results of our method outperform prior art both visually and quantitatively. ## I Introduction Removing haze from underwater images remains a challenging task due to degradation by the participating medium. The low quality of underwater images can be attributed to many factors. Light gets attenuated and scattered by the underwater medium before it reaches the camera lens resulting in colour shifts, low-light, and haze effects in the observed images [1, 2, 3, 4]. Additionally, wavelength-based attenuation causes various undesirable colour tones. Specifically, most underwater images have green and blue colour tones as red colour undergoes rapid attenuation. The benefits of removing haze from underwater images include efficient monitoring of coral reefs, marine biology [5], analysis of flora and fauna, in addition to improving high-level computer vision tasks like segmentation and classification of marine animals. Several works have been proposed in the past few decades for different restoration tasks like deblurring [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], de-hazing [18, 19, 20], inpainting [21, 22], enhancement [23, 24], super-resolution [25, 26, 27, 28, 29, 30, 31], bokeh rendering [32], and many others. Among them, Deep CNN [33] uses a convolutional neural network (CNN) to remove colour shifts and restores the underlying clean image. WaterGAN [34] generates synthetic UW images using terrestrial images and corresponding depth maps. The generated data is then used to train a neural network for removing colour casts in UW images. Simultaneous Enhancement and Super-Resolution (SESR) [35] employs an encoder-decoder model for simultaneous enhancement and super-resolution of UW images. Dense GAN [36] trains a multi-stage neural network using adversarial and \(L_{1}\) loss functions. Water-Net [37] constructs an Underwater Image Enhancement Benchmark (UIEB) dataset using real images, with the corresponding references taken to be the best results from different traditional algorithms. 
A significant drawback of these methods is the requirement of supervised pairs to train the neural network. **Unsupervised restoration:** To address the unavailability of supervised pairs, several unsupervised image restoration algorithms have been proposed. Among them, [39] uses two generative adversarial networks (GANs) to transfer images from low-resolution to high-resolution. [8] makes use of re-blurring and gradient losses to restore blurry images. [40] uses KL-divergence to disentangle blur and content information from input images. UEGAN [41] enhances low-light images using local and global discriminators along with perceptual loss. Similar to [41], [42] additionally utilizes attention maps for image enhancement. Directly using the cycle consistency of [38] for haze removal in underwater images is not correct as the underlying mapping from the clean to the underwater domain is one-to-many. A detailed discussion of the cycle consistency mismatch in CycleGAN is given in Sec. II. In this paper, we propose an unpaired underwater haze removal algorithm to address the shortcomings of previous works. Our method uses an accurate cycle consistency matching loss (see Fig. 1 (b)) for removing haze in underwater images. More specifically, we disentangle the haze and the content image from underwater images using a haze disentanglement network (HDN). The disentangled content from HDN is used as input to a restoration network (\(G_{C}\)), which enhances the content information to generate the latent image. The disentangled haze is then used in underwater image regeneration for the cycle consistency loss. This loss, combined with adversarial losses on underwater and clean images, successfully removes haze from underwater images. Since the underwater regeneration uses the disentangled haze, we employ a correct constraint on cycle consistency matching, unlike vanilla CycleGAN [38] (see Fig. 1 (a)). During test time, the input underwater image is first passed through the HDN network for content information and then the generated content is passed through \(G_{C}\) for the final restored image. Our main contributions are listed below: * To the best of our knowledge, this is the first learning-based approach to employ a correct cycle-consistency loss for unpaired haze removal in underwater images. * Exhaustive experiments on different underwater datasets reveal that accurate cycle consistency matching in conjunction with the disentangled content for the restoration network gives high-quality dehazing results compared to prior unsupervised methods. In our approach, HDN uses feature regularization, feature adversarial, and cyclic losses to decouple haze and content information from input images. Different ablation studies are provided to visualize the haze and content information present in underwater images. Exhaustive experiments on different publicly available datasets show that an accurate cycle consistency constraint combined with the disentangled content for \(G_{C}\) gives superior results compared to prior unsupervised methods. ## II Proposed method Fig. 1 (b) outlines the proposed methodology. Our framework has two main parts: haze and content disentanglement from underwater images, and restoration of underwater images using the disentangled information. ### _Haze and content disentanglement_ The objective of HDN is to disentangle haze and content information from an underwater image \((I_{w}^{\dagger})\) using a clean or haze-free image (\(I_{e}^{*}\)). 
Note that \(I_{w}^{\dagger}\) and \(I_{e}^{*}\) are unpaired images randomly sampled from an underwater and a clean set, respectively. We use haze (haze-free) and underwater (clean) words interchangeably. To achieve these objectives, we use the following loss functions. **Feature adversarial loss:** To ensure that \(E_{hf}\) extracts only haze-free information from a given input image, we use the following strategy. Since \(I_{e}^{*}\) is clean, \(E_{hf}\) extracts only haze-free information. Let the output of \(E_{hf}\) given \(I_{e}^{*}\) be denoted as \(F_{hf_{e}}\in R^{B\times C\times H\times W}\), where \(B,C,H,W\) are, respectively, batch size, number of channels, height and width of feature maps \(F_{hf_{e}}\). Similarly, let the output of \(E_{hf}\) given \(I_{w}^{\dagger}\) be denoted as \(F_{hf_{w}}\in R^{B\times C\times H\times W}\). To ensure that \(F_{hf_{w}}\) contains only haze-free information, we use adversarial loss function on feature maps with \(F_{hf_{e}}\) as real samples and \(F_{hf_{w}}\) as fake samples. The loss function can then be formulated as \[\begin{split} L_{d_{1}}(E_{hf},E_{h},D_{adv})=& \mathbb{E}[\log(D_{adv}(F_{hf_{e}}))]+\\ &\mathbb{E}[\log(1-D_{adv}(F_{hf_{w}}))]\end{split} \tag{1}\] **Feature regularization loss:** Since \(I_{e}^{*}\) is clean (or haze-free), \(E_{h}\) (the haze encoder) should not respond to it. We use feature regularization loss to ensure this by constraining the output feature maps for all the layers in \(E_{h}\) to zero when the input image is \(I_{e}^{*}\). This loss combined with \(L_{d1}\) (from Eq. 1) ensures that \(E_{h}\) and \(E_{hf}\) extract only haze and haze-free (content) information from input image. The loss function for this objective can be formulated as \[L_{d_{2}}(E_{h})=\sum_{i=1}^{n}||F_{h_{e}}^{i}||_{1} \tag{2}\] Fig. 1: (a) Using CycleGAN [38] directly for cycle consistency is not sufficient as \(G_{U}\) generates underwater images with random haze information that do not match the input underwater images. (b) Haze disentanglement network (HDN) disentangles haze and content information from input images. The disentangled haze is used by \(G_{U}\) to regenerate underwater images that match the input images. where \(F_{h_{e}}\) is the output feature map of \(E_{h}\) with \(I_{c}^{*}\) as input and \(i\) denotes intermediate feature maps of \(E_{h}\). **Disentangled cyclic losses:** We use cyclic loss functions to ensure that the decoder (\(D\)) is sufficiently trained to restore the corresponding input image. Given \(I_{c}^{*}\), \(D\) combines feature maps \(F_{hf_{e}}\) and \(F_{h_{c}}\) to estimate the corresponding input image. The loss function can be written as \[L_{d_{3}}(E_{hf},E_{h},D)=||D(F_{hf_{e}}+F_{h_{e}})-I_{c}^{*}||_{1} \tag{3}\] where \(F_{hf_{e}}\) and \(F_{h_{e}}\) are output feature maps of \(E_{hf}\) and \(E_{h}\), respectively, for input image \(I_{w}^{*}\). Similarly, the cyclic loss function for \(I_{w}^{\dagger}\) can be written as \[L_{d_{4}}(E_{hf},E_{h},D)=||D(F_{hf_{w}}+F_{h_{w}})-I_{w}^{\dagger}||_{1} \tag{4}\] where \(F_{hf_{w}}\) and \(F_{h_{w}}\) are output feature maps of \(E_{hf}\) and \(E_{h}\), respectively, for input image \(I_{w}^{\dagger}\). 
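For illustration, the four disentanglement terms above can be assembled as in the following PyTorch-style sketch. The module names (`E_hf`, `E_h`, the decoder and the feature discriminator), the sigmoid output of the discriminator, and the trade-off weights taken from Sec. III-A are assumptions rather than the authors' released code; for brevity, Eq. 2 is applied only to the final haze-encoder output instead of every intermediate layer.

```python
# Hypothetical sketch of the HDN disentanglement losses (Eqs. 1-4).
import torch
import torch.nn.functional as F

def hdn_losses(E_hf, E_h, Dec, D_adv, I_c, I_w):
    # Encode the clean image I_c and the underwater image I_w
    F_hf_c, F_h_c = E_hf(I_c), E_h(I_c)
    F_hf_w, F_h_w = E_hf(I_w), E_h(I_w)

    # Eq. 1: feature adversarial loss; D_adv is assumed to output a probability.
    # Clean-image content features act as "real" samples, underwater content
    # features as "fake" ones.
    eps = 1e-8
    L_d1 = torch.log(D_adv(F_hf_c) + eps).mean() \
         + torch.log(1.0 - D_adv(F_hf_w) + eps).mean()

    # Eq. 2: feature regularization; the haze encoder should not respond to a
    # clean image (only its final output map is penalized in this sketch).
    L_d2 = F_h_c.abs().mean()

    # Eqs. 3 and 4: cyclic reconstruction of both inputs from the sum of the
    # content and haze feature maps.
    L_d3 = F.l1_loss(Dec(F_hf_c + F_h_c), I_c)
    L_d4 = F.l1_loss(Dec(F_hf_w + F_h_w), I_w)

    # Eq. 5 with the weights reported in Sec. III-A; in practice the adversarial
    # term is optimized in the usual alternating GAN fashion.
    return 1.0 * L_d1 + 10.0 * L_d2 + 1.0 * L_d3 + 1.0 * L_d4
```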
The final loss function to disentangle haze and content information is a combination all the four loss functions and can be written as \[\begin{split} L_{d}(E_{hf},E_{h},D,D_{adv})=\lambda_{1}L_{d_{1}}+ \lambda_{2}L_{d_{2}}+\\ \lambda_{3}L_{d_{3}}+\lambda_{4}L_{d_{4}}\end{split} \tag{5}\] where \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) and \(\lambda_{4}\) are trade-off weights. Given an underwater image \(I_{w}^{\dagger}\) as input to the HDN network, the output of haze encoder, \(F_{h_{w}}\), contains the haze information and the output of haze-free encoder \(F_{hf_{w}}\) contains content information. The disentangled content information from HDN is used to generate the content image \(I_{con}\) using the following equation \[I_{con}^{\dagger}=D(E_{hf}(I_{w}^{\dagger})) \tag{6}\] The haze information and content image separated from an underwater image using HDN network is used by a restoration module to generate haze-free image. Specifically, the disentangled content image is used as input for \(G_{C}\) to generate clean images using adversarial and cyclic loss functions, and the output features of the haze encoder given an underwater image i.e., \(F_{h_{w}}\) are used by \(G_{U}\) for underwater regeneration. A detailed discussion of the restoration mechanism is given next. ### _Restoration of underwater images_ This section explains the methodology to restore underwater images using disentangled haze and content information from Sec. II-A. The restoration mechanism consists of four networks, a Generator network \(G_{C}\) to generate clean images, another Generator network \(G_{U}\) to transfer images from clean to underwater domain, and two discriminator networks, \(D_{U}\) and \(D_{C}\), to differentiate between clean and underwater images. Below is a detailed discussion on different loss functions used in the restoration framework. **Cycle consistency losses:** Adversarial losses alone are not sufficient to restore underwater images as different artifacts and unwanted colour shifts [38] can be observed in the generated clean images. To mitigate these effects, we use cycle consistency loss to ensure that the generated images are free from artifacts and remain faithful to the input images. However, as pointed earlier in the introduction, generating an underwater image from a clean image is one-to-many mapping, i.e., \(G_{U}\) can generate an underwater image with random haze information that does not match the input image which results in cycle consistency mismatch. We propose to solve this by using disentangled haze information from input underwater images. Specifically, given an underwater image \(I_{w}^{\dagger}\), we know that HDN network disentangles haze information \(F_{h_{w}}\), using loss function \(L_{d}\). The disentangled haze information \(F_{h_{w}}\) is used in \(G_{U}\) to generate an underwater image that matches the input image. 
The resultant cycle consistency loss is written as \[\begin{split} L_{r_{3}}(G_{C},G_{U})&=||G_{U}(G_{C}( I_{con}^{\dagger})+I_{con}^{\dagger},F_{h_{w}})-I_{w}^{\dagger}||_{1}\\ &=||\hat{I}_{w}^{\dagger}-I_{w}^{\dagger}||_{1}\end{split} \tag{7}\] Similarly, cycle consistency loss for the clean image \(I_{c}\) can be written as \[\begin{split} L_{r_{4}}(G_{C},G_{U})&=||G_{C}(I_{con }^{\star})+I_{con}^{\star}-I_{c}^{\star}||_{1}\\ &=||\hat{I}_{c}^{\star}-I_{c}^{\star}||_{1}\end{split} \tag{8}\] The final loss function to restore an underwater image is a combination of all the four loss functions and can be written as \[\begin{split} L_{r}(G_{C},G_{U},D_{U},D_{C})=\omega_{1}L_{r_{1}}+ \omega_{2}L_{r_{2}}+\\ \omega_{3}L_{r_{3}}+\omega_{4}L_{r_{4}}\end{split} \tag{9}\] where \(\omega_{1}\), \(\omega_{2}\), \(\omega_{3}\) and \(\omega_{4}\) are trade-off weights. The total loss function to train HDN and restoration modules is the combination of Eq. 5 and Eq. 9 and can be written as \[L_{total}=L_{d}+L_{r} \tag{10}\] where \(L_{total}\) is the total loss function, \(L_{d}\), and \(L_{r}\) are disentanglement (Eq. 5) and restoration (Eq. 9) losses, respectively. **Testing:** HDN, \(G_{C}\), \(G_{U}\), \(D_{C}\), and \(D_{U}\) networks are trained in an end-to-end manner until convergence using Eq. 10. A detailed discussion on experimental set-up and hyper-parameters is given in following section. To restore an underwater image (\(I_{w}\)) during test time, \(I_{w}\) is first passed through the HDN network for content image \(I_{con}\) (Eq. 6). The resultant content image \(I_{con}\) is passed through \(G_{C}\) for the final restored image i.e., \[I_{res}=G_{C}(I_{con})+I_{con} \tag{11}\] where \(I_{res}\) is the final restored image. ## III Experiments This section is arranged as follows: (i) Implementation details. (ii) Datasets and metrics used. (iii) Ablation studies. (iv) Comparison results. ### _Implementation details_ We used NVIDIA-2080 Ti GPU and Pytorch library to train and test our network. HDN and restoration modules are trained in an end-to-end manner using Eq. 10. For Eq. 5 (HDN network), we empirically found \(\lambda_{1}=1\), \(\lambda_{2}=10\), \(\lambda_{3}=1\), and \(\lambda_{4}=1\) to give best results and for Eq. 9, we followed CycleGAN [38] with \(\omega_{1}=1\), \(\omega_{2}=1\), \(\omega_{3}=10\), and \(\omega_{4}=10\). We observed that the training converged in around 60-70 epochs and further continued training till 80 epochs. Following options are used to train our network: patch size of 128 and batch size of 4, ADAM optimizer for updating the weights, momentum values for optimizer with \(\beta_{1}\) = 0.9, and \(\beta_{2}\) = 0.99 and learning rate of 0.0005. Additional details about the training mechanism and architecture of the HDN and restoration modules are given in the supplementary material. ### _Datasets and metrics_ **Datasets:** We used four publicly available UW datasets to train and evaluate our model: UFO-120 [35], UWNet [43], UWScenes [43], and UIEB [37]. UFO-120 [35] contains 1500 training samples and 120 testing samples. Underwater images in UFO-120 are generated using distortion mimicking CycleGAN [38] based model, followed by Gaussian blurring and bicubic interpolation. [43] has two underwater image datasets, UWNet, and UWScenes, with corresponding ground truth images. UWNet has 3700 paired images for training and 1270 testing samples; UWScenes has 2185 training and 130 test images. 
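To make the restoration objective above concrete, a minimal sketch of the cycle-consistency terms (Eqs. 7 and 8) and of the test-time inference of Eq. 11 is given below; the helper methods on the HDN module and the two-argument signature of \(G_{U}\) are assumed interfaces, not the authors' code.

```python
# Hypothetical sketch of the restoration-stage cycle losses and test-time use.
import torch.nn.functional as F

def restoration_cycle_losses(hdn, G_C, G_U, I_w, I_c):
    I_con_w, F_h_w = hdn.content(I_w), hdn.haze(I_w)
    I_con_c = hdn.content(I_c)

    # Eq. 7: underwater -> clean -> underwater, where the regeneration is
    # guided by the haze features of the *input* image (consistent cycle).
    I_clean_hat = G_C(I_con_w) + I_con_w
    L_r3 = F.l1_loss(G_U(I_clean_hat, F_h_w), I_w)

    # Eq. 8: a clean image should be reproduced from its own content.
    L_r4 = F.l1_loss(G_C(I_con_c) + I_con_c, I_c)
    return L_r3, L_r4

def restore(hdn, G_C, I_w):
    # Eq. 11: test-time restoration of an underwater image.
    I_con = hdn.content(I_w)
    return G_C(I_con) + I_con
```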
The underwater images in UWNet and UWScenes are synthetically generated using a similar procedure as followed in [35]. Recently, UIEB [37] proposed a real underwater benchmark dataset with paired images. The authors of [37] collected different UW images from Google, YouTube and UW related works [44, 45, 46, 47]. The reference images are carefully selected by human volunteers from among the output of 12 enhancement methods, including nine underwater restoration methods, two dehazing methods and a commercial tool. In total UIEB [37] has 890 UW images with corresponding references. Since the authors did not provide a train-test split, we randomly selected 800 images for training and the remaining 90 for testing. We observed that the UW images in UIEB are more challenging to restore and have a rich variety of scenes compared to other datasets. Since these datasets contain paired images, we used the following procedure to prepare the unpaired set. For every dataset with \(X\) number of clean and haze paired images, \(\frac{X}{2}\) haze images are randomly selected from the underwater set, and the corresponding ground-truth images are removed from the clean set. The remaining images in the clean set and the selected haze images are used for training. The same procedure is followed for all four datasets. This ensures that there are no paired images in the training set.

### _Conclusions_

We proposed an unsupervised algorithm for haze removal from underwater images using a haze disentanglement network (HDN) and a restoration module. HDN is used for disentangling haze and content from UW images. While the disentangled content is used as input for the restoration module, the haze information is used to enforce a consistent cycle consistency constraint. Different ablation studies revealed that the proposed HDN network successfully decouples haze and content from an underwater image. Comparisons with prior art show that our methodology improves upon other state-of-the-art methods both visually and in terms of quantitative metrics. We believe that the loss functions and the network architecture proposed in this paper will help improve the performance of unsupervised networks further.
2310.15578
VMAF Re-implementation on PyTorch: Some Experimental Results
Based on the standard VMAF implementation we propose an implementation of VMAF using PyTorch framework. For this implementation comparisons with the standard (libvmaf) show the discrepancy $\lesssim 10^{-2}$ in VMAF units. We investigate gradients computation when using VMAF as an objective function and demonstrate that training using this function does not result in ill-behaving gradients. The implementation is then used to train a preprocessing filter. It is demonstrated that its performance is superior to the unsharp masking filter. The resulting filter is also easy for implementation and can be applied in video processing tasks for video compression improvement. This is confirmed by the results of numerical experiments.
Kirill Aistov, Maxim Koroteev
2023-10-24T07:42:04Z
http://arxiv.org/abs/2310.15578v4
# VMAF Re-implementation on PyTorch: Some Experimental Results ###### Abstract Based on the standard VMAF implementation we propose an implementation of VMAF using PyTorch framework. For this implementation comparisons with the standard (libvmaf) show the discrepancy \(\lesssim 10^{-2}\) in VMAF units. We investigate gradients computation when using VMAF as an objective function and demonstrate that training using this function does not result in ill-behaving gradients. The implementation is then used to train a preprocessing filter. It is demonstrated that its performance is superior to the unsharp masking filter. The resulting filter is also easy for implementation and can be applied in video processing tasks for video compression improvement. This is confirmed by the results of numerical experiments. VMAF, video quality metrics, PyTorch, optimal filter, preprocessing ## Introduction Video Multimethod Assessment Fusion (VMAF) developed by Netflix [1] was released in 2016 and quickly gained popularity due to its high correlation with subjective quality metrics. It has become in recent years one of the main tools used for image/video quality assessment for compression tasks in both research and industry. In the same time it was shown that VMAF score can be significantly increased by certain preprocessing methods, e.g., sharpening or histogram equalization [2]; this led Netflix to release an alternative version of the metric referred to as VMAF NEG that is less susceptible to such preprocessing. The original VMAF algorithm was implemented in C [3] and no effort is known to us to re-implement it _fully, i.e., including all its sub-metrics_ using some ML framework. One of the reasons for that is the claimed non-differentiability of this metric. We propose an implementation of VMAF using PyTorch and analyze its differentiability with various methods. We also discuss potential problems related to the computation of this metric in the end of the paper. ## Construction of VMAF VMAF score is computed by calculating two elementary image metrics referred to as VIF and ADM (sometimes DLM) for each frame, and a so called "Motion" feature; the final score is produced via SVM regression that uses these features as an input. Here we provide brief descriptions for these features. ### _Vif_ VIF (visual information fidelity)[4] computes a ratio of two mutual information measures between images under the assumptions of the gaussian channel model for image distortion and HVS (human visual system). Roughly speaking the algorithm can be described as follows. For the reference image patches\(\{C_{i}\}_{i=1}^{N}\) and distorted image patches \(\{D_{i}\}_{i=1}^{N}\) one computes the ratio of two mutual informations which has the form \[VIF=\frac{\sum_{i=1}^{N}\log_{2}\left(1+\frac{g_{i}^{2}\cdot\sigma_{C_{i}}^{2} }{\sigma_{V_{i}}^{2}+\sigma_{N}^{2}}\right)}{\sum_{i=1}^{N}\log_{2}\left(1+ \frac{\sigma_{C_{i}}^{2}}{\sigma_{N}^{2}}\right)}, \tag{1}\] where parameters in (1) are those of the gaussian channel models (not written down here explicitely, see [4] for details) and \(g_{i}\) and \(\sigma_{V_{i}}^{2}\) are estimated as \[g_{i}=\frac{\sigma_{C_{i}D_{i}}}{\sigma_{C_{i}}^{2}},\] \[\sigma_{V_{i}}^{2}=\sigma_{D_{i}}^{2}-g_{i}\cdot\sigma_{C_{i}D_{i}},\] and \(\sigma_{N}^{2}\) is a variance of the gaussian noise incorporated into the HVS model. Note, that these estimates in principle have to be computed over the sample of images. 
Instead, the assumption is made that the estimates can be computed over the patches ([4], section IV; [5]) VIF is computed on four scales by downsampling the image; four values per frame are used as features for final score regression. The original version of VIF included the wavelet transform, but the same authors released another version of VIF in the pixel domain [6]. VMAF uses only the pixel domain version, so it is this version we implemented in our work1. Footnote 1: The PyTorch implementation of the wavelet domain version is also available and can be found at [https://github.com/chaoofeng/IQA-PyTorch/blob/main/pyiqa/archs/vif_arch.py](https://github.com/chaoofeng/IQA-PyTorch/blob/main/pyiqa/archs/vif_arch.py) ### _Adm_ ADM (Additive Detail Metric) [7] operates in the wavelet domain, the metric tries to decompose the target (distorted) image \(T\) into a restored imaged \(R\) using the original image \(O\) and an additive impairment image \(A\): \(T=R+A\) where \[R=\left\{\begin{array}{cc}\mathrm{clip}_{[0,1]}\left(\frac{T}{O}\right)O,& \text{if }\left|\Psi_{O}-\Psi_{T}\right|>1^{\circ}\\ T,&\text{if }\left|\Psi_{O}-\Psi_{T}\right|\leq 1^{\circ}.\end{array}\right.\] Here \(\Psi\) is arctan of the ratio between two coefficients co-located in the vertical subband and the horizontal subband of the same scale, the special case \(\left|\Psi_{O}-\Psi_{T}\right|<1^{\circ}\) is made to handle contrast enhancement. For more information refer to the original paper [7] or [8]. After decoupling the original image \(O\) goes through contrast sensitivity function (CSF) and the restored image \(R\) goes through a contrast sensitivity function and a contrast masking (CM) function. CSF is computed by multiplying wavelet coefficients of each subband with its corresponding CSF value. CM function is computed by convolving these coefficients with a specific kernel and a thresholding operation. The final score is computed using the formula: \[ADM=\frac{\sum_{\lambda=1}^{4}\sum_{\theta=2}^{4}\left(\sum_{i,j\in\text{ C }}\mathrm{CM}(\mathrm{CSF}(R(\lambda,\theta,i,j)))^{3}\right)^{\frac{1}{3}}}{ \sum_{\lambda=1}^{4}\sum_{\theta=2}^{4}\left(\sum_{i,j\in\text{ C }}\mathrm{CSF}(O(\lambda,\theta,i,j))^{3}\right)^{\frac{1}{3}}},\] where \(R(\lambda,\theta,i,j)\) and \(O(\lambda,\theta,i,j)\) are wavelet coefficients of the restored and original image at scale \(\lambda\), subband \(\theta\) (vertical \(\theta=2\), horizontal \(\theta=4\) and diagonal \(\theta=3\)) and spatial coefficients \(i,j\), \(C\) represents the central area of the image (coefficients at the outer edge are ignored). The default VMAF version uses a single value from ADM per frame for the final score regression. Alternatively, four values for four scales from ADM can be computed by omitting the first sum in the formula and used as individual features. ### _Motion_ The motion feature for frame \(i\) is computed using the formula \[\min(\mathrm{SAD}(f_{i},f_{i-1}),\mathrm{SAD}(f_{i},f_{i+1})),\] where \(f_{i}\) is frame \(i\) after smoothing using a \(5\times 5\) gaussian filter, and SAD is the sum of absolute differences between pixels. This is the only feature that contains temporal information. #### Regression The features described above can be computed for each frame of a video stream; all features use only the luma component of the frame. A score for each frame is produced using SVM regression (after feature normalizaton). 
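A minimal sketch of the Motion feature just defined is given below; the Gaussian parameters and the aggregation of absolute differences are assumptions, since libvmaf fixes its own filter coefficients and normalization.

```python
# Sketch of the Motion feature: luma frames are smoothed with a 5x5 Gaussian
# and the smaller SAD with the previous or next frame is taken.
import torch
import torch.nn.functional as F

def gaussian_kernel5(sigma: float = 1.0) -> torch.Tensor:
    x = torch.arange(5, dtype=torch.float32) - 2.0
    g = torch.exp(-x ** 2 / (2.0 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, 5, 5)

def motion_feature(prev_y, cur_y, next_y):
    # prev_y, cur_y, next_y: luma frames of shape (1, 1, H, W)
    k = gaussian_kernel5()
    blur = lambda f: F.conv2d(f, k, padding=2)
    sad_prev = (blur(cur_y) - blur(prev_y)).abs().sum()
    sad_next = (blur(cur_y) - blur(next_y)).abs().sum()
    return torch.minimum(sad_prev, sad_next)
```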
SVM uses an RBF kernel; given a feature vector \(x\), the score is computed with the following formula \[\sum_{i\in SV}\alpha_{i}K(x_{i},x)+b,K(u,v)=\exp(-\gamma||u-v||^{2}),\] where \(x_{i}\) are support vectors. The final score for the video is produced by taking the average of frame scores and clipping it to \([0,100]\) range. #### Vmaf NEG VMAF NEG version modifies the formulas used to calculate VIF and ADM elementary features by introducing parameters called enhancement gain limit (EGL) and modifying (essentially clipping) certain internal values based on these parameters: for VIF \[g_{i}=\min(g_{i},EGL_{VIF})\] for ADM \[R=\min\left(R\cdot EGL_{DLM},T\right),\text{ if }\left|\Psi_{O}-\Psi_{T}\right|<1 ^{\circ}\text{ and }R>0,\] \[R=\max\left(R\cdot EGL_{DLM},T\right),\text{ if }\left|\Psi_{O}-\Psi_{T}\right|<1 ^{\circ}\text{ and }R<0.\] For a more detailed description and reasoning behind this see [8] and [9]. ## Numerical experiments We implement both the base VMAF algorithm and NEG version in PyTorch framework. This is to our knowledge the first implementation to allow gradient based optimization. We closely follow the official Netflix implementation in C [3] in order to obtain output values as close as possible to it. The difference in scores measured over \(79\) video streams provided by Netflix public dataset[1] is \(\leq 0.01\pm 0.01\) VMAF units (using first \(50\) frames from each video); note, VMAF scales in the interval \(0-100\) and for typical natural images VMAF takes on values around \(80-95\) so the error is by order \(10^{-4}\) smaller than actual VMAF values measured for natural images. We also compare all elementary features for two implementations. It was found that the difference is from \(\approx 7\times 10^{-6}\) for ADM to \(\approx 2\times 10^{-4}\) for Motion on the same data. So it seems the latter metric is least precisely reproduced even though the numbers show that this precision is sufficient for the majority of applications. The small differences observed for sub-metrics probably occur because of discrepancies in image padding which are different in PyTorch and the official implementation in libvmaf; this issue will be investigated further. Some small difference is also likely due to the fact that default libvmaf version uses quantized integer values for performance reasons and our PyTorch version uses floating point values to allow differentiation. ## Gradient checking VIF, ADM and motion features along with the final score regression are mostly composed of simple tensor manipulations, convolution operations (for downsampling, wavelet transform and contrast masking), and elementary functions such as exponents and logarithms, which are differentiable. The problem to computing gradients may emerge from operations such as clipping and ReLU which produce gradients equal to zero in some part of their domain. We observe that gradients computed in the case of default VMAF version do not approach to machine precision zero, e.g., \(\sim 10^{-16}\). Another peculiarity is the fact that ADM as implemented in VMAF uses only central area of the image and ignores the outer edge, so the ADM gradients for outer edge pixels are zero. However this is compensated by VIF gradients. To ensure that gradients are computed correctly we perform a procedure known as gradient checking (see e.g., [10]). 
Given some function \(f(\theta)\) and a function \(g(\theta)\) that is supposed to compute \(\frac{\partial f}{\partial\theta}\) we can ensure that \(g(\theta)\) is correct by numerically verifying \[g(\theta)\approx\frac{f(\theta+\varepsilon)-f(\theta-\varepsilon)}{2\varepsilon}\] In the case of VMAF, gradient checking is complicated by the fact that the reference C implementation takes files in .yuv format as input, i.e., the input values can be only integer numbers in \([0,255]\). To perform gradient checking we compute the derivative of a very simple learnable image transform - a convolution with a single filter kernel. We perform this on a single frame. If \(R\) is a reference image, \(W=\{W_{ij}\}_{i,j=1}^{k}\) is the convolution kernel, and \(R*W\) is the output of the convolution, we compute \[\frac{\partial\operatorname{VMAF}(R,R*W)}{\partial W_{ij}}\] by the backpropagation algorithm using the PyTorch version. Let matrices \(W^{(km+)}\), \(W^{(km-)}\) be defined by \[W_{ij}^{(km\pm)}=W_{ij}\pm\varepsilon\delta_{ki}\delta_{mj},\] where \(\delta_{ki}\), \(\delta_{mj}\) are Kronecker deltas. Then we compute the central difference approximation of the derivative as \[\frac{\operatorname{VMAF}(R,R*W^{(km+)})-\operatorname{VMAF}(R,R*W^{(km-)})}{2\varepsilon}\] using the reference C version. We round the output of the convolutions \(R*W^{(km+)}\) and \(R*W^{(km-)}\) to the nearest integer before giving it as input to the VMAF C version. Initialization of the filter weights should be done carefully since we need all pixels of the resulting image to be in the \([0,255]\) range. We initialize each element with \(\frac{1}{k^{2}}\), where \(k\) is the size of the filter, to ensure that the average brightness does not change. The tests were performed with filters of sizes \(k=3\) and \(k=5\). It is clear that the finite difference approximation of the derivative becomes inexact when \(\varepsilon\) grows, so this parameter cannot be made too big. On the other hand, in the case of small \(\varepsilon\) the outputs of the perturbed convolutions \(R*W^{(km+)}\) and \(R*W^{(km-)}\) may differ by a magnitude smaller than one pixel, and if the differences are \(<0.5\), then rounding will remove the impact of the perturbation. The output of the perturbed convolution should of course also be in \([0,255]\). Taking all this into account we set \(\varepsilon=10^{-2}\). We find that in the settings described above the derivatives are close (taking into account that rounding introduces additional error): for the central coefficient of the \(3\times 3\) filter the derivative computed numerically using the C implementation is \(223.8\) and the derivative computed by means of backpropagation using PyTorch is \(223.4\). We compared derivatives for all elements of the kernel and found that the average difference is \(0.41\pm 0.35\) for \(k=3\) and \(0.57\pm 0.45\) for \(k=5\), while the gradients themselves have magnitudes of \(150-250\).

## Training with VMAF as an objective function

To assess the applicability of VMAF as a loss function we perform a simple optimization procedure: inspired by the unsharp masking filter, we attempt to train a single convolutional filter. The unsharp masking filter is a widespread image high-pass filter [1] that is used to increase the sharpness of an image; it is known to increase the VMAF score [2]. 
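As an aside, the gradient check from the previous section can be sketched as follows; `vmaf_torch` stands for the differentiable PyTorch VMAF of this work (an assumed name and interface), and, unlike in the paper, both sides of the comparison are computed with the PyTorch version to keep the example self-contained. The padding choice is also illustrative.

```python
# Sketch of the gradient check: backprop gradient of VMAF w.r.t. one kernel
# weight compared with a central finite difference.
import torch
import torch.nn.functional as F

def grad_check(vmaf_torch, ref, k=3, eps=1e-2, i=1, j=1):
    # Initialize every element with 1/k^2 so that the average brightness is kept.
    W = torch.full((1, 1, k, k), 1.0 / k ** 2, requires_grad=True)

    # Analytic derivative via backpropagation.
    score = vmaf_torch(ref, F.conv2d(ref, W, padding=k // 2))
    score.backward()
    g_backprop = W.grad[0, 0, i, j].item()

    # Central difference with the output rounded to integers, mimicking the
    # integer .yuv input of the reference C implementation.
    def perturbed_score(sign):
        Wp = W.detach().clone()
        Wp[0, 0, i, j] += sign * eps
        out = F.conv2d(ref, Wp, padding=k // 2).round().clamp(0, 255)
        return vmaf_torch(ref, out).item()

    g_numeric = (perturbed_score(+1.0) - perturbed_score(-1.0)) / (2.0 * eps)
    return g_backprop, g_numeric
```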
## Training with VMAF as an objective function

To assess the applicability of VMAF as a loss function we perform a simple optimization procedure: inspired by the unsharp masking filter, we attempt to train a single convolutional filter. The unsharp masking filter is a widespread image high-pass filter [1] that is used to increase the sharpness of an image; it is known to increase the VMAF score [2]. The unsharp masking filter can be expressed as \[U=I+\alpha(I-G),\] where \(I\) is the identity filter (a matrix with \(1\) at the center and \(0\) everywhere else), \(G\) is a Gaussian filter, and \(\alpha\) is a parameter acting as an amplification/attenuation coefficient. Unsharp masking can also be viewed as a single convolution of small size applied to the luma component of the image. We train a convolutional filter of size \(7\times 7\) on luma data, in the same way as the unsharp masking filter is usually applied. Given a batch of images \(\{R_{i}\}_{i=1}^{n}\), we optimize \[L(W)=\sum_{i=1}^{n}\operatorname{VMAF}(\operatorname{R_{i}},\operatorname{R_{i}}*\operatorname{W})\] with respect to the filter coefficients \(w_{ij}\) using stochastic gradient descent with learning rate \(1\times 10^{-5}\). The weights are initialized with the identity filter weights. An additional restriction \[\sum_{ij}w_{ij}=1\] is applied to preserve the average brightness of the image; this condition is also satisfied by the unsharp masking filter \(U\). To enforce it, we normalize the kernel by dividing its elements by their sum at each training step; this can be thought of as a form of projected gradient descent, and the details of this procedure will be described elsewhere. We disable the clipping of VMAF to the \([0,100]\) range, since we already start with VMAF scores close to \(100\) and the clipping operation zeroes the gradients. We perform early stopping, since during training the magnitude of VMAF grows to infinity, which can be explained by the fact that the VMAF score is obtained by SVM regression. This situation, however, can presumably be improved.

The resulting filter \(W^{*}\) is circularly symmetric up to a certain precision, although no symmetry restriction was imposed; the result of applying it to an image is shown in Fig. 1. We observe that the resulting filter also visually sharpens the image, even though the visual difference between the two processed images (b and c) is hardly noticeable.

Figure 1: Visual comparison of the unsharp masking filter (1b) with the optimal filter (1c) constructed as described in the main text, and the reference image (1a). The image is a frame extracted from the publicly available Netflix dataset [1].

To assess the performance of our filter with respect to unsharp masking it is not enough to look at the VMAF value alone, because increasing the amplification coefficient \(\alpha\) in unsharp masking raises VMAF and lowers PSNR. For convenience of presentation we write our filter in a form similar to the unsharp masking filter, \(W^{*}=I+\hat{W}\), and introduce an \(\alpha\) parameter, \(W_{\alpha}^{*}:=I+\alpha\hat{W}\). An increase in \(\alpha\) leads to an increase in VMAF and a decrease in PSNR, analogous to the unsharp masking filter. The comparison of our optimal learnt filter with the unsharp masking filter for various amplification magnitudes \(\alpha\) is provided in Fig. 2. It is clearly seen that over a wide range of PSNR values the optimal filter yields better image quality in terms of VMAF. These results were confirmed by using an HEVC video codec2 to compress the streams processed with the various filters; the resulting RD curves are shown in Fig. 3. They show that the filter obtained by the SGD method, using our implementation of VMAF as a cost function, provides better performance than unsharp masking over a range of bitrates.
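A minimal sketch of the training loop described above is given below, assuming a differentiable scorer `vmaf_torch` (our PyTorch implementation with the final clipping disabled); the learning rate, identity initialization and per-step renormalization follow the text, while early stopping is replaced by a fixed step count for brevity.

```python
import torch
import torch.nn.functional as F

def train_filter(vmaf_torch, frames, k=7, lr=1e-5, steps=500):
    # frames: list of luma tensors of shape (1, 1, H, W) with values in [0, 255].
    W = torch.zeros(k, k)
    W[k // 2, k // 2] = 1.0                 # identity-filter initialization
    W.requires_grad_(True)
    opt = torch.optim.SGD([W], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for R in frames:
            out = F.conv2d(R, W.view(1, 1, k, k), padding=k // 2)
            loss = loss - vmaf_torch(R, out)   # maximize VMAF
        loss.backward()
        opt.step()
        with torch.no_grad():
            W /= W.sum()                     # project back onto sum(w_ij) = 1
    return W.detach()
```

The renormalization after each step keeps the brightness constraint satisfied exactly, which is the projected-gradient-descent interpretation mentioned above.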
Footnote 2: We used the proprietary Huawei hw265 video codec for the tests. A more extended study and results with open-source video codecs are in preparation and will be published elsewhere.

## Discussion

The proposed implementation raises some questions. In the literature one can find claims that VMAF "is not differentiable" (see, e.g., [11] and [12]). Surprisingly, we were not able to find precise clarifications of these claims or the reasoning behind them. On the one hand, from the optimization point of view, these claims may merely state that the existing C implementation does not provide functions that compute exact gradients, which is correct. On the other hand, these claims may imply that VMAF as a function is not differentiable in the mathematical sense; the reason for that is probably _not_ the fact that the default VMAF C implementation uses integer values, because these values are just quantized versions of floating-point values. One can note that the definition of the VIF metric contains mutual information and, generally speaking, it is impossible to talk about the derivative with respect to a random variable, even though such a definition can be constructed consistently. Probably, in this sense one can say that the mutual information and, consequently, VIF and VMAF are non-differentiable. However, for _ad hoc_ purposes this is not necessary: we can alter the definition of the derivative so as to obtain a consistently working algorithm. First of all, the VIF model assumes a Gaussian channel model for the HVS (human visual system) as well as for the image distortion. This model introduces a set of parameters, e.g., the parameters \(g_{i}\) in (1), and it seems easy to differentiate (1) with respect to these parameters. Moreover, these parameters may depend on other, hidden parameters, e.g., some filter coefficients \(\mathbf{w}\), as we demonstrated in the previous section; so in fact we have a composition of functions containing \(g_{i}(\mathbf{w})\) and are able to differentiate VIF both with respect to \(g_{i}\) and with respect to \(\mathbf{w}\). It may be worth noting here that some works also try to establish the computability of the gradient of the mutual information with respect to parameters, presumably in a much more rigorous way [13]. Secondly, the implementation of a cost function in PyTorch, together with the sufficiently good behavior of its gradients in the experiments described above, may imply its differentiability in _this restricted sense_. Thus, if we accept the two conditions above, we can roughly say that VMAF can be considered differentiable and, more importantly, used in gradient descent tasks. This was again confirmed numerically in the computational experiments described above.

For the purposes of gradient-descent-related algorithms, there have been attempts to train a convolutional neural network to predict the VMAF score for images [14] and video [11]. In [14] the network is used to optimize a neural net for image compression. The disadvantage of this approach is that the net is not guaranteed to produce output close to VMAF on input that differs from the training data. Indeed, the authors of [14] have to continually re-train the net together with the compression net. In [2] the authors also applied stochastic gradient descent to find a single convolution filter that maximizes the VMAF value. The computation of the gradient was carried out approximately, using a finite-difference approach to estimate the derivative.
This approach is computationally inefficient and produces only approximate values of the gradients, which makes it inapplicable to tasks where the number of parameters is significantly higher than in a single convolution filter. These approaches also do not allow one to study the properties of VMAF itself, either because the net only approximates the output and does not capture the specifics of the VMAF algorithm, or simply because of the lack of an appropriate tool. On the other hand, our implementation enabled us to employ VMAF as a cost function for various optimization tasks related to compression; these results will be published elsewhere.

Our implementation of VMAF reproduces the values obtained with the standard implementation with high precision, the difference being \(\lesssim 10^{-2}\). We believe that this implementation can be beneficial to the image/video quality and compression communities due to its possible use for training neural networks for tasks such as compression, image enhancement and others3. The validity of this implementation is confirmed by the results of the learning procedure, the comparisons with the standard unsharp masking filter, and its application in the video codec.

Footnote 3: We plan to release this code as open-source software, which cannot be done immediately because of internal security procedures.

Fig. 2: VMAF vs PSNR trade-off for the optimal filter. The computations were done on frames from the Netflix public dataset after applying our filter and the unsharp masking filter with various values of the \(\alpha\) parameter (shown next to the points). Note that for \(\alpha\to 0\) the VMAF score converges to \(\sim 97.4\) instead of \(100\); this occurs when the Motion feature of VMAF is equal to zero. Both filters have size \(7\times 7\).

Fig. 3: VMAF RD curves obtained using a synthetic stream representing a video game. The measurement was done at four target bitrates: \(4000\), \(6000\), \(8000\) and \(9500\) kbps. For the unsharp masking filter \(\alpha=0.5\); for the optimal filter \(\alpha=0.25\).

## Acknowledgment

The authors are grateful to their colleagues in the Media Technology Lab, Alexey Leonenko, Vladimir Korviakov, and Denis Parkhomenko, for helpful discussions.
2302.02215
On 2-strong connectivity orientations of mixed graphs and related problems
A mixed graph $G$ is a graph that consists of both undirected and directed edges. An orientation of $G$ is formed by orienting all the undirected edges of $G$, i.e., converting each undirected edge $\{u,v\}$ into a directed edge that is either $(u,v)$ or $(v,u)$. The problem of finding an orientation of a mixed graph that makes it strongly connected is well understood and can be solved in linear time. Here we introduce the following orientation problem in mixed graphs. Given a mixed graph $G$, we wish to compute its maximal sets of vertices $C_1,C_2,\ldots,C_k$ with the property that by removing any edge $e$ from $G$ (directed or undirected), there is an orientation $R_i$ of $G\setminus{e}$ such that all vertices in $C_i$ are strongly connected in $R_i$. We discuss properties of those sets, and we show how to solve this problem in linear time by reducing it to the computation of the $2$-edge twinless strongly connected components of a directed graph. A directed graph $G=(V,E)$ is twinless strongly connected if it contains a strongly connected spanning subgraph without any pair of antiparallel (or twin) edges. The twinless strongly connected components (TSCCs) of a directed graph $G$ are its maximal twinless strongly connected subgraphs. A $2$-edge twinless strongly connected component (2eTSCC) of $G$ is a maximal subset of vertices $C$ such that any two vertices $u, v \in C$ are in the same twinless strongly connected component of $G \setminus e$, for any edge $e$. These concepts are motivated by several diverse applications, such as the design of road and telecommunication networks, and the structural stability of buildings.
Loukas Georgiadis, Dionysios Kefallinos, Evangelos Kosinas
2023-02-04T18:14:07Z
http://arxiv.org/abs/2302.02215v3
# On 2-strong connectivity orientations of mixed graphs and related problems+ ###### Abstract A mixed graph \(G\) is a graph that consists of both undirected and directed edges. An orientation of \(G\) is formed by orienting all the undirected edges of \(G\), i.e., converting each undirected edge \(\{u,v\}\) into a directed edge that is either \((u,v)\) or \((v,u)\). The problem of finding an orientation of a mixed graph that makes it strongly connected is well understood and can be solved in linear time. Here we introduce the following orientation problem in mixed graphs. Given a mixed graph \(G\), we wish to compute its maximal sets of vertices \(C_{1},C_{2},\ldots,C_{k}\) with the property that by removing any edge \(e\) from \(G\) (directed or undirected), there is an orientation \(R_{i}\) of \(G\setminus e\) such that all vertices in \(C_{i}\) are strongly connected in \(R_{i}\). We discuss properties of those sets, and show how to solve this problem in linear time by reducing it to the computation of the 2-edge twinless strongly connected components of a directed graph. A directed graph \(G=(V,E)\) is twinless strongly connected if it contains a strongly connected spanning subgraph without any pair of antiparallel (or _twin_) edges. The twinless strongly connected components (TSCCs) of a directed graph \(G\) are its maximal twinless strongly connected subgraphs. A \(2\)_-edge twinless strongly connected component (2eTSCC) of \(G\)_ is a maximal subset of vertices \(C\) such that any two vertices \(u,v\in C\) are in the same twinless strongly connected component of \(G\setminus e\), for any edge \(e\). These concepts are motivated by several diverse applications, such as the design of road and telecommunication networks, and the structural stability of buildings. Our algorithm is based on two notions: (i) a collection of auxiliary graphs \(\mathcal{H}\) that preserve the 2eTSCCs of \(G\) and, for any \(H\in\mathcal{H}\), the strongly connected components of \(H\) after the deletion of any edge have a very simple structure, and (ii) a reduction to the problem of computing the connected components of an undirected graph after the deletion of certain vertex-edge cuts. ## 1 Introduction In this paper, we investigate some connectivity problems in mixed graphs and in directed graphs (digraphs). A mixed graph \(G\) contains both undirected edges and directed edges. We denote an edge with endpoints \(u\) and \(v\) by \(\{u,v\}\) if it is undirected, and by \((u,v)\) if it is directed from \(u\) to \(v\). An _orientation_\(R\) of \(G\) is formed by orienting all the undirected edges of \(G\), i.e., converting each undirected edge \(\{u,v\}\) into a directed edge that is either \((u,v)\) or \((v,u)\). Several (undirected or mixed) graph orientation problems have been studied in the literature, depending on the properties that we wish an orientation \(R\) of \(G\) to have. See, e.g., [1, 13, 14]. An orientation \(R\) of \(G\) such that \(R\) is strongly connected is called a _strong orientation of \(G\)_. More generally, an orientation \(R\) of \(G\) such that \(R\) is \(k\)-edge strongly connected is called a _\(k\)-edge strong orientation of \(G\)_. Motivated by recent work in 2-edge strong connectivity in digraphs [1, 1, 10], we introduce the following strong connectivity orientation problem in mixed graphs. 
Given a mixed graph \(G\), we wish to compute its maximal sets of vertices \(C_{1},C_{2},\ldots,C_{k}\) with the property that for every \(i\in\{1,\ldots,k\}\), and every edge \(e\) of \(G\) (directed or undirected), there is an orientation \(R\) of \(G\setminus e\) such that all vertices of \(C_{i}\) are strongly connected in \(R\). We refer to these maximal vertex sets as the _edge-resilient strongly orientable blocks_ of \(G\). See Figure 1.

Figure 1: Examples illustrating the notion of edge-resilient strongly orientable blocks of a mixed graph \(G\); undirected edges are shown in blue color. (a)-(b) Vertices \(u\) and \(v\) are not in the same edge-resilient strongly orientable block of \(G\). After the deletion of edge \(e\) (which is directed in (a) and undirected in (b)), there is no orientation of \(G\setminus e\) such that \(u\) and \(v\) are strongly connected. (c) Here \(u\) and \(v\) are in the same edge-resilient strongly orientable block of \(G\), since for any edge \(e\), there is an orientation of \(G\setminus e\) such that \(u\) and \(v\) are strongly connected.

Note that when \(G\) contains only directed edges, then this definition coincides with the usual notion of 2-edge strong connectivity, i.e., each \(C_{i}\) is a 2-edge strongly connected component of \(G\). We show how to solve this problem in linear time, by providing a linear-time algorithm for computing the 2-edge twinless strongly connected components [1], that we define next. Moreover, as a consequence of our algorithm, it follows that \(\{C_{1},\ldots,C_{k}\}\) is a partition of \(V\).

We recall some concepts in directed graphs. A digraph \(G=(V,E)\) is _strongly connected_ if there is a directed path from each vertex to every other vertex. The _strongly connected components_ (SCCs) of \(G\) are its maximal strongly connected subgraphs. Two vertices \(u,v\in V\) are _strongly connected_ if they belong to the same strongly connected component of \(G\). We refer to a pair of antiparallel edges, \((x,y)\) and \((y,x)\), of \(G\) as _twin edges_. A digraph \(G=(V,E)\) is _twinless strongly connected_ if it contains a strongly connected spanning subgraph \((V,E^{\prime})\) without any pair of twin edges. The _twinless strongly connected components_ (TSCCs) of \(G\) are its maximal twinless strongly connected subgraphs. Two vertices \(u,v\in V\) are _twinless strongly connected_ if they belong to the same twinless strongly connected component of \(G\). Equivalently, \(u\) and \(v\) are twinless strongly connected if \(G\) contains a path from \(u\) to \(v\) and a path from \(v\) to \(u\) so that the union of these two paths does not contain any pair of twin edges. Raghavan [15] provided a characterization of twinless strongly connected digraphs, and, based on this characterization, presented a linear-time algorithm for computing the TSCCs of a digraph. An edge (resp., a vertex) of a digraph \(G\) is a _strong bridge_ (resp., a _strong articulation point_) if its removal increases the number of strongly connected components. A strongly connected digraph \(G\) is _\(2\)-edge strongly connected_ if it has no strong bridges, and it is _\(2\)-vertex strongly connected_ if it has at least three vertices and no strong articulation points.
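To make the strong-bridge definition above concrete, here is a small brute-force sketch (quadratic time, for illustration only; it is not the linear-time detection algorithm cited below). It assumes the `networkx` package and a simple directed graph.

```python
import networkx as nx

def strong_bridges(G: nx.DiGraph):
    # An edge is a strong bridge if deleting it increases the number of SCCs.
    base = nx.number_strongly_connected_components(G)
    bridges = []
    for e in list(G.edges()):
        H = G.copy()
        H.remove_edge(*e)
        if nx.number_strongly_connected_components(H) > base:
            bridges.append(e)
    return bridges
```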
Two vertices \(u,v\in V\) are said to be _\(2\)-edge strongly connected_ (resp., _\(2\)-vertex strongly connected_) if there are two edge-disjoint (resp., two internally vertex-disjoint) directed paths from \(u\) to \(v\) and two edge-disjoint (resp., two internally vertex-disjoint) directed paths from \(v\) to \(u\) (note that a path from \(u\) to \(v\) and a path from \(v\) to \(u\) need not be edge- or vertex-disjoint). Equivalently, by Menger's theorem [14] we have that \(u\) and \(v\) are \(2\)-edge strongly connected if they remain in the same SCC after the deletion of any edge. A _\(2\)-edge strongly connected component_ (resp., _\(2\)-vertex strongly connected component_) of a digraph \(G=(V,E)\) is defined as a maximal subset \(C\subseteq V\) such that every two vertices \(u,v\in C\) are \(2\)-edge strongly connected (resp., \(2\)-vertex strongly connected). Also, note that the subgraph induced by \(C\) is not necessarily \(2\)-edge strongly connected (resp., \(2\)-vertex strongly connected). The above notions extend naturally to the case of twinless strong connectivity. An edge \(e\in E\) is a _twinless strong bridge_ of \(G\) if the deletion of \(e\) increases the number of TSCCs of \(G\). Similarly, a vertex \(v\in V\) is a _twinless strong articulation point_ of \(G\) if the deletion of \(v\) increases the number of TSCCs of \(G\). A linear-time algorithm for detecting all twinless strong bridges can be derived by combining the linear-time algorithm of Italiano et al. [10] for computing all the strong bridges of a digraph, and a linear-time algorithm for computing all the edges which belong to a cut-pair in a \(2\)-edge-connected undirected graph [11]. Georgiadis and Kosinas [11] showed that the computation of twinless strong articulation points reduces to the following problem in undirected graphs, which is also of independent interest: Given a \(2\)-vertex-connected (biconnected) undirected graph \(H\), find all vertices \(v\) that belong to a vertex-edge cut-pair, i.e., for which there exists an edge \(e\) such that \(H\setminus\{v,e\}\) is not connected. Then, [11] presented a linear-time algorithm that not only finds all such vertices \(v\), but also computes the number of vertex-edge cut-pairs of \(v\) (i.e., the number of edges \(e\) such that \(H\setminus\{v,e\}\) is not connected). Alternatively, it is possible to compute the vertices that form a vertex-edge cut-pair by exploiting the structure of the triconnected components of \(H\), represented by an SPQR tree [12, 13] of \(H\). A _\(2\)-edge twinless strongly connected component (2eTSCC) of \(G\)_ is a maximal subset of vertices \(C\) such that any two vertices \(u,v\in C\) are in the same TSCC of \(G\setminus e\), for any edge \(e\). Two vertices \(u\) and \(v\) are _\(2\)-edge twinless strongly connected_ if they belong to the same \(2\)eTSCC. See Figure 2. Jaberi [15] studied some properties of the \(2\)-edge twinless strongly connected components, and presented an \(O(mn)\)-time algorithm for a digraph with \(m\) edges and \(n\) vertices. We provide a linear-time algorithm that is based on two notions: (i) a collection of auxiliary graphs \(\mathcal{H}\) that preserve the \(2\)-edge twinless strongly connected components of \(G\) and, for any \(H\in\mathcal{H}\), the SCCs of \(H\) after the deletion of any edge have a very simple structure, and (ii) a reduction to the problem of computing the connected components of an undirected graph after the deletion of certain vertex-edge cuts. 
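The vertex-edge cut-pair subproblem mentioned in item (ii) can likewise be stated as a short brute-force check; the sketch below (again using `networkx`, cubic time at worst) only illustrates the definition, whereas the paper relies on the linear-time algorithm of [11] or on the SPQR-tree machinery of Section 6.

```python
import networkx as nx

def vertex_edge_cut_pairs(H: nx.Graph):
    # All pairs (v, e) such that deleting vertex v and edge e disconnects H.
    pairs = []
    for v in H.nodes():
        for e in H.edges():
            if v in e:          # edges incident to v vanish together with v
                continue
            K = H.copy()
            K.remove_node(v)
            K.remove_edge(*e)
            if not nx.is_connected(K):
                pairs.append((v, e))
    return pairs
```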
The notions of twinless strong connectivity and mixed graph orientations are indeed related, and are motivated by several diverse applications, such as the design of road and telecommunication networks, the structural stability of buildings [1, 1, 1, 13, 14], and the analysis of biological networks [1]. Given a mixed graph \(G\), it is natural to ask whether it has an orientation so that the resulting directed graph is strongly connected [1, 12] or, more generally, \(k\)-edge strongly connected [10, 13]. Raghavan [14] noted that testing whether a digraph \(G\) is twinless strongly connected is equivalent to testing whether a mixed graph has a strong orientation. We can also observe that the computation of the twinless strongly connected components is equivalent to the following generalization of strong orientations of a mixed graph. Given a mixed graph \(G\), we wish to compute its maximal sets of vertices \(C_{1},C_{2},\ldots,C_{k}\) with the property that for every \(i\in\{1,\ldots,k\}\) there is an orientation \(R\) of \(G\) such that all vertices of \(C_{i}\) are strongly connected in \(R\). Similarly, we show that the computation of the edge-resilient strongly orientable blocks reduces to the computation of the \(2\)-edge twinless strongly connected components. The computation of edge-resilient strongly orientable blocks is related to \(2\)-edge strong orientations of mixed graphs in the following sense. A mixed graph \(G\) has a \(2\)-edge strong orientation only if it consists of a single edge-resilient strongly orientable block. While finding a strong orientation of a mixed graph is well understood and can be solved in linear time [1, 1], computing a \(k\)-edge strong orientation for \(k>1\) seems much harder. Frank [11] gave a polynomial-time algorithm for this problem based on the concept of submodular flows. Faster algorithms were later presented by Gabow [1], and by Iwata and Kobayashi [13]. More efficient algorithms exist for computing a \(k\)-edge strong orientation of an undirected graph [1, 1, 2]. In particular, the algorithm of Bhalgat and Hariharan [1] runs in \(\tilde{O}(nk^{4}+m)\) time, for an undirected graph with \(n\) vertices and \(m\) edges. We refer to Section 7 for a discussion of related concepts and open problems. ## 2 Preliminaries Let \(G\) be a (directed or undirected) graph. In general, we allow \(G\) to have multiple edges, unless otherwise specified. We denote by \(V(G)\) and \(E(G)\), respectively, the vertex set and the edge set of \(G\). For a set of edges (resp., vertices) \(S\), we let \(G\setminus S\) denote the graph that results from \(G\) after deleting the edges in \(S\) (resp., the vertices in \(S\) and their incident edges). We extend this notation for mixed sets \(S\), that may contain both vertices and edges of \(G\), in the obvious way. Also, if \(S\) has only one element \(x\), we abbreviate \(G\setminus S\) by \(G\setminus x\). Let \(C\subseteq V(G)\). The induced subgraph of \(C\), denoted by \(G[C]\), is the subgraph of \(G\) with vertex set \(C\) and edge set \(\{e\in E\mid\text{both endpoints of $e$ are in $C$}\}\). For any two vertices \(x\) and \(y\) of a directed graph \(G\), the notation \(x\stackrel{{ G}}{{\leftrightarrow}}y\) means that \(x\) and \(y\) are strongly connected in \(G\), and the notation \(x\stackrel{{ G}}{{\leftrightarrow}}_{t}y\) means that \(x\) and \(y\) are twinless strongly connected in \(G\). 
We omit the reference graph \(G\) from the \(\stackrel{{ G}}{{\leftrightarrow}}\) notation when it is clear from the context. Thus we may simply write \(x\leftrightarrow y\) and \(x\leftrightarrow_{t}y\). Similarly, we let \(x\stackrel{{ G}}{{\leftrightarrow}}_{2e}y\) and \(x\stackrel{{ G}}{{\leftrightarrow}}_{2et}y\) denote, respectively, that the vertices \(x\) and \(y\) are \(2\)-edge strongly connected and \(2\)-edge twinless strongly connected in \(G\). Let \(G=(V,E)\) be a strongly connected digraph. The _reverse digraph_ of \(G\), denoted by \(G^{R}=(V,E^{R})\), is the digraph that results from \(G\) by reversing the direction of all edges. In a digraph \(G\), we say that a vertex \(x\)_reaches_\(y\) if there is a path in \(G\) from \(x\) to \(y\). We say that an edge \(e\) of a strongly connected digraph \(G\)_separates_ two vertices \(x\) and \(y\) if \(x\) and \(y\) belong to different strongly connected components of \(G\setminus e\). Figure 2: Vertices \(u\) and \(v\) of the strongly connected digraph \(G\) are \(2\)-edge strongly connected but not \(2\)-edge twinless strongly connected. The deletion of the edge \(e\) leaves \(u\) and \(v\) in a strongly connected subgraph that must contain both twin edges \((x,y)\) and \((y,x)\). For any digraph \(G\), the associated undirected graph \(G^{u}\) is the _simple_ undirected graph with vertices \(V(G^{u})=V(G)\) and edges \(E(G^{u})=\{\{u,v\}\mid(u,v)\in E(G)\vee(v,u)\in E(G)\}\). Let \(H\) be an undirected graph. An edge \(e\in E(H)\) is a _bridge_ if its removal increases the number of connected components of \(H\). A connected graph \(H\) is \(2\)-edge-connected if it contains no bridges. Raghavan [10] proved the following characterization of twinless strongly connected digraphs. **Theorem 2.1**.: _([10]) Let \(G\) be a strongly connected digraph. Then \(G\) is twinless strongly connected if and only if its underlying undirected graph \(G^{u}\) is \(2\)-edge-connected._ We introduce the concept of _marked vertex-edge blocks_ of an undirected graph, which will be needed in our algorithm for computing the \(2\)-edge twinless strongly connected components. (We note that Heinrich et al. [1] introduced the related concept of the \(2.5\)_-connected components_ of a biconnected graph.) Let \(G\) be an undirected graph where some vertices of \(G\) are marked. Let \(V^{\prime}\) be the set of the marked vertices of \(G\). Then, a marked vertex-edge block of \(G\) is a maximal subset \(B\) of \(V(G)\setminus V^{\prime}\) with the property that all vertices of \(B\) remain connected in \(G\setminus\{v,e\}\), for every marked vertex \(v\) and any edge \(e\). In Section 6 we provide a linear-time algorithm for computing the marked-vertex edge blocks of a biconnected undirected graph \(G\), by exploiting properties of the SPQR-tree of the triconnected components of \(G\)[1, 1]. ## 3 Reduction to the computation of the \(2\)-edge twinless strongly connected components Let \(G\) be a mixed graph. By _splitting_ a directed edge \((x,y)\) of a graph \(G\), we mean that we remove \((x,y)\) from \(G\), and we introduce a new auxiliary vertex \(z\) and two edges \((x,z),(z,y)\). By _replacing with a gadget_ an undirected edge \(\{x,y\}\) of a graph \(G\), we mean that we remove \(\{x,y\}\) from \(G\), and we introduce three new auxiliary vertices \(z,u,v\) and the edges \((x,z),(z,x),(z,u),(u,v),(v,y),(y,u),(v,z)\). See Figure 3. (This definition is not symmetric for \(x,y\), but we assume an arbitrary order of them.) 
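The two edge transformations just defined can be written down directly. The following sketch (using `networkx`, with tuple-named auxiliary vertices as an arbitrary convention of ours) builds the digraph whose 2-edge twinless strongly connected components are then computed (Algorithm 2 below); it is only an illustration of the construction, not the paper's implementation, and assumes the mixed graph is given as two simple edge lists.

```python
import networkx as nx
from itertools import count

_fresh = count()  # supplies names for new auxiliary vertices

def split_directed_edge(D: nx.DiGraph, x, y):
    # Splitting: replace (x, y) by (x, z), (z, y) with a fresh vertex z.
    z = ("aux", next(_fresh))
    D.remove_edge(x, y)
    D.add_edges_from([(x, z), (z, y)])
    return z

def add_gadget(D: nx.DiGraph, x, y):
    # Gadget for an undirected edge {x, y}: fresh vertices z, u, v and the
    # edges (x,z), (z,x), (z,u), (u,v), (v,y), (y,u), (v,z).
    z, u, v = (("aux", next(_fresh)) for _ in range(3))
    D.add_edges_from([(x, z), (z, x), (z, u), (u, v), (v, y), (y, u), (v, z)])
    return (u, v)  # removing this edge simulates removing {x, y}

def transform(directed_edges, undirected_edges):
    # Mixed graph given as two edge lists; returns the transformed digraph
    # together with the gadget edges introduced for the undirected edges.
    D = nx.DiGraph()
    D.add_edges_from(directed_edges)
    for x, y in list(directed_edges):
        split_directed_edge(D, x, y)
    gadget_edges = [add_gadget(D, x, y) for x, y in undirected_edges]
    return D, gadget_edges
```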
We refer to a non-auxiliary vertex as an _ordinary vertex_. Also, we call \((u,v)\) the _critical edge_ of the gadget. The idea in using this gadget is twofold. First, by removing the critical edge, we simulate the operation of removing \(\{x,y\}\) from the original graph. And secondly, if we remove an edge that does not belong to the gadget, then the only paths from \(x\) to \(y\) and from \(y\) to \(x\) inside the gadget must use the pair of twin edges \((x,z)\) and \((z,x)\). These properties are useful in order to establish a correspondence between orientations and twinless strong connectivity for our applications. Now we can reduce the computation of the edge-resilient strongly orientable blocks of a mixed graph to the computation of the \(2\)eTSCC of a digraph. As a warm-up, we show how to compute the strongly orientable blocks of a mixed graph via a reduction to the computation of the TSCCs of a digraph. This is achieved by Algorithm 1, whose correctness follows easily from known results. As a consequence of this method for computing \(C_{1},\ldots,C_{k}\), we can see that \(\{C_{1},\ldots,C_{k}\}\) is a partition of \(V\). Furthermore, \(C_{1},\ldots,C_{k}\) satisfy the stronger property, that there is an orientation \(R\) of Figure 3: Replacing an undirected edge \(\{x,y\}\) with a gadget. such that, for every \(i\in\{1,\ldots,k\}\), \(R[C_{i}]\) is strongly connected. ``` 1split every directed edge of \(G\) 2replace every undirected edge \(\{x,y\}\) of \(G\) with a pair of twin edges \((x,y),(y,x)\) 3compute the TSCCs \(T_{1},\ldots,T_{k}\) of the resulting graph return the sets of ordinary vertices of \(T_{1},\ldots,T_{k}\) ``` **Algorithm 1**A linear-time algorithm for computing the strongly orientable blocks of a mixed graph \(G\) **Proposition 3.1**.: _Algorithm 1 is correct._ Proof.: Let \(G\) be the input graph, let \(G^{\prime}\) be the graph derived from \(G\) after we have performed steps 1 and 2, and let \(C\) be one of the sets returned by the algorithm. First we will show that there is an orientation \(R\) of \(G\) such that all vertices of \(C\) are strongly connected in \(R\). Then we will show that \(C\cup\{s\}\) does not have this property for any \(s\in V(G)\setminus C\). Since \(C\) is a twinless strongly connected component of \(G^{\prime}\), by [10] we know that there is a subgraph \(G_{0}\) of \(G^{\prime}\) that contains no pair of twin edges and is such that \(C\) is strongly connected in \(G_{0}\). Let \(\{x,y\}\) be an undirected edge of \(G\), and let \((x,y),(y,x)\) be the pair of twin edges of \(G^{\prime}\) that replaced \(\{x,y\}\) in step 2. Assume w.l.o.g. that \(G_{0}\) contains \((x,y)\). Then we orient \(\{x,y\}\) in \(G\) as \((x,y)\). We do this for every undirected edge of \(G\), and let \(R\) be the resulting orientation. Now it is not difficult to see that all vertices of \(C\) are strongly connected in \(R\), and so \(C\) is contained within a strongly orientable block of \(G\). To establish the maximality of \(C\), let \(s\) be a vertex in \(V(G)\setminus C\). This means that \(s\) is not twinless strongly connected with the vertices of \(C\) in \(G^{\prime}\). Then we have that either (1) \(s\) and (the vertices of) \(C\) are not strongly connected, or (2) there is a pair of twin edges \((x,z),(z,x)\) of \(G^{\prime}\), such that for every path \(P\) from \(s\) to (a vertex of) \(C\) in \(G^{\prime}\) and every path \(Q\) from (a vertex of) \(C\) to \(s\) in \(G^{\prime}\), we have \((x,z)\in P\) and \((z,x)\in Q\). 
In case (1) we have that \(s\) is not strongly connected with \(C\) in \(G\), even if we allow every undirected edge to be replaced by a pair of twin edges. In case (2), let \(\{x,y\}\) be the undirected edge of \(G\) that was replaced in \(G^{\prime}\) with the pair of twin edges \((x,y),(y,x)\). Then we have that, even if we replace every undirected edge of \(G\) - except \(\{x,y\}\) - with a pair of twin edges, then we still have to replace \(\{x,y\}\) with the pair of twin edges \((x,y),(y,x)\) in order to have \(s\) strongly connected with \(C\). In any case, we have that there is no orientation \(R\) of \(G\) such that \(s\) is strongly connected with \(C\) in \(R\). Notice that this argument also shows that there is no orientation of \(G\) such that a vertex \(t\in C\) becomes strongly connected with a vertex \(s\in V(G)\setminus C\). Thus, the sets returned by Algorithm 1 are all the strong orientable blocks of \(G\). Now Algorithm 2 shows how we can compute all the edge-resilient strongly orientable blocks \(C_{1},\ldots,C_{k}\) of a mixed graph in linear time. As a consequence of this method for computing \(C_{1},\ldots,C_{k}\), we can see that the edge-resilient strongly orientable blocks partition the vertex set \(V\). ``` 1split every directed edge of \(G\) 2replace every undirected edge of \(G\) with a gadget 3compute the 2eTSCCs \(T_{1},\ldots,T_{k}\) of the resulting graph return the sets of ordinary vertices of \(T_{1},\ldots,T_{k}\) ``` **Algorithm 2**A linear-time algorithm for computing the edge-resilient strongly orientable blocks of a mixed graph \(G\) **Proposition 3.2**.: _Algorithm 2 is correct._ Proof.: Let \(G\) be the input graph, let \(G^{\prime}\) be the graph derived from \(G\) after we have performed steps 1 and 2, and let \(C\) be one of the sets returned by the algorithm. First we will show that, for every edge \(e\), there is an orientation \(R\) of \(G\setminus e\) such that all vertices of \(C\) are strongly connected in \(R\). Then we will show that \(C\cup\{s\}\) does not have this property for any \(s\in V(G)\setminus C\). Now let \(e\) be an edge of \(G\). Suppose first that \(e\) is a directed edge of the form \((x,y)\). Let \((x,z),(z,y)\) be the two edges of \(G^{\prime}\) into \(e\) was split. Then we have that all vertices of \(C\) are twinless strongly connected in \(G^{\prime}\setminus(x,z)\). By [10], this implies that there is a subgraph \(G_{0}\) of \(G^{\prime}\setminus(x,z)\) that contains no pair of twin edges and is such that all vertices of \(C\) are strongly connected in it. Let \(\{x^{\prime},y^{\prime}\}\) be an undirected edge of \(G\), and assume w.l.o.g. that \((x^{\prime},z^{\prime}),(z^{\prime},x^{\prime})\) is the pair of twin edges in the gadget of \(G^{\prime}\) that replaced \(\{x^{\prime},y^{\prime}\}\). Assume w.l.o.g. that \(G_{0}\) contains \((x^{\prime},z^{\prime})\). Then we orient \(\{x^{\prime},y^{\prime}\}\) in \(G\) as \((x^{\prime},y^{\prime})\). We do this for every undirected edge of \(G\), and let \(R\) be the resulting orientation of \(G\setminus(x,y)\). Then it is not difficult to see that all vertices of \(C\) are strongly connected in \(R\). Now suppose that \(e\) is an undirected edge of the form \(\{x,y\}\). Let \((u,v)\) be the critical edge of the gadget of \(G^{\prime}\) that replaced the undirected edge \(\{x,y\}\) of \(G\). Then we have that all vertices of \(C\) are twinless strongly connected in \(G^{\prime}\setminus(u,v)\). Now let \(G_{0}\) and \(R\) be defined similarly as above. 
Then it is not difficult to see that all vertices of \(C\) are strongly connected in \(R\). To establish the maximality of \(C\), let \(s\) be a vertex in \(V(G)\setminus C\). This means that \(s\) is not 2-edge twinless strongly connected with the vertices of \(C\) in \(G^{\prime}\), and so there is an edge \(e\) of \(G^{\prime}\) such that \(s\) is not in the same twinless strongly connected component of \(G^{\prime}\setminus e\) that contains \(C\). Assume first that \(e\) is either \((x,z)\) or \((z,y)\), where \((x,z),(z,y)\) is the pair of twin edges of \(G^{\prime}\) into which a directed edge \((x,y)\) of \(G\) was split. Then we have that either (1) \(s\) and (the vertices of) \(C\) are not strongly connected, or (2) there is a pair of twin edges \((x^{\prime},z^{\prime}),(z^{\prime},x^{\prime})\) of \(G^{\prime}\), such that for every path \(P\) from \(s\) to (a vertex of) \(C\) in \(G^{\prime}\setminus e\) and every path \(Q\) from (a vertex of) \(C\) to \(s\) in \(G^{\prime}\setminus e\), we have \((x^{\prime},z^{\prime})\in P\) and \((z^{\prime},x^{\prime})\in Q\). In case (1) we have that \(s\) is not strongly connected with \(C\) in \(G\setminus(x,y)\), even if we allow every undirected edge to be replaced by a pair of twin edges. In case (2), let \(\{x^{\prime},y^{\prime}\}\) be the undirected edge of \(G\) that was replaced with the gadget of \(G^{\prime}\) that contains the pair of edges \((x^{\prime},z^{\prime}),(z^{\prime},x^{\prime})\). Then we have that, even if we replace every undirected edge of \(G\) - except \(\{x^{\prime},y^{\prime}\}\) - with a pair of twin edges, then we still have to replace \(\{x^{\prime},y^{\prime}\}\) with the pair of twin edges \((x^{\prime},y^{\prime}),(y^{\prime},x^{\prime})\) in order to have \(s\) strongly connected with \(C\) in \(G\setminus(x,y)\). In any case, we have that there is no orientation \(R\) of \(G\setminus(x,y)\) such that \(s\) is strongly connected with \(C\) in \(R\). Now, if \(e\) is an edge of a gadget that replaced an undirected edge \(\{x,y\}\) of \(G\), then with a similar argument we can show that there is no orientation \(R\) of \(G\setminus\{x,y\}\) such that \(s\) is strongly connected with \(C\) in \(R\). Notice that this argument also shows that if \(t\) is a vertex in \(C\) and \(s\) is a vertex in \(V(G)\setminus C\), then there is an edge \(e\) (directed or undirected) such that there is no orientation of \(G\setminus e\) that makes \(s\) and \(t\) strongly connected. Thus, the sets returned by Algorithm 2 are all the edge-resilient strongly orientable blocks of \(G\). Our goal in the following sections is to provide a linear-time algorithm for computing the 2-edge twinless strongly connected components of a digraph. In order to keep the presentation of the general idea as clear as possible, we defer the proofs of correctness in Appendices A and B. ## 4 Connectivity-preserving auxiliary graphs In this section we describe how to construct a set of auxiliary graphs that preserve the 2-edge twinless strongly connected components of a twinless strongly connected digraph, and moreover have the property that their strongly connected components after the deletion of any edge have a very simple structure. We base our construction on the auxiliary graphs defined in [1] for computing the \(2\)-edge strongly connected components of a digraph, and perform additional operations in order to achieve the desired properties. 
We note that a similar construction was given in [1] to derive auxiliary graphs (referred to as \(2\)-connectivity-light graphs) that enable the fast computation of the \(3\)-edge strongly connected components of a digraph. Still, we cannot apply directly the construction of [1], since we also need to maintain twinless strong connectivity. ### Flow graphs and dominator trees A _flow graph_ is a directed graph with a distinguished _start vertex_\(s\) such that every vertex is reachable from \(s\). For a digraph \(G\), we use the notation \(G_{s}\) in order to emphasize the fact that we consider \(G\) as a flow graph with source \(s\). Let \(G=(V,E)\) be a strongly connected graph. We will let \(s\) be a fixed but arbitrary start vertex of \(G\). Since \(G\) is strongly connected, all vertices are reachable from \(s\) and reach \(s\), so we can refer to the flow graphs \(G_{s}\) and \(G_{s}^{R}\). Let \(G_{s}\) be a flow graph with start vertex \(s\). A vertex \(u\) is a _dominator_ of a vertex \(v\) (\(u\)_dominates_\(v\)) if every path from \(s\) to \(v\) in \(G_{s}\) contains \(u\); \(u\) is a _proper dominator_ of \(v\) if \(u\) dominates \(v\) and \(u\neq v\). The dominator relation is reflexive and transitive. Its transitive reduction is a rooted tree, the _dominator tree_\(D(G_{s})\): \(u\) dominates \(v\) if and only if \(u\) is an ancestor of \(v\) in \(D(G_{s})\). See Figure 4. For every vertex \(x\neq s\) of \(G_{s}\), \(d(x)\) is the immediate dominator of \(x\) in \(G_{s}\) (i.e., the parent of \(x\) in \(D(G_{s})\)). For every vertex \(r\) of \(G_{s}\), we let \(D(r)\) denote the subtree of \(D(G_{s})\) rooted at \(r\). Lengauer and Tarjan [12] presented an algorithm for computing dominators in \(O(m\alpha(m,n))\) time for a flow graph with \(n\) vertices and \(m\) edges, where \(\alpha\) is a functional inverse of Ackermann's function [12]. Subsequently, several linear-time algorithms were discovered [1, 1, 2]. An edge \((u,v)\) is a _bridge_ of a flow graph \(G_{s}\) if all paths from \(s\) to \(v\) include \((u,v)\).1 The following properties were proved in [10]. Footnote 1: Throughout the paper, to avoid confusion we use consistently the term _bridge_ to refer to a bridge of a flow graph and the term _strong bridge_ to refer to a strong bridge in the original graph. **Property 4.1**.: ([10]) _Let \(s\) be an arbitrary start vertex of \(G\). An edge \(e=(u,v)\) is strong bridge of \(G\) if and only if it is a bridge of \(G_{s}\), in which case \(u=d(v)\), or a bridge of \(G_{s}^{R}\), in which case \(v=d^{R}(u)\), or both._ Let \(G_{s}\) be a strongly connected digraph. For every bridge \((x,y)\) of \(G_{s}\), we say that \(y\) is a _marked_ vertex. (Notice that \(s\) cannot be marked.) Property 4.1 implies that the bridges of \(G_{s}\) induce a decomposition of \(D(G_{s})\) into rooted subtrees. More precisely, for every bridge \((x,y)\) of \(G_{s}\), we Figure 4: A flow graph \(G_{s}\) with start vertex \(s\), its dominator tree \(D(G_{s})\), and the auxiliary graph \(H(G_{s},s)\). The bridges of \(G_{s}\) are colored red in \(D(G_{s})\). Marked vertices and auxiliary vertices are colored black in \(D(G_{s})\) and \(H(G_{s},s)\), respectively. remove the edge \((x,y)\) from \(D(G_{s})\). (By Property 4.1, this is indeed an edge of \(D(G_{s})\).) Thus we have partitioned \(D(G_{s})\) into subtrees. Every tree \(T\) in this decomposition inherits the parent relation from \(D(G_{s})\), and thus it is rooted at a vertex \(r\). 
We denote \(T\) as \(T(r)\) to emphasize the fact that the root of \(T\) is \(r\). Observe that the root \(r\) of a tree \(T(r)\) is either a marked vertex or \(s\). Conversely, for every vertex \(r\) that is either marked or \(s\), there is a tree \(T(r)\). ### Construction of auxiliary graphs Now let \(G_{s}\) be a strongly connected digraph, and let \(r\) be either a marked vertex of \(G_{s}\) or \(s\). We define the _auxiliary_ graph \(H(G_{s},r)\) as follows. In \(G_{s}\) we shrink every \(D(z)\), where \(z\) is a marked vertex such that \(d(z)\in T(r)\) into \(z\). Also, if \(r\neq s\), we shrink \(D(s)\setminus D(r)\) into \(d(r)\). During those shrinkings we maintain all edges, except for self-loops. Also, in [1] multiple edges are converted into single edges. Here, multiple edges are converted into double edges, in order to avoid introducing new strong bridges in the auxiliary graphs. The resulting graph is \(H(G_{s},r)\). We consider \(H(G_{s},r)\) as a flow graph with start vertex \(r\). Notice that it consists of the subgraph of \(G_{s}\) induced by \(T(r)\), plus some extra vertices and edges. To be specific, the vertex set of \(H(G_{s},r)\) consists of the vertices of \(T(r)\), plus all marked vertices \(z\) of \(G_{s}\) such that \(d(z)\in T(r)\), plus \(d(r)\) if \(r\neq s\). The vertices of \(T(r)\) are called _ordinary_ in \(H(G_{s},r)\). The vertices of \(H(G_{s},r)\setminus T(r)\) are called _auxiliary_ in \(H(G_{s},r)\). In particular, if \(r\neq s\), \(d(r)\) is called the _critical_ vertex of \(H(G_{s},r)\), and \((d(r),r)\) is called the _critical_ edge of \(H(G_{s},r)\). (Thus, \(H(G_{s},s)\) is the only auxiliary graph of \(G_{s}\) that has no critical vertex and no critical edge.) The above construction guarantees that each path in \(G_{s}\) whose endpoints lie in some auxiliary graph \(H(G_{s},r)\) has a corresponding path in \(H(G_{s},r)\) with the same endpoints and vice versa. In particular, this implies that each \(H(G_{s},r)\) is strongly connected. Moreover, we have the following results: **Theorem 4.2**.: ([1]) _Let \(G_{s}\) be a strongly connected digraph, and let \(r_{1},\ldots,r_{k}\) be the marked vertices of \(G_{s}\)._ 1. _For any two vertices_ \(x\) _and_ \(y\) _of_ \(G_{s}\)_,_ \(x\stackrel{{ G_{s}}}{{\leftrightarrow}}_{\text{2e}}y\) _if and only if there is a vertex_ \(r\) _(a marked vertex of_ \(G_{s}\) _or_ \(s\)_), such that_ \(x\) _and_ \(y\) _are both ordinary vertices of_ \(H(G_{s},r)\) _and_ \(x\stackrel{{ H(G_{s},r)}}{{\leftrightarrow}}_{\text{2e}}y\)_._ 2. _The collection_ \(H(G_{s},s)\)_,_ \(H(G_{s},r_{1}),\ldots,H(G_{s},r_{k})\) _of all the auxiliary graphs of_ \(G_{s}\) _can be computed in linear time._ We provide the analogous result for \(2\)-edge twinless strong connectivity. **Proposition 4.3**.: _Let \(x,y\) be two vertices of a strongly connected digraph \(G_{s}\). Then \(x\stackrel{{ G_{s}}}{{\leftrightarrow}}_{\text{2et}}y\) if and only if there is a vertex \(r\) (a marked vertex of \(G_{s}\) or \(s\)), such that \(x\) and \(y\) are both ordinary vertices of \(H(G_{s},r)\) and \(x\stackrel{{ H(G_{s},r)}}{{\leftrightarrow}}_{\text{2et}}y\)._ Proof.: See Proposition A.10 in Appendix A. Now let \(G\) be a strongly connected digraph and let \((x,y)\) be a strong bridge of \(G\). We will define the \(S\)_-operation_ on \(G\) and \((x,y)\), which produces a set of digraphs as follows. Let \(C_{1},\ldots,C_{k}\) be the strongly connected components of \(G\setminus(x,y)\). Now let \(C\in\{C_{1},\ldots,C_{k}\}\). 
We will construct a graph \(C^{\prime}\) as follows. First, notice that either \(x\notin C\) and \(y\in C\), or \(y\notin C\) and \(x\in C\), or \(\{x,y\}\cap C=\emptyset\). Then we set \(V(C^{\prime})=V(C)\cup\{x\}\), or \(V(C^{\prime})=V(C)\cup\{y\}\), or \(V(C^{\prime})=V(C)\cup\{x,y\}\), respectively. Every edge of \(G\) with both endpoints in \(C\) is included in \(C^{\prime}\). Furthermore, for every edge \((u,v)\) of \(G\) such that \(u\in C\) and \(v\notin C\), we add the edge \((u,x)\) to \(C^{\prime}\). Also, for every edge \((u,v)\) of such that \(u\notin C\) and \(v\in C\), we add the edge \((y,v)\) to \(C^{\prime}\). Finally, we also add the edge \((x,y)\) to \(C^{\prime}\). Now we define \(S(G,(x,y)):=\{C^{\prime}_{1},\ldots,C^{\prime}_{k}\}\). See Figure 5. Note that for a strongly connected digraph \(G\) and a strong bridge \(e\) of \(G\), every graph of \(S(G,e)\) is strongly connected. Furthermore, the next proposition shows that the \(S\)-operation maintains the relation of \(2\)-edge twinless strong connectivity. **Proposition 4.4**.: _Let \(G\) be a strongly connected digraph and let \((x,y)\) be a strong bridge of \(G\). Then, for any two vertices \(u,v\in G\), we have \(u\xleftrightarrow{G}_{\text{2et}}v\) if and only if \(u\) and \(v\) belong to the same graph \(C\) of \(S(G,(x,y))\) and \(u\xleftrightarrow{C}_{\text{2et}}v\)._ Proof.: See Proposition A.15 in Appendix A. We can combine Propositions 4.3 and 4.4 in order to derive some auxiliary graphs that maintain the relation of \(2\)-edge twinless strong connectivity of the original graph. Then we can exploit properties of those graphs in order to provide a linear-time algorithm for computing the \(2\)-edge twinless strongly connected components. First we introduce some notation. Let \(G_{s}\) be a strongly connected digraph, and let \(r\) be either a marked vertex of \(G_{s}\) or \(s\). Then we denote \(H(G_{s},r)\) as \(H_{r}\). Furthermore, if \(r^{\prime}\) is either a marked vertex of \(H_{r}\) or \(r\), we denote \(H(H_{r}^{R},r^{\prime})\) as \(H_{rr^{\prime}}\). A vertex that is ordinary in both \(H_{r}\) and \(H_{rr^{\prime}}\) is called an ordinary vertex of \(H_{rr^{\prime}}\); otherwise, it is called auxiliary. **Corollary 4.5**.: _Let \(G_{s}\) be a strongly connected digraph, and let \(x,y\) be two vertices of \(G_{s}\). Then \(x\xleftrightarrow{G}_{\text{2et}}y\) if and only if \(x\) and \(y\) are both ordinary vertices in \(H\) and \(x\xleftrightarrow{H}_{\text{2et}}y\), where \(H\) is either \((1)\)\(H_{ss}\), or \((2)\)\(H_{rr}\), or \((3)\) a graph in \(S(H_{sr},(d(r),r))\), or \((4)\) a graph in \(S(H_{rr^{\prime}},(d(r^{\prime}),r^{\prime}))\) (where \(r\) and \(r^{\prime}\) are marked vertices)._ Proof.: This is an immediate consequence of Propositions 4.3 and 4.4. Now we can describe the structure of the strongly connected components of the graphs that appear in Corollary 4.5 when we remove a strong bridge from them. Figure 5: A strongly connected digraph \(G\) with a strong bridge \(e=(x,y)\) shown red. The deletion of \(e\) splits \(G\) into four strongly connected components \(C_{1}\), \(C_{2}\), \(C_{3}\) and \(C_{4}\) (numbered in topological order). \(C^{\prime}_{2}\) is the digraph in \(S(G,e)\) that corresponds to \(C_{2}\) after attaching \(x\) and \(y\) to it, and all the edges due to the \(S\)-operation. **Proposition 4.6**.: _Let \(H\) be one of the auxiliary graphs that appear in Corollary 4.5, and let \(e=(x,y)\) be a strong bridge of \(H\). 
_Then the strongly connected components of \(H\setminus e\) are given by one of the following2:_

Footnote 2: With some simplifications, that are expanded on in Appendix A.

(i) \(\{x\}\) _and_ \(H\setminus\{x\}\)_, where_ \(x\) _is an auxiliary vertex;_

(ii) \(\{y\}\) _and_ \(H\setminus\{y\}\)_, where_ \(y\) _is an auxiliary vertex;_

(iii) \(\{x\}\)_,_ \(\{y\}\)_, and_ \(H\setminus\{x,y\}\)_, where_ \(x,y\) _are both auxiliary vertices._

Proof.: See Lemma A.22, Lemma A.25, Corollary A.31, and Corollary A.32, in Appendix A.

## 5 Computing \(2\)-edge twinless strongly connected components

We assume that \(G\) is a twinless strongly connected digraph, since otherwise we can compute the twinless strongly connected components in linear time and process each one separately. We let \(E_{t}\) denote the set of twinless strong bridges of \(G\), and let \(E_{s}\) denote the set of strong bridges of \(G\). (Note that \(E_{s}\subseteq E_{t}\).) Algorithm 3 is a simple \(O(mn)\)-time algorithm for computing the \(2\)-edge twinless strongly connected components of \(G\). (It is essentially the same as in [1].)

```
1 initialize a partition of the vertices \(\mathcal{P}=\{V(G)\}\)
2 compute the set \(E_{t}\) of the twinless strong bridges of \(G\)
3 foreach \(e\in E_{t}\) do
4   compute the twinless strongly connected components \(C_{1},\ldots,C_{k}\) of \(G\setminus e\)
5   let \(\mathcal{P}=\{S_{1},\ldots,S_{l}\}\) be the current partition of the vertices in \(V(G)\)
6   refine the partition by computing the intersections \(C_{i}\cap S_{j}\) for all \(i=1,\ldots,k\) and \(j=1,\ldots,l\)
7 end foreach
```
**Algorithm 3** Partition vertices of \(G\) with respect to its twinless strong bridges.

Our goal is to provide a faster algorithm by processing separately the edges in \(E_{t}\setminus E_{s}\) and the edges in \(E_{s}\). That is, we first partition the vertices according to the twinless strong bridges of \(G\) that are not strong bridges, and then we refine this partition by considering the effect of strong bridges. We call the first partition the one that is _"due to the twinless strong bridges that are not strong bridges"_, and the second partition the one that is _"due to the strong bridges"_.

Let \(e\) be an edge in \(E_{t}\setminus E_{s}\). (See Figure 6.)

Figure 6: Example of an edge \(e\) (colored red) that is a twinless strong bridge but not a strong bridge. Note that the deletion of \(e\) leaves the digraph strongly connected but not twinless strongly connected.

Then the TSCCs of \(G\setminus e\) are given by the \(2\)-edge-connected components of \(G^{u}\setminus\{e^{u}\}\), where \(e^{u}\) is the undirected counterpart of \(e\) [12]. Thus, we can simply remove the bridges of \(G^{u}\setminus\{e^{u}\}\), in order to get the partition into the TSCCs that is due to \(e\). To compute the partition that is due to all edges in \(E_{t}\setminus E_{s}\) at once, we may use the cactus graph \(Q\) which is given by contracting the \(3\)-edge-connected components of \(G^{u}\) into single nodes [14]. \(Q\) comes together with a function \(\phi:V(G^{u})\to V(Q)\) (the quotient map) that maps every vertex of \(G^{u}\) to the node of \(Q\) that contains it, and induces a natural correspondence between edges of \(G^{u}\) and edges of \(Q\). The cactus graph of the \(3\)-edge-connected components provides a clear representation of the \(2\)-edge cuts of an undirected graph; by definition, it has the property that every edge of it belongs to exactly one cycle.
Thus, Algorithm 4 shows how we can compute the partition of \(2\)eTSCCs that is due to the edges in \(E_{t}\setminus E_{s}\). ``` 1 compute the cactus \(Q\) of the \(3\)-edge-connected components of \(G^{u}\), and let \(\phi:V(G^{u})\to V(Q)\) be the quotient map foreach edge \(e\) of \(Q\)do 2if\(e\) corresponds to a single edge of \(G\) that has no twin and is not a strong bridgethen remove from \(Q\) the edges of the cycle that contains \(e\) 3 end if 4 let \(Q^{\prime}\) be the graph that remains after all the removals in the previous step let \(C_{1},\ldots,C_{k}\) be the connected components of \(Q^{\prime}\) return\(\phi^{-1}(C_{1}),\ldots,\phi^{-1}(C_{k})\) ``` **Algorithm 4**Compute the partition of \(2\)eTSCCs of \(G\) that is due to the twinless strong bridges that are not strong bridges. **Proposition 5.1**.: _Algorithm 4 is correct and runs in linear time._ Proof.: The correctness of Algorithm 4 is easily established due to the structure of the cactus of the \(3\)-edge-connected components. Since the strong bridges of a directed graph can be computed in linear time [12], and the \(3\)-edge-connected components of an undirected graph can also be computed in linear time (see e.g., [12, 13]), we have that Algorithm 4 runs in linear time. Now we consider the problem of computing the partition of the \(2\)eTSCCs of \(G\) due to the strong bridges. Here we reduce the problem to the auxiliary graphs that appear in Corollary 4.5, and we apply the information provided by Proposition 4.6 as follows. Let \(H\) be one of those auxiliary graphs. For every strong bridge \(e\) of \(H\), we define the subset \(X_{e}\) of \(V(H)\) as \(X_{e}=\{x\}\), or \(X_{e}=\{y\}\), or \(X_{e}=\{x,y\}\), depending on whether \(e\) satisfies \((i)\), \((ii)\), or \((iii)\), respectively, of Proposition 4.6. (See Figure 7.) Then, \(X_{e}\) satisfies the following: 1. \(H[V\setminus X_{e}]\) is a strongly connected component of \(H\setminus e\) 2. \(X_{e}\) contains only auxiliary vertices Now we can apply the following procedure to compute the partition of \(2\)-edge twinless strongly connected components of the ordinary vertices of \(H\) due to the strong bridges. Initially, we let Figure 7: An auxiliary graph \(H=H_{uu}\), corresponding to the digraph of Figure 2, with auxiliary vertices colored black. Edge \(e\) (shown in red) is a strong bridge of \(H\). The underlying undirected graph of the SCCs of \(H\setminus e\). be the trivial partition of \(V\) (i.e., \(\mathcal{P}=\{V\}\)). Then, for every strong bridge \(e\) of \(H\), we compute the TSCCs of \(H\setminus X_{e}\), and we refine \(\mathcal{P}\) according to those TSCCs. By [10], the computation of the TSCCs of \(H\setminus X_{e}\) is equivalent to determining the \(2\)-edge-connected components of \(H^{u}\setminus X_{e}\). Observe that this procedure does not run in linear time in total, since it has to be performed for every strong bridge \(e\) of \(H\). Thus our goal is to perform the above procedure for all strong bridges \(e\) of \(H\) at once. We can do this by first taking \(H^{u}\), and then shrinking every \(X_{e}\) in \(H^{u}\) into a single marked vertex, for every strong bridge \(e\) of \(H\). Let \(H^{\prime}\) be the resulting graph. Then we simply compute the marked vertex-edge blocks of \(H^{\prime}\). (See Figure 8 for an example of the process of contracting all \(X_{e}\) into single marked vertices.) The whole procedure is shown in Algorithm 1. 
We note that, given an auxiliary graph \(H\) as above, we can compute all sets \(X_{e}\) in linear time by first computing all strong bridges of \(H\)[12], and then checking which case of Proposition 4.6 applies for each strong bridge. ``` input:An auxiliary graph \(H\) equipped with the following information: for every strong bridge \(e\) of \(H\), the set \(X_{e}\) defined as above output:The partition of \(2\)-edge twinless strongly connected components of the ordinary vertices of \(H\) due to the strong bridges 1begin 2compute the underlying undirected graph \(H^{u}\)foreachstrong bridge \(e\) of \(H\)do 3contract \(X_{e}\) into a single vertex in \(H^{u}\), and mark it 4 5 end fore 6let \(H^{\prime}\) be the graph with the marked contracted vertices derived from \(H^{u}\) 7 compute the partition \(\mathcal{B}_{ve}\) of the marked vertex-edge blocks of \(H^{\prime}\) 8let \(\mathcal{O}\) be the partition of \(V\) consisting of the set of the ordinary vertices of \(H\) and the set of the auxiliary vertices of \(H\) 9return\(\mathcal{B}_{ve}\) refined by \(\mathcal{O}\) 10 end fore ``` **Algorithm 5**A linear-time algorithm for computing the partition of \(2\)-edge twinless strongly connected components of an auxiliary graph \(H\) due to the strong bridges **Proposition 5.2**.: _Algorithm 5 is correct and runs in linear time._ Proof.: For every strong bridge \(e\) of \(H\), let \(\mathcal{B}_{ve}(e)\) denote the collection of the marked vertex-edge blocks of the graph that is formed from \(H^{u}\) when we contract \(X_{e}\) into a single marked vertex. Then it should be clear that it suffices to compute the simultaneous refinement of all \(\mathcal{B}_{ve}(e)\), where \(e\) is a strong bridge of \(H\). However, this is not a linear-time procedure. In order to achieve linear time, Algorithm 5 works by simultaneously contracting every vertex set \(X_{e}\) into a single marked vertex in \(H^{u}\), and then computes the marked vertex-edge blocks of the resulting graph. Thus, the main challenge in proving the correctness of Algorithm 5 is to demonstrate that, even after the simultaneous contraction of all sets \(X_{e}\), we basically get the same result as above. This is established in Proposition B.3 in Appendix B. We defer this proof in the appendix because it is rather tedious, since it relies on detailed information about the structure of the strongly connected components of an auxiliary graph upon removal of a strong bridge, for all types of auxiliary graphs. The final 2eTSCCs of (the subset of the ordinary vertices of) an auxiliary graph are given by the mutual refinement of the partitions computed by Algorithms 4 and 5. (The mutual refinement of two partitions can be computed in linear time using bucket sort.) Hence, by Corollary 4.5 and Propositions 5.1 and 5.2, we have that the 2eTSCCs of a strongly connected digraph can be computed in linear time. It remains to establish that Algorithm 5 runs in linear time. For this we provide a linear-time procedure for Step 7. Observe that the marked vertices of \(H^{\prime}\) have the property that their removal from \(H\) leaves the graph strongly connected, and thus they are not articulation points of the underlying graph \(H^{u}\). This allows us to reduce the computation of the marked vertex-edge blocks of \(H^{\prime}\) to the computation of marked vertex-edge blocks in biconnected graphs. Specifically, we first partition \(H^{\prime}\) into its biconnected components, which can be done in linear time [10]. 
Then we process each biconnected component separately, and we compute the marked vertex-edge blocks that are contained in it. Finally, we "glue" the marked vertex-edge blocks of all biconnected components, guided by their common vertices that are articulation points of the graph. In the next section we provide a linear-time algorithm for computing the marked vertex-edge blocks of a biconnected graph.
Figure 8: An auxiliary graph \(H=H_{uu}\), corresponding to the digraph of Figure 2, with auxiliary vertices colored black. The underlying undirected graph \(H^{u}\) of \(H\), and the corresponding graph \(H^{\prime}\) resulting from \(H^{u}\) after shrinking the vertex sets \(X_{e}\) for all strong bridges \(e\); here \(X_{e}=\{w,z\}\). Note that we have two parallel edges \(\{X_{e},u\}\) and \(\{X_{e},v\}\).
## 6 Computing marked vertex-edge blocks
Let \(G\) be a biconnected undirected graph. An SPQR tree \(\mathcal{T}\) for \(G\) represents the triconnected components of \(G\) [1, 1]. Each node \(\alpha\in\mathcal{T}\) is associated with an undirected graph \(G_{\alpha}\). Each vertex of \(G_{\alpha}\) corresponds to a vertex of the original graph \(G\). An edge of \(G_{\alpha}\) is either a _virtual edge_ that corresponds to a separation pair of \(G\), or a _real edge_ that corresponds to an edge of the original graph \(G\). The node \(\alpha\), and the graph \(G_{\alpha}\) associated with it, has one of the following types:
* If \(\alpha\) is an \(S\)-node, then \(G_{\alpha}\) is a cycle graph with three or more vertices and edges.
* If \(\alpha\) is a \(P\)-node, then \(G_{\alpha}\) is a multigraph with two vertices and at least 3 parallel edges.
* If \(\alpha\) is a \(Q\)-node, then \(G_{\alpha}\) is a single real edge.
* If \(\alpha\) is an \(R\)-node, then \(G_{\alpha}\) is a simple triconnected graph.
Each edge \(\{\alpha,\beta\}\) between two nodes of the SPQR tree is associated with two virtual edges, where one is an edge in \(G_{\alpha}\) and the other is an edge in \(G_{\beta}\). If \(\{u,v\}\) is a separation pair in \(G\), then one of the following cases applies:
* (a) \(u\) and \(v\) are the endpoints of a virtual edge in the graph \(G_{\alpha}\) associated with an \(R\)-node \(\alpha\) of \(\mathcal{T}\).
* (b) \(u\) and \(v\) are vertices in the graph \(G_{\alpha}\) associated with a \(P\)-node \(\alpha\) of \(\mathcal{T}\).
* (c) \(u\) and \(v\) are vertices in the graph \(G_{\alpha}\) associated with an \(S\)-node \(\alpha\) of \(\mathcal{T}\), such that either \(u\) and \(v\) are not adjacent, or the edge \(\{u,v\}\) is virtual.
In case (c), if \(\{u,v\}\) is a virtual edge, then \(u\) and \(v\) also belong to a \(P\)-node or an \(R\)-node. If \(u\) and \(v\) are not adjacent then \(G\setminus\{u,v\}\) consists of two components that are represented by two paths of the cycle graph \(G_{\alpha}\) associated with the \(S\)-node \(\alpha\) and with the SPQR tree nodes attached to those two paths. Gutwenger and Mutzel [14] showed that an SPQR tree can be constructed in linear time, by extending the triconnected components algorithm of Hopcroft and Tarjan [13]. Let \(e=\{x,y\}\) be an edge of \(G\) such that \(\{v,e\}\) is a vertex-edge cut-pair of \(G\). Then, \(\mathcal{T}\) must contain an \(S\)-node \(\alpha\) such that \(v\), \(x\) and \(y\) are vertices of \(G_{\alpha}\) and \(\{x,y\}\) is not a virtual edge. The above observation implies that we can use \(\mathcal{T}\) to identify all vertex-edge cut-pairs of \(G\) as follows.
A vertex-edge cut-pair \((v,e)\) is such that \(v\in V(G_{\alpha})\) and \(e\) is a real edge of \(G_{\alpha}\) that is not adjacent to \(v\), where \(\alpha\) is an \(S\)-node [1, 1]. Now we define the _split operation_ of \(v\) as follows. Let \(e_{1}\) and \(e_{2}\) be the edges incident to \(v\) in \(G_{\alpha}\). We split \(v\) into two vertices \(v_{1}\) and \(v_{2}\), where \(v_{1}\) is incident only to \(e_{1}\) and \(v_{2}\) is incident only to \(e_{2}\). (In effect, this makes \(S\) a path with endpoints \(v_{1}\) and \(v_{2}\).) To find the connected components of \(G\setminus\{v,e\}\), we execute a split operation on \(v\) and delete \(e\) from the resulting path. Note that \(e\neq e_{1},e_{2}\), and \(e\) does not have a copy in any other node of the SPQR tree since it is a real edge. Then, the connected components of \(G\setminus\{v,e\}\) are represented by the resulting subtrees of \(\mathcal{T}\). Here, we need to partition the ordinary vertices of \(G\) according to the vertex-edge cut-pairs \((v,e)\), where \(v\) is a marked auxiliary vertex. To do this efficiently, we can process all vertices simultaneously as follows. First, we note that we only need to consider the marked vertices that are in \(S\)-nodes that contain at least one real edge. Let \(\alpha\) be such an \(S\)-node. We perform the split operation on each marked (auxiliary) vertex \(v\), and then delete all the real edges of \(\alpha\). This breaks \(\mathcal{T}\) into subtrees, and the desired partition of the ordinary vertices is formed by the ordinary vertices of each subtree. See Algorithm 6. ``` input : An SPQR tree \(\mathcal{T}\) of \(G\), a set of marked vertices \(v\in V(G_{\alpha})\) and real edges \(e\in E(G_{\alpha})\), for all \(S\)-nodes \(\alpha\) of \(\mathcal{T}\). output : A partition of the ordinary vertices of \(G\), so that two ordinary vertices in the same set of the partition remain in the same connected component of \(G\setminus(v,e)\), for any vertex-edge cut \((v,e)\) such that \(v\) is marked. 
1  begin
2      foreach \(S\)-node \(\alpha\) of \(\mathcal{T}\) that contains a marked vertex and a real edge do
3          perform a split operation on each marked vertex of \(\alpha\)
4          delete all real edges of \(G_{\alpha}\)
5      end
6      break \(\mathcal{T}\) into connected subtrees \(\mathcal{T}_{1},\ldots,\mathcal{T}_{\lambda}\) after the above operations
7      foreach subtree \(\mathcal{T}_{i}\) do
8          put all the ordinary vertices of \(\mathcal{T}_{i}\) into the same set of the partition
9      end
10 end
```
[MISSING_PAGE_POST]
... components of \(G\setminus\{v,e\}\) for some marked vertex-edge cut \((v,e)\), then \(u\) and \(w\) will end up in different subtrees of \(\mathcal{T}\). Now suppose that \(u\) and \(w\) remain in the same connected component of \(G\setminus\{v,e\}\), for any vertex-edge cut \((v,e)\) with \(v\) marked. Let \(\beta\) and \(\gamma\) be the nodes of \(\mathcal{T}\) such that \(u\in V(G_{\beta})\) and \(w\in V(G_{\gamma})\). Consider any \(S\)-node \(\alpha\) that lies on the path of \(\mathcal{T}\) between \(\beta\) and \(\gamma\) and contains at least one marked vertex and at least one real edge. (If no such \(S\)-node exists, then clearly \(u\) and \(w\) cannot be separated by any marked vertex-edge cut.) Let \(e_{u}\) and \(e_{v}\) be the virtual edges of \(E(G_{\alpha})\) that correspond to the paths from \(\beta\) to \(\alpha\) and from \(\gamma\) to \(\alpha\), respectively. Also, let \(P_{1}\) and \(P_{2}\) be the two paths that connect \(e_{u}\) and \(e_{v}\) in \(G_{\alpha}\).
Without loss of generality, we can assume that \(P_{1}\) contains a marked vertex \(v\). Then, \(P_{2}\) cannot contain any real edge \(e\), since otherwise \((v,e)\) would be a marked vertex-edge cut separating \(u\) and \(w\). Hence, all real edges are on \(P_{1}\). But then, for the same reason, \(P_{2}\) cannot contain a marked vertex. Hence, all marked vertices are also on \(P_{1}\). This implies that \(\beta\) and \(\gamma\) remain in the same subtree of \(\mathcal{T}\) after the execution of Algorithm 6. The same arguments work if \(u\) or \(w\) (or both) are vertices of \(V(G_{\alpha})\). Also, it is easy to see that Algorithm 6 runs in linear time. Now, since Algorithm 6 computes the marked vertex-edge blocks in linear time, we have that Algorithm 5 also runs in linear time. Hence, we obtain the following result: **Theorem 6.2**.: _The \(2\)-edge twinless strongly connected components of a directed graph can be computed in linear time._ Finally, by the reduction of Section 2, we have: **Theorem 6.3**.: _The edge-resilient strongly orientable blocks of a mixed graph can be computed in linear time._ ## 7 Concluding remarks In this paper we studied the notion of edge-resilient strongly orientable blocks of a mixed graph \(G\). Each such block \(C\) has the property that for any (directed or undirected) edge \(e\) of \(G\), there is an orientation \(R\) of \(G\setminus e\) that maintains the strong connectivity of the vertices in \(C\). We note that if we change in the definition of edge-resilient strongly orientable blocks the assumption that the edges that we allow to fail are both directed and undirected, and we demand that they are only directed, or only undirected, then we can easily modify Algorithm 2 so that we can compute those blocks in linear time (using again the reduction to computing \(2\)-edge twinless strongly connected components). We may also introduce the similar concept of the _\(2\)-edge strongly orientable blocks_ of a mixed graph. These are the maximal sets of vertices \(C_{1},\ldots,C_{k}\) with the property that, for every \(i\in\{1,\ldots,k\}\), there is an orientation \(R\) of \(G\) such that all vertices of \(C_{i}\) are \(2\)-edge strongly connected in \(R\). There are some relations between the \(2\)-edge strongly orientable blocks and the edge-resilient strongly orientable blocks. First, we can easily see that every \(2\)-edge strongly orientable block lies within an edge-resilient strongly orientable block. Moreover, both these concepts coincide with the \(2\)-edge strongly connected components in directed graphs. However, Figure 9 shows that these concepts do not coincide in general. Despite these connections, the efficient computation of the \(2\)-edge strongly orientable blocks seems to be an even more challenging problem than computing the edge-resilient strongly orientable blocks, since the former generalizes the notion of \(2\)-edge strong orientations of a mixed graph. Finally, we note that our techniques may be useful for solving other connectivity problems in mixed graphs, such as related connectivity augmentation problems [1, 1, 2].
2304.11681
Money Over Morals: A Business Analysis of Conti Ransomware
Ransomware operations have evolved from relatively unsophisticated threat actors into highly coordinated cybercrime syndicates that regularly extort millions of dollars in a single attack. Despite dominating headlines and crippling businesses across the globe, there is relatively little in-depth research into the modern structure and economics of ransomware operations. In this paper, we leverage leaked chat messages to provide an in-depth empirical analysis of Conti, one of the largest ransomware groups. By analyzing these chat messages, we construct a picture of Conti's operations as a highly-profitable business, from profit structures to employee recruitment and roles. We present novel methodologies to trace ransom payments, identifying over $80 million in likely ransom payments to Conti and its predecessor -- over five times as much as in previous public datasets. As part of our work, we publish a dataset of 666 labeled Bitcoin addresses related to Conti and an additional 75 Bitcoin addresses of likely ransom payments. Future work can leverage this case study to more effectively trace -- and ultimately counteract -- ransomware activity.
Ian W. Gray, Jack Cable, Benjamin Brown, Vlad Cuiujuclu, Damon McCoy
2023-04-23T15:21:38Z
http://arxiv.org/abs/2304.11681v1
# Money Over Morals: A Business Analysis of Conti Ransomware ###### Abstract Ransomware operations have evolved from relatively unsophisticated threat actors into highly coordinated cybercrime syndicates that regularly extort millions of dollars in a single attack. Despite dominating headlines and crippling businesses across the globe, there is relatively little in-depth research into the modern structure and economics of ransomware operations. In this paper, we leverage leaked chat messages to provide an in-depth empirical analysis of Conti, one of the largest ransomware groups. By analyzing these chat messages, we construct a picture of Conti's operations as a highly-profitable business, from profit structures to employee recruitment and roles. We present novel methodologies to trace ransom payments, identifying over $80 million in likely ransom payments to Conti and its predecessor - over five times as much as in previous public datasets. As part of our work, we publish a dataset of 666 labeled Bitcoin addresses related to Conti and an additional 75 Bitcoin addresses of likely ransom payments. Future work can leverage this case study to more effectively trace - and ultimately counteract - ransomware activity. Ransomware, Conti, cybercrime ## I Introduction Ransomware is a type of malware that encrypts the files on a victim's computer, and charges an extortion fee for the decryption key. Ransomware attacks have significantly increased over the past years with the addition of more adversarial groups, new extortion tactics, and more targeted attacks. In 2021, ransomware payments exceeded $600 million USD, according to cryptocurrency analysis firm Chainalysis [1]. This has resulted in the emergence of large-scale Ransomware as a Service (RaaS) operations that have streamlined segments of their campaigns by dividing the work across different roles and responsibilities. This often encompasses affiliate models, where a core team responsible for developing malware leases it to others to deploy and infect potential victims. However, there has been little academically peer-reviewed analysis of modern ransomware operations. This lack of insight into backend information on RaaS campaigns has left the security industry inferring for years, on an anecdotal basis, how these threats operate. In this paper, we perform an analysis of leaked chat messages and cryptocurrency addresses associated with Conti. Based on a report from Chainalysis, Conti is one of the most prolific ransomware groups and has attacked thousands of organizations [1]. Conti's victims include critical infrastructure entities such as hospitals and food providers [1]. Despite setbacks to the Conti ransomware collective, including self-proclaimed shutdowns and re-branding, they continually ranked in the top three ransomware groups for number of victims and volume of ransoms in 2020 and 2021 [2]. The chat data was leaked by a Ukrainian security researcher in February 2022 in response to the Russian invasion of Ukraine [3]. The leak included over 168,000 messages from Conti's internal chat logs. The chat logs contain information pertaining to the inner workings of the group, such as discussions of malware development and victim negotiations. These chats contain a wealth of data to aid in the understanding of Conti's inner operations, including associates' Bitcoin wallet addresses, employee recruitment processes, and delineation of roles and responsibilities.
Our analysis drives insights that can be leveraged by law enforcement and policymakers to aid in counteracting ransomware. For instance, just two exchanges - one unidentified exchange and Gemini - are responsible for over 90% of identified payments to Conti. Likewise, Conti exhibits poor operational security, with its associates sending a large amount of salary payments to exchanges like Gemini and Binance that enforce Know Your Customer (KYC) regulations. These centralized points provide opportunities to trace ransomware actors and seize funds. In this paper, we make the following contributions: **Economic on-chain measurement**. We manually annotate all 666 Bitcoin addresses present in the leak according to their function (e.g. salary or reimbursement) which we will publicly publish. After annotating, we then use on-chain transaction data to provide an analysis of Conti's bottom line, including estimated gross revenue, operating cost, salary per role, cashout techniques, and relation to other cybercrime activity (like dark web marketplaces). As part of this analysis, we develop a methodology to identify ransom payments based on common proceed splitting behavior, which we use to identify $83.9 million in new likely payments. **Qualitative business structure analysis**. The chat logs also contain qualitative information on different roles and responsibilities of Conti. Along with the Bitcoin address annotations, we identified the roles and responsibilities within the collective. We assessed team composition from the chats, as well as the primary users based upon interactions within the chat logs. We also provide an analysis of their employee recruitment process and challenges managers faced with employees that did not know the illicit nature of their employer. ## II Background In this section, we describe the functional roles of the archetypal Ransomware as a Service operation. These roles are segmented into specialized tasks that fulfill different parts of the ransomware attack chain [4]. We explore how these roles execute the respective parts of the ransomware campaign, from malware delivery to cashing out [5, 6]. Ransomware operations require individuals to build, test, maintain, and deliver the malware, as well as maintain victim communications during the ransom. Once a victim pays a ransom in cryptocurrency, the attacker launders the funds through a variety of exchanges and third party services. Since the introduction of the first public ransomware leak site in 2019, approximately 80 ransomware groups have created public leak sites, where they threaten to post victim data if victims fail to meets the terms of the extortion [7]. There are ransomware groups that do not maintain leak sites, and thus this number is not exhaustive. There is also overlap in these operations, including code re-use, and re-branding that typically occurs after a significant ransomware incident [8, 9]. At a macro-level, RaaS operations are generally divided between **Ransomware Operators** and **Ransomware Affiliates**. The operators are typically salaried workers that recruit new members, develop the malware, advertise and sell access to their ransomware, and maintain the victim payment portal and leak site post-compromise. Affiliates are typically commissioned workers that license the malware for a fee or a percentage of the ransom payment. Their role is to target and compromise new victims, deliver and execute the ransomware, and handle victim negotiations. 
Affiliates have also been associated with lateral movement, persistence, and data exfiltration in a victim's network [9, 10]. **Management:** RaaS operations can encompass hundreds of specialized workers. They have been likened to a gig economy for their on-demand services provided by their affiliate structure. Additionally, many phases of the attack chain are facilitated by human decision-making [9, 10]. The managers are responsible for the human effort, which includes human resources, hiring, finances, and payroll. Managers may also have cross-departmental responsibilities, and support the other lines of effort listed below. **Development and Infrastructure:** Illicit economies are dependent upon administrative work and maintenance to ensure uninterrupted operations through development and infrastructure [11]. System administrators and software developers are salaried workers, essential to ensure uninterrupted RaaS operations. This may include acquiring or developing software, virtual machines, servers, proxies, antivirus (to test malware against), and a variety of other tools. These roles also offer IT support functions. **Access Operations:** Access brokers may sell access to affiliates, who use the access to escalate privileges and move laterally within a victim's network. Initial access brokers monetize access to victims' networks. RaaS collectives may have their own access brokers, or they may outsource to third parties for access-as-a-service. Initial access brokers employ a variety of tactics, techniques, and procedures to gain access to victims' networks, including spear-phishing key members of an organization, compromised credentials or remote desktop protocols (RDP), and exploiting vulnerabilities [12, 13, 14]. Access operations may employ a variety of tools to deploy malware, including Emotet, IcedID, Trickbot, and BazarLoader. **Negotiations:** Affiliates are typically responsible for managing negotiations post-compromise through an admin panel included with the ransomware. Large corporations may employ ransomware negotiators, who deal directly with the ransomware affiliates to transfer cryptocurrencies through exchanges. RaaS operators manage the public leak site, where details of the victim are included if they fail to pay within a given time period. The operators also control the processing of ransomware payments. ## III Data Our analysis in this study uses leaked data, public blockchain data, and an annotated set of Bitcoin addresses from Crystal Blockchain, a commercial blockchain analysis platform [15]. Table I provides a brief description of these data sources. When using leaked data, there can arise both ethical and validity concerns. In this section, we provide an overview of the datasets, discuss how we validated the data, and talk about the ethical framework of our study. ### _Description_ On February 27, 2022, the Twitter account @ContiLeaks began tweeting links to an anonymous file sharing service that contained information related to the Conti Ransomware collective. In addition to malware source code and other internal files, the account shared three files of chat logs: two files containing messages from Conti's Jabber server and one file containing messages from Conti's Rocket.Chat server. The dataset that we used for our analysis only contained text (i.e., no images). The leaked chats cover the period from July 2020 to February 2022. The portion of the dataset we analyzed did not contain any Personally Identifiable Information (PII).
We created a set of regular expressions to extract Bitcoin addresses and confirmed that they were valid addresses. Table II provides a summary of the datasets that we analyzed. ### _Validation_ The leaked datasets have been extensively validated by the security community, including the fact that gaps in the chat logs correlate with times when Conti was disrupted by law enforcement [16]. In our analysis, Bitcoin addresses included in the leak are consistent with previously-known Conti Bitcoin addresses, such as those in the Ransomwhere dataset [17], with addresses in the leak having received funds from both Conti payment addresses and Ryuk (another ransomware strain operated by the same threat group [18]). Furthermore, we do not observe any internal inconsistencies in the dataset. ### _Ethics_ We reason about potential harms of our study through the lens of the Menlo report [19]. We have two primary ethical questions. The first is a high-level question concerning whether the data being leaked should _prima facie_ prohibit all subsequent uses of it. For example, should a researcher be prohibited from analyzing the Facebook leaks in understanding their policies? We believe that the potential benefits of our study to society outweigh the minimal increased risks of harm. We observe that this data is already broadly available and the knowledge of its existence, its association with the Conti organization, and information, such as online handles and amount of Bitcoin transactions, have been publicly documented. Also, there is likely little if any Personally Identifiable Information (PII) in this leak and we did not find any during our analysis. This was a criminal service and the usernames are pseudonyms that are intentionally difficult to link to the actual persons. Thus, there is a minimal risk of us creating any new harm from our analysis. To further manage any remaining harms we institute several safeguards. We did not attempt to deanonymize anyone in these leaks as part of our study. Also, we do not use the publicly-known real names of any Conti employees or affiliates. ## IV Methodology ### _Database Annotation_ Jabber, the Extensible Messaging and Presence Protocol (XMPP), is a popular messaging application in the cybercrime underground. The open source instant messenger supports strong encryption, and independent federated servers that are located around the world [20]. Well-established cybercrime forums, like Exploit, run their own Jabber servers. The Conti collective also operated their own Jabber server: q3mcco35auwcstmt[.]onion. Similar to other online messengers, the Conti Leaks often included short text that by itself was absent of any substantive content. The large number of users (n = 463) within the chats often overlap and span different parts of the operation. Additionally, Russian cybercriminals often use specialized slang, dubbed Fenya, that is purposefully difficult for a layperson to understand as it provides shorthand, obfuscation, and signals group membership [21]. To better prepare the leaked messages for scientific analysis, we conducted a mixed-methods analysis that combined quantitative and qualitative data analysis. Our primary objective in analyzing the data is to conduct an economic on-chain analysis of the cryptocurrency addresses observed in the dataset. We conducted a regular expression search within the chat messages to identify all mentions of Bitcoin addresses.
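For illustration, a minimal sketch of this extraction step follows (in Python; the exact regular expression, the message schema, and the helper name are assumptions for illustration, not necessarily the pattern we used):
```python
import re

# Rough pattern for legacy (1.../3...) and bech32 (bc1...) Bitcoin addresses;
# candidates should still be checksum-validated before being accepted.
BTC_ADDR = re.compile(
    r"\b(?:bc1[ac-hj-np-z02-9]{11,71}|[13][a-km-zA-HJ-NP-Z1-9]{25,34})\b"
)

def extract_addresses(messages, context=10):
    """messages: list of dicts with 'ts', 'from', 'to', 'body' keys (assumed schema).
    Returns (address, surrounding messages) pairs to support manual annotation."""
    hits = []
    for i, msg in enumerate(messages):
        for addr in BTC_ADDR.findall(msg["body"]):
            window = messages[max(0, i - context): i + context + 1]
            hits.append((addr, window))
    return hits
```
The surrounding-message window anticipates the conversational context we attached to each address before annotation, as described next.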
In total, we identified 665 Bitcoin addresses in the Jabber dataset and 1 Bitcoin address in the Rocket.Chat dataset. As a result, we primarily focused on the Jabber dataset for this analysis. In order to provide context when annotating addresses, we included 10 messages in a conversation before and 10 messages after each mention of a Bitcoin address. Using this approach, we were able to augment machine-translated text with manual translations for Russian slang, label the context of the Bitcoin address to inform the follow-on economic and business analysis, and ascribe roles to the Conti ransomware operators through the context of the chat messages. To better understand the context of the messages, including the Russian cybercrime slang, one of our annotators is a native Russian speaker and expert in the criminal underground. Three of the authors annotated the addresses, with one author annotating each address. We maintained a Russian slang dictionary that annotators could reference throughout our analysis. When reviewing the Bitcoin addresses, we annotated the addresses according to the following labels:
**Salary**: The address is associated with a request for salary or payment. Associates in the chat will often request from a manager that a salary be transferred to a wallet.
**Reimbursement**: The address is associated with a request for reimbursement for a variety of services. Associates may directly or indirectly request through a manager that funds be transferred to a wallet for reimbursement of various tools.
**Ransom Payment Address**: The address is used to receive payment from a Conti ransomware victim.
**Claimed Ownership**: A member of the Conti collective claimed to own the address.
**Services**: Any services that we can identify being directly mentioned by the Conti collective.
**Victim Name**: The name of the victim who made the payment.
_Inter-Annotator agreement:_ To ensure that our annotations were consistent across researchers, we randomly sampled 100 posts containing cryptocurrency addresses and conducted a blind annotation with 3 raters. We then measured Inter-Annotator Agreement (IAA) by computing Fleiss' Kappa for all 3 raters, which yielded a score of 0.73, indicating substantial agreement [22].
\begin{table} \begin{tabular}{c c c} \hline \hline Source & Information & Explanation \\ \hline Leaked Chats & timestamps, message, participants & Leaked Chat logs from Conti Jabber server \\ Bitcoin Transactions & addresses, amount, timestamp & Public Bitcoin Blockchain Data \\ Crystal Blockchain & annotated Bitcoin addresses & Platform to investigate Bitcoin addresses \\ \hline \hline \end{tabular} \end{table} Table I: Summary of Datasets
\begin{table} \begin{tabular}{c c c c c} \hline \hline Source & Time Period & Posts & Users & Addresses \\ \hline Jabber & 2020-06-21 - 2022-02-25 & 168,624 & 463 & 665 \\ Rocket.Chat & 2020-08-31 - 2022-02-26 & 88,110 & 248 & 1 \\ \hline \hline \end{tabular} \end{table} Table II: Summary of Leaked Conti Chat Logs
### _Economic On-Chain Measurement_ We obtained Bitcoin addresses from the Conti leaks as well as the Ransomwhere dataset [17]. Ransomwhere is a public, crowdsourced dataset of ransomware payment addresses, which we use to understand the blockchain techniques of Conti. We then performed on-chain blockchain analysis, detailed here, on these addresses. To enrich our data, we fetched incoming and outgoing transaction data for all addresses from the blockchain.com API [23].
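A minimal sketch of this enrichment step is shown below (assuming the public `rawaddr` endpoint; the JSON field names reflect that endpoint but should be treated as assumptions):
```python
import requests

def fetch_transactions(address):
    """Fetch the transaction history of a Bitcoin address.
    Assumes the public blockchain.com/blockchain.info 'rawaddr' endpoint;
    field names below ('txs', 'time', 'inputs', 'out') follow its JSON."""
    resp = requests.get(f"https://blockchain.info/rawaddr/{address}", timeout=30)
    resp.raise_for_status()
    data = resp.json()
    txs = []
    for tx in data.get("txs", []):
        txs.append({
            "hash": tx["hash"],
            "time": tx["time"],  # Unix timestamp of the transaction
            "inputs": [i["prev_out"]["addr"]
                       for i in tx["inputs"] if "addr" in i.get("prev_out", {})],
            "outputs": [(o.get("addr"), o["value"]) for o in tx["out"]],  # value in satoshis
        })
    return txs
```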
We then calculated dollar values for transactions by multiplying the amount of Bitcoin transacted by the closing Bitcoin to USD exchange rate the date the transaction was made from the CoinDesk API [24]. While we cannot know the exact amount the ransomware actors sold the Bitcoin for, this serves as an approximation and is consistent with previous work [6, 25]. Additionally, to understand the types of wallets the addresses have interacted with, we utilized Crystal Blockchain [15]. Crystal Blockchain is a blockchain analytics tool that offers insight into the ownership of Bitcoin addresses based on a variety of public sources [26]. We fetched the source and destination entities for all addresses in the dataset. In order to gain insight into the proceeds of Conti, we performed analysis to identify potential ransom payment addresses. Based on confirmed Conti ransom payment addresses from Ransomwhere and those labeled in our dataset, we found 17 of 32 addresses to exhibit payment splitting, where the proceeds are immediately split to two wallets according to an exact percentage. This is likely due to the affiliate structure of Conti, where affiliates and the Conti core developers split proceeds. We found that for the 17 split addresses, split percentages ranged from 5% to 40%, with the most common (9 addresses) being 20%. An example of a split payment is shown in Figure 1. Note that when we refer to split percentages, the percentage is the portion of the payment that the Conti collective keeps, with the remaining portion going to the affiliate. In addition to low-risk exchanges such as Gemini, a large portion of these ransom payments to Conti originate from an unlabeled cluster of Bitcoin addresses. It is possible that this cluster belongs to an Over The Counter (OTC) desk, which many exchanges operate as a way for customers to exchange cryptocurrency outside of private markets. Given the significant portion of known Conti ransom payments originating from this cluster, it is possible that it is used by a common ransomware negotiator or incident response firm working with multiple victims. We consider this cluster in further analysis as a potential origin of Conti payments. Future work may attempt to identify the owner of this cluster. We also analyzed 41 ransom payment addresses belonging to Ryuk from the Ransomwhere dataset. Ryuk is widely believed to be the predecessor to Conti, and both Conti and Ryuk have been attributed by Crowdstrike to be operated by the Wizard Spider group [27]. Of these 41 addresses, 17 exhibited splitting. Split percents ranged from 10% to 50%, with the most common (6 addresses) being 35%. To discover other likely ransom payment addresses, we considered addresses that: (1) sent money (directly or indirectly) to an address in the leaked dataset, (2) exhibited splitting according to an exact percent that was a multiple of 5 (e.g. 20%, 25%) and (3) had received more than 99% of its funds from a low risk exchange (e.g. Gemini) or the identified unlabeled cluster. Results of this analysis are detailed in Section V. While we are able to designate these addresses as likely ransom payment addresses, the distinction between whether they are Conti or Ryuk is less clear. Through the course of the analysis, we observed previously known Ryuk addresses being used to fund addresses in the leaked Conti dataset, further suggesting that Conti and Ryuk are operated by the same actor. 
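Returning to the splitting heuristic above, a minimal sketch follows (the tolerance value and helper names are our own illustrative assumptions):
```python
SATOSHI = 1e8

def split_percent(outputs, tolerance=0.005):
    """outputs: list of (address, value_in_satoshis) for the transaction that
    spends a candidate ransom payment. Returns the smaller share as a percent
    if the proceeds are split between exactly two wallets at an exact multiple
    of 5%, else None."""
    if len(outputs) != 2:
        return None
    total = sum(v for _, v in outputs)
    share = min(v for _, v in outputs) / total
    for pct in range(5, 55, 5):  # 5%, 10%, ..., 50%
        if abs(share - pct / 100) <= tolerance:
            return pct
    return None

def usd_value(satoshis, btc_usd_close):
    """Approximate fiat value using the closing BTC/USD rate on the day of the
    transaction, as in our analysis; the actual sale price is unknown."""
    return satoshis / SATOSHI * btc_usd_close
```
An address is then treated as a likely ransom payment only if it also satisfies the funding and low-risk-exchange criteria listed above.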
Figure 1: An example of splitting. This address received 22 Bitcoin from the US-based Gemini exchange, and split into 25% and 75%. 1 Bitcoin from this address would eventually be sent to an address in the leak. Other funds were transferred to other illicit entities, such as the sanctioned exchange Garantex.
As Wizard Spider (an organized cybercrime group that has been attributed to Conti, Ryuk, TrickBot, and BazarLoader) paused operating Ryuk in March 2020, which coincided with the emergence of Conti, we label a likely ransom payment address as Ryuk if the address was first used before March 2020, and Conti otherwise [18]. ### _Qualitative Business Analysis_ We extracted the unique aliases (463) from the Conti Leaks, and created a separate annotation document. We then read through the full dataset of the Conti Leaks (168,624 messages) and attempted to categorize the user roles based upon the content of their conversations. We found that a small number of individuals comprise a large number of the chats, so we sorted the aliases by degree centrality to understand who sent and received the most messages. We then decided to focus on the top 50 aliases, as most were also observed in our prior annotation of the cryptocurrency addresses. We maintained a full list of users; however, we chose to focus annotations on the top 50. We made the following categories to understand their roles within the organization: **Role, Direct Report, Working Relationships, Alternative Aliases**, and **Tasks**. While certain information regarding their respective roles could be gleaned from the chats, we had to otherwise infer based upon the context of the discussions or their working relationships. ## V Economic Analysis As with any business, Conti has income and expenses. The bigger the profit margin, the more its operators can walk away with. To begin our economic analysis, we utilize the labeled addresses to understand which addresses represent a business expense for Conti and which represent income. We consider reimbursements and salary to be expenses, while ransom payments are income. Table III shows the total income and expenses for Conti. Of the addresses in the leaked dataset, salaries represent the most in number (419) and the highest dollar value at $21.9 million. Addresses that are used for both salary and reimbursements are relatively low in number but represent $5.4 million in payments. Reimbursements, while lower in dollar value at $3.8 million, have 227 addresses, suggesting that less money goes to reimbursement wallets on average than salary addresses. Based on addresses in the leaks alone (the first row of Table III), expenses exceed income. This is to be expected, as Conti primarily used its administrator portal to communicate with victims, while the leaked chat logs appear to be the primary forum for requesting salary payment and reimbursement. As a result, ransom payment addresses surface in the chat logs only incidentally, while salaries and reimbursements are expected. Nonetheless, we can identify likely ransom payment addresses. Given that Conti's income comes from ransom payments, and due to the traceable nature of Bitcoin, we can trace back payments visible in the leaked dataset to the ransom payments where the funds originate. Using the criteria established in Section IV, we identify 75 likely ransom payment addresses representing $83.9 million in payments. Of this, based on the dates Ryuk and Conti were active, we label $26.5 million as Ryuk payments and $57.4 million as Conti payments.
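A compact sketch of how such per-label totals and the Ryuk/Conti attribution can be derived from the annotated addresses is shown below (the column names and helper are illustrative assumptions about our working data layout):
```python
import pandas as pd

def summarize(df):
    """df columns (assumed): 'address', 'label' (e.g. 'salary', 'reimbursement',
    'ransom'), 'received_usd' (incoming value at daily closing rates), and
    'first_seen' (datetime of the address's first transaction)."""
    df = df.copy()
    kind = {"salary": "expense", "reimbursement": "expense", "ransom": "income"}
    df["kind"] = df["label"].map(kind)

    # Attribute (likely) ransom payment addresses to Ryuk or Conti by first-use
    # date, mirroring the March 2020 cutoff described above.
    income = df["kind"] == "income"
    df["family"] = None
    df.loc[income, "family"] = (
        df.loc[income, "first_seen"].lt(pd.Timestamp("2020-03-01"))
          .map({True: "Ryuk", False: "Conti"})
    )

    totals = df.groupby("label")["received_usd"].agg(["sum", "count"])
    by_family = df[income].groupby("family")["received_usd"].sum()
    return totals, by_family
```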
The largest discovered likely payment of $9.5M is shown in Figure 2. Given this perspective, income begins to dwarf expenses. In addition to leftover money from Ryuk used to fund the Conti operation, the total Conti income ($77.9 million) is more than double total expenses ($31.2 million). A significant portion of the proceeds go directly to the hands of affiliates. Our numbers are likely incomplete - Chainalysis identified $180 million in proceeds from Conti in 2021 alone [1]. However, unlike Chainalysis, we have provided our methodology for identifying ransom payments and we will publicly publish the addresses.
Figure 2: The largest discovered likely payment, of $9.5M in March 2020. The funds originated from the unlabeled cluster discussed in Section IV.
\begin{table} \begin{tabular}{c c c} \hline \hline Source & Amount & Addresses \\ \hline Ransom payments in leaked dataset & \$3.4M & 5 \\ Ransom payments (Ransomwhere) & \$17.1M & 28 \\ Likely ransom payments (Conti) & \$57.4M & 41 \\ Likely ransom payments (Ryuk) & \$26.5M & 34 \\ **Total income** & **\$104.4M** & **107** \\ \hline Salary & \$21.9M & 419 \\ Reimbursement/Salary & \$5.4M & 15 \\ Reimbursement & \$3.8M & 227 \\ **Total expenses** & **\$31.2M** & **661** \\ \hline \hline \end{tabular} \end{table} Table III: Conti income and expenses based on annotated Bitcoin addresses, Ransomwhere data, and inferred payments.
Table IV shows the most common origins of confirmed and likely Conti ransom payments. The unlabeled cluster discussed in Section IV represents a majority of payments - almost 70%. Following that, Gemini composes a significant share at $23.1 million. The fact that just two exchanges represent the vast majority of identified payments to Conti suggests strong intervention points. We have published the derived likely ransom payment addresses on GitHub.1 Footnote 1: See [https://github.com/cablej/conti-payments](https://github.com/cablej/conti-payments) Notably, the release of these addresses increases the amount of publicly known Conti payments more than fivefold - from Ransomwhere's $17.1 million to $104.4 million.
\begin{table} \begin{tabular}{c c c c} \hline \hline Exchange & Confirmed Payments & Likely Payments & Total \\ \hline Unlabeled Cluster & \$8.6M & \$64.8M & \$73.4M \\ Gemini & \$5.9M & \$17.4M & \$23.1M \\ Kraken & \$1.0M & \$0.2M & \$1.2M \\ Coinbase & \$0.4M & \$0.6M & \$1.1M \\ Binance & \$0.6M & \$0.0M & \$0.6M \\ \hline \hline \end{tabular} \end{table} Table IV: Top exchanges from which Conti ransom payments originate. Note that “Unlabeled Cluster” represents the unlabeled cluster of bitcoin addresses, discussed in Section IV.
Next, we consider the sources and destinations of funds from wallets in the leaked dataset, shown in Figure 3. We use money laundering risk levels provided by Crystal Blockchain to group exchanges into three categories of low, medium, and high risk. Consistent with the hypothesis that most money originates from victim payments, most money originates either from low risk exchanges or the unlabeled cluster. Moderate-high risk exchanges, sanctioned exchanges, illegal services, and mixers represent smaller amounts - suggesting that some Conti actors might take steps to conceal their funds, though this practice is not systematized across the group. The receiving profile varies by type of address - ransom payment funds come almost exclusively from low risk exchanges or the unlabeled cluster, while salaries and reimbursements represent a more diverse portfolio. We speculate that some salaries and reimbursements
are paid from a slush fund belonging to the core operators, and thus have a variety of sources. A large portion of wallet transactions, somewhat surprisingly, are to low risk exchanges - exchanges most likely to abide by Know Your Customer (KYC) regulations. Gemini and Binance account for a large portion of these funds - $4.3 million and $2.9 million, respectively. Given Gemini's position particularly as a regulated, U.S.-based exchange, Conti actors may have jeopardized their operational security by trading there. Other funds wind up in a variety of illicit destinations, such as $6.8M in Ren Exchange, a peer-to-peer cross-blockchain exchange that can be used to launder funds, $2.8M in the Seychelles-based exchange Huobi, and $1.4M in the now-sanctioned Hydra marketplace. Of the expanded set of ransom payments, the destination of funds includes a variety of services used to launder money. $14.4M is sent to Ren Exchange, $17.9M to Huobi, and $12.6M to Binance. While both Huobi and Binance enforce KYC, certain illicit exchanges such as the now-sanctioned Suex have operated "nested exchanges" within both exchanges, providing an opportunity to launder funds through otherwise-regulated exchanges [28].
Figure 3: Labelled origins and destinations of wallet funds occurring in the Conti leaks dataset. Note that unknown addresses are excluded. "mlrisk" stands for money laundering risk. Further, note that as there are few ransom payment addresses in the Conti leaks dataset, the "victim" section in this chart only represents a fraction of all victim ransom payments to Conti.
The leaks also offer insight into the individual salaries of Conti associates. Based on addresses where a Conti associate appeared to have claimed ownership of the address - most often a salary address - we compute the highest-grossing associates to be tramp ($1.2M), mango ($470K), baget ($400K), bullet ($280K), and andy ($98K). We note that this is an incomplete view into the proceeds of these associates. We also observe evidence of co-spending among some associates. Co-spending occurs when two Bitcoin addresses are used as input to the same transaction, suggesting that the same entity controls both addresses. We observe two clusters of associates - viper, jumbo, ganesh and sonar, and sticks, stakan, eluira, and bekeeper. It is likely that these two clusters use a shared Bitcoin wallet to manage their funds, or otherwise share ownership of funds. ## VI Business Analysis ### _Overlap and Re-branding_ Conti is assessed to be the successor of the Ryuk RaaS collective, which largely down-scaled their operations in March 2020 [18]. This is evidenced by the leftover revenue that we identified that was likely used to fund Conti. Ryuk and Conti shared multiple features, most notably the use of Trickbot for initial infection. It is well documented that Trickbot and Conti are both technically and operationally interconnected [29]. This overlap is significant to understand some of the roles and structures within Conti, because group members and duties are shared. Trickbot provided the initial infection and facilitated the installation of the Conti ransomware on a victim's machine, similar to Ryuk [30]. An arrest warrant for a member of the Trickbot collective, max, indicates that a large number of Trickbot's members had also collaborated on the Dyre Trojan, a precursor to Trickbot. The remaining members of the Dyre collective transitioned to Trickbot following Dyre's takedown in 2015 [30]. max's alias was identified within the Conti Leaks, also indicating overlap with Trickbot and Conti. Further, the Conti Leaks Twitter account leaked information from both Trickbot and Conti, including Trickbot's wider membership, indicating that there is approximately 18% overlap with those Trickbot aliases within Conti's Jabber.
The indictment of Trickbot malware developer max and the follow-on indictment of ffx provided further details into the Trickbot organization, which also helped inform our understanding of Conti. Some of the same roles and responsibilities that were observed within the Trickbot organization were also observed within Conti, indicating that it was likely a rebranding as opposed to a reorganization. Trickbot and Conti also shared similarities in their roles, responsibilities, and recruiting methods [30, 31]. ### _RaaS Roles, Responsibilities, and Recruiting_ Similar to RaaS collectives, Trickbot relied upon a network of specialized workers to facilitate different functions. For example, the unnamed defendants in the Trickbot indictment included the following roles:
* **Malware Manager:** Recruiting, hiring, testing malware, and procuring infrastructure
* **Malware Developer:** Oversaw functionality within the development of the malware
* **Crypters:** Encrypted the malware to prevent detection from anti-virus
* **Spammers:** Deployed the malware through targeted and broad-based phishing campaigns
According to the indictment, Trickbot advertised these roles on legitimate job posting websites, like LinkedIn and Indeed, as well as Russia-based freelance websites. After completion of a programming test, users were added to a private Jabber OTR communication server where they collaborated on "development, maintenance, and deployment of Trickbot." This is consistent with our observations of the recruiting methods used by Conti, which included recruiting for licit roles on job posting websites like Avito, HeadHunter, and Profi[.]ru. Conti utilized similar recruiting methods, as observed in their Jabber, and select threads on underground forums. On August 5, 2021, a disgruntled Conti affiliate m1Geelka leaked internal training materials, and IP addresses of their Cobalt Strike servers on XSS, a top tier underground forum. m1Geelka also commented on an IT recruitment thread from a user IT_Work, stating that it was an advertisement to work with Conti. Between June 10, 2021 and September 6, 2021, IT_Work had posted multiple offerings on underground Russian language forums, like XSS and Exploit, advertising seemingly legitimate job roles to support large IT projects. In our research, we assessed that these advertisements for licit roles were in concert with job postings on Russian-based freelance websites.
* **C++ Programmer** (with reverse engineering skills)
* **Full-stack web developer for PHP, NodeJS**
* **Windows System Administrator**
* **Data Analyst**
* **Business Analyst**
* **UI/UX Designer**
* **HTML Designer**
* **Pentester**
IT_Work's posts demonstrate that while RaaS collectives are commonly associated with illicit tasks, like malware management and development, they also rely on technical talent to maintain infrastructure. These seemingly licit advertisements, albeit on underground forums, allowed Conti to recruit witting and unwitting tech workers to support the infrastructure of their operation.
Following the Colonial Pipeline ransomware incident on May 6, 2021, President Biden threatened action against "ransomware networks" [32]. As a result, XSS, Exploit, and Raid Forums banned ransomware advertisements. The leader of the former Babuk ransomware collective then started their own dedicated ransomware forum in May 2021, originally dubbed Payload.bin. The site changed its name to RAMP (Ransom Anon Market Place). While originally a closed forum composed of reputable threat actors, RAMP became public in August 2021 following an extortion attempt. Ransomware advertisements continued to be available on Telegram and Jabber [33]. Conti suffered a minor disruption in November 2021 after details of their infrastructure were reported on by a security firm [34]. Shortly thereafter, a user JordanConti surfaced on RAMP highlighting that they were undeterred by the disruption, which included "peripheral IPs and wallets." JordanConti began openly recruiting for illicit roles required for their ransomware operation on RAMP, listing the Russian language as a requirement. The following roles were advertised on RAMP:
* **Pentesters:** "Top networkers who know how to bypass problematic AVs like Sentinel, work with RMM (Remote Monitoring Management) and EDR and backups"
* **Bot herders:** "Ideally, people with their own botnet, with a sufficient number of corp bots, especially in the US."
* ... "- the priority is USA."
Figure 4: A flow chart demonstrating the recruitment sources of a RaaS affiliate
From the Conti Leaks, we were able to ascertain that their HR specialists were also continuing to recruit on Russian-language freelance job posting websites and specialized universities. According to the Jabber chat logs, details of the roles varied. In a conversation between viper, a hiring representative, and bourbon, a developer, the reasoning varied from "we do pentesting for big clients," to more vague responses like "the work is remote, communication via messenger, the nature of the work is specific. That's all I know about conditions." viper then specified, "We do pen testing, write hacker software - exploits, grabbers, spam bots and more." The legitimacy of the work was often questioned throughout the Conti leaks, as workers wondered why they had to be paid in cryptocurrencies, only communicated through encrypted messenger, and were unaware of the name or actual function of their employer. It does not appear that Conti used front companies to obscure their operations, but relied upon their managers to convey the appropriate messaging of the work. This meant deceiving their employees, or providing indirect answers to describe the true nature of their work. ### _Team Composition_ From the chats, it appears that Conti is divided into several sub-teams. These teams are generally divided into functional areas, including management, development and infrastructure, access operations, and negotiations. These roles are consistent with the ransomware team structure outlined in the background. In a chat from mango, a manager that oversees development and infrastructure, to stern, the organizational leader, mango shares details of the structure of his team, along with budgeted salaries:
* the main team - $97,447; 52 people
* new team - $4,000; 3 people, one has not started yet
* reverse engineering - $23,347; 16 people
* research team - $12,500; 6 people
* osint intelligence team - $9,000; 4 people
mango's team does not appear to encompass the whole Conti operation, but rather one functional area.
The total monthly salary for their team is assessed to be $146,294. Other references to a team structure appeared throughout the chat. For example, in a conversation between target and poll, target asks if poll needs individuals to attack logistics and manufacturing. poll highlights that they have a team that only "locks defense/military companies." In this regard, it appears that the sector-specific targeting is divided between sub-divisions. However, some sector targeting like healthcare appeared to be off-limits. Among RaaS operations as a whole, operators have informally agreed not to target healthcare. Following the DarkSide attack on Colonial Pipeline, REvil announced several new self-imposed restrictions for its operators and their affiliates. These announced restrictions included not targeting social sectors (such as healthcare and education) or any government entities, as well as requiring ransomware affiliates to get REvil operators' approval for any future targets. In an interview, LockBit claimed that they have a "negative attitude towards those who encrypt medical and educational institutions." In an exchange between reshaev, one of Conti's main developers, and pin, who is possibly an affiliate, pin defends their reasoning for targeting a sports treatment center, claiming that it has no resuscitation unit and they have over 3K in insurance. reshaev emphasizes that they have a policy prohibiting ransoming healthcare, and recommends that pin "goes around them now." Despite this assertion, Conti had ransomed the healthcare sector through their operation including Ireland's Health Service Executive (HSE) and Department of Health (DoH), presumably choosing money over morals. In the Conti leaks, there are abstract references to specific teams. For example, mango introduces themselves as "support C, manager for general issues of the team trick locker, now I'm looking for access to work for the gang." buza, a team lead of coders, in an exchange with hof, a technical manager, makes abstract reference to "rocket" and "A," likely meaning Rocket.Chat and team "A" (one of three teams). The Rocket.Chat messages, though not included in our primary research, did include details of the team composition of the access operations. The user alter briefly described the structure and responsibilities of teams A, B, and C. alter did not mention the size of the groups; however, there were 54 unique aliases in that server. The current composition is divided into groups, and each group is assigned a team leader (one or two depending on the size of the group). ### _Primary Actors_ To further understand the main actors within the Conti leaks Jabber, we sorted the aliases by degree centrality. The top five individuals within the chats, defender, stern, buza, mango, and bentley, are Conti managers controlling payments, operations, developers, and malware builds. These managers also fulfilled HR functions, often sending bulk messages to users with comments, queries, and reminders to share cryptocurrency addresses for payments. Users that had a lower degree centrality were likely affiliates or developers. The limited number of chat messages made their roles much more difficult to identify. Managers like buza identified their developers by role in bulk messages that included requests to continue working on a bug tracker. defender also sent bulk messages, not identifying recipients by role, requesting alternative forms of communication.
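A minimal sketch of the degree centrality ranking described above follows (the message schema, i.e. sender/receiver alias pairs, is an assumption about the parsed log format):
```python
import networkx as nx

def rank_by_degree_centrality(message_pairs, top=50):
    """message_pairs: iterable of (sender, receiver) alias pairs extracted from
    the chat logs. Returns the top aliases by degree centrality."""
    g = nx.Graph()
    for sender, receiver in message_pairs:
        g.add_edge(sender, receiver)   # one edge per communicating pair
    centrality = nx.degree_centrality(g)
    return sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:top]
```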
These bulk messages likely indicate that the leaked Jabber was a centralized communication channel, and that other communication channels may have been used for more specialized operations, like the Rocket.Chat and Trickbot Forum, which included details on using the Trojan. The managers hold the power of the purse. The top five users by centrality are assessed to be some of the primary leadership, since their role also included communications with the channel. Requests for funds typically occurred in the Conti leaks Jabber, with team leads requesting salaries and reimbursement from managers on behalf of the individuals on their teams. This information helped inform us on the hierarchy of the roles, relationship between aliases, and an understanding of the team structure. However, unlike previous cybercrime research that described the importance of cybercrime cultural capital within communities, the allure of experience and experimentation, it appears that RaaS operations center around mundane tasks of operating infrastructure [11, 35, 36, 37]. The most important members of the Conti operation appear to be the managers overseeing the collective work, administering salaries, and approving expenses for reimbursement. ### _Rewards for Justice_ On May 6, 2022, the Department of State offered a $10 million reward for information leading to the identification or location of the members of the Conti collective as part of the Rewards for Justice program. On August 11, 2022, they requested specific information on five individuals:
* dandis: manager, crypters
* professor (aka alter): ransomware negotiator
* reshaev: manager, ransomware builds
* target: manager, access operations
* tramp: manager, operations
From our research, we identified these individuals as being highly technical managers concerned with crypters, ransomware builds and development, access operations, and victim negotiations. Most of these aliases also appear within the Trickbot leaks, indicating that there may be overlap with the aforementioned Trickbot investigation. These individuals were most likely selected based upon the value that they provide the Conti collective in achieving a competitive advantage in the RaaS landscape [38]. On February 9, 2023 (following the initial publication of this paper), the United States and United Kingdom sanctioned several members of the Trickbot collective for their role in cybercrime and ransomware operations [39]. These individuals also appeared in the Conti Leaks, through the primary aliases shared in the sanction, or alternative aliases that helped us identify their membership in the collective. The following individuals were included in the sanction:
* bentley (aka ben): senior manager
* baget: developer
* globus: developer
* tropa (aka kerasid): money laundering
* iseldor: malicious injects
* mushroom: manager
* strix: administrator
These sanctions demonstrate a continued focus on cybercrime and ransomware operations. While the Rewards for Justice identified many of the lead members of Conti by alias, the sanctions listed the seven individuals by name. These measures underscore the importance of human capital in building and maintaining modern ransomware operations. ## VII Related Work In order to conduct our analysis of the Conti ransomware operation, we use and extend methodologies from cryptocurrency tracking, leaked cybercrime data, and ransomware analysis.
### _Cryptocurrency Tracking_ Prior work has shown that Bitcoin wallets and transactions are often linkable to the same entity using several heuristics [40, 41, 42, 43]. These Bitcoin tracing heuristics have been implemented in a number of commercial cryptocurrency forensic analysis tools, such as Chainalysis, TRM Labs, Elliptic, and Crystal Blockchain, which also use techniques to label the owners of account clusters. We use Crystal Blockchain's cryptocurrency forensic tools to perform analysis of Bitcoin accounts that we identify in the leaked Conti chat data. Huang et al. conducted a two-year end-to-end measurement of ransomware operations, tracing Bitcoin from acquisition to ransomware payment. In this analysis, known-victim payments were identified through seed addresses and clustered to reveal payments from previously unknown victims. The authors identified ransomware revenue exceeding $16 million USD, and infrastructure that was used to cash out illicit proceeds [6]. Paquet-Clouston et al. identify $13 million USD in ransomware payments between 2013 and 2017 [44]. ### _Ransomware as a Service_ Oosthoek et al. analyze over $100 million in ransom payments through a crowd-sourced dataset of ransomware addresses [25]. The authors characterized the shift from commodity ransomware to RaaS. Along with increased profits came a growing sophistication, evidenced by faster laundering of funds and increased operational security practices. We build on this work by conducting an in-depth analysis of a single ransomware group, which allows us to map over five times the amount of payments to Conti, in addition to operating costs, which were not previously analyzed. Previous work has also documented the practice of Ransomware as a Service groups "splitting" payments between the ransomware group and affiliates. Cong et al. document DarkSide's split percentage, which varies based on the size of the ransom payment [45]. Regarding Conti, Elliptic noted a 22.5% split for several Conti ransom payment addresses [46]. ### _Conti_ To date, relatively little academic work has analyzed the Conti leaks. Cong et al. investigate the cryptocurrency activities of several notable ransomware groups, including Conti [45]. The authors compile data from a variety of sources, including public and proprietary data. As part of their work, the authors discuss Conti's activity at a high level, including analyzing Conti's posting of victim data on leak sites. Our work builds on this paper by performing an in-depth analysis of Conti's economic and business practices, including extracting and analyzing 666 Bitcoin addresses, compared to the 239 addresses the authors extracted. Other industry groups, including ForeScout, Secureworks, and Check Point, have primarily analyzed the business aspects of the Conti leaks [47, 48, 49]. ## VIII Conclusion Our study of Conti presents a vignette into the structure of a modern Ransomware as a Service group. This is the first comprehensive crypto-economic analysis of the Conti leaks, based on our annotation of cryptocurrency addresses present in the leaks, on-chain analysis of cryptocurrency payments, and a qualitative business assessment based upon user conversations. Through our analysis, we developed a methodology to identify ransom payments based on common splitting behavior. We use this methodology to identify $83.9 million in likely new payments; the same approach can help flag ransomware-affiliated payments passing through exchanges.
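As an illustration of the kind of split-based identification described above, the sketch below flags a two-output transaction whose smaller output is close to the 22.5% operator share noted by Elliptic. The tolerance value and the transaction representation are assumptions for illustration, not the exact heuristic used in our analysis.

```python
# Simplified sketch of a ransom-split heuristic (illustrative only): flag
# transactions whose two outputs split roughly 22.5% / 77.5%, the
# operator/affiliate ratio reported for several Conti payment addresses.
SPLIT_RATIO = 0.225   # operator share reported by Elliptic
TOLERANCE = 0.01      # assumed tolerance; a real heuristic would be tuned empirically

def looks_like_ransom_split(output_values_btc):
    """Return True if a transaction's two outputs match the expected split pattern."""
    if len(output_values_btc) != 2:
        return False
    total = sum(output_values_btc)
    if total == 0:
        return False
    smaller_share = min(output_values_btc) / total
    return abs(smaller_share - SPLIT_RATIO) <= TOLERANCE

# Hypothetical transaction outputs (in BTC), not real chain data.
print(looks_like_ransom_split([22.5, 77.5]))   # True
print(looks_like_ransom_split([10.0, 90.0]))   # False
```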
Identifying payments that match such split patterns may assist cryptocurrency exchanges in blocking them, putting additional pressure on ransomware operators. We find significant leverage points in both economic and business areas. The fact that a significant portion of funds is both received from and sent to low-risk exchanges presents an opportunity to monitor and seize funds. Further, targeting the organizational leadership responsible for recruiting, hiring, training, and administering the various business units and infrastructure can also have an impact on the group's ability to function. The affiliate structure additionally provides opportunities to disrupt the more technical operators of the ransomware, thereby limiting affiliates' ability to lease the malware or receive operational support. ## Acknowledgment We thank the anonymous reviewers for their insightful and constructive suggestions and feedback, and Crystal Blockchain for providing access to their platform. Funding for this work was provided in part by National Science Foundation grants 1844753 and 2039693.
2302.12006
Does the evaluation stand up to evaluation? A first-principle approach to the evaluation of classifiers
How can one meaningfully make a measurement, if the meter does not conform to any standard and its scale expands or shrinks depending on what is measured? In the present work it is argued that current evaluation practices for machine-learning classifiers are affected by this kind of problem, leading to negative consequences when classifiers are put to real use; consequences that could have been avoided. It is proposed that evaluation be grounded on Decision Theory, and the implications of such foundation are explored. The main result is that every evaluation metric must be a linear combination of confusion-matrix elements, with coefficients - "utilities" - that depend on the specific classification problem. For binary classification, the space of such possible metrics is effectively two-dimensional. It is shown that popular metrics such as precision, balanced accuracy, Matthews Correlation Coefficient, Fowlkes-Mallows index, F1-measure, and Area Under the Curve are never optimal: they always give rise to an in-principle avoidable fraction of incorrect evaluations. This fraction is even larger than would be caused by the use of a decision-theoretic metric with moderately wrong coefficients.
K. Dyrland, A. S. Lundervold, P. G. L. Porta Mana
2023-02-21T09:55:19Z
http://arxiv.org/abs/2302.12006v1
# Does the evaluation stand up to evaluation? ###### Abstract How can one meaningfully make a measurement, if the meter does not conform to any standard and its scale expands or shrinks depending on what is measured? In the present work it is argued that current evaluation practices for machine-learning classifiers are affected by this kind of problem, leading to negative consequences when classifiers are put to real use; consequences that could have been avoided. It is proposed that evaluation be grounded on Decision Theory, and the implications of such foundation are explored. The main result is that every evaluation metric must be a linear combination of confusion-matrix elements, with coefficients - 'utilities' - that depend on the specific classification problem. For binary classification, the space of such possible metrics is effectively two-dimensional. It is shown that popular metrics such as precision, balanced accuracy, Matthews Correlation Coefficient, Fowlkes-Mallows index, \(F_{1}\)-measure, and Area Under the Curve are never optimal: they always give rise to an in-principle _avoidable_ fraction of incorrect evaluations. This fraction is even larger than would be caused by the use of a decision-theoretic metric with moderately wrong coefficients. ## 0 Prologue: a short story The manager of a factory which produces a sort of electronic component wishes to employ a machine-learning classifier to assess the durability of each produced component. The durability determines whether the component will be used in one of two possible kinds of device. The classifier should take some complex features of the component as input, and output one of the two labels '0' for 'long durability', or '1' for 'short durability', depending on the component type. Two candidate classifiers, let us call them A and B, are trained on available training data. When employed on a separate evaluation set, they yield the confusion matrices (1) and (2), written with the assigned class indexing the rows and the true class indexing the columns (their numerical entries are not reproduced here). The popular evaluation metrics computed from these matrices, collected in Table 1, mostly favour classifier B. The developers of the classifiers therefore recommend the employment of classifier B. The factory manager does not fully trust these metrics, asking, "how do I know they are appropriate?". The developers assure the manager that these metrics are widely used. The manager (of an engineering background) comments, "I don't remember 'widely used' being a criterion of scientific correctness - not after Galileo at least", and decides to employ both classifiers for a trial period, to see which factually leads to the best revenue. The two classifiers are integrated into two separate but otherwise identical parallel production lines. During the trial period, the classifiers perform according to the classification statistics of the confusion matrices (1) and (2) above. At the end of this period the factory manager finds that the average net gains per assessed component yielded by the two classifiers are2 Footnote 2: '$' represents a generic currency or value unit, which is why it is not written in front of the gains. \[\begin{array}{cc}\text{classifier A}&\text{classifier B}\\ \hline 3.5\,\$&-3.5\,\$\end{array} \tag{3}\] That is, classifier B actually led to a _loss_ of revenue. The manager therefore decides to employ classifier A, commenting with a smug smile that it is always unwise to trust the recommendations of developers, unacquainted with the nitty-gritty reality of a business. The average gains above are easy to calculate from some additional information.
The final net gains caused by the correct or incorrect classification of one electronic component are as follows: \[\begin{array}{lcc} &\text{true class }0&\text{true class }1\\ \text{assigned class }0&15\,\$&-335\,\$\\ \text{assigned class }1&-35\,\$&165\,\$\end{array} \tag{4}\] The reason behind these values is that short-durability components (class 1) provide more power and are used in high-end, costly devices; but they cause extreme damage and consequent repair costs and refunds if used in devices that require long-durability components (class 0). Long-durability components provide less power and are used in low-end, cheaper devices; they cause some damage if used in devices that require short-durability components, but with lower consequent costs. Taking the sum of the products of the gains above by the respective percentages of occurrence - that is, the elements of the confusion matrix - yields the final average gain. The final average gain returned by the use of classifier A, for example, is \[15\,\$\times 0.27-335\,\$\times 0.15-35\,\$\times 0.23+165\,\$\times 0.35=3.5\,\$\;.\] In the present case, the confusion matrices (1) and (2) lead to the amounts (3) found by the manager. ## 1 Issues in the evaluation of classifiers The story above illustrates several well-known issues of currently popular evaluation procedures for machine-learning classifiers: (a) We are swept by an avalanche of possible evaluation metrics. Often it is not clear which is the most compelling. In the story above, for example, one could argue that the true-negative rate was the appropriate metric, in view of the great difference in gains between correct and wrong classification for class 1, compared with that for class 0. But at which point does this qualitative reasoning fail? Imagine that the net gains had been as follows instead: [the alternative net-gain matrix and the discussion that followed it are not recoverable from this copy]
or other statistical assumptions), and an analysis of special cases only. Unfortunately this kind of derivation does not guarantee generalization to all cases, nor that the proposed metric is uniquely determined by the chosen assumptions, nor that it satisfies more general consistency requirements. By contrast, consider the kind of derivation that starts from specific qualitative requirements and mathematically proves the _uniqueness_ of a particular formula satisfying them. Examples are the derivation of the Shannon entropy as the _unique_ metric universally satisfying a set of basic requirements for the amount of information4. Or the derivation of the probability calculus as the _unique_ set of rules satisfying general rational requirements for inductive reasoning, learning, and prediction5. Or the derivation of decision theory as the unique framework guaranteeing a rational and optimal decision under uncertainty6. Footnote 4: Shannon 1948; Woodward 1964 § 3.2; also Good & Toulmin 1968. **5** Cox 1946; Fine 1973; Halpern 1999; Snow 1998; 2001; Jaynes 2003 chs 1-2; see also Self & Cheeseman 1987; Cheeseman 1988; Russell & Norvig 2022 ch. 12. **6** Russell & Norvig 2022 § 15.2; von Neumann & Morgenstern 1955 chs 2-3. **7** cf. Howard 1980. **8** cf. the discussion in Sox et al. 2013 § 11.2.9. (d) Let us assume that some of the popular metrics identify the best algorithm 'in the majority of cases' - although it is difficult to statistically define such a majority, and no real surveys have ever been conducted to back up this assumption. Yet, do we expect the end-user to simply _hope_ not to belong to the unlucky minority? Is such uncertainty inevitable? We cannot have a cavalier attitude towards this problem: life and death can depend on it in some machine-learning applications7. Imagine a story analogous to the factory one, but in a medical setting instead. The classifiers should distinguish between two tumour types, requiring two different types of medical intervention. The confusion matrices are the same (1) and (2). Correct and incorrect classification lead to the following expected remaining life lengths for patients in a specific age range:8 \[\begin{array}{lcc} &\text{true class }0&\text{true class }1\\ \text{assigned class }0&350\ \text{months}&0\ \text{months}\\ \text{assigned class }1&300\ \text{months}&500\ \text{months}\end{array} \tag{6}\] These values might arise in several scenarios. For example, tumours of class 0 and 1 may require very different kinds of treatment. If a class 1 tumour is misdiagnosed and not properly treated, it leads to immediate death (0 months); if correctly diagnosed, its treatment is usually successful, leading to a high life expectancy (500 months). Class 0 tumours can be treated, but they lead to a shorter life expectancy (350 months). If they are misdiagnosed as class 1, however, the damage caused by class 1 treatment shortens this life expectancy even further (300 months). The matrix above is numerically equivalent to (4) up to a common additive constant of 335, so the final net gains are also shifted by this amount.
It is easy to see that the metrics are exactly as in Table 1, the majority favouring classifier B. And yet the use of classifier A leads to a more than six-month longer expected remaining life than classifier B. (e) Often it is not possible to temporarily deploy all candidate classifiers, as our fictitious manager did, in order to observe which factually leads to the best results. Or it may even be unethical: consider a situation like the medical one above, where a classifier may lead to a larger number of immediate deaths than another. (f) Finally, none of the issues listed above is caused by class imbalance (the occurrence of one class with a higher frequency than another). In our story, for example, the two classes were perfectly balanced. Class imbalance can make all these issues worse9. Footnote 9: Jeni et al. 2013; Zhu 2020. But our story also points to a possible solution for all these issues. The 'metric' that ultimately proved to be relevant to the manager was the average net monetary gain obtained by using a candidate classifier. In the medical variation discussed in issue (d) above, it was the average life expectancy. In either case, such a metric could have been easily calculated beforehand, upon gathering information about the average gains and losses of correct and incorrect classification, collected in the matrix (4) or (6), and combining these with statistics collected in the confusion matrix associated with the classifier. Denoting the former kind of matrix by (\(U_{ij}\)) and the confusion matrix by (\(C_{ij}\)), where \(i\) indexes the classifier outputs (rows) and \(j\) the true classes (columns), such a metric would have the formula \[\sum_{i,j}U_{ij}\;C_{ij} \tag{7}\] the sum extending to all matrix elements. In the present work, we argue that formula (7) is indeed the only acceptable metric for evaluating and comparing the performance of two or more classifiers, each with its own confusion matrix (\(C_{ij}\)) collected on relevant test data. The coefficients \(U_{ij}\), called _utilities_, are problem-dependent. This formula is the _utility yield_ of a classifier having confusion matrix (\(C_{ij}\)). Our argument is based on _Decision Theory_, an overview of which is given in § 2. The utility yield (7) is a linear combination of the confusion-matrix elements, with coefficients independent of the elements themselves. In § 3 we explore some properties of this formula and of the space of such metrics for binary classification. We also show that some common metrics such as precision, \(F_{1}\)-measure, Matthews correlation coefficient, balanced accuracy, and Fowlkes-Mallows index _cannot_ be written as a linear combination of this kind (or a one-one function thereof). This impossibility has two consequences. First, it means that these metrics are likely affected by some kind of cognitive bias. Second, there exists _no_ classification problem for which these metrics can correctly rank the performance of all pairs of classifiers. Using any one of these metrics leaves open the possibility that the evaluation is incorrect _a priori_. In § 5 we show that this is also true for the Area Under the Curve of the Receiver Operating Characteristic, and we offer some additional remarks about it from the standpoint of decision theory. On the other hand, metrics such as accuracy, true-positive rate, true-negative rate can be written in the form (7). Consequently, each one has a set of classification problems in which it correctly ranks the performance of _all_ pairs of classifiers.
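As a minimal numerical sketch of the utility yield (7), the following reproduces the manager's average gain using the utility matrix (4) and the confusion-matrix entries of classifier A quoted in the prologue; the nested-list encoding is purely illustrative. It also shows accuracy as the special case in which correct classifications receive utility 1 and incorrect ones utility 0.

```python
# Minimal sketch of the utility yield (7): sum over i, j of U_ij * C_ij.
def utility_yield(U, C):
    return sum(U[i][j] * C[i][j] for i in range(len(U)) for j in range(len(U[0])))

# Utility matrix (4): rows = assigned class, columns = true class.
U_gains = [[15, -335],
           [-35, 165]]

# Confusion matrix of classifier A from the prologue (relative frequencies).
C_A = [[0.27, 0.15],
       [0.23, 0.35]]

print(utility_yield(U_gains, C_A))      # ~3.5: the manager's average gain per component

# Accuracy is the special case with utility 1 for correct and 0 for wrong classifications.
U_accuracy = [[1, 0],
              [0, 1]]
print(utility_yield(U_accuracy, C_A))   # ~0.62
```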
What happens if we are uncertain about the utilities appropriate to a classification problem? And what happens if the utilities are incorrectly assessed? We show in § 4 that uncertainty about utilities still leads to a metric of the form (7). We also show that an evaluation using incorrect utilities, even with relative errors as large as 20% of the maximal utility, still leads to a larger fraction of correctly ranked classifiers than the use of any of the popular metrics mentioned above. We summarize and discuss our results in the final § 6. ## 2 Brief overview of decision theory
2308.07336
Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic
We study a synthetic corpus based approach for language models (LMs) to acquire logical deductive reasoning ability. The previous studies generated deduction examples using specific sets of deduction rules. However, these rules were limited or otherwise arbitrary, limiting the generalizability of acquired reasoning ability. We rethink this and adopt a well-grounded set of deduction rules based on formal logic theory, which can derive any other deduction rules when combined in a multistep way. Then, using the proposed corpora, which we name FLD (Formal Logic Deduction), we first evaluate and analyze the logical reasoning ability of the latest LLMs. Even GPT-4 can solve only half of the problems, suggesting that pure logical reasoning isolated from knowledge is still challenging for the LLMs, and additional training specialized in logical reasoning is indeed essential. We next empirically verify that LMs trained on FLD corpora acquire more generalizable reasoning ability. Furthermore, we identify the aspects of reasoning ability on which deduction corpora can enhance LMs and those on which they cannot, and discuss future directions on each aspect. The released corpora serve both as learning resources and as challenging benchmarks.
Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa
2023-08-11T13:15:35Z
http://arxiv.org/abs/2308.07336v3
# Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic ###### Abstract We study a synthetic corpus based approach for language models (LMs) to acquire logical deductive reasoning ability. The previous studies generated deduction examples using specific sets of deduction rules. However, these rules were limited or otherwise arbitrary. This can limit the generalizability of acquired deductive reasoning ability. We rethink this and adopt a well-grounded set of deduction rules based on formal logic theory, which can derive any other deduction rules when combined in a multistep way. We empirically verify that LMs trained on the proposed corpora, which we name **FLD** (**F**ormal **L**ogic **D**eduction), acquire more generalizable deductive reasoning ability. Furthermore, we identify the aspects of deductive reasoning ability on which deduction corpora can enhance LMs and those on which they cannot. Finally, on the basis of these results, we discuss the future directions for applying deduction corpora or other approaches for each aspect. We release the code, data, and models 1. Footnote 1: Hitachi, Ltd. Research and Development Group, Kokubunji, Tokyo, Japan. Correspondence to: Terufumi Morishita <[email protected]>. ## 1 Introduction Building a machine that logically reasons step by step has been the Holy Grail since the early era of artificial intelligence (McCarthy, 1959). Such a machine will solve complex real-world problems in a very explainable and transparent way. Toward this goal, various benchmarks for measuring logical reasoning ability have recently been proposed (Weston et al., 2015; Habernal et al., 2018; Niven and Kao, 2019; Richardson et al., 2020). Usually, researchers tackle these benchmarks using state-of-the-art language models (LMs) expecting their remarkable linguistic understanding ability. Yet still, even such powerful LMs struggle with these benchmarks, showing their limited logical reasoning ability (Askell, 2020; Rae et al., 2021; Yang et al., 2022). LMs have acquired their linguistic understanding ability inductively from a lot of high-quality examples in human-written texts (Devlin et al., 2019). Conversely, their poor logical reasoning ability suggests the lack of high-quality examples of logical reasoning. This is not a surprise given that humans usually think reflexively rather than logically step by step (Kahneman, 2011). The consideration here suggests a straightforward strategy to equip LMs with logical reasoning ability: create corpora that include many examples of valid logical reasoning and train LMs on them. For this purpose, we can use the recently proposed RuleTaker (Clark et al., 2021). RuleTaker is a benchmark composed of many synthetically generated multistep deductive proofs written in natural language. Each deductive proof (dis-)proves a hypothesis by applying deduction rules multiple times to a given set of facts (the same as "Deduction Instance" in Figure 1). RuleTaker adopted the deduction rules of the implication kind, such as \(\forall x\,F(x)\to G(x),\ F(a)\vdash G(a)\) (here, \(\vdash\) means "derives"). Artificial Argument Corpus (AACorpus) (Betz et al., 2021) is another corpus composed of synthetically generated single-step deductive proofs. AACorpus adopted hand-selected deduction rules useful for critical thinking, such as contraposition \(\mathcal{F}\rightarrow\mathcal{G}\vdash\neg\mathcal{G}\rightarrow\neg\mathcal{F}\) (\(\neg\) is negation).
All these corpora could offer LMs opportunities to acquire logical _deductive_ reasoning ability, one of the most important and universally used logical reasoning abilities. However, it is still an open question whether this research direction will genuinely lead to the improvement of deductive reasoning ability. First, the deduction rules used in the previous corpora were limited or otherwise arbitrary. This can limit the generalizability of the acquired deductive reasoning ability since complex real-world reasoning can require various deduction rules. Second, it has not yet been studied on which aspects of deductive reasoning ability deduction corpora can enhance LMs. Such aspects will include, in addition to the mastery of deduction rules, the ability to solve complex deductive proofs, understanding of diverse linguistic expressions of logical statements, robustness to distractive facts, and understanding of complex formulas. This investigation is essential to discuss the future directions on deductive reasoning: for the aspects for which deduction corpora are beneficial, we can advance by inventing better deduction corpora. However, for the other aspects, we should take other approaches. This paper aims to answer these questions. First, we rethink the choice of deduction rules. To this end, we leverage formal logic theory (Section 2). According to formal logic, there are infinitely many valid deduction rules, including but not limited to the ones used in the previous corpora. However, among them, there is a set of atomic deduction rules called _the axioms_, and any other valid deduction rules can be derived by multistep deductions constructed from the axioms (_completeness_). As a consequence, _multistep deductions constructed from the axioms can express multistep deductions constructed from any other deduction rules_. The sets of deduction rules used in the previous corpora do not have this property and thus cannot express various other deduction rules. To address this point, we propose a deduction corpus generation framework named **FLD** (**F**ormal **L**ogic **D**eduction), which adopts the axioms. Using the corpora generated by **FLD**, we aim to teach LMs how to construct multistep deductions by using the axioms. To show that the training on **FLD** is indeed effective, we measured the performance of LMs trained on **FLD** corpora on two types of deductive reasoning benchmarks (Section 6). One benchmark is deduction corpora themselves, which require rigid logical reasoning, and the other is human-authored EntailmentBank (EB) (Dalvi et al., 2021), which requires more complex real-world reasoning. We obtained promising results: LMs trained on **FLD** outperform baselines on both benchmarks, showing their better generalizability. Nevertheless, LMs still fail to fully utilize the potential of the axioms as they struggle to construct many-step proofs. Next, we identify the aspects of deductive reasoning ability on which deduction corpora are beneficial (Section 7). To analyze each aspect separately, we employed various options of **FLD** and generated a comprehensive set of "ablation corpora", where one corpus emphasizes a specific aspect different from those emphasized by the other corpora. Then, for each corpus (aspect), we investigated whether the LM trained on that corpus outperformed the LM without this training. If it did, we concluded that the supervision from a deduction corpus on that aspect is beneficial for LMs.
The results suggest that deduction corpora are beneficial on almost all the aspects. However, for some aspects, deduction corpora alone are not enough, and thus other approaches, such as advanced models and learning methods, could be required. Finally, on the basis of the results, we discuss the future directions for applying deduction corpora or other approaches for each aspect (Section 8). We summarize our contributions as follows:
* To teach LMs deductive reasoning, we propose a deduction corpus generation framework **FLD** (Section 3). **FLD** is the first to leverage formal logic theory: it adopts a well-grounded set of deduction rules that can derive any other deduction rules when combined in multistep deductions. **FLD** highly flexibly generates various patterns of corpora for analysis (Table 1).
* Accordingly, we release challenging **FLD** corpora, the code, and the fine-tuned models1.
* We empirically verify that LMs trained on **FLD** corpora acquire more generalizable deductive reasoning ability than the baselines without such training (Section 6).
* We analyze each aspect of deductive reasoning and provide the future directions for applying deduction corpora or other approaches for them (Sections 7 and 8).
Figure 1: An overview of the proposed framework **FLD**, which aims to generate logical deduction instances constructed from the axioms of first-order predicate logic. **FLD** is modular, and the modules are made as flexible as possible by options or external template files. This enables us to generate various patterns of corpora for analysis. ## 2 Preliminaries: Formal Logic Let us consider the following single-step deductive reasoning: \[\begin{array}{cc}\text{The Earth revolves around the sun.}&\text{If the Earth revolves around the sun, the Earth has seasons.}\\ \hline\text{The Earth has seasons.}&\end{array} \tag{1}\] This deduction step derives the conclusion, written under the bar, from the two premises. Next, consider another step: \[\begin{array}{cc}\text{The Earth revolves around the sun.}&\text{If the Earth revolves around the sun, the Earth does not have seasons.}\\ \hline\text{The Earth does not have seasons.}&\end{array} \tag{2}\] In this step, one of the premises (i.e., "If the Earth revolves around the sun, the Earth does not have seasons") is false. However, _if the premise had been true_, we can still derive the conclusion. Thus, in formal logic, this step is still valid, the same as (1). We can abstract (1) and (2) using symbols as: \[\begin{array}{cc}\mathcal{F}&\mathcal{F}\rightarrow\mathcal{G}\\ \hline\mathcal{G}&\text{modus ponens}\end{array} \tag{3}\] The deduction step of this form is called _modus ponens_. While modus ponens is the most intuitive deduction step, many others exist. For example, a famous syllogism is: \[\begin{array}{cc}\dfrac{(\mathcal{F}\rightarrow\mathcal{G})\wedge(\mathcal{G}\rightarrow\mathcal{H})}{\mathcal{F}\rightarrow\mathcal{H}}&\text{syllogism}\end{array} \tag{4}\] The other example below defines the meaning of \(\wedge\) formally: \[\begin{array}{cc}\dfrac{(\mathcal{F}\wedge\mathcal{G})}{\mathcal{F}}&\dfrac{(\mathcal{F}\wedge\mathcal{G})}{\mathcal{G}}&\wedge\text{-elimination}\end{array} \tag{5}\] Of course, we can consider invalid 2 steps such as: Footnote 2: A deduction step (an argument) is invalid when for some truth value assignments, the conclusion is false (=0) even if all the premises are true (=1). See Table B.10b.
\[\begin{array}{cc}\mathcal{F}&(\mathcal{F}\vee\mathcal{G})\\ \hline\mathcal{G}&\end{array} \tag{6}\] Now, from these examples, we obtain some important points of deductive reasoning. First, deductive reasoning can be defined as a form of thought in which a conclusion is derived from a set of premises following specific rules. In formal logic, such deduction rules are called _arguments_. Thus, (1) to (6) all are formal logic arguments. Second, whether an argument is valid or not does not depend on _contents_ of symbols but only on the _superficial form_ of the symbolic sequence composed of the premises to the conclusion. For example, as stated above, (3) is valid regardless of the actual content of \(\mathcal{G}\), such as \(\mathcal{G}\)="(\(\ldots\)), the Earth has seasons." in (1) and \(\mathcal{G}\)="(\(\ldots\)), the Earth does not have seasons." in (2). This enables us to regard all arguments simply as symbolic rules such as (3) to (6). Third, and as one conclusion of the second point, the symbols such as \(\mathcal{F}\) and \(\mathcal{G}\) can be arbitrary compounds of other formulas, such as \(\mathcal{F}\)=\((A\wedge B)\) and \(\mathcal{F}\)=\(\forall x,A(x)\to B(x)\). Finally, since we can consider infinite patterns of formulas as premises and a conclusion, we have infinite patterns of arguments (including both valid and invalid arguments). Next, we consider multistep deductions. Figure 2 shows that the syllogism argument can be derived by the multistep deduction constructed from other "atomic" arguments. (For other examples, Figure B.4 shows the derivations of the arguments used in the previous corpora.) Indeed, in formal logic, there is a set of atomic arguments called _the axioms_ (listed in Figure B.3a), and the following is known 3 : Footnote 3: We limit our focus to first-order predicate logic in this paper. **Theorem 2.1** (Completeness of first-order predicate logic (Godel, 1930)).: _Any valid 4 argument is derivable by multistep deduction constructed from the axioms. Furthermore, any argument derivable by multistep deduction constructed from the axioms is valid._ Footnote 4: An argument is valid when for all truth value assignments, the conclusion is true (=1) if all the premises are true. See Table B.10a. Here we have come to the core of formal logic: multistep deduction constructed from the axioms. Thanks to the completeness, all valid arguments can be derived in this way, and all (infinite) arguments derived in this way are valid. As a consequence, _multistep deduction constructed from the axioms can express multistep deduction constructed from any other arguments_, as illustrated in Figure 2 (right). ## 3 Generating Formal Logic Deduction Corpus The previous deduction corpora (Clark et al., 2021; Betz et al., 2021) used limited or arbitrary sets of deduction rules. However, as we saw in Section 2, the axioms should be the most generalizable to various deduction rules. Thus, we propose a framework named **FLD** (**F**ormal **L**ogic **D**eduction), which generates examples of multistep deduction constructed from the axioms. We designed **FLD** to be highly flexible, i.e., configurable and/or extensible by options or external template files as in Table 1, so that we can generate and analyze various patterns of corpora. Figure 2: An example of multistep deduction constructed from the axioms. **(Left)** shows the derivation of a syllogism. **(Right)** illustrates that deduction with more steps can express deductions that use a syllogism as a given rule.
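Before turning to the generation modules, the truth-table criterion of validity in footnotes 2 and 4 can be made concrete, for the propositional case, with a small sketch; the encoding of formulas as Python functions and the restriction to two propositional variables are only for illustration.

```python
# Illustrative propositional validity check by truth-table enumeration:
# an argument is valid iff the conclusion is true under every assignment
# that makes all the premises true (cf. footnotes 2 and 4).
from itertools import product

def is_valid(premises, conclusion, n_vars=2):
    for assignment in product([False, True], repeat=n_vars):
        if all(p(*assignment) for p in premises) and not conclusion(*assignment):
            return False
    return True

implies = lambda a, b: (not a) or b

# Modus ponens (3): from F and F -> G, derive G.
print(is_valid([lambda f, g: f, lambda f, g: implies(f, g)],
               lambda f, g: g))          # True

# Step (6): from F and (F or G), derive G -- invalid (take F true, G false).
print(is_valid([lambda f, g: f, lambda f, g: f or g],
               lambda f, g: g))          # False
```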
We show examples of generated instances in Figure C.5. Below, we overview each module. For intuitive understanding, refer to the corresponding part of Figure 1. For the detailed implementations, refer to Appendix E. ### Proof Tree Generation via Random Forward-/Backward- Deduction RuleTaker (Clark et al., 2021) generates deductive proof trees by first randomly generating various formulas and second running a logical solver library on them to find occasionally emerging deductive relationships among them. However, since this approach relies on an external solver, we cannot specify the set of arguments used in proof trees (in particular, we cannot restrict them to the axioms). Further, since it relies on randomness, we cannot control the complexity of a proof tree, i.e., the depth and the number of leaves. Thus, we decided to take another approach. We invented a module ("Proof Tree Generator" in Figure 1) that generates a proof tree through a random deduction process by using a set of arguments specified by a user. A user can specify the arguments in a template rule file, as exemplified in Figure E.6. At each forward- or backward- deduction step, the module randomly chooses one argument and joins it to the current proof tree ("forward" and "backward" in the figure). The numbers of forward- and backward- steps control the tree's depth and number of leaves, respectively. Once the structure of the proof tree is constructed, we construct the compound formulas at the tree nodes, such as \(\mathcal{F},\mathcal{G}\). Since these formulas are arbitrary (Section 2), we randomly combine atomic formulas such as \(A\) and \(B\) using logical operators \(\wedge,\vee,\neg\). To avoid overcomplication, we limit the number of atomic formulas in each compound formula up to three. The resulting formulas are like \(\mathcal{F}=(\neg A\wedge B)\). ### Factual Distractor Generation In a realistic scenario of logical reasoning, since the facts are collected by possibly incomplete retrieval systems rather than given, LMs have to correctly choose only the relevant facts in the presence of many irrelevant facts. To imitate this scenario, we add distractor facts to each deduction instance ("Factual Distractor Generator" in Figure 1). The distractor facts are formulas that are similar to the gold facts in their logical form. For example, for the gold fact \((A\wedge B)\to C\), formulas such as \((A\wedge C)\to B\) can be distractors. We also implemented several other types of distractors and use a mixture of them. ### Natural Language Assignment We assign one natural language sequence to each formula of tree nodes and of distractors ("Natural Language Assigner" in Figure 1). Inspired by Betz et al. (2021), we take a template-based approach. For each formula, we prepare several templates via an external template file (exemplified in Figure E.7), such as follows: \(A\to B:\) "If A, then B.", "A leads to B." \(F(a)\to G(b):\) "If a F, then b G.", "When a F, b G." Then, we randomly choose one from them. Note that since the templates can be nested, the number of resulting patterns is combinatorially diverse. Next, we assign natural language statements to atomic components such as \(A,B,F,G,a,b\). Here, we come back to the important point in deductive reasoning discussed in Section 2: that the validity of deduction does not depend on the contents of formulas, or in other words, the same deduction can be conducted on the same formulas regardless of their contents.
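A minimal sketch of this template-based verbalization step is given below; the template strings follow the examples above, while the dictionary representation and the particular filled-in statements are illustrative assumptions rather than the actual FLD implementation.

```python
# Minimal sketch of template-based verbalization (illustrative only; the real
# FLD templates live in external template files such as Figure E.7).
import random

TEMPLATES = {                      # formula pattern -> candidate templates (assumed examples)
    "A -> B": ["If {A}, then {B}.", "{A} leads to {B}."],
}

def verbalize(pattern, assignments):
    template = random.choice(TEMPLATES[pattern])   # randomly pick one template
    return template.format(**assignments)          # fill in the atomic statements

# Hypothetical random statements assigned to the atomic components.
print(verbalize("A -> B", {"A": "an earthquake occurs", "B": "the year ends"}))
# e.g. "If an earthquake occurs, then the year ends."
```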
To reflect this independence from content, we assign a _random_ statement constructed (under a certain grammatical constraint) from a full vocabulary to each atomic component; for example: \(A:\) "an Earthquake occurs" \(B:\) "the year ends" \(F:\) "run" \(G:\) "answer" \(a:\) "the hamburger" \(b:\) "Peter". These random and diverse statements constructed from a large vocabulary (about 20k words) are another major difference from the previous studies (Tafjord et al., 2021; Betz et al., 2021), which used limited statements constructed from a limited vocabulary (a few dozen words). ### Deduction Instance Conversion We finally make a deductive reasoning instance from the outputs of the previous modules ("Deduction Instance Converter" in Figure 1). A deduction instance is composed of a set of facts, a hypothesis, a proof sequence, and an answer ("proved", "disproved", or "unknown"). \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & Deduction & Proof Tree & Proof Tree & Formula & \# of & Linguistic & Proof \\ & Rules & Depth (up to) & Branches & Complexity & Distractors (up to) & Diversity & Labels \\ \hline \hline RuleTaker & & & & & & & \\ Clark et al. (2021) & implication & 5 & A few & complex & \(\sim\)20 & less (RuleTaker) / & provable / \\ & & & & & & more (ParaRules) & unprovable \\ \hline AACorpus & & & & & & & \\ Betz et al. (2021) & (default = critical thinking) & & 1 & 1 & (simple / complex) & 0 & \(\checkmark\) & provable / \\ & & & & & & & (default = less) & disprovable \\ \hline **FLD** & & & & & & & \\ & & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 1: A comparison of **FLD** with the previous studies. **FLD** is flexible enough to generate various patterns of corpora for analysis. \(\checkmark\) means controllable and extensible by an external template file. \(\checkmark\) means controllable by an option. This module can make an instance of any answer label as follows. For answer label "proved", (i) we use the root node as the hypothesis, (ii) we use the leaf nodes of the proof tree and the distractors as the fact set, and (iii) we use the internal nodes of the proof tree as the proof sequence. For answer label "disproved", we use the negated statement of the root node as the hypothesis so that the hypothesis is disproved by the proof sequence. For answer label "unknown", we randomly drop some of the leaf nodes so that the hypothesis cannot be proved or disproved by the proof sequence. ## 4 Experiments We conducted experiments to verify the effectiveness of **FLD**, and to identify the aspects of deductive reasoning ability on which deduction corpora can enhance LMs. To this end, we examined various deduction corpora shown in Table 2. We trained LMs on the deduction corpora and measured their performance on relevant benchmarks. For reference, we also measured the performance of an LM (T5) without training on the deduction corpora. We used two types of benchmarks: deduction corpora themselves and human-authored EntailmentBank (Dalvi et al., 2021). We briefly explain the setup. See Appendix F for the details. ### Prover Model All the experiments involve generating a proof sequence to (dis-)prove a given hypothesis from a given set of facts. To tackle this type of task, we adopt the stepwise prover model from Yang et al. (2022). This prover is a generative model based on T5 (Raffel et al., 2020), which generates one proof step at a time. A proof step represents the chosen premises and the derived (generated) conclusion, such as "fact1 & fact3 -> The Earth has seasons".
The prover continues the generation until the given hypothesis is (dis-)proved. ### Few-shot Transfer to Synthetic Deduction Corpora The first benchmark is the deduction corpora, which measure rigid logical reasoning ability. We trained a prover LM on a corpus and measured its performance on another corpus. If LMs have acquired robust deductive reasoning ability, they should transfer well with a small number of examples. To see this, we used a few-shot setting 5. Footnote 5: Zero-shot is not appropriate for transfer among corpora that differ in the sets of arguments used in proofs as follows. Since a proof step is made by an argument, the nature (granularity) of proof steps in one corpus differs considerably from that in another corpus. To adjust this artificial difference, LMs need examples of the target corpus. We trained a prover LM (T5-base) on the training split of each source corpus for 20k steps with a batch size of 64 and a learning rate of 1e-4. Then we fine-tuned the prover LM on a \(1\%\) subset (\(300\) instances) of the training split of the target corpus. Finally, we measure the performance of the prover on the test split of the target corpus by using proof accuracy (Saha et al., 2020), which measures whether the generated proofs match the gold proofs 6. We used a stricter version of the proof accuracy (see our repository 1 for details). Footnote 6: We also show the results of answer accuracy in Appendix G.1. However, due to biases in fact sets, answers can be guessed without considering proofs to some extent, as found in Tafjord et al. (2021) where answer accuracy exceeds proof accuracy. Thus, the answer accuracy is not appropriate for measuring the logical deductive reasoning ability explicitly. ### Transfer to EntailmentBank EntailmentBank (EB) (Dalvi et al., 2021) is a recently proposed challenging benchmark. The proof trees in the EB dataset are human-authored rather than synthetically generated. Further, each proof step can be a rough entailment instead of a rigid logical step. Thus, EB measures logical reasoning ability in a more real-world scenario. We used all three tasks of EB, which differ in the property of a given fact set: Task1 does not include distractors, Task2 includes distractors, and Task3 includes sentences retrieved from WorldTree V2 (Xie et al., 2020). As stated above, the nature of proof steps in EB differs much from the nature of those in deduction corpora. Thus, it is difficult for prover LMs trained on deduction corpora to transfer to EB with a small number of examples. Therefore, we fine-tuned the provers using all the EB instances. We trained a prover LM (T5-large) on a source deduction corpus for 10k steps and fine-tuned it on each EB corpus for 10k steps. For all the training, the batch size was 64 and the learning rate was 5e-5, except for EB-task2, where a learning rate of 2.5e-5 was used. For EB-task3, we used the prover trained on task2, following Dalvi et al. (2021). Given the difficulty of EB, we used the additional RoBERTa-based (Liu et al., 2019) proof-step verifier proposed in Yang et al. (2022). We measured the performance of the provers on the test split of EB by the official metric of "AllCorrect" proof accuracy (Dalvi et al., 2021). ## 5 How Well do LMs Solve Logic? (The officially released corpora1 are _version_ 2, on which the provers exhibit slight improvement. See Appendix H for details).
\begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{2}{c}{RuleTaker} & \multicolumn{2}{c}{FLD} \\ \cline{2-3} RT & RT.PR & **FLD** & **FLD**\(\star\) \\ \hline \hline 2.5e-4 & 93.5 & 66.4 & 37.7 \\ \hline \hline \end{tabular} \end{table} Table 3: Proof accuracy of a prover fully fine-tuned using all the dataset instances on each corpus. First, we show how well LMs solve the logic of each deduction corpus (Table 3). As shown, while the fully fine-tuned provers performed well on RuleTaker, they performed worse on FLD. One possible reason is as follows. First, since a proof tree is constructed from the combination of arguments chosen at each level of the tree, the number of possible proof tree patterns can be estimated (very roughly) as \(\mathcal{O}(\mathcal{A}^{d})\), where \(\mathcal{A}\) is the number of argument choices and \(d\) is the proof tree depth. Next, while RuleTaker uses only a few arguments (\(\mathcal{A}=2\)) of implication type shown in Figure B.3b, FLD uses various arguments (\(\mathcal{A}\sim 10\)) of the axioms shown in Figure B.3a. Thus, FLD includes exponentially more diverse patterns of proof trees, which makes FLD more challenging. Indeed, when we enlarge the maximum tree depth from \(d\)=3 to \(d\)=8 (**FLD** to **FLD+**), the corpus became far more challenging due to the exponentially more diverse proof tree patterns. See Appendix G for further detailed analysis. ## 6 How Effective is Formal Logic Deduction? ### Benchmarking by Deduction Corpora We trained a prover on a deduction corpus ("source corpus") and measured its performance on other corpora ("target corpus") (Table 4). The prover trained on **sFLD** performed the best on average, and as seen from the corpus-wise results, the prover transferred the most robustly to the other corpora while the provers trained on the other corpora did not exhibit this level of robustness. Since the corpora used in Table 4 differ in the set of arguments (deduction rules) used in proofs, this result suggests that the prover trained on **sFLD** generalized the most to other arguments. The reason for this strongest generalizability should be the following. **(s)FLD** corpora teach LMs how to construct multistep deductions using the axioms. Thanks to the completeness, the axioms can express multistep deductions constructed from any other arguments (including the ones used in the other corpora, as exemplified in Figure B.4). Thus, _mastering the axioms leads to mastering various other arguments_. On the other hand, the sets of arguments used in the other corpora do not have such a property and thus cannot generalize to other arguments. Since mastering various arguments is the most important in deductive reasoning, this generalizability to arguments obtained from **FLD** corpora is vital. ### Benchmarking by EntailmentBank Table 5 shows the results on EntailmentBank (EB).
Since \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & & \multicolumn{6}{c}{Source corpus} \\ & T5 & RT & RT.PR & sFLD-impl & sFLD-crit & **sFLD** \\ \hline RT & 70.1 & 92.4 & 91.3 & 76.2 & 74.4 & 76.7 \\ & RT.PR & 64.3 & 91.3 & 91.9 & 73.4 & 67.5 & 72.9 \\ Target & RT.BE & 65.1 & 88.8 & 88.2 & 75.2 & 79.4 & 85.0 \\ corpus & sFLD-impl & 58.4 & 66.7 & 65.9 & 82.2 & 67.3 & 80.7 \\ & sFLD-crit & 71.9 & 77.7 & 72.2 & 87.8 & 94.0 & 93.6 \\ & sFLD & 54.7 & 54.5 & 54.5 & 61.9 & 63.7 & 29.1 \\ \hline **avg.** & 62.6 & 78.5 & 78.5 & 77.1 & 74.4 & **81.3** \\ \hline \hline \end{tabular} \end{table} Table 4: Few-shot proof accuracies of provers transferred among **sFLD** and baseline corpora. For fair comparison, all the corpora have the same depth distribution (except sFLD-crit that cannot form multistep easily, see Appendix F.1) \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline name & arguments & distractors & linguistic & formula & tree depth & \# train & examples \\ & (deduction rules) & (up to) & diversity & complexity & tree depth & \# train & examples \\ \hline RT (“D0-D3”) & implication & \(\sim 20\) & less & complex & 1–3 & & 30k \\ RT.PR (“ParaRules”) & implication & \(\sim 20\) & more & complex & 1–5 & & 30k \\ RT.BE (“Birds-Electricity”) & implication & \(\sim 20\) & more & complex & 1–3 & & skewed & - (test only) \\ \hline sFLD-impl & implication & \(\sim 20\) & less & complex & 1–3 & & 30k \\ sFLD-crit & critical thinking & \(\sim 20\) & less & complex & 1–1 & & 30k \\ sFLD-axiom (**sFLD**) & the axioms & \(\sim 20\) & less & complex & 1–3 & & 30k \\ \hline RT.DS (“D0-D5”) & implication & \(\sim 20\) & less & complex & 1–5 & & 30k \\ **FLD.D.D5** & the axioms & \(\sim 20\) & less & complex & 1–5 & & 30k \\ \hline FLD-impl.0 & implication & \(\sim 20\) & less & complex & 1–3 & & 30k \\ FLD-impl.1 & implication & \(\sim 20\) & less & complex & 1–8 & & uniform & 30k \\ \hline FLD.0 & the axioms & 0 & less & complex & 1–3 & & 30k \\ FLD.1 & the axioms & \(\sim 20\) & less & simple & 1–3 & & 30k \\ FLD.2 & the axioms & \(\sim 20\) & less & complex & 1–3 & & 30k \\ FLD.3 (**FLD**) & the axioms & \(\sim 20\) & more & complex & 1–3 & & 30k \\ FLD.4 (**FLD**) & the axioms & \(\sim 20\) & more & complex & 1–8 & & 30k \\ \hline \hline \end{tabular} \end{table} Table 2: The corpora examined in this paper. For RuleTaker (“RT”), we used the OWA version introduced by Tafjord et al. (2021). To align conditions as closely as possible across the corpora being compared, we (i) generated multiple FLD corpora using the options and template files and (ii) added several preprocessings to RuleTaker. See Appendix F.1 for details. \begin{table} \begin{tabular}{l l l l l} \hline \hline & & \multicolumn{4}{c}{EntailmentBank} \\ & & Task1 & Task2 & Task3 \\ \hline Source & T5 & \(36.8_{\pm 0.9}\) & \(31.2_{\pm 0.7}\) & \(6.2_{\pm 0.9}\) \\ & RT.D5 & \(\mathbf{39.4_{\pm 0.9}}\) & \(\mathbf{32.0_{\pm 0.8}}\) & \(\mathbf{32.2_{\pm 0.4}}\) \\ corpus & **FLD.D5** & \(\mathbf{39.2_{\pm 1.2}}\) & \(\mathbf{32.6_{\pm 1.0}}\) & \(\mathbf{8.3_{\pm 0.7}}\) \\ \hline \hline \end{tabular} \end{table} Table 5: The proof accuracy of provers on EntailmentBank. See Appendix G.2 for the results of other metrics. EB trees have high-depth (majority up to five), we used the high-depth versions of deduction corpora as source corpus. First, as seen, the provers trained on both deduction corpora (RT.D5, **FLD.D5**) performed better than the baseline prover without such training (T5). 
This suggests that the deductive reasoning ability acquired by synthetic deduction corpora generalizes to more complex real-world deductive reasoning. We showcase some examples in Appendix G.3, where the error of the baseline prover is fixed by training on a deduction corpus (**FLD.D5**). As seen, the prover captured the fundamentals of deduction rules better than the baseline as follows: (i) it chose the correct premises necessary and sufficient to derive the next conclusion, (ii) it included in a conclusion only information that logically follows from the chosen premises, and (iii) it correctly used the rules of logical operators. Looking at the results of deduction corpora closely, the prover trained on **FLD.D5** performed on par with the prover trained on RT.D5, even though it had mastered various deduction rules better, as shown in Section 6.1. We consider a possible reason as follows. Firstly, real-world reasoning can require more coarse-grained deduction rules than those required by deduction corpora. For expressing such coarse-grained deduction rules by the most fine-grained axioms, many steps are required, as in Figure 2. However, the prover trained on **FLD** still struggles with constructing many-step proofs using the axioms (detailed in Section 7.1). In this sense, the prover could have failed to exploit the axioms' potential fully. We will discuss future directions to tackle this challenge in Section 8. ## 7 On What Aspects are Synthetic Deduction Corpora Beneficial? A deduction corpus in Table 2 emphasizes a specific aspect different from those emphasized by the other corpora. For each corpus (each aspect), we investigate whether the LM trained on that corpus outperforms the LM trained on the other corpus that does not emphasize the aspect. If it does, we interpret it as meaning that the supervision from a deduction corpus on that aspect is beneficial for LMs. ### Ability to Solve Complex Proof Trees Table 6 shows the depth-wise performances of provers. The corpora in Table 6(a) use the implication arguments. The prover trained on the corpus of shallower (\(\sim 3\)) trees (FLD-impl.0) generalizes to deeper (\(4\sim 8\)) trees to some extent, and performs similarly to the prover trained on the corpus of deeper trees (FLD-impl.1). This generalization to deeper trees coincides with previous findings (Tafjord et al., 2021; Sanyal et al., 2022). However, as Table 6(b) shows, when the corpora use the axioms, neither the prover trained on the shallower-tree corpus (FLD.3) nor the one trained on the deeper-tree corpus (FLD.4) succeeded in solving deeper trees. We can interpret this seemingly contradictory result as follows. As discussed in Section 5, the number of possible proof tree patterns can be estimated (very roughly) as \(\mathcal{O}(\mathcal{A}^{d})\). When a prover tries to solve a deduction instance, it has to choose and generate exactly the one gold proof tree out of these possible (mostly negative) proof trees. This should be very difficult for large \(d\) with large \(\mathcal{A}\). Now, while the corpora in Table 6(a) use a few arguments (\(\mathcal{A}=2\)) of implication type, corpora in Table 6(b) use various arguments (\(\mathcal{A}\sim 10\)) of the axioms. This made it very difficult to solve large-depth deduction instances of these corpora, which led the provers to fail in solving large-depth proof trees in Table 6(b). Overall, for solving complex trees, the supervision from deduction corpora can be necessary but not sufficient alone.
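The rough \(\mathcal{O}(\mathcal{A}^{d})\) estimate invoked above can be illustrated with a back-of-the-envelope computation, using the approximate numbers of argument choices quoted in Section 5; the counts are only indicative of the growth, not exact tree counts.

```python
# Back-of-the-envelope illustration of the O(A^d) growth of candidate proof trees
# (A = rough number of argument choices per level, d = proof tree depth).
for name, A in [("implication arguments (A~2)", 2), ("the axioms (A~10)", 10)]:
    for d in (3, 8):
        print(f"{name}: d={d} -> roughly {A**d:,} candidate trees")
# 2**3 = 8 and 2**8 = 256, versus 10**3 = 1,000 and 10**8 = 100,000,000:
# with the axioms, deepening the trees blows up the search space far more quickly.
```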
### Understanding of Diverse Linguistic Expressions Table 7 shows that a prover trained on a corpus with less linguistic diversity (i.e., RT and FLD.2) performed as well as the prover trained on the linguistically diverse counterpart of that corpus (i.e., RT.PR and FLD.3, respectively). This suggests that LMs are self-sufficient on the linguistic aspect, and thus additional supervision from deduction corpora is not that important. Indeed, this result coincides with the previous findings (Clark et al., 2021; Tafjord et al., 2021) and can be intuitively understood: since the pre-training corpora of LMs are huge and linguistically diverse, they should have given LMs many chances to learn linguistic of logical statements \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & & \multicolumn{4}{c}{Source corpus} \\ & & \multicolumn{2}{c}{RuleTaker} & \multicolumn{2}{c}{FLD} \\ & & T5 & RT & RT.PR & FLD.2 & FLD.3 \\ \hline \multirow{4}{*}{Target corpus} & RT & **70.1** & **92.2** & **91.8** & **78.3** & **76.0** \\ & RT.BE & **64.3** & **91.1** & **93.0** & **71.3** & **73.4** \\ & FLD.2 & 31.0 & 34.2 & 34.7 & **66.8** & **66.2** \\ & FLD.3 & 24.8 & 28.7 & 27.5 & **65.3** & **66.4** \\ \hline \hline **avg.** & 47.6 & 61.6 & 61.8 & 70.4 & 70.7 \\ \hline \hline \end{tabular} \end{table} Table 7: Few-shot proof accuracies of provers transferred among corpora that differ in the diversity of linguistic expressions. \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline & & \multicolumn{4}{c}{Source corpus} \\ & & \multicolumn{4}{c}{RuleTaker} & \multicolumn{2}{c}{FLD} \\ & & T5 & RT & RT.PR & FLD.2 & FLD.3 \\ \hline \multirow{4}{*}{Target corpus} & RT & **70.1** & **92.2** & **91.8** & **78.3** & **76.0** \\ & RT.BE & **64.3** & **91.1** & **93.0** & **71.3** & **73.4** \\ & FLD.2 & 31.0 & 34.2 & 34.7 & **66.8** & **66.2** \\ & FLD.3 & 24.8 & 28.7 & 27.5 & **65.3** & **66.4** \\ \hline \hline \end{tabular} \end{table} Table 7: Few-shot proof accuracies of provers transferred among corpora that differ in the diversity of linguistic expressions. such as that "If A, then B" paraphrases to "A leads to B". ### Understanding of Complex Formulas Table 8 shows that while the prover trained on the corpus with simple formulas (FLD.1) performed poorly on the corpus with complex formulas (FLD.2), the prover trained FLD.2 performed well on both corpora. Thus, deduction corpora are beneficial for mastering complex formulas. We can interpret this result as follows. The complex formulas included in FLD.2 are formed by modifying atomic formulas with logical operators \(\neg,\wedge,\vee\). The semantics of these logical operators, such that "a sentence with negation \(\neg\) have the opposite meaning of that sentence without negation", and that "\(A\lor B\) does not necessarily imply \(A\)", are seldom written explicitly by humans. Thus, the pre-training corpora gave LMs too few chances for learning these semantics. This result is enhanced by the previous findings that LMs fail to understand the semantics of negation (Naik et al., 2018; Hossain et al., 2020; Kassner and Schutze, 2020). Table 9 shows that, while the prover trained on the corpus without distractors (FLD.0) performed poorly on the corpus with distractors (FLD.2), the prover trained on FLD.2 performed well on both corpora. Thus, synthetic distractors are beneficial for acquiring the robustness to distractive facts. 
This result is intuitive: since the human-written text should not include the facts irrelevant to the content, the pre-training corpora should not have given LMs a chance to acquire robustness to irrelevant facts. ## 8 Discussions and Future Directions So far, we have investigated each aspect of deductive reasoning. We summarize the results and discuss future directions. **Mastery on Various Deduction Rules:** Mastering various deduction rules is the most important in deductive reasoning. We showed that **FLD** corpora teach LMs various arguments the most effectively (Section 6.1). This should be because that **FLD** adopts the axioms of first-order predicate logic system, which can derive any valid deduction rules in this system. The next step will be to examine the axioms of other logic systems, such as linear and modal logic systems, which are also important in real-world reasoning. **Ability to Solve Complex Proof Trees:** We have shown that solving a many-step proof tree is still challenging for LMs even after training on deduction corpora (Section 7.1). The possible reason is that they have to choose and generate a gold proof from a large number of possible trees. To solve this problem, inventing smarter and strategic search methods on possible generation space, such as Li et al. (2016); Negrinho et al. (2018); Picco et al. (2021); Welleck et al. (2022), could be a promising direction. **Understanding of Complex Formulas:** We have shown that deduction corpora are effective for LMs to understand the semantics of logical operators such as \(\neg,\wedge,\vee\) (Sections 6.2 and 7.3). It could be even more effective to incorporate the recent learning methodological approaches for making LMs understand negation (Prollochs et al., 2019; Hosseini et al., 2021) into the learning on deduction corpora. **Robustness to Distractive Facts:** We have shown that the synthetic distractors can make LMs robust to distractive facts (Section 7.4). In a real scenario of logical reasoning, the facts have to be collected by possibly incomplete retrieval systems. The distractors that imitate ones appearing in such a scenario could be more effective. We can generate such distractors as follows: (i) We build a database of synthetic facts. (ii) For a given deduction instance, we collect facts from the database by actual retrieval systems. **Generalization to Real-World Reasoning Tasks:** We have shown that the training on deduction corpora is even useful for deductive reasoning in a more real-world setting (Section 6.2). However, the LMs trained on **FLD** could not fully utilize the potential of the axioms, as they failed in constructing many-step proofs to express coarse-grained deduction rules, which could be required in real-world reasoning (Sections 6.2 and 7.1). We discussed future directions to solve such many-step proofs above. Further, LMs may need additional training to utilize deduction rules well in a realistic context. For example, the LMs could have to combine deduction rules with common sense knowledge, use multiple deduction rules at once to jump to the next conclusion, and judge the validity of a proof step considering the overall context. Recently, Wei et al. (2022); Kojima et al. (2022) showed that large LMs can utilize deduction rules in a realistic context, given appropriate prompts. It could be promising to integrate this approach and deduction corpora training. Pursuing further real-world scenarios, we have to tackle tasks of other settings. 
One is deductive reasoning that requires us to collect relevant facts by ourselves. For this, \begin{table} \begin{tabular}{l c c c c} \hline \hline & & \multicolumn{3}{c}{Source corpus} \\ \cline{3-5} & & T5 & FLD.1 & FLD.2 \\ \hline Target & FLD.1 & 43.1 & 77.0 & **71.6** \\ corpus & FLD.2 & 31.0 & 46.0 & 66.8 \\ \hline \hline \end{tabular} \end{table} Table 8: Few-shot proof accuracies of provers transferred among corpora that differ in the complexity of formulas. \begin{table} \begin{tabular}{l c c c c} \hline \hline & & \multicolumn{3}{c}{Source corpus} \\ \cline{3-5} & & T5 & FLD.0 & FLD.2 \\ \hline Target & FLD.0 & 38.9 & 76.1 & 75.2 \\ corpus & FLD.2 & 31.0 & 56.7 & 66.8 \\ \hline \hline \end{tabular} \end{table} Table 9: Few-shot proof accuracies of provers transferred among corpora that differ in the number of distractors. we could exploit factual knowledge implicitly embedded in LMs (Petroni et al., 2019; Davison et al., 2019; Talmor et al., 2020), or use retrieval systems. For the latter, we could train LM-based retrievers (Karpukhin et al., 2020; Guu et al., 2020) using synthetic deduction instances and fact database. Abductive reasoning (Bhagavatula et al., 2019) is another kind of real-world logical reasoning with which we derive hidden premises from a conclusion and other visible premises. Synthetic corpora for abduction based on formal logic can be generated similarly to as done in this study. ## 9 Conclusion To teach language models deductive reasoning, we proposed a synthetic corpus based on formal logic theory and verified its effectiveness empirically. Further, we analyzed each aspect of deductive reasoning and provided future directions on each. We will advance on the basis of these directions. ## Acknowledgement We thank the three anonymous reviewers and the meta-reviewer, who gave us insightful comments and suggestions. Computational resources of AI Bridging Cloud Infrastructure (ABCI) provided by the National Institute of Advanced Industrial Science and Technology (AIST) were used. We thank Dr. Masaaki Shimizu at Hitachi for the convenience of additional computational resources. We thank Dr. Naoaki Okazaki, professor at Tokyo Institute of Technology, for the keen comments.
2307.06875
Simultaneous calculation of elastic scattering, fusion, and direct cross sections for reactions of weakly-bound projectiles
Simultaneous analyses are performed for cross section data of elastic scattering, fusion, Coulomb breakup, and other direct yields for the $^{6}$He+$^{209}$Bi system at near-Coulomb-barrier energies. The bare and dynamical polarization potentials are constructed microscopically from the structure of the colliding nuclei and they reproduce all the data well with only one adjustable parameter. This method of calculation can be successfully applied to the reactions of weakly-bound and exotic projectiles with heavy targets.
H. M. Maridi, N. Keeley, K. Rusek
2023-07-13T16:22:56Z
http://arxiv.org/abs/2307.06875v3
Simultaneous calculation of elastic scattering, fusion, and direct cross sections for reactions of weakly-bound projectiles ###### Abstract Simultaneous analyses are performed for cross section data of elastic scattering, fusion, Coulomb breakup, and other direct yields for the \({}^{6}\)He+\({}^{209}\)Bi system at near-Coulomb-barrier energies. The bare and dynamical polarization potentials are constructed microscopically from the structure of the colliding nuclei and they reproduce all the data well with only one adjustable parameter. This method of calculation can be successfully applied to the reactions of weakly-bound and exotic projectiles with heavy targets. ## I Introduction In recent decades there has been significant progress in the exploration of the mechanisms involved in heavy ion collisions at energies close to the Coulomb barrier. One particular focus has been on reactions caused by radioactive halo nuclei, which consist of a tightly-bound core with one or two nucleons that orbit far from the core. When these reactions occur at low energies, close to the Coulomb barrier, they are mainly dominated by fusion and direct reactions like transfer and breakup. Reviews of the reactions induced by these exotic nuclei interacting with heavy targets may be found in Refs. [1; 2; 3]. The optical model potential is often used to describe elastic scattering data within a single channel approach, and for heavy ion projectiles is most commonly of Woods-Saxon volume form for both real and imaginary parts. However, the imaginary potential is occasionally split into volume and surface components, with a short range volume term arranged to simulate the ingoing-wave boundary condition to model loss of flux due to fusion and a surface term with a longer range to account for loss of flux due to non-elastic direct reaction channels. This is the so-called extended optical model introduced by Udagawa and collaborators [4; 5; 6; 7; 8] which gives good simultaneous fits to the fusion and elastic scattering data for a large variety of systems. The direct component can also be taken as a complex potential, i.e. including a real part, within this model, as in Refs. [9; 10; 11; 12]. In the reactions of weakly-bound projectiles with heavy targets the projectile can become polarized by and/or break up in the strong electric field of the target. The resulting strong Coulomb dipole excitation and breakup can be treated by introducing an additional interaction which influences the elastic scattering. This additional interaction is often referred to as the Coulomb dynamical polarization potential (CDPP). Recently [13; 14], a new expression for the CDPP was obtained by solving the Schrodinger equation for the internal motion of an exotic neutron-rich projectile (considered as a two-body deuteronlike cluster structure) incident on a heavy target nucleus using the adiabatic approximation. However, in the optical model, the CDPP potential alone cannot entirely account for the long-range interactions in these exotic systems. To address this issue, a long-range nuclear dynamical polarization potential (NDPP) is introduced to account for the nuclear breakup and transfer reactions, so that the direct surface potential now consists of the CDPP plus the NDPP. This NDPP utilizes either a volume or surface Woods-Saxon type potential, usually with large radius and/or diffuseness parameters, see for example Refs. [15; 16; 17]. 
In this work we present a form of the extended optical-model potential which is able simultaneously to reproduce elastic scattering, fusion, Coulomb breakup, other direct yields, and total reaction cross section data with a single adjustable parameter. We apply this potential to calculations for the \({}^{6}\)He+\({}^{209}\)Bi system. ## II Theory ### The optical potentials #### ii.1.1 Bare nuclear potential The nuclear interaction in the absence of coupling effects is represented by a short-range complex nuclear potential--the "bare potential"--which is often of volume Woods-Saxon form. However, in this work the real part of the bare nuclear potential is taken from the Sao Paulo potential (SPP) [18], which reproduces with reasonable accuracy the experimental angular distributions of a large number of stable systems over a wide energy range with no adjustable parameters [19; 20]. It is obtained by multiplying the double folding potential (\(V_{F}\)) by an energy dependent factor: \[V_{\rm SPP}({\bf R})=V_{F}({\bf R}){\rm e}^{-4v^{2}/c^{2}}, \tag{1}\] where \(\upsilon\) is the relative velocity between the projectile and target, \(c\) is the speed of light and \(V_{F}\) is given by [21] \[V_{F}(\mathbf{R})=\int\rho_{p}(\mathbf{r}_{p})\rho_{t}(\mathbf{r}_{t})\upsilon_{ nn}(s)d\mathbf{r}_{p}d\mathbf{r}_{t}, \tag{2}\] where \(\rho_{p}(\mathbf{r}_{p}),\rho_{t}(\mathbf{r}_{t})\) are the nuclear matter density distributions for projectile and target nuclei, respectively, \(s=|\mathbf{R}-\mathbf{r}_{p}+\mathbf{r}_{t}|\) is the distance between the two nucleons, and \(\upsilon_{nn}(s)\) is the effective \(NN\) interaction, which in this case is the zero-range effective \(NN\) interaction \(\upsilon_{nn}(s)=V_{0}\delta(s)\), with \(V_{0}=-456\) MeV [18]. The imaginary potential may be of Woods-Saxon form or the SPP potential multiplied by a normalization factor [19], in which case the bare optical potential is given by \[U_{N}(R)=N_{R}V_{\mathrm{SPP}}(R)+iN_{I}V_{\mathrm{SPP}}(R) \tag{3}\] where \(N_{R}\) and \(N_{I}\) are the normalization factors that fit the data and simulate the polarization effects. A systematic analysis of many stable tightly bound nuclei in Ref. [19] arrived at \(N_{R}=1.00\) and \(N_{I}=0.78\) as reference values of the normalization factors, and this bare potential was used recently to analyze reactions with exotic and stable nuclei [20]. The SPP has also been shown to be a reasonable basis for the analysis of fusion reactions induced by stable weakly-bound nuclei [22]. Note that since the weakly-bound nuclei considered here are composed of a core nucleus and one or two valence nucleons, the short-range bare potential may also be extracted by fitting suitable projectile core-target elastic scattering data. #### ii.1.2 Coulomb dynamical polarization potential (CDPP) Recently [13; 14], the CDPP was obtained by solving the formalism for the scattering of a weakly-bound two-body projectile (p) consisting of a core plus a cluster of \(n\) valence neutrons from a heavy target. To solve the Schrodinger equation of the system and obtain the CDPP one may use the adiabatic approximation \(\Psi(\mathbf{r},\mathbf{R})\approx\psi(\mathbf{R})\phi(\mathbf{r},\mathbf{R})\), where \(\psi(\mathbf{R})\) refers to the wave function of the center of mass and \(\phi(\mathbf{r},\mathbf{R})\) to that of the relative motion of the projectile; \(\mathbf{R}\) and \(\mathbf{r}\) are the coordinates of the projectile-target and the projectile valence-core systems, respectively. 
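Before continuing with the CDPP, it may help to make Eqs. (1)-(2) concrete. For the zero-range interaction and spherically symmetric densities, the folding integral reduces to \(V_{F}(R)=\frac{2\pi V_{0}}{R}\int_{0}^{\infty}dr\,r\,\rho_{p}(r)\int_{|r-R|}^{r+R}ds\,s\,\rho_{t}(s)\). The sketch below implements this reduction numerically; the density parameters are illustrative placeholders (not the Hartree-Fock and Glauber-fitted densities used later in the paper), and the relative velocity in the energy-dependent factor is approximated by its asymptotic value rather than the local velocity of the full SPP.

```python
import numpy as np

V0  = -456.0   # MeV fm^3, zero-range NN strength quoted above
amu = 931.494  # MeV

def trapz(y, x):
    """Simple trapezoidal rule."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def fermi2(r, R0, a):                 # two-parameter Fermi shape (target)
    return 1.0 / (1.0 + np.exp((r - R0) / a))

def gaussian(r, b):                   # Gaussian shape (projectile stand-in)
    return np.exp(-(r / b) ** 2)

def normalize(shape, r, A):
    """Scale a radial shape so that 4*pi * int r^2 rho dr = A nucleons."""
    return shape * A / (4.0 * np.pi * trapz(r ** 2 * shape, r))

def fold_zero_range(R_vals, r, rho_p, rho_t):
    """V_F(R) = (2 pi V0 / R) int dr r rho_p(r) int_{|r-R|}^{r+R} ds s rho_t(s)."""
    srt = r * rho_t
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (srt[1:] + srt[:-1]) * np.diff(r))))
    def inner(lo, hi):                # int_lo^hi s rho_t(s) ds via the cumulative table
        return np.interp(hi, r, cum) - np.interp(lo, r, cum)
    return np.array([2.0 * np.pi * V0 / R *
                     trapz(r * rho_p * inner(np.abs(r - R), r + R), r)
                     for R in R_vals])

r     = np.linspace(1e-3, 20.0, 4000)
rho_t = normalize(fermi2(r, 6.75, 0.55), r, 209)  # 209Bi (illustrative R0, a)
rho_p = normalize(gaussian(r, 2.0), r, 6)         # 6He   (illustrative width)

R_vals = np.linspace(0.5, 15.0, 30)
V_F    = fold_zero_range(R_vals, r, rho_p, rho_t)

E_cm  = 21.9                                      # MeV
mu    = 6.0 * 209.0 / 215.0 * amu                 # reduced mass, MeV
V_SPP = V_F * np.exp(-4.0 * (2.0 * E_cm / mu))    # Eq. (1), asymptotic (v/c)^2 ~ 2E_cm/(mu c^2)
```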
The resultant CDPP \(\delta U_{C}\) must obey \[\left(\frac{\varepsilon_{0}^{*}+\delta U_{C}(R)}{\varepsilon_{0}^{*}}\right)H _{0}^{+}(\rho)F_{0}(\rho)-Q^{2}(R)H_{0}^{+^{\prime}}(\rho)F_{0}^{{}^{\prime}} (\rho)=Q(R), \tag{4}\] where \(H_{0}^{+}=G_{0}+iF_{0}\), with \(F_{0}\) and \(G_{0}\) the regular and irregular Coulomb functions in \(\rho=k(R)R\), and \(Q(R)=\frac{\mu_{p}}{m_{e}}\frac{k(R)}{\kappa_{0}}\) with \(\kappa_{0}=\sqrt{-2\mu_{p}\varepsilon_{0}^{*}/\hbar}^{2}\), where \(\mu_{p}\) is the core-valence reduced mass and \(m_{e}\) the mass of the charged core. \(\varepsilon_{0}^{*}=\varepsilon_{0}+\varepsilon_{I_{e}^{*}}\) where \(\varepsilon_{0}\) is the binding energy of the valence neutron or neutron cluster with respect to the charged core of the projectile and \(\varepsilon_{I_{e}^{*}}\) the excitation energy of the core state of spin-parity \(I_{c}^{*}\). By making the same approximation as in Ref. [13] for the wave number of the charged core in the field of the target that is associated with the wave function of the internal motion of the projectile, \[k(R)\approx\sqrt{\frac{2m_{c}^{2}}{\mu_{p}\hbar^{2}}(V_{C}(R)+\varepsilon_{0} ^{*})}, \tag{5}\] the real and imaginary parts of the CDPP can be given as: \[\delta V_{C}(R) = \varepsilon_{0}^{*}\left[\frac{QG_{0}F_{0}+Q^{2}G_{0}F_{0}G_{0}^ {\prime}F_{0}^{\prime}+Q^{2}F_{0}^{2}F_{0}^{\prime 2}}{F_{0}^{4}+G_{0}^{2}F_{0}^{2} }-1\right]\] \[\delta W_{C}(R) = \varepsilon_{0}^{*}\left[\frac{Q^{2}F_{0}F_{0}^{\prime}-QF_{0}^{ 2}}{F_{0}^{4}+G_{0}^{2}F_{0}^{2}}\right] \tag{6}\] Note that \(k(R)\) depends parametrically on the Coulomb potential between the projectile and target, \(V_{C}(R)\) and is different from the wave number of the center-of-mass motion of the system that describes the motion of the projectile along the Rutherford trajectory, \(K=\sqrt{2\mu(E-\varepsilon_{0}^{*})/\hbar^{2}}\) where \(E\) is the incident energy of the projectile and \(\mu\) is the reduced mass of projectile-target system. This CDPP (6) depends on the structure of the system but does not depend on the incident energy of the projectile. #### ii.1.3 Nuclear dynamical polarization potential (NDPP) Notwithstanding, the CDPP is usually insufficient completely to explain the long-range interactions in exotic systems. To tackle this problem, the direct surface potential should include both the CDPP and a long-range nuclear dynamical polarization potential (NDPP) to factor in nuclear breakup and transfer. This NDPP typically employs a volume, for example Ref. [17], or surface, for example Refs. [15; 16; 23], Woods-Saxon type imaginary potential, characterized by large radius and/or diffuseness parameters. It is sometimes referred to as the direct potential and may also include a real part, see Refs. [9; 10; 11; 12; 23]. In the framework of semiclassical theory [24], an exponential form, \(W(R)\approx\mathrm{e}^{-(R-R_{s})/a}\), is assumed for the long-range imaginary surface potential that takes care of peripheral reactions like transfer and nuclear breakup. The strong absorption radius is taken as \(R_{s}=1.4(A_{p}^{1/3}+A_{t}^{1/3})\) and the diffuseness is closely linked to the decay length of the initial wave function that characterizes the polarization potential's long range. 
Note that a long-range surface Woods-Saxon potential with radius \(R_{L}\) and diffuseness \(a_{L}\) can be approximated by the exponential form at large distances [25; 23]: \[\frac{\exp(\frac{R-R_{L}}{a_{L}})}{[1+\exp(\frac{R-R_{L}}{a_{L}})]^{2}}\to \exp\left(-\frac{R-R_{L}}{a_{L}}\right) \tag{7}\] which is similar to the semiclassical formula with the same radius and diffuseness. The same applies to the volume Woods-Saxon shape: \[\frac{1}{1+\exp(\frac{R-R_{L}}{a_{L}})}\rightarrow\exp\left(-\frac{R-R_{L}}{a_{L}}\right) \tag{8}\] so that using either form we can fix the radius and diffuseness from the semiclassical theory and just vary the strength. In this work the long-range nuclear dynamical polarization potential is thus taken to be of derivative Woods-Saxon shape: \[\delta W_{N}\equiv W_{L}(R)=-4W_{L}\frac{\exp(\frac{R-R_{L}}{a_{L}})}{[1+\exp( \frac{R-R_{L}}{a_{L}})]^{2}}, \tag{9}\] where \(R_{L}=1.4(A_{p}^{1/3}+A_{t}^{1/3})\) is the strong absorption radius and \(a_{L}=1/(2\gamma)\) the diffuseness, where \(\gamma=\sqrt{2\mu\varepsilon/\hbar^{2}}\) and \(\varepsilon\) is the separation energy. \(W_{L}\) is varied to fit the data. For \({}^{6}\)He: \(a_{L}=2.0\) fm using \(\varepsilon=0.975\) MeV (the actual \(2n\) separation energy) and \(a_{L}=1.565\) fm using \(\varepsilon=1.6\) MeV (the "effective" separation energy used in the improved two-body cluster model of Moro _et al._[26]), which may be compared with 1.25 fm (\({}^{6}\)He+\({}^{209}\)Bi [9]), 2.29 fm (\({}^{6}\)He+\({}^{208}\)Pb) [15], and 1.45 fm (\({}^{6}\)He+\({}^{208}\)Pb [17]) obtained empirically from fitting data. For \({}^{11}\)Li, \(a_{L}=2.94\) fm which may be compared with 3.42 and 4.00 fm from \({}^{11}\)Li+\({}^{208}\)Pb [15]. For \({}^{11}\)Be, \(a_{L}=3.38\) fm which may be compared with 3.50 fm from \({}^{11}\)Be+\({}^{64}\)Zn [27; 28] and 3.2 fm from \({}^{11}\)Be+\({}^{64}\)Zn [16]. #### ii.1.4 Total optical potential The polarization potentials are added to the "bare" optical potential to give the generalized optical potential. According to the Feshbach theory of the optical potential [29], the effective optical potential can be written as \(U_{N}+\delta U\) where \(\delta U\equiv U_{\rm pol}(R)\) is the dynamical polarization potential. Here we have Coulomb and nuclear contributions. The total projectile-target optical potential is given as: \[U_{\rm OP}(R)=U_{C}(R)+U_{N}(R)+\delta U_{C}(R)+\delta U_{N}(R) \tag{10}\] where \(U_{C}(R)=V_{C}(R)\) is the usual real Coulomb potential with a radius of \(R_{C}=1.25{({A_{p}}^{1/3}+{A_{t}}^{1/3})}\), \(U_{N}(R)\) the bare nuclear potential that accounts for the fusion, and \(\delta U_{C}(R)=\delta V_{C}(R)+i\delta W_{C}(R)\) is the CDPP (6) that represents the dipole polarization and Coulomb breakup. \(\delta U_{N}(R)=i\delta W_{N}(R)\equiv iW_{L}(R)\) is the long-range polarization potential (or NDPP) which accounts for the nuclear breakup and transfer. We note that it is possible to split the NDPP into two parts, one for the nuclear breakup and the other for the transfer. To a first approximation these may employ the same diffuseness and radius, just the strengths being varied to fit the corresponding cross section data if these are available. In reactions inducing by weakly-bound projectiles with light targets we may ignore the CDPP. For more complex reactions, other potentials can be added. 
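The numbers quoted above are straightforward to reproduce. A minimal sketch follows; it assumes that the \(\mu\) entering \(a_{L}=1/(2\gamma)\) is the core-valence reduced mass of the two-body cluster (this is the choice that reproduces the quoted values), and the \({}^{11}\)Li and \({}^{11}\)Be separation energies below are the standard \(\approx 0.37\) MeV and \(\approx 0.50\) MeV values, which are not stated explicitly in the text.

```python
import numpy as np

hbarc, amu = 197.327, 931.494   # MeV fm, MeV

def a_L(A_core, A_val, eps):
    """Diffuseness a_L = 1/(2*gamma), gamma = sqrt(2*mu*eps)/(hbar c),
    with mu the core-valence reduced mass of the cluster model."""
    mu = A_core * A_val / (A_core + A_val) * amu
    return hbarc / (2.0 * np.sqrt(2.0 * mu * eps))

def R_L(Ap, At):
    """Strong absorption radius R_L = 1.4 (Ap^(1/3) + At^(1/3))."""
    return 1.4 * (Ap ** (1 / 3) + At ** (1 / 3))

print(a_L(4, 2, 0.975))   # 6He = alpha + 2n, actual S_2n       -> ~2.0  fm
print(a_L(4, 2, 1.6))     # 6He, effective separation energy    -> ~1.57 fm
print(a_L(9, 2, 0.37))    # 11Li = 9Li + 2n (assumed S_2n)      -> ~2.94 fm
print(a_L(10, 1, 0.50))   # 11Be = 10Be + n (assumed S_n)       -> ~3.39 fm
print(R_L(6, 209))        # 6He + 209Bi                         -> ~10.9 fm

def W_NDPP(R, depth, R_L_val, a_L_val):
    """Derivative Woods-Saxon NDPP of Eq. (9); depth is the adjustable W_L."""
    x = np.exp((R - R_L_val) / a_L_val)
    return -4.0 * depth * x / (1.0 + x) ** 2
```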
### Cross sections #### ii.2.1 Partial and total reaction cross sections Using the continuity equation, the total reaction cross section can be calculated from the imaginary potential as \[\sigma_{R}=-\frac{2}{\hbar v}\left\langle\psi|W|\psi\right\rangle=-\frac{2}{ \hbar v}\int d^{3}R|\psi(R)|^{2}W(R) \tag{11}\] where \(v\) is the asymptotic relative velocity and \(\psi\) is the usual distorted wave function that satisfies the Schrodinger equation with the full optical model potential \(U(R)=V(R)+iW(R)\). Similarly, the total, direct reaction and fusion cross sections within the optical model can be calculated using imaginary surface type direct-reaction and volume type fusion potentials, respectively [6; 7; 8; 30]. Here we have three contributions to the absorption: fusion, direct nuclear, and direct Coulomb, so the total fusion and direct cross sections are calculated as \[\sigma_{R} = \sigma_{F}+\sigma_{DN}+\sigma_{DC}\] \[= -\frac{2}{\hbar v}\left\langle\psi|W_{N}(R)+\delta W_{N}(R)+\delta W _{C}(R)|\psi\right\rangle\] and then \[\sigma_{i}=\frac{2}{\hbar v}\left\langle\psi|W_{i}(R)|\psi\right\rangle,\ \ \ \ \ \ \ \ \ \ \ (i=DN,DC,\ {\rm or}\ F), \tag{13}\] Note that DN refers to direct nuclear reactions like transfer and nuclear breakup. DC refers to the direct Coulomb breakup. In terms of the partial-wave radial functions \(\chi_{\ell}(R)\), the complete wave function, \(\psi({\bf R})=\psi(R,\theta)\), of the Schrodinger equation can be expanded as \[\psi({\bf R})=\frac{1}{kR}\sum_{\ell=1}^{\infty}(2\ell+1)i^{\ell}\chi_{\ell}(R )P_{\ell}(cos(\theta)) \tag{14}\] where \(P_{\ell}(cos(\theta))\) are Legendre functions and satisfy the orthogonality relation \[\int_{-1}^{1}dcos(\theta)P_{\ell}(cos(\theta))P_{\ell}(cos(\theta))=\frac{2}{2 \ell+1}\delta_{\ell\ell} \tag{15}\] and then \[\sigma_{i} = \sum_{\ell}\sigma_{i;\ell}=-\frac{2}{\hbar v}\frac{4\pi}{k^{2}} \sum_{\ell=1}^{\infty}(2\ell+1)\int dR|\chi_{\ell}(R)|^{2}W_{i}(R) \tag{16}\] \[= \frac{\pi}{k^{2}}\sum_{\ell}(2\ell+1)T_{i,\ell},\] where the transmission coefficient (\(T_{\ell}\)) is given by \[T_{i,\ell}=\frac{8}{\hbar v}\int_{0}^{\infty}|\chi_{\ell}(R)|^{2}W_{i}(R)dR \tag{17}\] Thus, for a given shape the depths of the DN and F imaginary potentials may be fixed by fitting the corresponding cross section data if these are available, although the strength of the fusion imaginary potential is usually held fixed. In practice, the strength of the DN imaginary potential is usually fixed by fitting the elastic scattering data and is the only adjustable parameter of the present model. #### ii.2.2 Angular distribution of the combined transfer and breakup cross sections In the semi-classical approximations [31; 32; 33; 34], the trajectory impact parameter \(b\) and orbital angular momentum \(\ell\) are related to the scattering angle \(\theta\) by \[b=\frac{\ell}{k}=\frac{D_{0}}{2}\text{cot}\frac{\theta}{2}, \tag{18}\] where \(D_{0}=\frac{Z_{p}Z_{t}e^{2}}{E_{\text{c.m.}}}\) is the distance of closest approach in a head-on collision, \(k=\sqrt{2\mu E_{\text{c.m.}}}/\hbar\) is the wave number, \(E_{\text{c.m.}}\) is the incident energy in the center-of-mass system, and \(Z_{p}\) and \(Z_{t}\) are the charge of the projectile and target ions, respectively. 
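A quick numerical sketch of Eq. (18) for the system analyzed below (\({}^{6}\)He+\({}^{209}\)Bi at \(E_{\rm c.m.}=21.9\) MeV); the resulting \(\ell(\theta)\) mapping feeds directly into the angular-distribution formula that follows.

```python
import numpy as np

hbarc, amu, e2 = 197.327, 931.494, 1.43996   # MeV fm, MeV, MeV fm

Zp, Zt, Ap, At = 2, 83, 6, 209               # 6He + 209Bi
E_cm = 21.9                                  # MeV

mu = Ap * At / (Ap + At) * amu               # reduced mass, MeV
k  = np.sqrt(2.0 * mu * E_cm) / hbarc        # wave number, fm^-1
D0 = Zp * Zt * e2 / E_cm                     # closest approach in a head-on collision, fm

theta = np.radians(np.arange(20.0, 180.0, 20.0))
b     = 0.5 * D0 / np.tan(theta / 2.0)       # Eq. (18): b = (D0/2) cot(theta/2)
ell   = k * b                                # corresponding orbital angular momentum

for t, bi, li in zip(np.degrees(theta), b, ell):
    print(f"theta = {t:5.1f} deg   b = {bi:6.2f} fm   l = {li:6.1f}")
```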
By treating \(\ell\) as a continuous variable and assuming that \(\frac{d\sigma_{i}(\ell)}{d\ell}=\sigma_{i;\ell}\)[9], the angular distribution of cross section can be given for each potential discussed above as [9] \[\frac{d\sigma_{i}(\ell)}{d\Omega}=\frac{1}{2\pi\text{sin}\theta}\frac{d\ell}{ d\theta}\frac{d\sigma_{i}(\ell)}{d\ell}=\frac{kD_{0}}{16\pi}\frac{1}{\text{ cos}(\theta/2)\text{sin}^{3}(\theta/2)}\sigma_{i;\ell}. \tag{19}\] The angular distribution of the total transfer plus breakup cross section then comes from the direct nuclear and Coulomb contributions and is written as [9; 35]: \[\frac{d\sigma_{\text{BU}}}{d\Omega}==\frac{kD_{0}}{16\pi}\frac{1}{\text{cos}( \theta/2)\text{sin}^{3}(\theta/2)}\sigma_{\text{BU};l} \tag{20}\] with \[\sigma_{\text{BU};l}=\frac{\pi}{k}(2l+1)\,\frac{8}{\hbar v}\int_{0}^{\infty}| \chi_{\ell}(R)|^{2}[\delta W_{C}(R)+\delta W_{N}(R)]dR. \tag{21}\] ## III Application to \({}^{6}\)He+\({}^{209}\)Bi system We apply the above methodology to the \({}^{6}\)He+\({}^{209}\)Bi system. This system has been studied many times before, see for example Refs. [36; 37; 38; 39; 40; 10], since it has the most complete data set of any system involving a weakly-bound exotic projectile. It thus provides a severe test of the ability of the present formalism to describe a wide body of data with a single adjustable parameter. Remarkably high yields for \(\alpha\)-particle emission have been observed in studies [41; 42; 43] of the \({}^{6}\)He+\({}^{209}\)Bi interaction at energies close to the Coulomb barrier. They have been shown to arise from one-neutron transfer [44], two-neutron transfer [45], and projectile breakup [41]. The transfer accounts for nearly 75% of the total \(\alpha\)-particle yield [41; 46]. The calculations are carried out using the optical model framework with the total optical potential of Eq. 10. In the SPP, the density of \({}^{209}\)Bi is given by a two-parameter Fermi distribution obtained by fitting the appropriate Hartree-Fock density. The density distribution of \({}^{6}\)He is of Gaussian-oscillator form, \(\rho=\rho_{\text{core}}+\rho_{\text{valence}}\), where the core density is usually taken as a single-parameter Gaussian and the density of the valence nu Figure 1: The calculated \({}^{6}\)He+\({}^{209}\)Bi elastic scattering angular distributions compared with the data from Refs. [42; 43]. The c.m. energies corresponding to each distribution are given. cleon(s) is assumed to have a \(1p\)-shell harmonic oscillator distribution. Its parameters were obtained by fitting the measured proton elastic scattering cross sections at high incident energies using the Glauber multiple scattering theory [47]. In the CDPP, \({}^{6}\)He is described within the \({}^{4}\)He + \(2n\) cluster model of Moro _et al._[26] with a separation energy of \(\varepsilon_{0}=1.6\) MeV that stimulates the wave functions of realistic three-body calculations and gives a very good description of the elastic scattering data for several reactions induced by \({}^{6}\)He [26]. Accordingly, the diffuseness of the NDPP is given as 1.565 fm as shown in Sec. II.1.3. We analyzed the elastic scattering angular distributions for the \({}^{6}\)He+\({}^{209}\)Bi system measured at energies around the Coulomb barrier, namely, at c.m. energies of 14.3, 15.8, 17.4, 18.6, 21.4, and 21.9 MeV [42; 43]. The results of the calculations are presented in Fig. 1. 
At the same time the cross sections were calculated for the fusion, Coulomb breakup, other direct nuclear yields (nuclear breakup and transfer), and the total direct yield (the sum of the direct channels: Coulomb breakup and nuclear breakup plus transfer, corresponding to the measured inclusive \(\alpha\) yield). Note that the DN component will in principle also include a contribution from inelastic excitation of the target, but for this system it is completely negligible. These partial cross sections and the total reaction cross section are listed in Table 1 and presented in Fig. 2. The angular distributions of the total direct yield are compared with the measured inclusive \(\alpha\) yield at \(E_{\rm c.m.}=18.6\) and 21.9 MeV in Fig. 3. The obtained \(W_{L}\) values (the only free parameter in our optical potential) are listed in Table 1. They have a systematic behavior as a function of energy which may be parameterized in an exponential form as \(W_{L}=4.31\exp(-E_{\rm c.m.}/8)\). Since there are fusion cross section data at energies larger than 21.9 MeV, the calculations were extended to include c.m. energies larger than 22 MeV using the obtained energy dependence of \(W_{L}\). Thus at these energies we do not have any free parameter in our potential. It is clear that all the data are simultaneously well reproduced. We note here that in our discussion we refer to the direct nuclear part of the optical potential as consisting of the combined effects of transfer and nuclear breakup. However, in this system the Coulomb breakup is dominant and much more important than the nuclear, so we do not consider the nuclear breakup separately and the transfer cross section is the dominant contributor to the direct nuclear (DN) cross section. At \(E_{\rm c.m}\approx 22\) MeV the calculated Coulomb breakup is 227 mb which is close to the experimental value of 205(65) mb [41] or the calculation of Ref. [46], 218 mb. The calculated DN cross section is about 600 mb which is slightly larger than the measured transfer cross section of 565 mb [41]. The \(\alpha\) yield is about 820 mb, which is in agreement with the measured values 770(140) mb [41] and 773(31) mb [42; 43]. The calculated total reaction cross section, the sum of the fusion and direct reaction yields, is in good agreement with the experimental values of 1080(148) mb [41] or 1170(150) mb [42; 43] and the calculation of Rusek [46], 1182 mb. At 18.6 MeV, our calculated value for the \(\alpha\)-yield cross section of 670 mb is close to the measured one of 643(42) Figure 2: The fusion, breakup, direct, and total reaction cross sections calculated with our optical potential for the \({}^{6}\)He+\({}^{209}\)Bi system from the present work in comparison with the experimental data. The data are taken from Refs. [42; 48; 49; 50; 48]. Figure 3: The calculated direct reaction (transfer plus breakup) angular distributions for the \({}^{6}\)He \({}^{209}\)Bi system at c.m. incident energies of 18.6 and 21.9 MeV in comparison with the experimental data. The circles denote the total \(\alpha\) yield angular distributions of Aguilara _et al._[42]. The diamonds denote the total breakup cross section angular distributions of Kolata _et al._[41]. mb [42]. The direct nuclear contribution is about 75% of the \(\alpha\)-yield cross section at all the energies considered here, which is the same ratio deduced from previous measurements and calculations [44; 45; 41; 46] assuming that transfer is the main component of the direct nuclear processes. 
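For reference, the exponential systematics of the single free parameter quoted above is trivial to evaluate; a minimal sketch (the depths are assumed to be in MeV, consistent with the other potential strengths, and the last energy is an extrapolation beyond the fitted elastic-scattering data):

```python
import numpy as np

def W_L(E_cm):
    """Empirical NDPP depth systematics: W_L = 4.31 exp(-E_cm / 8)."""
    return 4.31 * np.exp(-E_cm / 8.0)

for E in (14.3, 15.8, 17.4, 18.6, 21.4, 21.9, 25.0):
    print(f"E_cm = {E:4.1f} MeV   W_L = {W_L(E):.3f}")
```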
Figure 4 shows the bare, CDPP, NDPP, and the total potentials used to calculate the cross sections for the \({}^{6}\)He+\({}^{209}\)Bi system at \(E_{\rm c.m.}=21.9\) MeV. It is clear that the real CDPP has the longest range due to the polarization of the \({}^{6}\)He projectile and the imaginary CDPP and NDPP, which account for the loss of flux due to the other direct reaction processes, are also of much longer range than the bare potential. Figure 4 (c) shows that adding the CDPP alone to the bare potential cannot reproduce the data and the long-range NDPP is needed to account for the full deviation from the Rutherford cross section. ## IV Summary and Conclusions In summary, a simultaneous analysis has been carried out of elastic scattering, fusion, Coulomb breakup and other direct nuclear channels of the \({}^{6}\)He+\({}^{209}\)Bi system within the framework of the extended optical model [4; 5; 6; 7; 8]. The optical potential used consisted of a short-range bare nuclear potential (volume type), long-range NDPP (surface type), and CDPP. The bare potential was calculated using the SPP prescription [18] and the CDPP according to a recent formalism [13; 14]. The NDPP was of Woods-Saxon derivative form; however, guided by semiclassical theory and the observation that the results are essentially sensitive to just the tail of this potential, the radius and diffuseness parameters could be fixed leaving the depth of the NDPP, \(W_{L}\), as the sole free parameter of the potential adjusted to fit the elastic scattering angular distribution data. The angular distribution of the total direct cross section derived from summation of the DN (transfer and nuclear breakup) and DC (Coulomb breakup) channels was also compared with the measured inclusive \(\alpha\) production angular distributions at \(E_{\rm c.m.}=18.6\) and \(21.9\) MeV [42]. All the calculated cross sections are in a good agreement with the data. It was found that \(W_{L}\) exhibited a simple exponential dependence on the incident energy, enabling calculation of the fusion cross section for energies where no elastic scattering data exist but where the fusion has been measured. These predictions--the values of \(W_{L}\) were fixed following the systematics--were in good agreement with the data. Thus the methodology also has some predictive power via extrapolation into regions where there are no existing data. The success of the model is at least in part due to the applicability to the system under study of the semiclassical concepts employed. Also, the available observables are such that they are appear to be relatively insensitive to interference terms between the Coulomb and nuclear breakup mechanisms which cannot be handled within the present formalism. With these limitations, the present model has the advantage of being able to describe well a large body of data over a range of near-barrier energies with only a single free parameter. As such it should prove of use in planning experiments and also as a source of "pseudo data" that may be used to help validate more sophisticated models. ###### Acknowledgements. Thanks to Dr D. K. Sharp for input into the preparation of the manuscript. This work was funded by the Council for At-Risk Academics (Cara) within the Cara's Fellowship Programme. This work is partially supported by the British Academy within the British Academy/Cara/Leverhulme Researchers at Risk Research Support Grants Programme under grant number LTRSF/100141.
2303.14986
mSPD-NN: A Geometrically Aware Neural Framework for Biomarker Discovery from Functional Connectomics Manifolds
Connectomics has emerged as a powerful tool in neuroimaging and has spurred recent advancements in statistical and machine learning methods for connectivity data. Despite connectomes inhabiting a matrix manifold, most analytical frameworks ignore the underlying data geometry. This is largely because simple operations, such as mean estimation, do not have easily computable closed-form solutions. We propose a geometrically aware neural framework for connectomes, i.e., the mSPD-NN, designed to estimate the geodesic mean of a collection of symmetric positive definite (SPD) matrices. The mSPD-NN is comprised of bilinear fully connected layers with tied weights and utilizes a novel loss function to optimize the matrix-normal equation arising from Fréchet mean estimation. Via experiments on synthetic data, we demonstrate the efficacy of our mSPD-NN against common alternatives for SPD mean estimation, providing competitive performance in terms of scalability and robustness to noise. We illustrate the real-world flexibility of the mSPD-NN in multiple experiments on rs-fMRI data and demonstrate that it uncovers stable biomarkers associated with subtle network differences among patients with ADHD-ASD comorbidities and healthy controls.
Niharika S. D'Souza, Archana Venkataraman
2023-03-27T08:30:11Z
http://arxiv.org/abs/2303.14986v1
mSPD-NN: A Geometrically Aware Neural Framework for Biomarker Discovery from Functional Connectomics Manifolds ###### Abstract Connectomics has emerged as a powerful tool in neuroimaging and has spurred recent advancements in statistical and machine learning methods for connectivity data. Despite connectomes inhabiting a matrix manifold, most analytical frameworks ignore the underlying data geometry. This is largely because simple operations, such as mean estimation, do not have easily computable closed-form solutions. We propose a geometrically aware neural framework for connectomes, i.e., the mSPD-NN, designed to estimate the geodesic mean of a collections of symmetric positive definite (SPD) matrices. The mSPD-NN is comprised of bilinear fully connected layers with tied weights and utilizes a novel loss function to optimize the matrix-normal equation arising from Frechet mean estimation. Via experiments on synthetic data, we demonstrate the efficacy of our mSPD-NN against common alternatives for SPD mean estimation, providing competitive performance in terms of scalability and robustness to noise. We illustrate the real-world flexibility of the mSPD-NN in multiple experiments on rs-fMRI data and demonstrate that it uncovers stable biomarkers associated with subtle network differences among patients with ADHD-ASD comorbidities and healthy controls. Keywords:Functional Connectomics SPD Manifolds Frechet Mean Estimation Geometry-Aware Neural Networks ## 1 Introduction Resting state functional MRI (rs-fMRI) measures steady state patterns of co-activation [11] (i.e., _connectivity_) as a proxy for communication between brain regions. The 'connectome' is a whole-brain map of these connections, often represented as a correlation or covariance matrix [16] or a network-theoretic object such as adjacency matrix or graph kernel [10]. The rise of connectomics has spurred many analytical frameworks for group-wise diagnostics and biomarker discovery from this data. Early examples include statistical comparisons of connectivity features [16], aggregate network theoretic measures [10], and dimensionality reduction techniques [14, 8]. More recently, the field has embraced deep neural networks to learn complex feature representations from both the connectome and the original rs-fMRI time series [2, 18, 7]. While these approaches have yielded valuable insights, they largely ignore the underlying geometry of the connectivity data. Namely, under a geometric lens, connectomes derived from rs-fMRI data lie on the manifold of symmetric positive definite (SPD) matrices. A major computational bottleneck for developing geometrically-aware generalizations [19, 1] is the estimation of the geodesic mean on SPD manifolds. This is a far more challenging problem than statistical estimation in Euclidean data spaces because extensions of elementary operations such as addition, subtraction, and distances on the SPD manifold entail significant computational overhead [17]. The most common approach for estimating the geodesic mean on the SPD manifold is via gradient descent [20]. While this method is computationally efficient, it is highly sensitive to the step size. To mitigate this issue, Riemannian optimization methods [12], the majorization-maximization algorithm [25], and fixed-point iterations [4] can be used. 
While these extensions have desirable convergence properties, this comes at the cost of increased computational complexity, meaning they do not scale well to higher input dimensionality and larger numbers of samples [3]. In contrast, the work of [3] leverages the approximate joint diagonalization [21] of matrices on the SPD manifold. While this approach provides guaranteed convergence to a fixed point, the accuracy and stability of the optimization is sensitive to the deviation of the data from the assumed common principal component (CPC) generating process. Taken together, existing methods for geodesic mean estimation on the SPD manifold poorly balance accuracy, robustness and computational complexity, which makes them difficult to fold into a larger analytical framework for connectomics data. We propose a novel end-to-end framework to estimate the geodesic mean of data on the SPD manifold. Our method, the Geometric Neural Network (mSPD-NN), leverages a matrix autoencoder formulation [9] that performs a series of bi-linear transformations on the input SPD matrices. This strategy ensures that the estimated mean remains on the manifold at each iteration. Our loss function for training approximates the first order matrix-normal condition arising from Frechet mean estimation [17]. Using conventional backpropagation via stochastic optimization, the mSPD-NN automatically learns to estimate the geodesic mean of the input data. We demonstrate the robustness of our framework using simulation studies and show that mSPD-NN can handle input noise and high-dimensional data. Finally, we use the mSPD-NN for various groupwise discrimination tasks (feature selection, classification, clustering) on functional connectivity data and discover consistent biomarkers that distinguish between patients diagnosed with ADHD-Autism comorbidities and healthy controls. ## 2 Biomarker Discovery from Functional Connectomics Manifolds via the mSPD-NN Let matrices \(\{\mathbf{\Gamma}_{n}\}_{n=1}^{N}\in\mathcal{M}\) be a collection of \(N\) functional connectomes belonging to the manifold \(\mathcal{M}\) of Symmetric Positive Definite (SPD) matrices of dimensionality \(P\times P\), i.e. \(\mathcal{M}\in\mathcal{P}_{P}^{+}\) (and a real and smooth Reimannian manifold). We can define an inner product that varies smoothly at each vector \(\mathcal{T}_{\mathbf{\Gamma}}(\mathcal{M})\) in the tangent space defined at any point \(\mathbf{\Gamma}\in\mathcal{M}\). Finally, a _geodesic_ denotes the shortest path joining any two points on the manifold along the manifold surface. **Geodesic Mappings:** The matrix exponential and the matrix logarithm maps allow us to translate geodesics on the manifold back and forth to the local tangent space at a reference point. The matrix exponential mapping translates a vector \(\mathbf{V}\in\mathcal{T}_{\mathbf{\Phi}}(\mathcal{M})\) in the tangent space at \(\mathbf{\Phi}\in\mathcal{M}\) to a point on the manifold \(\mathbf{\Gamma}\in\mathcal{M}\) via the geodesic emanating from \(\mathbf{\Phi}\). Conversely, the matrix logarithm map translates the geodesic between \(\mathbf{\Phi}\in\mathcal{M}\) to \(\mathbf{\Gamma}\in\mathcal{M}\) back to the tangent vector \(\mathbf{V}\in\mathcal{T}_{\mathbf{\Phi}}(\mathcal{M})\). 
Mathematically, these operations are parameterized as: \[\mathbf{\Gamma}=\mathbf{Expm}_{\mathbf{\Phi}}(\mathbf{V})=\mathbf{ \Phi}^{1/2}\mathbf{expm}(\mathbf{\Phi}^{-1/2}\mathbf{V}\mathbf{\Phi}^{-1/2}) \mathbf{\Phi}^{1/2} \tag{1}\] \[\mathbf{V}=\mathbf{Logm}_{\mathbf{\Phi}}(\mathbf{\Gamma})= \mathbf{\Phi}^{1/2}\mathbf{logm}(\mathbf{\Phi}^{-1/2}\mathbf{\Gamma}\mathbf{ \Phi}^{-1/2})\mathbf{\Phi}^{1/2} \tag{2}\] Here, \(\mathbf{expm}(\cdot)\) and \(\mathbf{logm}(\cdot)\) refer to the matrix exponential and logarithm respectively, each requiring an eigenvalue decomposition of the argument matrix, a point-wise transformation of the eigenvalues, and a matrix reconstruction. **Distance Metric:** Given two connectomes \(\mathbf{\Gamma}_{1},\mathbf{\Gamma}_{2}\in\mathcal{M}\), the Fisher Information distance between them is the length of the geodesic connecting the two points: \[\delta_{R}(\mathbf{\Gamma}_{1},\mathbf{\Gamma}_{2})=\left|\left|\mathbf{logm}( \mathbf{\Gamma}_{1}^{-1}\mathbf{\Gamma}_{2})\right|\right|_{F}=\left|\left| \mathbf{logm}(\mathbf{\Gamma}_{2}^{-1}\mathbf{\Gamma}_{1})\right|\right|_{F}, \tag{3}\] where \(\left|\left|\cdot\right|\right|_{F}\) denotes the Frobenius norm. The Reimannian norm of \(\mathbf{\Gamma}\) is the geodesic distance from the identity matrix \(\mathcal{I}\) i.e. \(\left|\left|\mathbf{\Gamma}\right|\right|_{R}=\left|\left|\mathbf{logm}( \mathbf{\Gamma})\right|\right|_{F}\) Figure 1: **The mSPD-NN architecture:** The input is transformed by a cascade of 2D fully connected layers. The matrix logarithm function is used to obtain the matrix normal form, which serves as the loss function for mSPD-NN during training. ### Geodesic Mean Estimation via the mSPD-NN: The geodesic mean of \(\{\mathbf{\Gamma}_{n}\}\) is defined as the matrix \(\mathbf{G}_{R}\in\mathcal{M}\) whose sum of squared geodesic distances (Eq. (3)) to each element is minimal [17]. \[\mathcal{G}_{R}(\{\mathbf{\Gamma}_{n}\})=\operatorname*{arg\,min}_{\mathbf{G}_{ R}}\mathbf{L}(\mathbf{G}_{R})=\operatorname*{arg\,min}_{\mathbf{G}_{R}}\sum_{n} \left|\left|\mathbf{logm}(\mathbf{G}_{R}^{-1}\mathbf{\Gamma}_{n})\right|\right| _{F}^{2} \tag{4}\] A pictorial illustration is provided in the green box in Fig 1. While Eq. (4) does not have a closed-form solution for \(N>2\), it is also is convex and smooth with respect to the unknown quantity \(\mathbf{G}_{R}(\cdot)\)[17]. To estimate population means from the connectomes, mSPD-NN makes use of Proposition 3.4 from [17]. **Proposition 1:** The geodesic mean \(\mathbf{G}_{R}\) of a collection of \(N\) SPD matrices \(\{\mathbf{\Gamma}_{n}\}\) is the unique symmetric positive-definite solution to the nonlinear matrix equation \(\sum_{n}\mathbf{logm}(\mathbf{G}_{R}^{-1/2}\mathbf{\Gamma}_{n}\mathbf{G}_{R}^{ -1/2})=\mathbf{0}\). \(\mathbf{0}\) is a \(P\times P\) matrix of all zeros. _Proof:_ The proof follows by computing the first order necessary (and here, sufficient) condition for optimality for Eq. (4). First, we express the derivative of a real-valued function of the form \(\mathbf{H}(\mathbf{S}(t))=\frac{1}{2}{\left|\left|\mathbf{logm}(\mathbf{C}^{-1 }\mathbf{S}(t))\right|\right|_{F}^{2}}\) with respect to \(t\). 
In this expression, the argument \(\mathbf{S}(t)=\mathbf{G_{R}}^{1/2}\mathbf{expm}(t\mathbf{A})\mathbf{G_{R}}^{ 1/2}\) is the geodesic arising from \(\mathbf{G}_{R}\) in the direction of \(\mathbf{\Delta}=\mathbf{S}(\mathbf{0})=\mathbf{G_{R}}^{1/2}\mathbf{A}\mathbf{G _{R}}^{1/2}\), and the matrix \(\mathbf{C}\in\mathcal{P}_{P}^{+}\) is a constant SPD matrix of dimension \(P\). By using the cyclic properties of the trace function and the distributive equivalence of \(\mathbf{logm}(\mathbf{A}^{-1}[\mathbf{B}]\mathbf{A})=\mathbf{A}^{-1}[\mathbf{ logm}(\mathbf{B})]\mathbf{A}\), we obtain the following condition: \[\mathbf{H}(\mathbf{S}(t))=\frac{1}{2}{\left|\left|\mathbf{logm}(\mathbf{C}^{-1 /2}\mathbf{S}(t)\mathbf{C}^{-1/2})\right|\right|_{F}^{2}}\] By the symmetry of the term \(\mathbf{logm}(\mathbf{C}^{-1/2}\mathbf{S}(t)\mathbf{C}^{-1/2})\) we have that: \[\therefore\frac{d}{dt}\mathbf{H}(\mathbf{S}(t))\Big{|}_{t=0}=\operatorname{Tr} \left(\left[\mathbf{logm}(\mathbf{C}^{-1}\mathbf{G}_{R})\mathbf{G}_{R}^{-1} \mathbf{\Delta}\right]\right)=\operatorname{Tr}[\mathbf{\Delta logm}(\mathbf{ C}^{-1}\mathbf{G}_{R})\mathbf{G}_{R}^{-1}]\] Notice that since \(\nabla\mathbf{H}\) is symmetric, it belongs to the tangent space \(\mathcal{S}_{P}\) of \(\mathcal{P}_{P}^{+}\). Therefore, we express the gradient of \(\mathbf{L}(\mathbf{G}_{R})\) defined in Eq. (4), as follows: \[\mathbf{L}(\mathbf{G}_{R})=\sum_{n}\left|\left|\mathbf{logm}( \mathbf{G}_{R}^{-1}\mathbf{\Gamma}_{n})\right|\right|_{F}^{2}\quad\implies \nabla\mathbf{L}(\mathbf{G}_{R})=\mathbf{G}_{R}^{-1}\sum_{n}\mathbf{logm}( \mathbf{G}_{R}\mathbf{\Gamma}_{n}^{-1})\] \[\therefore\operatorname*{arg\,min}_{\mathbf{G}_{R}}\mathbf{L}( \mathbf{G}_{R})\implies\sum_{n}\mathbf{logm}(\mathbf{G}_{R}\mathbf{\Gamma}_{ n}^{-1})=\sum_{n}\mathbf{logm}(\mathbf{G}_{R}^{-1/2}\mathbf{\Gamma}_{n}\mathbf{G }_{R}^{-1/2})=\mathbf{0}\] The final step uses the property that \(\mathbf{L}(\mathbf{G}_{R})\) is a sum of convex functions, with the first order stationary point is the necessary and sufficient condition being the unique minima. _Denoting \(\mathbf{G}_{R}^{-1/2}=\mathbf{V}\in\mathcal{P}_{P}^{+}\), the matrix multiplications in the argument of the \(\mathbf{logm}(\cdot)\) term can be efficiently expressed within the feed-forward operations of a neural network with unknown parameters \(\mathbf{V}\)._ ### mSPD-NN Architecture The mSPD-NN uses the form above to perform geodesic mean estimation. The architecture is illustrated in Fig. 1. The encoder of the mSPD-NN is a 2D fully-connected neural network (FC-NN) [5] layer \(\mathbf{\Psi}_{\text{enc}}(\cdot):\mathcal{P}_{P}^{+}\rightarrow\mathcal{P}_{P}^ {+}\) that projects the input matrices \(\mathbf{\Gamma}_{n}\) into a latent representation. This mapping is computed as a cascade of two linear layers with tied weights \(\mathbf{W}\in\mathcal{R}^{P\times P}\), i.e., \(\mathbf{\Psi}_{\text{enc}}(\mathbf{\Gamma}_{n})=\mathbf{W}\mathbf{\Gamma}_{n} \mathbf{W}^{T}\) The decoder \(\mathbf{\Psi}_{dec}(\cdot)\) has the same architecture as the encoder, but with transposed weights \(\mathbf{W}^{T}\). The overall transformation can be written as: \[\text{mSPD-NN}(\mathbf{\Gamma}_{n})=\mathbf{\Psi}_{\text{dec}}(\mathbf{\Psi}_ {\text{enc}}(\mathbf{\Gamma}_{n}))=\mathbf{W}\mathbf{W}^{T}(\mathbf{\Gamma}_ {n})\mathbf{W}\mathbf{W}^{T}=\mathbf{V}(\mathbf{\Gamma}_{n})\mathbf{V} \tag{5}\] where \(\mathbf{V}\in\mathcal{R}^{P\times P}\) and is symmetric and positive definite by construction. We would like our loss function to minimize Eq. 
(4) in order to estimate the first order stationary point as \(\mathbf{V}=\mathbf{G}_{R}^{-1/2}\), and therefore devise the following loss: \[\mathcal{L}(\cdot)=\frac{1}{P^{2}}\Big{|}\Big{|}\frac{1}{N}\sum_{n}\text{logm} \Big{[}\mathbf{W}\mathbf{W}^{T}(\mathbf{\Gamma}_{n})\mathbf{W}\mathbf{W}^{T} \Big{]}\Big{|}\Big{|}_{F}^{2} \tag{6}\] Formally, an error of \(\mathcal{L}(\cdot)=0\) implies that the argument satisfies the matrix normal equation exactly under the parameterization \(\mathbf{V}=\mathbf{W}\mathbf{W}^{T}=\mathbf{G}_{R}^{-1/2}\). Therefore, Eq. (6) allows us to estimate the geodesic mean on the SPD manifold. We utilize standard backpropagation to optimize Eq. (6). From an efficiency standpoint, the mSPD-NN architecture maps onto a relatively shallow neural network. Therefore, this module can be easily integrated into other deep learning inference frameworks for example, for batch normalization on the SPD manifold. This flexibility is the key advantage over classical methods, in which integrating the geodesic mean estimation within a larger framework is not straightforward. Finally, the extension of Eq. (6) to the estimation of a weighted mean (with positive weights \(\{w_{n}\}\)) also follows naturally as a multiplier in the summation. **Implementation Details:** We train mSPD-NN for a maximum of 100 epochs with an initial learning rate of 0.001 decayed by 0.8 every 50 epochs. The tolerance criteria for the training loss is set at \(1e^{-4}\). mSPD-NN implemented in PyTorch (v1.5.1), Python 3.5 and experiments were run on an 4.9 GB Nvidia K80 GPU. We utilize the ADAM optimizer during training and a default PyTorch initialization for the model weights. To ensure that \(\mathbf{W}\) is full rank, we add a small bias to the weights, i.e., \(\tilde{\mathbf{W}}=\mathbf{W}+\lambda\mathcal{I}_{P}\) for regularization and stability. ## 3 Evaluation and Results ### Experiments on Synthetic Data We evaluate the scalability, robustness, and fidelity of mSPD-NN using simulated data. We compare the mSPD-NN against two popular mean estimation algorithms, the first being the Riemannian gradient descent [20] on the objective in Eq. (4) and the second being the **A**pproximate Joint Diagonalization **L**og Euclidean (ALE) mean estimation [3], which directly leverages properties of the common principal components (CPC) data generating process [21]. Our synthetic experiments are built off the CPC model [13]. In this case, each input connectome \(\mathbf{\Gamma}_{n}\in\mathcal{R}^{P\times P}\) is derived from a set of components \(\mathbf{B}\in\mathcal{R}^{P\times P}\) common to the collection and a set of example specific (and strictly positive) weights across the components \(\mathbf{c}_{n}\in\mathcal{R}^{(+)P\times 1}\). Let the diagonal matrix \(\mathbf{C}_{n}\) be defined as \(\mathbf{C}_{n}=\mathbf{diag}(\mathbf{c}_{n})\in\mathcal{R}^{(+)P\times P}\). From here, we have \(\mathbf{\Gamma}_{n}=\mathbf{BC}_{n}\mathbf{B}^{T}\). **Evaluating Scalability:** In the absence of corrupting noise, the theoretically optimal geodesic mean of the examples \(\{\mathbf{\Gamma}_{n}\}_{n=1}^{N}\) can be computed as: \(\mathbf{G}_{R}^{*}=\mathbf{B}\ \text{\bf expm}\left[\frac{1}{N}\sum_{n=1}^{N} \text{\bf logm}(\mathbf{B}^{-1}\mathbf{\Gamma}_{n}\mathbf{B}^{-T})\right]\ \mathbf{B}^{T}\)[3]. We evaluate the scalability of each algorithm with respect to the dataset dimensionality \(P\) and the number of examples \(N\) by comparing its output to this theoretical optimum. 
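To make Eq. (5), the loss of Eq. (6), and the implementation details above concrete, here is a minimal PyTorch sketch; it is not the authors' code, the matrix logarithm is taken through a differentiable eigendecomposition, and the SPD inputs are random placeholders.

```python
import torch

class mSPDNN(torch.nn.Module):
    """Minimal sketch of the bilinear autoencoder V(Gamma)V with V = W W^T."""
    def __init__(self, P, lam=1e-3):
        super().__init__()
        self.W = torch.nn.Parameter(0.1 * torch.randn(P, P))
        self.lam, self.P = lam, P

    def V(self):
        W = self.W + self.lam * torch.eye(self.P)    # small bias for stability
        return W @ W.T                               # symmetric PSD by construction

    @staticmethod
    def logm_spd(S):
        """Differentiable matrix log of symmetric positive-definite matrices."""
        lam, U = torch.linalg.eigh(S)
        return U @ torch.diag_embed(torch.log(lam.clamp_min(1e-10))) @ U.transpose(-1, -2)

    def loss(self, Gammas):                          # Gammas: (N, P, P) SPD stack
        V = self.V()
        out = V @ Gammas @ V                         # W W^T Gamma W W^T, Eq. (5)
        res = self.logm_spd(out).mean(dim=0)         # matrix-normal residual
        return (res ** 2).sum() / self.P ** 2        # Eq. (6)

# toy SPD inputs and the training loop described above
P, N = 30, 20
A = torch.randn(N, P, P)
Gammas = A @ A.transpose(-1, -2) + 1e-2 * torch.eye(P)

model = mSPDNN(P)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(100):
    opt.zero_grad()
    model.loss(Gammas).backward()
    opt.step()

G_R = torch.linalg.inv(model.V() @ model.V())        # geodesic mean, since V = G_R^(-1/2)
```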
We randomly sample columns of the component matrix \(\mathbf{B}\) from a standard normal, i.e., \(\mathbf{B}[:,j]\sim\mathcal{N}(\mathbf{0},\mathcal{I}_{P})\ \forall\ j\in\{1,\ldots,P\}\), where \(\mathcal{I}_{P}\) is an identity matrix of dimension \(P\). In parallel, we sample the component weights \(\mathbf{c}_{nk}\) according to \(\mathbf{c}_{nk}^{1/2}\sim\mathcal{N}(0,1)\ \forall\ k\in\{1,\ldots,P\}\). To avoid degenerate behavior when the inputs are not full-rank, we clip \(\mathbf{c}_{nk}\) to a minimum value of \(0.001\). We consider two experimental scenarios. In **Experiment 1**, we fix the data dimensionality at \(P=30\) and sweep the dataset size as \(N\in\{5,10,20,50,100,200\}\). In **Experiment 2**, we fix the dataset size at \(N=20\) and sweep the dimensionality as \(P\in\{5,10,20,50,100,200\}\). For each parameter setting, we run all three estimation algorithms ten times using different random initializations. We score performance based on the correctness of the solution and the execution time in seconds. Correctness is measured in two ways. First is the final condition fit \(\mathcal{L}(\mathbf{G}_{R}^{\text{est}})\) from Eq. (6), which quantifies the deviation of the solution from the first order stationary condition (i.e., \(\mathcal{L}(\mathbf{G}_{R}^{\text{est}})=0\)). Second is the normalized squared Riemannian distance \(d_{\text{mean}}=d_{R}^{2}(\mathbf{G}_{R}^{\text{est}},\mathbf{G}_{R}^{*})/|| \mathbf{G}_{R}^{*}||_{R}^{2}\) between the solution and the theoretically optimal mean. Lower values of the condition fit \(\mathcal{L}(\mathbf{G}_{R})\) and deviation \(d_{\text{mean}}\) imply a better quality solution. Fig. 2 illustrates the performances of mSPD-NN, gradient descent and ALE mean estimation algorithms. Figs. 2(a) and (d) plot the first-order condition fit \(\mathcal{L}(\mathbf{G}_{R}^{\text{est}})\) when varying the dataset size \(N\) (Experiment 1) and the matrix dimensionality \(P\) (Experiment 2), respectively. Likewise, Figs. 2(b) and (e) plot the recovery performance for each experiment. We observe that the first order condition fit for the mSPD-NN is better than the ALE for all settings, and better than the gradient descent for most settings. We note that the recovery performance of mSPD-NN is better than the baselines in most cases while being a close approximation in the remaining ones. Finally, Figs. 2(c) and (f) illustrate the time to convergence for each algorithm. As seen, the performance of mSPD-NN scales with dataset size but is nearly constant with respect to dimensionality. In all cases, it either beats or is competitive with ALE. **Robustness to Noise:** Going one step further, we evaluate the efficacy of the mSPD-NN framework when there is deviation from the ideal CPC generating process. In this case, we add rank-one structured noise to obtain the input data: \(\mathbf{\Gamma}_{n}=\mathbf{BC}_{n}\mathbf{B}^{T}+\frac{1}{P}\mathbf{x}_{n}\mathbf{x }_{n}^{T}\). As before, the bases and coefficients are randomly sampled as \(\mathbf{B}[:,j]\sim\mathcal{N}(\mathbf{0},\mathcal{I}_{P})\) and \(\mathbf{c}_{nj}^{1/2}\sim\mathcal{N}(0,1)\quad\forall\quad j\in\{1,\dots,P\}\). In a similar vein, the structured noise is generated as \(\mathbf{x}_{n}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathcal{I}_{P})\in\mathcal{ R}^{P\times 1}\), with \(\sigma^{2}\) controlling the extent of the deviation. For this experiment, we set \(P=30,N=20\) and vary the noise over the range \([0.2-1]\) in increments of \(0.1\). 
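The CPC sampling, the structured noise, and the theoretical optimum \(\mathbf{G}_{R}^{*}\) described above can be written down directly. A minimal NumPy/SciPy sketch (the seed and noise level are arbitrary choices within the stated ranges):

```python
import numpy as np
from scipy.linalg import expm, logm, inv

rng = np.random.default_rng(0)
P, N = 30, 20

# CPC generating process: common components B, per-example positive weights c_n
B = rng.standard_normal((P, P))                          # columns ~ N(0, I_P)
C = np.maximum(rng.standard_normal((N, P)) ** 2, 1e-3)   # c_nk = (N(0,1))^2, clipped at 0.001
Gammas = np.stack([B @ np.diag(c) @ B.T for c in C])

# theoretical geodesic mean in the noise-free CPC case:
# G_R* = B expm[(1/N) sum_n logm(B^-1 Gamma_n B^-T)] B^T
Binv = inv(B)
M = np.mean([logm(Binv @ G @ Binv.T) for G in Gammas], axis=0).real
G_star = B @ expm(M) @ B.T

# rank-one structured noise used in the robustness experiment
sigma = 0.5
X = rng.normal(0.0, sigma, size=(N, P))
Gammas_noisy = Gammas + np.einsum('ni,nj->nij', X, X) / P
```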
One caveat in this setup is that the theoretically optimal mean defined previously no longer applies and cannot be used to evaluate performance. Hence, we report only the first-order condition fit \(\mathcal{L}(\mathbf{G}_{R})\). We also calculate the pairwise concordance \(d_{\text{weights}}\) of the final mSPD-NN weights for different initializations. Fig. 3(a) illustrates the first-order condition fit \(\mathcal{L}(\mathbf{G}_{R}^{\text{est}})\) across all three methods for increasing noise \(\sigma\). As seen, \(\mathcal{L}(\mathbf{G}_{R}^{\text{est}})\) for the mSPD-NN is consistently lower than the corresponding value for the gradient descent and ALE algorithms, suggesting improved performance despite increasing corruption to the CPC process. The ALE algorithm is designed to utilize the CPC structure within the generating process, but its poor performance suggests that it is particularly susceptible to noise. Fig. 3(b) plots the pairwise distances between the geodesic means estimated by mSPD-NN across the 10 random initializations. As seen, mSPD-NN produces a consistent solution, thus underscoring its robustness. Figure 2: Evaluating the estimates from mSPD-NN, gradient descent and ALE according to **(a)** and **(d)** first-order condition fit (Eq. 6), **(b)** and **(e)** deviation from the theoretical solution, and **(c)** and **(f)** execution time, for **varying dataset size \(N\)** and **data dimension \(P\)**, respectively. Figure 3: Performance of the mSPD-NN, gradient descent and ALE estimation under increasing additive noise: **(a)** first-order condition fit (Eq. 6); **(b)** pairwise distance between the recovered mSPD-NN solutions across random initializations. ### Experiments on Functional Connectomics Data **Dataset:** To probe the efficacy of the mSPD-NN for representation learning on real-world matrix manifold data, we experiment on several tasks (group-wise discrimination, classification, and clustering) using the publicly available CNI 2019 Challenge dataset [23], consisting of preprocessed rs-fMRI time series provided for 158 subjects diagnosed with Attention Deficit Hyperactivity Disorder (ADHD), 92 subjects with Autism Spectrum Disorder (ASD) with an ADHD comorbidity [15], and 257 healthy controls. The scans were acquired on a Philips 3T Achieva scanner using a single-shot, partially parallel, gradient-recalled EPI sequence with TR/TE = 2500/30 ms, flip angle 70, voxel resolution = \(3.05\times 3.15\times 3\) mm, and a scan duration of either 128 or 156 time samples (TR). A detailed description of the demographics and preprocessing can be found in [23]. Connectomes are estimated via the Pearson's correlation matrix, regularized to be full-rank, for two parcellations, the Automated Anatomical Atlas (AAL) (\(P=116\)) and the Craddock 200 atlas (\(P=200\)). **Groupwise Discrimination:** We expect that FC differences between the ASD and ADHD cohorts are harder to tease apart than differences between ADHD and controls [23, 15]. We test this hypothesis by comparing the geodesic means estimated via mSPD-NN for the three cohorts. For robustness, we perform bootstrapped trials for mean estimation by sampling 25 random subjects from a given group (ADHD/ASD/Controls). We then compute the Riemannian distance \(d(\mathbf{G}_{R}(\{\mathbf{\Gamma}_{g1}\}),\mathbf{G}_{R}(\{\mathbf{\Gamma}_{g2}\}))\) between the mSPD-NN means associated with groups \(g1\) and \(g2\). A higher value of \(d(\cdot,\cdot)\) implies a better separation between the groups.
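Eq. (3) is not reproduced in this excerpt; assuming it is the standard affine-invariant geodesic distance on the SPD manifold, the bootstrap comparison just described can be sketched as follows, with `estimate_mean` standing in for mean estimation by a trained mSPD-NN. The same distance function also serves for the deviation metric \(d_{\text{mean}}\) used in the synthetic experiments.

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def riemannian_dist(A, B):
    # Affine-invariant geodesic distance assumed for d_R:
    # d_R(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F
    A_isqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(np.real(logm(A_isqrt @ B @ A_isqrt)), 'fro')

def group_separation(conn_g1, conn_g2, estimate_mean, n_boot=10, n_sub=25, seed=0):
    # Bootstrapped distance between group geodesic means, as described above.
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_boot):
        idx1 = rng.choice(len(conn_g1), size=n_sub, replace=False)
        idx2 = rng.choice(len(conn_g2), size=n_sub, replace=False)
        G1 = estimate_mean([conn_g1[i] for i in idx1])
        G2 = estimate_mean([conn_g2[i] for i in idx2])
        dists.append(riemannian_dist(G1, G2))
    return np.array(dists)
```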
We also run a Wilcoxon signed rank test on the distribution of \(d(\cdot,\cdot)\). Fig. 4 illustrates the pairwise distances between the geodesic means of cohorts \(g1-g2\) across bootstraped trials (t-SNE representations for the group means are provided in Fig. 5(c)). As a sanity check, we note that the mean estimates across samples within the same cohort (ADHD-ADHD) are closer than those across cohorts (ADHD-controls, ASD-controls, ADHD-ASD). More interestingly, we observe that ADHD-controls separation is consistently larger than that of the ADHD-ASD groups for both parcellations. This result confirms the hypothesis that the overlapping diagnosis for the two classes translates to a reduced separability in the space of FC matrices and indicates that mSPD-NN is able to robustly uncover population level differences in FC. **Classification:** Building on the observation that mSPD-NN provides reliable group-separability, we adopt this framework for classification. Using the AAL parcellation, we randomly sample 25 subjects from each class for training, and set aside the rest for evaluation with a \(10\%/90\%\) validation/test split. We estimate the geodesic mean for each group across the training samples via 10 bootstrapped trials, in which we sub-sample \(80\%\) of the training subjects from the respective group. Permutation testing is performed on the mean estimates [24], and functional connections (i.e., entries of \(\mathbf{G}_{R}(\{\mathbf{\Gamma}_{n}\})\)) that differ with an FDR-corrected threshold of \(p<0.001\) are retained for classification. Finally, a Random Forest classifier is trained on the selected features to classify ADHD vs Controls. The train-validation-test splits are repeated 10 times to compute confidence intervals. We use classification accuracy and area under the receiver operating curve (AU-ROC) as metrics for evaluation. The mSPD-NN feature selection plus Random Forest approach provides an accuracy of \(0.62\pm 0.031\) and an AU-ROC of \(0.60\pm 0.04\) for ADHD-Control classification on the test samples. We note that this approach outperforms all but one method on the CNI challenge leaderboard [23]. Moreover, one focus of the challenge is to observe how models trained on the ADHD vs Control discrimination task translate to ASD (with ADHD comorbidity) vs Control discrimination in a transfer learning setup. Accordingly, we apply the learned classifiers in each split to ASD vs Control classification and obtain an accuracy of \(0.54\pm 0.044\) and an AU-ROC of \(0.53\pm 0.03\). This result is on par with the best performing algorithm in the CNI-TL challenge. The drop in accuracy and AU-ROC for the transfer learning task is consistent with the performance profile of all the challenge submissions. These results suggest that despite the comorbidity, connectivity differences between the cohorts are subtle and hard to reliably capture. Nonetheless, the mSPD-NN+RF framework is a first step to underscoring stable, yet interpretable (see below) connectivity patterns that can discriminate between diseased and healthy populations. **Qualitative Analysis:** To better understand the group-level connectivity differences, we plot the most consistently selected features (top 10 percent) from the previous experiment (ADHD-control feature selection) in Fig. 4(c). We utilize the BrainNetViewer Software for visualization. The blue circles are the AAL nodes, while the solid lines denote edges between nodes. 
We observe that the highlighted connections appear to cluster in the sensorimotor and visual areas of the brain, along with a few temporal lobe contributions. Altered sensorimotor and visual functioning has been previously reported among children and young adults diagnosed with ADHD [6]. Adopting a similar procedure, we additionally highlight differences among the ASD and ADHD cohorts in Fig. 4(d). The selected connections concentrate around the pre-frontal areas of the brain, which are believed to be associated with altered social-emotional regulation in Autism [22]. We additionally provide an extended version of the group connectivity difference results across trials in Fig. 5 (a) ADHD vs Controls and (b) ADHD vs ASD. Across train-test-validation splits, we observe that several connectivity differences appear fairly consistently. Overall, the patterns highlighted via statistical comparisons on the mSPD-NN estimates are both robust and in line with the physiopathology of ADHD and ASD reported in the literature. Figure 4: Groupwise discrimination between the FC matrices estimated via the **(a)** AAL and **(b)** Craddock 200 atlas, for the ADHD/ASD/Controls cohorts, according to pairwise distances between the mSPD-NN mean estimates. Results of pairwise connectivity comparisons between group means for **(c)** ADHD-Controls and **(d)** ADHD-ASD groups for the AAL parcellation. The red connections are significant differences (\(p<0.001\)). **Data-Driven Clustering:** Finally, we evaluate the stability of the mapping between the functional connectivity and diagnostic spaces via a geometric clustering experiment. We use the geodesic mean estimates from the groupwise discrimination experiment (generated using the ground truth Controls/ASD/ADHD labels and mSPD-NN) as an initialization and track the shift in the diagnostic assignments upon running an unsupervised **E**xpectation-**M**aximization (EM) algorithm. At each iteration of the mSPD-EM, the E-Step assigns cluster memberships to a given subject according to the geodesic distance (Eq. (3)) from the cluster centroids, while the M-Step uses the mSPD-NN for recomputing the centroids. Upon convergence, we evaluate the alignment between the inferred clusters and diagnostic labels. To this end, we map each cluster to a diagnostic label according to majority voting, and measure the cluster purity (fraction of cluster members that are correctly assigned). mSPD-EM provides an overall cluster purity of \(0.59\pm 0.05\) (Controls), \(0.52\pm 0.12\) (ADHD), and \(0.51\pm 0.09\) (ASD), indicating that there is a considerable shift in the assignment of diagnostic labels from the ground truth. We also visualise the cluster centroids using t-Stochastic Neighbor Embeddings (t-SNE) at initialization and after convergence of the mSPD-EM in Fig. 5 (c) and (d), respectively. We provide 3-D plots to better visualise the cluster separation. Again, we observe that the diagnostic groups overlap considerably and are challenging to separate in the functional connectivity space alone. One possible explanation may be that the distinct neural phenotypes between the disorders are being overwhelmed by other rs-fMRI signatures. Figure 5: Pairwise differences between mSPD-NN group means for **(a)** ADHD-Controls and **(b)** ADHD-ASD groups across bootstrapped trials, with significant differences marked in red (\(p<0.001\)); t-SNE plots of the group means from **(c)** the groupwise discrimination experiment using mSPD-NN and **(d)** after data-driven clustering via the mSPD-EM.
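A schematic of the mSPD-EM loop and the purity computation described above might look as follows; the distance and centroid estimator are passed in as callables (Eq. (3) and mSPD-NN in the paper), and the function names are ours.

```python
import numpy as np

def mspd_em(connectomes, init_means, estimate_mean, dist, n_iter=10):
    # E-step: assign each subject to the nearest centroid under the geodesic
    # distance; M-step: re-estimate each centroid from its assigned subjects.
    means = list(init_means)
    labels = np.zeros(len(connectomes), dtype=int)
    for _ in range(n_iter):
        labels = np.array([
            np.argmin([dist(G, m) for m in means]) for G in connectomes
        ])
        for k in range(len(means)):
            members = [G for G, lab in zip(connectomes, labels) if lab == k]
            if members:
                means[k] = estimate_mean(members)
    return labels, means

def cluster_purity(labels, diagnoses):
    # Majority-vote purity per cluster, as used in the evaluation above.
    purity = {}
    for k in np.unique(labels):
        members = np.asarray(diagnoses)[labels == k]
        counts = {d: np.sum(members == d) for d in np.unique(members)}
        purity[k] = max(counts.values()) / len(members)
    return purity
```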
Given the migration of diagnostic assignments away from the ground truth, the strict diagnostic criteria used to separate the diseased and healthy cohorts may need to be more critically examined. ## 4 Conclusion We have proposed a novel mSPD-NN framework to reliably estimate the geodesic mean of a collection of functional connectivity matrices. Through extensive simulation studies, we demonstrate that the mSPD-NN scales well to high-dimensional data and is more robust to input noise than current iterative methods. By conducting a series of experiments on group-wise discrimination, feature selection, classification, and clustering, we demonstrate that the mSPD-NN is a reliable framework for discovering consistent group differences between patients diagnosed with ADHD-Autism comorbidities and controls. The mSPD-NN makes minimal assumptions about the data and can potentially be a useful tool to advance data-scientific and clinical research. **Acknowledgements:** This work is supported by the National Science Foundation CAREER award 1845430 (PI Venkataraman), the National Institutes of Health R01HD108790 (PI Venkataraman) and R01EB029977 (PI Caffo).
2308.08005
Navigating the complex nexus: cybersecurity in political landscapes
Cybersecurity in politics has emerged as a critical and intricate realm intersecting technology, governance, and international relations. In this interconnected digital context, political entities confront unparalleled challenges in securing sensitive data, upholding democratic procedures, and countering cyber threats. This study delves into the multifaceted landscape of political cybersecurity, examining the evolving landscape of cyberattacks, their impact on political stability, and strategies for bolstering digital resilience. The intricate interplay between state-sponsored hacking, disinformation campaigns, and eroding public trust underscores the imperative for robust cybersecurity measures to safeguard political system integrity. Through an extensive exploration of real-world case studies, policy frameworks, and collaborative initiatives, this research illuminates the intricate network of technological vulnerabilities, geopolitical dynamics, and ethical concerns that shape the dynamic evolution of cybersecurity in politics. Amidst evolving digital landscapes, the imperative for agile and preemptive cybersecurity strategies is paramount for upholding the stability and credibility of political institutions.
Mike Nkongolo
2023-08-15T19:37:37Z
http://arxiv.org/abs/2308.08005v1
# Navigating the complex nexus: cybersecurity in political landscapes ###### Abstract Cybersecurity in politics has emerged as a critical and intricate realm intersecting technology, governance, and international relations. In today's interconnected digital context, political entities confront unparalleled challenges in securing sensitive data, upholding democratic procedures, and countering cyber threats. This study delves into the multifaceted landscape of political cybersecurity, examining the evolving landscape of cyberattacks, their impact on political stability, and strategies for bolstering digital resilience. The intricate interplay between state-sponsored hacking, disinformation campaigns, and eroding public trust underscores the imperative for robust cybersecurity measures to safeguard political system integrity. Through an extensive exploration of real-world case studies, policy frameworks, and collaborative initiatives, this research illuminates the intricate network of technological vulnerabilities, geopolitical dynamics, and ethical concerns that shape the dynamic evolution of cybersecurity in politics. Amidst evolving digital landscapes, the imperative for agile and preemptive cybersecurity strategies is paramount for upholding the stability and credibility of political institutions. **Keywords:** Cybersecurity in politics, Technology and governance, State-sponsored hacking, Robust cybersecurity measures, Geopolitical dynamics, Cyberintelligence dataset ## 1 Introduction In an era characterized by technological advancement, governance intricacies, and global interconnectedness, the convergence of cybersecurity and politics has emerged as a paramount concern [1]. The intersection of these two domains forms the foundation of a critical and intricate realm that demands rigorous exploration. As technology becomes an integral part of political landscapes and international relations, the safeguarding of sensitive data, the preservation of democratic processes, and the mitigation of cyber threats have become pressing imperatives for political entities worldwide [1, 2]. This study embarks on an insightful journey into the multifaceted landscape of cybersecurity in politics [3]. By delving into the dynamic interplay between technology, governance, and international relations, we seek to unravel the complexities inherent in this domain. Our investigation extends beyond the mere exploration of cyberattacks; it delves deep into their evolving nature and the consequential impact on political stability. Moreover, this study also unveils the strategic measures employed to fortify digital resilience, ensuring the integrity of political systems in the face of mounting challenges [2, 3]. Of particular significance is the intricate web woven by state-sponsored hacking, disinformation campaigns, and the erosion of public trust. This intricacy serves as a poignant reminder of the need for robust cybersecurity measures that not only defend against cyber threats but also uphold the fundamental tenets of political institutions. Drawing from real-world case studies, policy frameworks, and collaborative initiatives, this research endeavors to illuminate the profound network of technological vulnerabilities, geopolitical dynamics, and ethical considerations that underpin the ever-evolving paradigm of cybersecurity in politics. As we delve into the depths of cybersecurity's role in shaping global political discourse, it becomes apparent that our endeavor extends beyond technological boundaries. 
Our exploration holds the potential to safeguard the stability, credibility, and integrity of political institutions on a global scale [3]. This study introduces an approach to understanding the intricate nexus between technology, governance, and international relations in the context of cybersecurity within politics. While previous research has predominantly focused on the technical aspects of cyberattacks, our investigation takes a holistic view that transcends the conventional boundaries of cybersecurity discourse. We delve into the dynamic inter- play of these three critical dimensions to unearth the underlying complexities inherent in this domain. By doing so, we contribute a comprehensive framework that not only dissects the anatomy of cyberattacks but also illuminates their profound implications on political stability, both at a national and international level. Furthermore, our research takes a step beyond traditional explorations by shedding light on the strategic measures adopted to enhance digital resilience within political systems. This novel perspective goes beyond the reactive stance of countering cyber threats and instead emphasizes the proactive strategies that safeguard the integrity of political institutions [3]. Our study uncovers the nuanced strategies and practices that political entities employ to bolster their defenses, ensuring they can navigate the ever-evolving landscape of cybersecurity challenges [4]. In summary, our research not only advances the discourse on the interplay between technology, governance, and international relations but also contributes a fresh lens to the study of cybersecurity in politics. By transcending the boundaries of conventional cyberattack analysis and incorporating the broader dynamics of political stability and digital resilience, our work provides a unique and innovative perspective that enriches our understanding of the complex landscape in which these critical domains converge. An intricate exploration of the interplay among technology, governance, and international relations in political cybersecurity An intricate exploration delving into the interplay among technology, governance, and international relations within the realm of political cybersecurity unveils a complex landscape. For instance, consider the use of state-sponsored hacking [5] to gain access to sensitive political information, exemplifying the fusion of technology and international intrigue. In the aftermath of such breaches, the governance of data protection policies and international diplomatic responses become critical factors in shaping the geopolitical landscape. Furthermore, the influence of disinformation campaigns on democratic processes underscores the delicate balance between technology and governance. Instances where social media platforms are manipulated to spread false narratives, impacting electoral outcomes, highlight the need for effective governance mechanisms to combat such threats [6]. This intricate interplay is not confined to national boundaries; international relations are tested as nations collaborate or confront each other to address transnational cyber threats. As this exploration advances, it becomes evident that understanding and navigating this nexus is imperative for ensuring political stability and safeguarding democratic institutions [3, 6]. 
The complex dance between technological advancements, effective governance strategies, and international collaborations forms the cornerstone of modern political cybersecurity, shaping the future of global politics. As technology seamlessly integrates into political landscapes and international relations, the imperative to ensure the security of sensitive data, uphold democratic processes, and counter cyber threats has risen to the forefront of global priorities. This shift is evident through various real-life examples that highlight the critical role of technology in shaping political dynamics and international interactions [6]: **Election Interference**: The interference in the 2016 United States presidential election by foreign actors serves as a poignant example [7]. State-sponsored hacking and disinformation campaigns aimed at swaying public opinion and influencing election outcomes underscore the need for heightened cybersecurity measures to safeguard democratic processes and preserve the integrity of elections. **Nation-State Espionage**: Instances of cyber espionage, such as the hacking of government agencies and diplomatic communications, reveal the extent to which technology can be weaponized to gather sensitive information. The hacking of the U.S. Office of Personnel Management (OPM) in 2014, where millions of federal employees' records were compromised, underscores the vulnerability of political entities to cyber intrusions [8]. **Global Diplomacy**: The use of digital platforms for international diplomacy has grown significantly. Diplomatic negotiations, agreements, and exchanges are increasingly conducted through digital channels. The Wikiteaks publication of classified diplomatic cables in 2010 demonstrated how the exposure of such sensitive information could strain international relations and impact geopolitical strategies [9]. Cross-Border Cybercrime: The WannaCry ransomware attack in 2017, which targeted critical infrastructure [10] and institutions across multiple countries, highlighted the interconnectedness of cyber threats. This event showcased the potential for cyberattacks to transcend national borders and disrupt international relations. **Disinformation and Social Media**: The manipulation of social media platforms to disseminate false narratives and misinformation has become a widespread concern [6]. The spread of misleading information during Brexit and other elections demonstrates the vulnerability of public discourse to technological manipulation [11]. Considering these examples, the integration of technology into political and international contexts underscores the urgency of addressing cybersecurity challenges. Safeguarding data, preserving democratic values, and countering cyber threats are pivotal not only for the stability of individual nations but also for maintaining trust and cooperation in the global arena [1, 3, 11]. Strengthening digital resilience: a multifaceted approach to safeguarding political systems from cybersecurity challenges To fortify digital resilience and ensure the integrity of political systems in the face of mounting cybersecurity challenges, strategic measures encompass a multifaceted approach that combines technological, policy, and collaborative efforts [12]. The following practical examples could potentially illustrate the implementation of various measures. Enhanced Cybersecurity Training and Awareness Programs: Political entities can invest in comprehensive cybersecurity training and awareness programs for their personnel [2]. 
For instance, government officials, diplomats, and staff members can undergo regular training sessions to recognize phishing attempts, secure communication channels, and understand the implications of sharing sensitive information. **Multi-Layered Authentication and Access Controls**: Implementing strong multi-factor authentication (MFA) and access controls can prevent unauthorized access to critical political systems. An example is requiring biometric verification, in addition to passwords, for government officials to access classified information. **Robust Incident Response Plans**: Developing and practicing well-defined incident response plans enables political entities to swiftly and effectively address cyber incidents. These plans outline steps to contain, mitigate, and recover from cyberattacks. The U.K. government's National Cyber Security Centre (NCSC) regularly tests its incident response procedures to ensure readiness [13]. **Public-Private Partnerships**: Collaboration between political entities and private cybersecurity firms can yield valuable insights and resources. For instance, governments may partner with Information Technology companies to share threat intelligence and develop innovative solutions to combat emerging cyber threats. **Securing Critical Infrastructure**: Implementing stringent cybersecurity measures for critical infrastructure [10, 14], such as power grids and transportation systems, is essential. The Estonian government's efforts to protect its critical infrastructure from cyber threats after experiencing a massive cyberattack in 2007 serve as a notable example [15]. **Legislation and Regulations**: Governments can enact and enforce cybersecurity laws and regulations to hold individuals and entities accountable for cybercrimes. The European Union's General Data Protection Regulation (GDPR) and the United States' Cybersecurity Information Sharing Act (CISA) are examples of legislative efforts to enhance cybersecurity [2]. **International Cooperation**: Diplomatic efforts to establish international norms and agreements on cybersecurity can foster cooperation among nations. The Budapest Convention on Cybercrime, ratified by numerous countries, serves as an example of international collaboration to combat cybercrime [16]. **Continuous Monitoring and Threat Intelligence Sharing**: Political entities can establish continuous monitoring of networks and systems to detect and respond to cyber threats in real-time [17]. Intelligence sharing among government agencies, such as the U.S. Department of Homeland Security's Cyber Information Sharing and Collaboration Program (CISCP), facilitates timely threat detection [18]. These strategic measures collectively contribute to bolstering digital resilience and safeguarding the integrity of political systems. By proactively addressing cyber threats and fostering a culture of cybersecurity, political entities can navigate the complex cybersecurity landscape with greater confidence and effectiveness. The philosophical nexus: unveiling the multidimensional landscape of cybersecurity and political stability The study considers a scenario where a nation's political landscape is disrupted by a sophisticated cyberattack aiming at destabilizing its democratic processes. As we traverse the philosophical nexus, we illuminate the profound implications that this breach has on the delicate balance between technological advancements, governance mechanisms, and international cooperation. 
Drawing from social theory [19], we analyze the cascading effects of this cyber event through the lens of Niklas Luhmann's Systems Theory [20]. The attack, a disruption in the intricate dance of communication channels, creates a ripple that extends beyond the digital realm. The breach not only exposes vulnerabilities in the nation's cybersecurity infrastructure but also triggers a crisis of public trust in governance institutions. As we contemplate this scenario, we uncover a nuanced narrative where the digital symphony of technology and governance meets the somber overtones of international relations. The philosophical nexus guides us to consider questions that extend beyond technical defenses - it prompts us to ponder the erosion of social cohesion, the fragility of democratic processes, and the vulnerability of the global diplomatic ecosystem. Our exploration further reveals the resonance of J. Habermas's Public Sphere Theory [21]. The breach reverberates through the public discourse, inciting debates about the authenticity of information, the role of media, and the implications for the political narrative. The very foundation of political stability undergoes a philosophical examination, emphasizing the interdependence of technology and governance in shaping the collective consciousness. Through this practical example, we unlock the door to a philosophical inquiry that probes beyond the realm of codes and algorithms [22]. The cyberattack becomes a symposium of ideas, where the multidimensional landscape of cybersecurity and political stability converges [3]. As we navigate this nexus, we are reminded that the harmonious chords of technological progress can swiftly give way to dissonance, underscoring the imperative of holistic strategies that safeguard not only digital infrastructure but also the very fabric of political order. ## 5 Psychological fortification: unveiling strategic practices in political cybersecurity In the realm where technology meets psychology, we illuminate the intricate tapestry of defense mechanisms that fortify political systems against the tumultuous tides of cyber challenges. Consider the concept of Cognitive Resilience [23] as a psychological framework applied to political cybersecurity. Just as individuals develop mental fortitude to withstand adversity, political entities cultivate cognitive resilience to navigate the stormy seas of cyber threats. By integrating psychological concepts into cybersecurity, governments employ novel strategies that address not only technical vulnerabilities but also the cognitive dimensions of their defenses. One practical manifestation is the application of Behavioral Biometrics [24]. Political entities harness behavioral patterns, such as typing speed and mouse movement, to create a unique cognitive fingerprint for authorized users. This innovative approach combines technology with psychology, offering an additional layer of protection against unauthorized access. As a political leader interacts with secure systems, the system's ability to recognize their behavioral cues enhances cybersecurity by validating the user's identity beyond traditional means. Furthermore, our investigation delves into the realm of Social Engineering Inoculation [25]. Drawing from psychology's inoculation theory, political entities design immersive training experiences that expose personnel to simulated social engineering attacks. This practice enhances cognitive resilience by training individuals to recognize and resist manipulation tactics. 
Just as a vaccine primes the immune system, these simulations equip individuals with the psychological tools to resist the contagion of cyber deception. By intertwining psychology concepts with cybersecurity strategies, political entities create a formidable defense. Just as psychological fortitude equips individuals to confront adversity, the application of Cognitive Resilience principles empowers political systems to withstand the waves of cyber challenges. Our exploration unveils a profound fusion of technology and psychology, where the novel ideas of Cognitive Biometrics and Social Engineering Inoculation fortify defenses against an ever-evolving landscape of cyber threats. Through this integration, political entities emerge not only technically resilient but also psychologically equipped to safeguard the integrity of their systems. ## 6 Limitations **Scope of Psychological Concepts**: The paper primarily explores the integration of psychology with cybersecurity, but the depth of psychological theories covered may be limited, leaving room for more comprehensive analysis. **Data Availability**: The availability of empirical data on the effectiveness of psychological strategies in real-world political cybersecurity contexts might be limited, potentially impacting the robustness of certain conclusions. We suggest the utilization of a cyberintelligence dataset established by Naidoo [26] as a foundational resource for conducting subsequent experiments within this specific domain. The characteristics of this dataset are illustrated in Figure 1 and the results of the machine learning classification are depicted in Figure 2. **Interdisciplinary Gaps**: While the paper bridges the gap between cybersecurity and psychology, interdisciplinary gaps may arise due to the complex nature of both fields, leading to potential oversights. ## 7 Recommendations **Interdisciplinary Collaboration**: Encourage collaboration between cybersecurity experts and psychologists to develop innovative strategies that leverage psychological principles for enhancing political cybersecurity. **Longitudinal Studies**: Conduct longitudinal studies to assess the long-term impact of integrating psychological resilience practices on the prevention and management of cyber incidents. **Ethics Framework**: Establish an ethical framework to guide the responsible application of psychological tactics in political cybersecurity, considering potential implications for individuals and society. Figure 1: A cyberintelligence dataset by [26]. Figure 2: The outcomes of the classification analysis on the cyberintelligence dataset, with WHO referring to the World Health Organization, revealed that the classifier successfully identified incidents attributed to organizations, including cases of phishing and counterfeit social media advertisements. By addressing these future research directions, acknowledging the limitations, and implementing the recommended measures, this paper can pave the way for a deeper understanding of the interplay between psychology and cybersecurity, contributing to more robust strategies for safeguarding political systems in the digital age. ## 8 Future Research **Geopolitical Dynamics**: Investigate how geopolitical tensions impact international cooperation and information sharing in political cybersecurity, with a focus on regions of conflict or strained diplomatic relations. **Quantifying Psychological Resilience**: Explore methodologies to quantify the impact of psychological resilience strategies, like Social Engineering Inoculation, on the decision-making and response capabilities of political personnel. **Ethical Considerations**: Examine the ethical implications of employing psychological tactics in political cybersecurity, including potential privacy concerns and the boundaries of manipulating cognitive behaviors. ## 9 Conclusion In conclusion, the intricate interplay between cybersecurity and political landscapes presents a dynamic and evolving nexus that requires comprehensive exploration and innovative solutions. This paper has illuminated the multifaceted nature of this nexus, transcending conventional boundaries to delve into the integration of psychology and technology. By examining how psychological concepts can fortify cybersecurity measures, we have uncovered novel strategies and practices employed by political entities to enhance their digital resilience. Our journey through this complex terrain has unveiled the importance of understanding human behavior, decision-making processes, and cognitive vulnerabilities as integral components of effective cybersecurity. We have showcased practical examples of how psychological principles, such as Social Engineering Inoculation, can empower political personnel to discern and respond to cyber threats adeptly. Moreover, our exploration of the multidimensional landscape of cybersecurity and political stability has highlighted the need for interdisciplinary collaboration, ethical considerations, and continuous research to address challenges and harness opportunities. As digital landscapes continue to evolve, the profound implications of cybersecurity on political dynamics persist.
This paper's interdisciplinary approach, merging psychology and cybersecurity, offers a holistic framework that not only elucidates the inner workings of this nexus but also inspires future research and strategic advancements. By embracing psychological resilience as an integral facet of cybersecurity practices, political entities can chart a course towards a more secure, stable, and resilient digital political landscape. Through ongoing dedication to understanding and navigating this intricate nexus, we can collectively forge a path toward a safer and more resilient future.
2301.12556
A Log-Sensitive Encoding of Turing Machines in the $λ$-Calculus
This note modifies the reference encoding of Turing machines in the $\lambda$-calculus by Dal Lago and Accattoli, which is tuned for time efficiency, as to accommodate logarithmic space. There are two main changes: Turing machines now have *two* tapes, an input tape and a work tape, and the input tape is encoded differently, because the reference encoding comes with a linear space overhead for managing tapes, which is excessive for studying logarithmic space.
Beniamino Accattoli, Ugo Dal Lago, Gabriele Vanoni
2023-01-29T22:07:13Z
http://arxiv.org/abs/2301.12556v1
# A Log-Sensitive Encoding of ###### Abstract This note modifies the reference encoding of Turing machines in the \(\lambda\)-calculus by Dal Lago and Accattoli [6], which is tuned for time efficiency, as to accommodate logarithmic space. There are two main changes: Turing machines now have _two_ tapes, an input tape and a work tape, and the input tape is encoded differently, because the reference encoding comes with a linear space overhead for managing tapes, which is excessive for studying logarithmic space. ## 1 Introduction This note presents a new encoding of Turing machines into the \(\lambda\)-calculus and and proves its correctness. It is based over Dal Lago and Accattoli's reference encoding of single tape Turing machines [6]. The new encoding is tuned for studying logarithmic space complexity even though such a study is not carried out here but in a companion paper. The aim of this note is to provide the formal definition of the encoding and the tedious calculations to prove its correctness. The key points of the new encoding with respect to the reference one are: * _Log-sensitivity_: the reference encoding cannot account for logarithmic space complexity because it is based on Turing machines with a single tape. Logarithmic space requires Turing machines with a read-only input tape, of input string \(i\), and an ordinary work tape, and to only count the space used on the work tape--if one counts the space for \(i\), it is impossible to use only \(\mathcal{O}(\log|i|)\) space. We refer to such a separation of tapes as to _log-sensitivity_. The reference encoding is log-_in_sensitive instead. * _Different encoding of the input tape_: simply adapting the reference encoding to two tapes is not enough for preserving logarithmic space, because the reference encoding is tuned for time: it reads from tapes in \(\mathcal{O}(1)\) time but the reading mechanism comes with a \(\mathcal{O}(|i|)\) space overhead, while the input tape has to be handled with at most \(\mathcal{O}(\log|i|)\) space overhead. We then change the encoding and the reading mechanism of the input tape, trading time for space, as to read in \(\mathcal{O}(|i|\log|i|)\) time with \(\mathcal{O}(\log|i|)\) space overhead. The idea is that the position of the head is indicated by a pointer, given by the position index (of logarithmic size), and reading from the input requires scrolling the input tape sequentially, until the position index is reached. * _Different time complexity_: by trading time for space, the new encoding is slower. If a Turing machine \(\mathcal{M}\) takes time \(T_{\mathcal{M}}(|i|)\) on input string \(i\) then the encoding evaluates in \(\Theta((T_{\mathcal{M}}(|i|)+1)\cdot|i|\cdot\log|i|)\)\(\beta\)-steps rather than in \(\Theta(T_{\mathcal{M}}(|i|))\) as in the reference encoding, because each transition reads from the input. The time complexity is however still polynomial and thus the encoding is reasonable for time (considered as the number of \(\beta\)-steps). Intrinsic and Mathematical Tape RepresentationsA TM tape is a string plus a distinguished position, representing the head. There are two tape representations, dubbed _intrinsic_ and _mathematical_ by van Emde Boas in [20], described next. * The _intrinsic_ one represents both the string \(i\) and the current position of the head as the triple \(i=i_{l}\cdot h\cdot i_{r}\), where \(i_{l}\) and \(i_{r}\) are the prefix and suffix of \(i\) surrounding the character \(h\) read by the head. 
The reference encoding of TMs in the \(\lambda\)-calculus uses the intrinsic representation of tapes and, as already mentioned, reading costs \(\mathcal{O}(1)\) time but the reading mechanism comes with a \(\mathcal{O}(|i|)\) space overhead. * The _mathematical_ representation of tapes, instead, is simply given by the index \(n\in\mathbb{N}\) of the head position, that is, the triple \(i_{l}\cdot h\cdot i_{r}\) is replaced by the pair \((i,|i_{l}|+1)\). The index \(|i_{l}|+1\) has the role of a pointer, of logarithmic size when represented in binary. An encoding in the \(\lambda\)-calculus based on the mathematical representation reads in \(\mathcal{O}(|i|\log|i|)\) time with \(\mathcal{O}(\log|i|)\) space overhead. The time cost is due to the fact that accessing the tape is done sequentially, by moving of one cell at a time on the tape and at each step decrementing of one the index, until the index is 0. Therefore, accessing the right cell requires to decrement the index \(|i_{l}|+1\) times, each time requiring time \(\log|i_{l}|+1\), because the index is represented in binary. The new log-sensitive encoding keeps the intrinsic representation for the work tape, and instead adopts the mathematical representation for the input tape. This is because linear space overhead for the work tape is not a problem for log-sensitivity, while the mathematical representation makes it harder to write on the tape. Binary Arithmetic and Literature.For implementing on the \(\lambda\)-calculus the manipulation of the head index, one needs to develop an encoding of binary strings and three operations: successor, predecessor, and lookup of the \(n\)-th character of a string starting from the binary representation of \(n\). We do this using the Scott encoding of binary string, but using a reversed representation of binary strings. We developed our encoding of binary arithmetic from scratch, in a seemingly ad-hoc way. We discovered afterwards that our approach is a variation over encodings in the literature. Namely, it is quite similar to Mogensen's encoding [17], itself building over Goldberg's study [14]. The Deterministic \(\lambda\)-Calculus and the Scott Encoding of Strings As it is the case for the reference encoding, also the new encodings has as image a restricted form of \(\lambda\)-calculus, what we refer to as the deterministic \(\lambda\)-calculus, where, in particular, call-by-name and call-by-value evaluation coincide. Deterministic \(\lambda\)-Calculus.The language and the evaluation contexts of the _deterministic \(\lambda\)-calculus_\(\Lambda_{\mathtt{det}}\) are given by: \[\begin{array}{ccccc}\text{Terms}&t,u,r,w&::=&v\mid tv\\ \text{Values}&v,w,v^{\prime}&::=&\lambda x.t\mid x\\ \text{Evaluation Contexts}&E&::=&\langle\cdot\rangle\mid Ev\end{array}\] Note that * _Arguments are values_: the right subterm of an application has to be a value, in contrast to what happens in the ordinary \(\lambda\)-calculus. * _Weak evaluation_: evaluation contexts are _weak_, i.e. they do not enter inside abstractions. Evaluation is then defined by: \[\begin{array}{ccccc}\text{Rule at top level}&\text{Contextual closure}\\ (\lambda x.t)u\mapsto_{\beta}t\{x{\leftarrow}u\}&E\langle t\rangle\to_{det}E \langle u\rangle&\text{if }t\mapsto_{\beta}u\end{array}\] _Convention_: to improve readability we omit some parenthesis, giving precedence to application with respect to abstraction. Therefore \(\lambda x.tu\) stands for \(\lambda x.(tu)\) and not for \((\lambda x.t)u\), that instead requires parenthesis. 
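As an illustration of the weak call-by-value discipline of \(\Lambda_{\mathtt{det}}\), here is a small closure-based interpreter in Python. It is only a sketch: closures and environments replace the literal capture-avoiding substitution used in the calculus, and the tagged-tuple syntax for terms is our own choice.

```python
# Terms of the deterministic lambda-calculus, as tagged tuples:
#   ('var', x) | ('lam', x, body) | ('app', t, v)   with v a value (var or lam).
# Run-time values are closures ('clo', x, body, env); weak evaluation never
# enters the body of an abstraction until it is applied.

def eval_det(t, env=None):
    env = {} if env is None else env
    tag = t[0]
    if tag == 'var':
        return env[t[1]]
    if tag == 'lam':
        return ('clo', t[1], t[2], env)
    if tag == 'app':
        fun = eval_det(t[1], env)        # reduce the left part of the application
        arg = eval_det(t[2], env)        # the argument is already a value syntactically
        _, x, body, clo_env = fun
        return eval_det(body, {**clo_env, x: arg})
    raise ValueError(f'unknown term tag: {tag}')

# Example: ((lambda x. lambda y. x) (lambda z. z)) (lambda w. w) evaluates to
# the closure of lambda z. z, mirroring two ->_det steps.
identity = ('lam', 'z', ('var', 'z'))
const = ('lam', 'x', ('lam', 'y', ('var', 'x')))
term = ('app', ('app', const, identity), ('lam', 'w', ('var', 'w')))
print(eval_det(term))    # ('clo', 'z', ('var', 'z'), {})
```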
The name of this calculus is motivated by the following immediate lemma. **Lemma 2.1** (Determinism).: _Let \(t\in\Lambda_{\mathtt{det}}\). If \(t\to_{det}u\) and \(t\to_{det}r\) then \(u=r\)._ Proof.: By induction on \(t\). If \(t\) is a variable or an abstraction then it cannot reduce. If \(t=wv\) then there are two cases: * \(w\) _is an abstraction_\(\lambda x.s\). Then \(t=(\lambda x.s)v\to_{det}s\{x{\leftarrow}v\}\) is the unique redex of \(t\), that is, \(u=r=s\{x{\leftarrow}v\}\). * \(w\) _is not an abstraction_. Then the two steps from \(t\) come from two steps \(w\to_{det}u^{\prime}\) and \(w\to_{det}r^{\prime}\) with \(u=u^{\prime}v\) and \(r=r^{\prime}v\), because \(\langle\cdot\rangle v\) is the only possible evaluation context. By _i.h._, \(u^{\prime}=r^{\prime}\), that is, \(u=r\). Fixpoint.The encoding of Turing machines requires a general form of recursion, that is usually implemented via a fixpoint combinator. We use Turing's fixpoint combinator, in its call-by-value variant, that fits into \(\Lambda_{\mathtt{det}}\) and that returns a fixpoint up to \(\eta\)-equivalence. Let \(\mathtt{fix}\) be the term \(\theta\theta\), where \[\theta:=\lambda x.\lambda y.y(\lambda z.xxyz).\] Now, given a term \(u\) let us show that \(\operatorname{fix}u\) is a fixpoint of \(u\) up to \(\eta\)-equivalence. \[\begin{array}{rcl}\operatorname{fix}u&=&(\lambda x.\lambda y.y(\lambda z. xxyz))\theta u\\ &\rightarrow_{det}&(\lambda y.y(\lambda z.\theta\theta yz))u\\ &\rightarrow_{det}&u(\lambda z.\theta\theta uz)\\ &=&u(\lambda z.\operatorname{fix}uz)\\ &=_{\eta}&u(\operatorname{fix}u)\end{array}\] It is well-known that \(\eta\)-equivalent terms are indistinguishable in the \(\lambda\)-calculus (this is Bohm's theorem). Therefore, we will simply use the fact that \(\operatorname{fix}u\rightarrow^{2}_{det}u(\lambda z.\operatorname{fix}uz)\) without dealing with \(\eta\)-equivalence. This fact will not induce any complication. Encoding alphabets.Let \(\Sigma=\{a_{1},\ldots,a_{n}\}\) be a finite alphabet. Elements of \(\Sigma\) are encoded as follows: \[\lceil a_{i}\rceil^{\Sigma}:=\lambda x_{1}.\ldots.\lambda x_{n}.x_{i}\.\] When the alphabet will be clear from the context we will simply write \(\lceil a\rceil_{i}\). Note that 1. the representation fixes a total order on \(\Sigma\) such that \(a_{i}<a_{j}\) iff \(i<j\); 2. the representation of an element \(\lceil a_{i}\rceil^{\Sigma}\) requires space linear (and not logarithmic) in \(|\Sigma|\). But, since \(\Sigma\) is fixed, it actually requires constant space. Encoding strings.A string in \(s\in\Sigma^{*}\) is represented by a term \(\overline{s}^{\Sigma^{*}}\). Our encoding exploits the fact that a string is a concatenation of characters _followed by the empty string_\(\varepsilon\) (which is generally omitted). For that, the encoding uses \(|\Sigma|+1\) abstractions, the extra one (\(x_{\varepsilon}\) in the definition below) being used to represent \(\varepsilon\). The encoding is defined by induction on the structure of \(s\) as follows: \[\begin{array}{rcl}\overline{\varepsilon}^{\Sigma^{*}}&:=\lambda x_{1}.\ldots.\lambda x_{n}.\lambda x_{\varepsilon}.x_{\varepsilon}\,\\ \overline{a_{i}r}^{\Sigma^{*}}&:=\lambda x_{1}.\ldots.\lambda x_{n}.\lambda x_{ \varepsilon}.x_{i}\overline{r}^{\Sigma^{*}}.\end{array}\] Note that the representation depends on the cardinality of \(\Sigma\). As before, however, the alphabet is a fixed parameter, and so such a dependency is irrelevant. 
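As a meta-level illustration (not part of the encoding itself), the Scott encoding of strings over a two-letter alphabet can be mimicked with Python closures; the abstractions are uncurried for readability, and the helper names are ours.

```python
# Scott-encoded strings over the alphabet {a, b} with a < b, using Python
# closures in place of lambda-terms (uncurried for readability).

def empty(x_a, x_b, x_eps):          # encoding of the empty string
    return x_eps

def cons_a(rest):                    # encoding of a . rest
    return lambda x_a, x_b, x_eps: x_a(rest)

def cons_b(rest):                    # encoding of b . rest
    return lambda x_a, x_b, x_eps: x_b(rest)

def encode(s):
    if not s:
        return empty
    rest = encode(s[1:])
    return cons_a(rest) if s[0] == 'a' else cons_b(rest)

def decode(t):
    # Case analysis is plain application: one continuation per character,
    # plus a result for the empty string.
    return t(lambda r: 'a' + decode(r), lambda r: 'b' + decode(r), '')

def append_a(k, t):
    # Meta-level reading of append^a from Lemma 2.2: hand k the encoding of a.s.
    return k(cons_a(t))

assert decode(encode('aba')) == 'aba'
assert append_a(decode, encode('ba')) == 'aba'
```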
As an example, the encoding of the string \(aba\) with respect to the alphabet \(\{a,b\}\) ordered as \(a<b\) is \[\overline{aba}^{\{a,b\}}=\lambda x_{a}.\lambda x_{b}.\lambda x_{\varepsilon}. x_{a}(\lambda x_{a}.\lambda x_{b}.\lambda x_{\varepsilon}.x_{b}(\lambda x_{a}. \lambda x_{b}.\lambda x_{\varepsilon}.x_{a}(\lambda x_{a}.\lambda x_{b}. \lambda x_{\varepsilon}.x_{\varepsilon})))\] **Lemma 2.2** (Appending a character in constant time).: _Let \(\Sigma\) be an alphabet and \(a\in\Sigma\) one of its characters. There is a term \(\operatorname{\mathsf{append}}^{a}_{\Sigma}\) such that for every continuation term \(k\) and every string \(s\in\Sigma^{*}\),_ \[\operatorname{\mathsf{append}}^{a}_{\Sigma}k\overline{s}\rightarrow^{\mathcal{ O}(1)}_{det}k\overline{(as)}.\] Proof.: Define the term \(\operatorname{\mathsf{append}}^{a}_{\Sigma}:=\lambda k^{\prime}.\lambda s^{ \prime}.k^{\prime}(\lambda x_{1}.\ldots.\lambda x_{|\Sigma|}.\lambda x_{ \varepsilon}.x_{i_{a}}s^{\prime})\) where \(i_{a}\) is the index of \(a\) in the ordering of \(\Sigma\) fixed by its encoding, that appends the character \(a\) to the string \(s^{\prime}\) relatively to the alphabet \(\Sigma\). We have: \[\begin{array}{rcl}\mbox{append}^{a}_{\Sigma}k\overline{s}&=&(\lambda k^{\prime}. \lambda s^{\prime}.k^{\prime}(\lambda x_{1}.\ldots.\lambda x_{|\Sigma|}.\lambda x _{\varepsilon}.x_{i_{a}}s^{\prime}))k\overline{s}\\ &\rightarrow^{2}_{det}&k(\lambda x_{1}.\ldots.\lambda x_{|\Sigma|}.\lambda x _{\varepsilon}.x_{i_{a}}\overline{s})\\ &=&k(\overline{as}).\end{array}\] ### Binary Arithmetic In order to navigate the input word, we consider a counter (in binary). Moving the head left (respectively right) amounts to decrement (respectively increment) the counter by one. The starting idea is to see a number as its binary string representation and to use the Scott encoding of strings. Since it is tricky to define the successor and predecessor on such an encoding, we actually define an ad-hoc encoding. The first unusual aspect of our encoding is that the binary string is represented in reverse order, so that the representation of 2 is 01 and not 10. This is done to ease the definition of the successor and predecessor functions as \(\lambda\)-terms, which have to process strings from left to right, and that with the standard representation would have to go to the end of the string and then potentially back from right to left. With a reversed representation, these functions need to process the string only once from left to right. The second unusual aspect is that, in order to avoid problems with strings made out of all 0s and strings having many 0s on the right (which are not meaningful), we collapse all suffixes made out of all 0 on to the empty string. A consequence is that the number 0 is then represented with the empty string. Non-rightmost 0 bits are instead represented with the usual Scott encoding. If \(n\in\mathbb{N}\) we write \(\lfloor n\rfloor\) for the binary string representing \(n\). Then we have: \[\begin{array}{rcl}\lfloor 0\rfloor&:=&\varepsilon\\ \lfloor 1\rfloor&:=&1\\ \lfloor 2\rfloor&:=&01\\ \lfloor 3\rfloor&:=&11\\ \lfloor 4\rfloor&:=&001\end{array}\] And so on. Binary strings are then encoded as \(\lambda\)-terms using the Scott encoding, as follows: \[\begin{array}{rcl}\overline{\varepsilon}&:=&\lambda x_{0}.\lambda x_{1}. 
\lambda x_{\varepsilon}.x_{\varepsilon}\\ \overline{0\cdot s}&:=&\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x _{0}\overline{s}\\ \overline{1\cdot s}&:=&\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x _{1}\overline{s}\end{array}\] Successor Function.The successor function succ on the reversed binary representation can be defined as follows (in Haskell-like syntax): \[\begin{array}{rcl}\mbox{succ}&\varepsilon&=&1\\ \mbox{succ}&0\cdot s&=&1\cdot s\\ \mbox{succ}&1\cdot s&=&0\cdot(\mbox{succ}\,s)\end{array}\] For which we have succ\((\lfloor n\rfloor)=\lfloor n+1\rfloor\). **Lemma 2.3**.: _There is a \(\lambda\)-term \(\mathtt{succ}\) such that for every continuation term \(k\) and every natural number \(n\in\mathbb{N}\),_ \[\mathtt{succ}\,k\overline{\left\lfloor n\right\rfloor}\rightarrow^{\mathcal{O}( \log n)}_{det}k\,\overline{\textsc{succ}\,\left\lfloor n\right\rfloor}.\] Proof.: Define \(\mathtt{succ}:=\Theta\mathtt{succ}\mathtt{aux}\) and \(\mathtt{succ}\mathtt{aux}:=\lambda f.\lambda k^{\prime}.\lambda n^{\prime}.n ^{\prime}N_{0}N_{1}N_{\varepsilon}fk^{\prime}\) where: * \(N_{0}:=\lambda f^{\prime}.\lambda s^{\prime}.\lambda k^{\prime}.\mathtt{append }^{1}k^{\prime}s^{\prime}\) * \(N_{1}:=\lambda f^{\prime}.\lambda s^{\prime}.\lambda k^{\prime}.f^{\prime}( \lambda z.\mathtt{append}^{0}k^{\prime}z)s^{\prime}\) * \(N_{\varepsilon}:=\lambda f^{\prime}.\lambda k^{\prime}.k^{\prime}\,\overline{1 \cdot\varepsilon}\) We rather prove \(\mathtt{succ}\,k\overline{\left\lfloor n\right\rfloor}\rightarrow^{\mathcal{O}( \left\lfloor n\right\rfloor)}_{det}k\,\overline{\textsc{succ}\,\left\lfloor n \right\rfloor}.\), where clearly \(\left\lvert\left\lfloor n\right\rfloor\right\rvert=\log n\), because the proof is naturally by induction on the length of \(\left\lfloor n\right\rfloor\) as a string. The first steps of the evaluation of \(\mathtt{succ}\,k\overline{\left\lfloor n\right\rfloor}\) are common to all natural numbers \(n\in\mathbb{N}\): \[\mathtt{succ}\,k\overline{\left\lfloor n\right\rfloor} = \mathtt{fix}\mathtt{succ}\mathtt{aux}\,k\overline{\left\lfloor n \right\rfloor}\] \[\rightarrow^{2}_{\beta} \mathtt{succ}\mathtt{aux}(\lambda z.\mathtt{succ}\,z)k \overline{\left\lfloor n\right\rfloor}\] \[= (\lambda f.\lambda k^{\prime}.\lambda n^{\prime}.n^{\prime}N_{0}N _{1}N_{\varepsilon}fk^{\prime})(\lambda z.\mathtt{succ}\,z)k\overline{\left \lfloor n\right\rfloor}\] \[\rightarrow^{3}_{\beta} \overline{\left\lfloor n\right\rfloor}N_{0}N_{1}N_{\varepsilon}( \lambda z.\mathtt{succ}\,z)k\] Cases of \(n\): * _Zero_, that is, \(n=0\), \(\left\lfloor n\right\rfloor=\varepsilon\), and \(\overline{\left\lfloor n\right\rfloor}=\lambda x_{0}.\lambda x_{1}.\lambda x_{ \varepsilon}.x_{\varepsilon}\): then \[\begin{array}{ll}\overline{\left\lfloor n\right\rfloor}N_{0}N_{1}N_{ \varepsilon}(\lambda z.\mathtt{succ}\,z)k&=&(\lambda x_{0}.\lambda x_{1}. \lambda x_{\varepsilon}.x_{\varepsilon})N_{0}N_{1}N_{\varepsilon}(\lambda z.\mathtt{succ}\,z)k\\ &\rightarrow^{3}_{\beta}&N_{\varepsilon}(\lambda z.\mathtt{succ}\,z)k\\ &=&(\lambda f^{\prime}.\lambda s^{\prime}.\lambda k^{\prime}\,\overline{1 \cdot\varepsilon})k\\ &\rightarrow_{\beta}&k\,\overline{1\cdot\varepsilon}\\ &=&k\,\overline{\textsc{succ}\,\left\lfloor 0\right\rfloor}\end{array}\] * _Not zero_. Then there are two sub-cases, depending on the first character of the string \(\left\lfloor n\right\rfloor\): * _character_, i.e. 
\(\left\lfloor n\right\rfloor=0\cdot s\): then \[\begin{array}{ll}&\overline{0\cdot s}N_{0}N_{1}N_{\varepsilon}(\lambda z.\mathtt{succ}\,z)k\\ =&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{0}\overline{s})N_{0}N_{1}N_{\varepsilon}(\lambda z.\mathtt{succ}\,z)k\\ \rightarrow^{3}_{\beta}&N_{0}\overline{s}(\lambda z.\mathtt{succ}\,z)k\\ =&(\lambda f^{\prime}.\lambda s^{\prime}.\lambda k^{\prime}.\mathtt{append}^{1}k^{\prime}s^{\prime})\overline{s}(\lambda z.\mathtt{succ}\,z)k\\ \rightarrow^{3}_{\beta}&\mathtt{append}^{1}k\overline{s}\\ \rightarrow^{\mathcal{O}(1)}_{\beta}&k\,\overline{1\cdot s}\\ =&k\,\overline{\textsc{succ}\,\left\lfloor n\right\rfloor}\end{array}\] * \(1\) _character_, i.e. \(\left\lfloor n\right\rfloor=1\cdot s\): then \[\begin{array}{ll}&\overline{1\cdot s}N_{0}N_{1}N_{\varepsilon}(\lambda z.\mathtt{succ}\,z)k\\ =&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{1}\overline{s})N_{0}N_{1}N_{\varepsilon}(\lambda z.\mathtt{succ}\,z)k\\ \rightarrow^{3}_{\beta}&N_{1}\overline{s}(\lambda z.\mathtt{succ}\,z)k\\ =&(\lambda f^{\prime}.\lambda s^{\prime}.\lambda k^{\prime}.f^{\prime}(\lambda z.\mathtt{append}^{0}k^{\prime}z)s^{\prime})\overline{s}(\lambda z.\mathtt{succ}\,z)k\\ \rightarrow^{3}_{\beta}&(\lambda z.\mathtt{succ}\,z)(\lambda z.\mathtt{append}^{0}kz)\overline{s}\\ \rightarrow_{\beta}&\mathtt{succ}\,(\lambda z.\mathtt{append}^{0}kz)\overline{s}\\ (\textit{i.h.})&\rightarrow^{\mathcal{O}(|s|)}_{\beta}&(\lambda z.\mathtt{append}^{0}kz)\overline{\textsc{succ}\,s}\\ &\rightarrow_{det}&\mathtt{append}^{0}k\,\overline{\textsc{succ}\,s}\\ (L.\ 2.2)&\rightarrow^{\mathcal{O}(1)}_{\beta}&k\,\overline{0\cdot(\textsc{succ}\,s)}\\ =&k\,\overline{\textsc{succ}\,(1\cdot s)}\\ =&k\,\overline{\textsc{succ}\,\left\lfloor n\right\rfloor}\end{array}\] Predecessor Function. We now define and implement a predecessor function. We define it assuming that it shall only be applied to the encoding \(\lfloor n\rfloor\) of a natural number \(n\) different from \(0\), as it shall indeed be the case in the following. Such a predecessor function pred is defined as follows on the reversed binary representation (in Haskell-like syntax): \[\begin{array}{llll}\text{pred}&0\cdot s&=&1\cdot(\text{pred}\,s)\\ \text{pred}&1\cdot\varepsilon&=&\varepsilon\\ \text{pred}&1\cdot b\cdot s&=&0\cdot b\cdot s\end{array}\] It is easily seen that \(\text{pred}(\lfloor n\rfloor)=\lfloor n-1\rfloor\) for all \(0<n\in\mathbb{N}\). Note that \(\text{pred}(\lfloor n\rfloor)\) does not introduce a rightmost \(0\) bit when it changes the rightmost bit of \(\lfloor n\rfloor\), that is, \(\text{pred}\,001=11\) and not \(110\). **Lemma 2.4**.: _There is a \(\lambda\)-term \(\mathtt{pred}\) such that for every continuation term \(k\) and every natural number \(1\leq n\in\mathbb{N}\),_ \[\mathtt{pred}\,k\overline{\left\lfloor n\right\rfloor}\rightarrow_{det}^{\mathcal{O}(\log n)}k\,\overline{\textsc{pred}\,\left\lfloor n\right\rfloor}.\] Proof.: Define \(\mathtt{pred}:=\mathtt{fix}\,\mathtt{predaux}\) and \(\mathtt{predaux}:=\lambda f.\lambda k^{\prime}.\lambda n^{\prime}.n^{\prime}N_{0}N_{1}N_{\varepsilon}fk^{\prime}\) where: * \(N_{0}:=\lambda r^{\prime}.\lambda f.\lambda k^{\prime}.f(\lambda z.\mathtt{append}^{1}k^{\prime}z)r^{\prime}\); * \(N_{1}:=\lambda r^{\prime}.\lambda f.r^{\prime}M_{0}M_{1}M_{\varepsilon}\), where: * \(M_{0}:=\lambda v.\lambda k.\mathtt{append}^{0}(\lambda z.\mathtt{append}^{0}kz)v\); * \(M_{1}:=\lambda v.\lambda k.\mathtt{append}^{1}(\lambda z.\mathtt{append}^{0}kz)v\); * \(M_{\varepsilon}:=\lambda k^{\prime}.k^{\prime}\overline{\varepsilon}\); * \(N_{\varepsilon}\) is any closed term. We rather prove \(\mathtt{pred}\,k\overline{\left\lfloor n\right\rfloor}\rightarrow_{det}^{\mathcal{O}(|\lfloor n\rfloor|)}k\,\overline{\textsc{pred}\,\left\lfloor n\right\rfloor}\), where clearly \(|\lfloor n\rfloor|=\log n\), because the proof is naturally by induction on the length of \(\lfloor n\rfloor\) as a string.
We rather prove \(\mathsf{pred}\,k\overline{\lfloor n\rfloor}\rightarrow_{det}^{\mathcal{O}(|\lfloor n\rfloor|)}k\,\overline{\textsc{pred}\,\lfloor n\rfloor}\), where clearly \(|\lfloor n\rfloor|=\Theta(\log n)\), because the proof is naturally by induction on the length of \(\lfloor n\rfloor\) as a string. The first steps of the evaluation of \(\mathsf{pred}\,k\overline{\lfloor n\rfloor}\) are common to all natural numbers \(1\leq n\in\mathbb{N}\):
\[\begin{array}{rcl}
\mathsf{pred}\,k\overline{\lfloor n\rfloor}&=&\mathsf{fix}\,\mathsf{predaux}\,k\overline{\lfloor n\rfloor}\\
&\rightarrow_{\beta}^{2}&\mathsf{predaux}(\lambda z.\mathsf{pred}\,z)k\overline{\lfloor n\rfloor}\\
&=&(\lambda f.\lambda k'.\lambda n'.n'N_{0}N_{1}N_{\varepsilon}fk')(\lambda z.\mathsf{pred}\,z)k\overline{\lfloor n\rfloor}\\
&\rightarrow_{\beta}^{3}&\overline{\lfloor n\rfloor}N_{0}N_{1}N_{\varepsilon}(\lambda z.\mathsf{pred}\,z)k
\end{array}\]
By hypothesis, \(n\geq 1\). Then \(\lfloor n\rfloor\) is a non-empty string. Cases of its first character:
* _0 character_, i.e. \(\lfloor n\rfloor=0\cdot r\): then
\[\begin{array}{rcl}
&&\overline{\lfloor n\rfloor}N_{0}N_{1}N_{\varepsilon}(\lambda z.\mathsf{pred}\,z)k\\
&=&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{0}\overline{r})N_{0}N_{1}N_{\varepsilon}(\lambda z.\mathsf{pred}\,z)k\\
&\rightarrow_{\beta}^{3}&N_{0}\overline{r}(\lambda z.\mathsf{pred}\,z)k\\
&=&(\lambda r'.\lambda f.\lambda k'.f(\lambda z.\mathsf{append}^{1}k'z)r')\overline{r}(\lambda z.\mathsf{pred}\,z)k\\
&\rightarrow_{\beta}^{3}&(\lambda z.\mathsf{pred}\,z)(\lambda z.\mathsf{append}^{1}kz)\overline{r}\\
&\rightarrow_{\beta}&\mathsf{pred}(\lambda z.\mathsf{append}^{1}kz)\overline{r}\\
(\textit{i.h.})&\rightarrow_{det}^{\mathcal{O}(|r|)}&(\lambda z.\mathsf{append}^{1}kz)\overline{\textsc{pred}\,r}\\
&\rightarrow_{\beta}&\mathsf{append}^{1}k\,\overline{\textsc{pred}\,r}\\
(L.\ 2.2)&\rightarrow_{\beta}^{\mathcal{O}(1)}&k\,\overline{1\cdot(\textsc{pred}\,r)}\\
&=&k\,\overline{\textsc{pred}\,0\cdot r}\\
&=&k\,\overline{\textsc{pred}\,\lfloor n\rfloor}
\end{array}\]
* _1 character_, i.e. \(\lfloor n\rfloor=1\cdot r\): then
\[\begin{array}{rcl}
\overline{\lfloor n\rfloor}N_{0}N_{1}N_{\varepsilon}(\lambda z.\mathsf{pred}\,z)k&=&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{1}\overline{r})N_{0}N_{1}N_{\varepsilon}(\lambda z.\mathsf{pred}\,z)k\\
&\rightarrow_{\beta}^{3}&N_{1}\overline{r}(\lambda z.\mathsf{pred}\,z)k\\
&=&(\lambda r'.\lambda f.r'M_{0}M_{1}M_{\varepsilon})\overline{r}(\lambda z.\mathsf{pred}\,z)k\\
&\rightarrow_{\beta}^{2}&\overline{r}M_{0}M_{1}M_{\varepsilon}k
\end{array}\]
There are three sub-cases, depending on the string \(r\):
* \(r\) is empty, i.e. \(r=\varepsilon\). Then:
\[\begin{array}{rcl}
\overline{\varepsilon}M_{0}M_{1}M_{\varepsilon}k&=&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{\varepsilon})M_{0}M_{1}M_{\varepsilon}k\\
&\rightarrow_{\beta}^{3}&M_{\varepsilon}k\\
&=&(\lambda k'.k'\overline{\varepsilon})k\\
&\rightarrow_{\beta}&k\overline{\varepsilon}\\
&=&k\overline{\lfloor 0\rfloor}\\
&=&k\,\overline{\textsc{pred}\,\lfloor 1\rfloor}
\end{array}\]
* \(r\) starts with \(0\), that is, \(\lfloor n\rfloor=1\cdot r=1\cdot 0\cdot p\).
Then:
\[\begin{array}{rcl}
\overline{0\cdot p}\,M_{0}M_{1}M_{\varepsilon}k&=&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{0}\overline{p})M_{0}M_{1}M_{\varepsilon}k\\
&\rightarrow_{\beta}^{3}&M_{0}\overline{p}k\\
&=&(\lambda v.\lambda k.\mathsf{append}^{0}(\lambda z.\mathsf{append}^{0}kz)v)\overline{p}k\\
&\rightarrow_{\beta}^{2}&\mathsf{append}^{0}(\lambda z.\mathsf{append}^{0}kz)\overline{p}\\
(L.\ 2.2)&\rightarrow_{\beta}^{\mathcal{O}(1)}&(\lambda z.\mathsf{append}^{0}kz)\overline{0\cdot p}\\
&\rightarrow_{\beta}&\mathsf{append}^{0}k\,\overline{0\cdot p}\\
(L.\ 2.2)&\rightarrow_{\beta}^{\mathcal{O}(1)}&k\,\overline{0\cdot 0\cdot p}\\
&=&k\,\overline{0\cdot r}\\
&=&k\,\overline{\textsc{pred}\,1\cdot r}\\
&=&k\,\overline{\textsc{pred}\,\lfloor n\rfloor}
\end{array}\]
* \(r\) starts with \(1\), that is, \(\lfloor n\rfloor=1\cdot r=1\cdot 1\cdot p\). Then:
\[\begin{array}{rcl}
\overline{1\cdot p}\,M_{0}M_{1}M_{\varepsilon}k&=&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{1}\overline{p})M_{0}M_{1}M_{\varepsilon}k\\
&\rightarrow_{\beta}^{3}&M_{1}\overline{p}k\\
&=&(\lambda v.\lambda k.\mathsf{append}^{1}(\lambda z.\mathsf{append}^{0}kz)v)\overline{p}k\\
&\rightarrow_{\beta}^{2}&\mathsf{append}^{1}(\lambda z.\mathsf{append}^{0}kz)\overline{p}\\
(L.\ 2.2)&\rightarrow_{\beta}^{\mathcal{O}(1)}&(\lambda z.\mathsf{append}^{0}kz)\overline{1\cdot p}\\
&\rightarrow_{\beta}&\mathsf{append}^{0}k\,\overline{1\cdot p}\\
(L.\ 2.2)&\rightarrow_{\beta}^{\mathcal{O}(1)}&k\,\overline{0\cdot 1\cdot p}\\
&=&k\,\overline{0\cdot r}\\
&=&k\,\overline{\textsc{pred}\,1\cdot r}\\
&=&k\,\overline{\textsc{pred}\,\lfloor n\rfloor}
\end{array}\]

Lookup Function.Given a natural number \(n\), we need to be able to extract the \(n+1\)-th character from a non-empty string \(s\). The partial function lookup can be defined as follows (in Haskell-like syntax):
\[\begin{array}{lllll}
\texttt{lookup}&\lfloor 0\rfloor&(c\cdot s)&=&c\\
\texttt{lookup}&\lfloor n\rfloor&(c\cdot s)&=&\texttt{lookup}\ (\texttt{pred}\,\lfloor n\rfloor)\ s\quad\text{if}\ n>0
\end{array}\]

**Lemma 2.5**.: _There is a \(\lambda\)-term lookup such that for every continuation term \(k\), every natural number \(n\) and every non-empty string \(i\in\mathbb{B}^{+}\),_
\[\texttt{lookup}\,k\overline{\lfloor n\rfloor}\,\overline{i}\ \rightarrow_{det}^{\mathcal{O}(n\log n)}\ k\,\lceil\texttt{lookup}\,\lfloor n\rfloor\,i\rceil.\]
Proof.: We can now code the function \(\texttt{lookup}:=\texttt{fix}\,\texttt{lookupaux}\) where
\[\texttt{lookupaux}:=\lambda f.\lambda k'.\lambda n'.\lambda i'.n'N_{0}N_{1}N_{\varepsilon}fk'i'\]
where:
* \(N_{0}:=\lambda p'.\lambda f.\lambda k'.\lambda i'.i'M_{0}M_{0}M_{\varepsilon}p'fk'\), where
  * \(M_{0}:=\lambda r'.\lambda p'.\lambda f.\lambda k'.\texttt{append}^{0}(\lambda z''.\texttt{pred}(\lambda z'.fk'z')z'')p'r'\);
  * \(M_{\varepsilon}\) is whatever closed term.
* \(N_{1}:=\lambda p'.\lambda f.\lambda k'.\lambda i'.i'M_{1}M_{1}M_{\varepsilon}p'fk'\), where
  * \(M_{1}:=\lambda r'.\lambda p'.\lambda f.\lambda k'.\texttt{append}^{1}(\lambda z''.\texttt{pred}(\lambda z'.fk'z')z'')p'r'\);
  * \(M_{\varepsilon}\) is whatever closed term.
* \(N_{\varepsilon}:=\lambda f.\lambda k'.\lambda i'.i'O_{0}O_{1}O_{\varepsilon}k'\), where
  * \(O_{b}:=\lambda s'.\lambda k'.k'\lceil b\rceil\);
  * \(O_{\varepsilon}\) is whatever closed term.

The first steps of the evaluation of \(\texttt{lookup}\,k\overline{\lfloor n\rfloor}\,\overline{i}\) are common to all strings \(i\in\mathbb{B}^{+}\) and natural numbers \(n\in\mathbb{N}\):
\[\begin{array}{rcl}
\texttt{lookup}\,k\overline{\lfloor n\rfloor}\,\overline{i}&=&\texttt{fix}\,\texttt{lookupaux}\,k\overline{\lfloor n\rfloor}\,\overline{i}\\
&\rightarrow_{\beta}^{2}&\texttt{lookupaux}(\lambda z.\texttt{lookup}\,z)k\overline{\lfloor n\rfloor}\,\overline{i}\\
&=&(\lambda f.\lambda k'.\lambda n'.\lambda i'.n'N_{0}N_{1}N_{\varepsilon}fk'i')(\lambda z.\texttt{lookup}\,z)k\overline{\lfloor n\rfloor}\,\overline{i}\\
&\rightarrow_{\beta}^{4}&\overline{\lfloor n\rfloor}N_{0}N_{1}N_{\varepsilon}(\lambda z.\texttt{lookup}\,z)k\overline{i}
\end{array}\]
Cases of \(n\):
* \(n=0\), and so \(\lfloor n\rfloor=\varepsilon\): then
\[\begin{array}{rcl}
&&\overline{\varepsilon}N_{0}N_{1}N_{\varepsilon}(\lambda z.\texttt{lookup}\,z)k\overline{i}\\
&=&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{\varepsilon})N_{0}N_{1}N_{\varepsilon}(\lambda z.\texttt{lookup}\,z)k\overline{i}\\
&\rightarrow_{det}^{3}&N_{\varepsilon}(\lambda z.\texttt{lookup}\,z)k\overline{i}\\
&=&(\lambda f.\lambda k'.\lambda i'.i'O_{0}O_{1}O_{\varepsilon}k')(\lambda z.\texttt{lookup}\,z)k\overline{i}\\
&\rightarrow_{det}^{3}&\overline{i}O_{0}O_{1}O_{\varepsilon}k
\end{array}\]
Let \(i\) start with \(b\in\mathbb{B}\), that is, \(i=b\cdot s\):
\[\begin{array}{rcl}
\overline{b\cdot s}\,O_{0}O_{1}O_{\varepsilon}k&=&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{b}\overline{s})O_{0}O_{1}O_{\varepsilon}k\\
&\rightarrow_{det}^{3}&O_{b}\overline{s}k\\
&=&(\lambda s'.\lambda k'.k'\lceil b\rceil)\overline{s}k\\
&\rightarrow_{\beta}^{2}&k\lceil b\rceil\\
&=&k\lceil\texttt{lookup}\,\varepsilon\,(b\cdot s)\rceil\\
&=&k\lceil\texttt{lookup}\,\lfloor 0\rfloor\,i\rceil
\end{array}\]
* _Non-empty string starting with \(0\)_, that is, \(n>0\) and \(\lfloor n\rfloor=0\cdot p\): then
\[\begin{array}{rcl}
&&\overline{\lfloor n\rfloor}N_{0}N_{1}N_{\varepsilon}(\lambda z.\texttt{lookup}\,z)k\overline{i}\\
&=&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{0}\overline{p})N_{0}N_{1}N_{\varepsilon}(\lambda z.\texttt{lookup}\,z)k\overline{i}\\
&\rightarrow_{\beta}^{3}&N_{0}\overline{p}(\lambda z.\texttt{lookup}\,z)k\overline{i}\\
&=&(\lambda p'.\lambda f.\lambda k'.\lambda i'.i'M_{0}M_{0}M_{\varepsilon}p'fk')\overline{p}(\lambda z.\texttt{lookup}\,z)k\overline{i}\\
&\rightarrow_{\beta}^{4}&\overline{i}M_{0}M_{0}M_{\varepsilon}\overline{p}(\lambda z.\texttt{lookup}\,z)k
\end{array}\]
Let \(i\) start with \(b\in\mathbb{B}\), that is, \(i=b\cdot r\):
\[\begin{array}{rcl}
\overline{b\cdot r}\,M_{0}M_{0}M_{\varepsilon}\overline{p}(\lambda z.\texttt{lookup}\,z)k&=&(\lambda x_{0}.\lambda x_{1}.\lambda x_{\varepsilon}.x_{b}\overline{r})M_{0}M_{0}M_{\varepsilon}\overline{p}(\lambda z.\texttt{lookup}\,z)k\\
&\rightarrow_{det}^{3}&M_{0}\overline{r}\,\overline{p}(\lambda z.\texttt{lookup}\,z)k\\
&=&(\lambda r'.\lambda p'.\lambda f.\lambda k'.\texttt{append}^{0}(\lambda z''.\texttt{pred}(\lambda z'.fk'z')z'')p'r')\overline{r}\,\overline{p}(\lambda z.\texttt{lookup}\,z)k\\
&\rightarrow_{det}^{4}&\texttt{append}^{0}(\lambda z''.\texttt{pred}(\lambda z'.(\lambda z.\texttt{lookup}\,z)kz')z'')\overline{p}\,\overline{r}\\
(L.\ 2.2)&\rightarrow_{det}^{\mathcal{O}(1)}&(\lambda z''.\texttt{pred}(\lambda z'.(\lambda z.\texttt{lookup}\,z)kz')z'')\overline{0\cdot p}\,\overline{r}\\
&\rightarrow_{det}&\texttt{pred}(\lambda z'.(\lambda z.\texttt{lookup}\,z)kz')\overline{0\cdot p}\,\overline{r}\\
&=&\texttt{pred}(\lambda z'.(\lambda z.\texttt{lookup}\,z)kz')\overline{\lfloor n\rfloor}\,\overline{r}\\
(L.\ 2.4)&\rightarrow_{det}^{\mathcal{O}(\log n)}&(\lambda z'.(\lambda z.\texttt{lookup}\,z)kz')\overline{\textsc{pred}\,\lfloor n\rfloor}\,\overline{r}\\
&\rightarrow_{det}^{2}&\texttt{lookup}\,k\,\overline{\textsc{pred}\,\lfloor n\rfloor}\,\overline{r}\\
&=&\texttt{lookup}\,k\,\overline{\lfloor n-1\rfloor}\,\overline{r}\\
(\textit{i.h.})&\rightarrow_{det}^{\mathcal{O}((n-1)\cdot\log(n-1))}&k\lceil\texttt{lookup}\,\lfloor n-1\rfloor\,r\rceil\\
&=&k\lceil\texttt{lookup}\,\lfloor n\rfloor\,i\rceil
\end{array}\]
The number of \(\beta\) steps then is \(\mathcal{O}(\log n)+\mathcal{O}((n-1)\cdot\log(n-1))+h\) for a certain constant \(h\), which is bounded by \(\mathcal{O}(n\cdot\log n)\), as required.
* _Non-empty string starting with \(1\)_, that is, \(n>0\) and \(\lfloor n\rfloor=1\cdot p\): same as the previous one, simply replacing \(N_{0}\) with \(N_{1}\), and thus \(M_{0}\) with \(M_{1}\). In particular, it takes the same number of steps.

### The New Encoding of Turing Machines

Turing Machines.Let \(\mathbb{B}_{\mathsf{I}}:=\{0,1,\mathsf{L},\mathsf{R}\}\) and \(\mathbb{B}_{\mathsf{W}}:=\{0,1,\Box\}\) where \(\mathsf{L}\) and \(\mathsf{R}\) delimit the input (binary) string, and \(\Box\) is our notation for the blank symbol. A deterministic binary Turing machine \(\mathcal{M}\) _with input_ is a tuple \((Q,q_{in},q_{T},q_{F},\delta)\) consisting of:
* A finite set \(Q=\{q_{1},\ldots,q_{m}\}\) of _states_;
* A distinguished state \(q_{in}\in Q\), called the _initial state_;
* Two distinguished states \(Q_{\mathit{fin}}:=\{q_{T},q_{F}\}\subseteq Q\), called the _final states_;
* A partial _transition function_ \(\delta:\mathbb{B}_{\mathsf{I}}\times\mathbb{B}_{\mathsf{W}}\times Q\rightharpoonup\{-1,+1,0\}\times\mathbb{B}_{\mathsf{W}}\times\{\leftarrow,\rightarrow,\downarrow\}\times Q\) such that \(\delta(b,a,q)\) is defined only if \(q\notin Q_{\mathit{fin}}\).
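Before turning to configurations, the string-level specifications of the previous subsection can also be read as ordinary executable code. The following Haskell sketch is our own illustration and not part of the development above: the names `enc`, `dec`, `suc`, `pre` and `look` are ours, bits are represented as `Int`s, and the element type in `look` is kept generic rather than fixed to one of the alphabets used here. It only restates the specifications that the λ-terms succ, pred and lookup implement, and can be used to check equations such as pred ⌊n⌋ = ⌊n−1⌋ on concrete inputs.

```haskell
-- Reversed binary strings: least significant bit first, no trailing zeros.
-- enc n is the encoding |_ n _| used in the text (enc 0 = []).
enc :: Int -> [Int]
enc 0 = []
enc n = n `mod` 2 : enc (n `div` 2)

dec :: [Int] -> Int
dec = foldr (\b acc -> b + 2 * acc) 0

-- Successor on reversed binary, as specified for the lambda-term succ.
suc :: [Int] -> [Int]
suc []      = [1]
suc (0 : s) = 1 : s
suc (1 : s) = 0 : suc s
suc _       = error "suc: not a binary string"

-- Predecessor, assuming the argument encodes some n >= 1.
pre :: [Int] -> [Int]
pre (0 : s)     = 1 : pre s
pre [1]         = []
pre (1 : b : s) = 0 : b : s
pre _           = error "pre: argument must encode n >= 1"

-- look |_ n _| i: the (n+1)-th character of the non-empty string i.
look :: [Int] -> [a] -> a
look [] (c : _) = c
look n  (_ : s) = look (pre n) s
look _  []      = error "look: index out of range"
```

For instance, `dec (suc (enc 41))` evaluates to `42`, `map (dec . pre . enc) [1..8]` to `[0,1,2,3,4,5,6,7]`, and `pre (enc 4)` to `[1,1]`, matching the remark that pred 001 = 11.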
A configuration for \(\mathcal{M}\) is a tuple \[(i,n,w_{l},a,w_{r},q)\in\mathbb{B}_{\mathsf{I}}^{*}\times\mathbb{N}\times\mathbb{ B}_{\mathsf{W}}^{*}\times\mathbb{B}_{\mathsf{W}}\times\mathbb{B}_{\mathsf{W}}^{*}\times Q\] where: * \(i\) is the immutable input string and is formed as \(i=\mathsf{L}\!\cdot\!s\!\cdot\!\mathsf{R},\ s\in\mathbb{B}^{*}\); * \(n\in\mathbb{N}\) represents the position of the input head. It is meant to be represented in binary (that is, as an element of \(\mathbb{B}^{*}\)), to take space \(\log n\), but for ease of reading we keep referring to it as a number rather than as a string; * \(w_{l}\in\mathbb{B}_{\mathsf{W}}^{*}\) is the work tape on the left of the work head; * \(a\in\mathbb{B}_{\mathsf{W}}\) is the element on the cell of the work tape read by the work head; * \(w_{r}\in\mathbb{B}_{\mathsf{W}}^{*}\) is the work tape on the right of the work head; * \(q\in Q\) is the state of the machine. For readability, we usually write a configuration \((i,n,w_{l},a,w_{r},q)\) as \((i,n\,|\,w_{l},a,w_{r}\,|\,q)\), separating the input components, the working components, and the current state. Given an input string \(i\in\mathbb{B}_{\mathsf{I}}^{*}\) (where \(i=\mathsf{L}\!\cdot\!s\!\cdot\!\mathsf{R}\) and \(s\in\mathbb{B}^{*}\)) we define: * the _initial configuration_\(C_{\mathsf{in}}(i)\) for \(i\) is \(C_{\mathsf{in}}(i)\mathrel{\mathop{:}}=(i,0\,|\,\varepsilon,\square,\varepsilon \,|\,q_{in})\), * the _final configuration_\(C_{\mathsf{fin}}\mathrel{\mathop{:}}=(s,n\,|\,w_{l},a,w_{r}\,|\,q)\), where \(q\in Q_{\mathit{fin}}\). For readability, a transition, say, \(\delta(i_{n},a,q)=(-1,a^{\prime},\leftarrow,q^{\prime})\), is usually written as \((-1\,|\,a^{\prime},\leftarrow\,|\,q^{\prime})\) to stress the three components corresponding to those of configurations (input, work, state). As in Goldreich, we assume that the machine never scans the input beyond the boundaries of the input. This does not affects space complexity. _An example of transition_: if \(\delta(i_{n},a,q)=(-1\,|\,a^{\prime},\leftarrow\,|\,q^{\prime})\), then \(\mathcal{M}\) evolves from \(C=(i,n\,|\,wa^{\prime\prime},a,w_{r}\,|\,q)\), where the \(n\)th character of \(i\) is \(i_{n}\), to \(D=(i,n-1\,|\,w_{l},a^{\prime\prime},a^{\prime}w_{r}\,|\,q^{\prime})\) and if the tape on the left of the work head is empty, i.e. if \(C=(i,n\,|\,\varepsilon,a,w_{r}\,|\,q)\), then the content of the new head cell is a blank symbol, that is, \(D\mathrel{\mathop{:}}=(i,n-1\,|\,\varepsilon,\square,a^{\prime}w_{r}\,|\,q^{ \prime})\). The same happens if the tape on the right of the work head is empty. If \(\mathcal{M}\) has a transition from \(C\) to \(D\) we write \(C\to_{\mathcal{M}}D\). A configuration having as state a final state \(q\in Q_{\mathit{fin}}\) is _final_ and cannot evolve. A Turing machine \((Q,q_{in},q_{T},q_{F},\delta)\) computes the function \(f:\mathbb{B}^{*}\to\mathbb{B}\) in time \(T:\mathbb{N}\to\mathbb{N}\) and space \(S:\mathbb{N}\to\mathbb{N}\) if for every \(i\in\mathbb{B}^{+}\), the initial configuration for \(i\) evolves to a final configuration of state \(q_{f(i)}\) in \(T(|i|)\) steps and using at most \(S(|i|)\) cells on the work tape. 
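To make the above bookkeeping concrete before moving to the encoding, here is a small Haskell sketch of configurations and of a single transition step. It is our own illustration, not part of the paper's encoding, and all names (`Config`, `Delta`, `step`, the constructors) are ours. The left part of the work tape is stored with the character adjacent to the head first, i.e. already in the reversed order \(w_{l}^{\mathrm{R}}\) that the encoding below also uses; moving the work head left or right then shifts one character between the two stacks, padding with the blank \(\Box\) at the borders, exactly as in the example transition above.

```haskell
data WChar = W0 | W1 | Blank          deriving (Eq, Show)  -- work alphabet {0,1,blank}
data IChar = I0 | I1 | IL | IR        deriving (Eq, Show)  -- input alphabet {0,1,L,R}
data Move  = GoLeft | GoRight | Stay  deriving (Eq, Show)  -- work-head moves {<-,->,stay}

-- A configuration (i, n | w_l, a, w_r | q); the state type q is left abstract.
data Config q = Config
  { input :: [IChar]   -- immutable input string i
  , pos   :: Int       -- input-head counter n
  , left  :: [WChar]   -- w_l, stored reversed (head-adjacent character first)
  , cur   :: WChar     -- character a under the work head
  , right :: [WChar]   -- w_r
  , state :: q
  }

-- delta b a q = Just (dn, a', mv, q') with dn in {-1,0,+1}; Nothing on final states.
type Delta q = IChar -> WChar -> q -> Maybe (Int, WChar, Move, q)

-- One transition step; assumes 0 <= pos < length input, since the machine
-- never scans the input beyond its delimiters, as stipulated above.
step :: Delta q -> Config q -> Maybe (Config q)
step delta (Config i n wl a wr q) = do
  (dn, a', mv, q') <- delta (i !! n) a q
  let n' = n + dn
  pure $ case mv of
    Stay    -> Config i n' wl a' wr q'
    GoLeft  -> case wl of                         -- pop from w_l, push a' onto w_r
      []        -> Config i n' []        Blank (a' : wr) q'
      (c : wl') -> Config i n' wl'       c     (a' : wr) q'
    GoRight -> case wr of                         -- push a' onto w_l, pop from w_r
      []        -> Config i n' (a' : wl) Blank []        q'
      (c : wr') -> Config i n' (a' : wl) c     wr'       q'
```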
Encoding configurations.A configuration \((i,n\,|\,s,a,r\,|\,q)\) of a machine \(\mathcal{M}=(Q,q_{in},q_{T},q_{F},\delta)\) is represented by the term \[\overline{(i,n\,|\,w_{l},a,w_{r}\,|\,q)}^{\mathcal{M}}\mathrel{\mathop{:}}= \lambda x.(x\overline{i}^{\mathbb{B}^{+}}\overline{[n]}^{\mathbb{B}}\, \overline{w_{l}^{\mathbb{B}_{\mathsf{W}}^{*}}}\,[\,a\,]^{\mathbb{B}_{\mathsf{W} }}\,\overline{w_{r}^{\mathbb{B}_{\mathsf{W}}^{*}}}\,[\,q\,]^{Q}).\] where \(w_{l}^{\mathbb{R}}\) is the string \(w_{l}\) with the elements in reverse order. We shall often rather write \[\overline{(i,n\,|\,w_{l},a,w_{r}\,|\,q)}\mathrel{\mathop{:}}=\lambda x.(x \overline{i}\,\,\overline{[n]}\,\,\overline{w_{l}^{\mathbb{R}}}\,\,[\,a\,]\, \overline{w_{r}}\,\,[\,q\,]).\] letting the superscripts implicit. To ease the reading, we sometimes use the following notation for tuples \(\langle s,q\,|\,t,u,r\,|\,w\rangle:=\lambda x.(xsqturw)\), so that \(\overline{(i,n\,|\,w_{l},a,w_{r}\,|\,q)}=\langle\overline{i},\overline{\lfloor n \rfloor}\,\big{|}\,\overline{w_{l}^{\text{R}}},\lceil a\rceil,\overline{w_{r} }\,\rceil\,\big{\lceil}\,q\rceil)\). Encoding the transition functionsThe transition function \(\delta(b,a,q)\) is implemented by looking up into a 3-dimensional table \(T\) having for coordinates: * _Input_: the current bit \(b\) on the input tape, which is actually retrieved from the input tape \(i\) and the counter \(n\) of the current input position, * _Work_: the current character \(a\) on the work tape, and * _State_: the current state \(q\), The transition function is encoded as a recursive \(\lambda\)-term trans taking as argument the encodings of \(i\), and \(n\)--to retrieve \(b\)--and \(a\) and \(q\). It works as follows: * It first retrieves \(b\) from \(n\) and \(i\) by applying the lookup function; * It has a subterm \(A_{b}\) for the four values of \(b\). The right sub-term is selected by applying the encoding \(\lceil b\rceil\) of \(b\) to \(A_{0},A_{1},A_{\text{L}}\) and \(A_{\text{R}}\). * Each \(A_{b}\) in turn has a sub-term \(B_{b,a}\) for every character \(a\in\mathbb{B}_{\mathsf{W}}\), corresponding to the working tape coordinates. The right sub-term is selected by applying the encoding \(\lceil a\rceil\) of the current character \(a\) on the work tape to \(B_{b,0},B_{b,1},B_{b,\Box}\). * Each \(B_{b,a}\) in turn has a subterm \(C_{b,a,q}\) for every character \(q\) in \(Q\). The right sub-term is selected by applying the encoding \(\lceil q\rceil\) of the current state \(q\) to \(C_{b,a,q_{1}},\ldots,B_{b,a,q_{|Q|}}\). * The subterm \(C_{b,a,q}\) produces the (encoding of the) next configuration according to the transition function \(\delta\). If \(\delta\) decreases (resp. increases) the counter for the input tape then \(C_{b,a,q}\) applies pred (resp. succ) to the input counter and then applies a term corresponding to the required action on the work tape, namely: * \(S\) (for _stay_) if the head does not move. This case is easy, \(S\) simply produces the next configuration. * \(L\) if it moves left. Let \(w_{l}=wa^{\prime\prime}\), \(a^{\prime}\) the element that the transition has to write and \(q^{\prime}\) the new state. Then \(L\) has a subterm \(L^{a^{\prime},q^{\prime}}_{a^{\prime\prime}}\) for each \(a^{\prime\prime}\in\mathbb{B}_{\mathsf{W}}\) the task of which is to add \(a^{\prime}\) to the right part of the work tape, remove \(a^{\prime\prime}\) from the left part of the work tape (which becomes \(w\)), and make \(a^{\prime\prime}\) the character in the work head position. 
* \(R\) if it moves right. Its structure is similar to the one of \(L\). In order to be as modular as possible we use the definition of \(S\), \(L\), and \(R\) for the cases when the input head moves also for the cases where it does not move, even if this requires a useless (but harmless) additional update of the counter \(n\). Define the term \(\mathtt{trans}^{\mathcal{M}}\), or simply \(\mathtt{trans}\), as follows. \[\mathtt{transaux} := \lambda x.\lambda k^{\prime}.\lambda C^{\prime}.C^{\prime}(\lambda i ^{\prime}.\lambda n^{\prime}.\lambda w^{\prime}_{l}.\lambda a^{\prime}.\lambda w ^{\prime}_{r}.\lambda q^{\prime}.\mathtt{lookup}Ki^{\prime}n^{\prime})\] \[\mathtt{trans} := \mathtt{fix}\mathtt{transaux},\] where: \[\begin{array}{rcl}T&:=&\lambda b^{\prime}.b^{\prime}\,A_{0}A_{1}A_{\mathsf{ L}}A_{\mathsf{R}}a^{\prime}q^{\prime}xk^{\prime}i^{\prime}n^{\prime}w^{ \prime}_{l}w^{\prime}_{r}\\ A_{b}&:=&\lambda a^{\prime}.a^{\prime}B_{b,0}B_{b,1}B_{b,\Box}\\ \\ B_{b,a}&:=&\lambda q^{\prime}.q^{\prime}C_{b,a,q_{1}}\ldots C_{b,a,q_{|Q|}} \\ \\ C_{b,a,q}&:=&\lambda x.\lambda k^{\prime}.\lambda i^{\prime}.\lambda n^{ \prime}.\lambda w^{\prime}_{l}.\lambda w^{\prime}_{r}.\end{array}\] \[\begin{array}{rcl}\left\{\begin{array}{ll}k^{\prime}\langle i^{\prime},n^ {\prime}\,|\,w^{\prime}_{l},\lceil a\rceil,w^{\prime}_{r}\,|\,\lceil q\rceil \rangle&\quad\text{if }q\in Q_{\text{fin}}\\ Sn^{\prime}&\quad\text{if }\delta(b,a,q)=(0\,|\,a^{\prime},\downarrow\,|\,q^{ \prime})\\ Ln^{\prime}&\quad\text{if }\delta(b,a,q)=(0\,|\,a^{\prime},\leftarrow\,|\,q^{ \prime})\\ Rn^{\prime}&\quad\text{if }\delta(b,a,q)=(0\,|\,a^{\prime},\rightarrow\,|\,q^{ \prime})\\ \mathtt{pred}Sn^{\prime}&\quad\text{if }\delta(b,a,q)=(-1\,|\,a^{\prime}, \downarrow\,|\,q^{\prime})\\ \mathtt{pred}Ln^{\prime}&\quad\text{if }\delta(b,a,q)=(-1\,|\,a^{\prime}, \leftarrow\,|\,q^{\prime})\\ \mathtt{pred}Rn^{\prime}&\quad\text{if }\delta(b,a,q)=(-1\,|\,a^{\prime}, \rightarrow\,|\,q^{\prime})\\ \mathtt{succ}Sn^{\prime}&\quad\text{if }\delta(b,a,q)=(+1\,|\,a^{\prime}, \downarrow\,|\,q^{\prime})\\ \mathtt{succ}Ln^{\prime}&\quad\text{if }\delta(b,a,q)=(+1\,|\,a^{\prime}, \leftarrow\,|\,q^{\prime})\\ \mathtt{succ}Rn^{\prime}&\quad\text{if }\delta(b,a,q)=(+1\,|\,a^{\prime}, \rightarrow\,|\,q^{\prime})\\ \\ S&:=&\lambda n^{\prime\prime}.xk^{\prime}\langle i^{\prime},n^{\prime\prime} \,|\,w^{\prime}_{l},[a^{\prime}],w^{\prime}_{r}\,|\,[q^{\prime}]\rangle\\ L&:=&\lambda n^{\prime\prime}.w_{l}^{\prime}L_{0}^{q^{\prime},a^{\prime}}L_{1 }^{q^{\prime},a^{\prime}}L_{\Box}^{q^{\prime},a^{\prime}}L_{\varepsilon}^{q^ {\prime},a^{\prime}}xk^{\prime}i^{\prime}n^{\prime\prime}w^{\prime}_{r}\\ R&:=&\lambda n^{\prime\prime}.w^{\prime}_{r}R_{0}^{q^{\prime},a^{\prime}}R_{ 0}^{q^{\prime},a^{\prime}}R_{0}^{q^{\prime},a^{\prime}}R_{0}^{q^{\prime},a^{ \prime}}R_{0}^{q^{\prime},a^{\prime}}xk^{\prime}i^{\prime}n^{\prime\prime}w^{ \prime}_{l}\\ L_{a^{\prime}}^{q^{\prime},a^{\prime}}&:=&\lambda w^{\prime}_{l}.\lambda x. \lambda k^{\prime}.\lambda i^{\prime}.\lambda n^{\prime}.\mathtt{append}^{a^{ \prime}}(\lambda w^{\prime}_{r}.xk^{\prime}\langle i^{\prime},n^{\prime}\,|\,w^ {\prime}_{l},[a^{\prime\prime}],w^{\prime}_{r}\,|\,[q^{\prime}]\rangle)\\ L_{\varepsilon}^{q^{\prime},a^{\prime}}&:=&\lambda x.\lambda k^{\prime}. \lambda i^{\prime}.\lambda n^{\prime}.\mathtt{append}^{a^{\prime}}((\lambda d. 
\lambda w^{\prime}_{r}.xk^{\prime}\langle i^{\prime},n^{\prime}\,|\,d,[ \Box],w^{\prime}_{r}\,|\,[q^{\prime}]\rangle)\overline{\varepsilon})\\ R_{a^{\prime\prime}}^{q^{\prime},a^{\prime}}&:=&\lambda w^{\prime}_{r}.\lambda x. \lambda k^{\prime}.\lambda i^{\prime}.\lambda n^{\prime}.\mathtt{append}^{a^{ \prime}}(\lambda w^{\prime}_{l}.xk^{\prime}\langle i^{\prime},n^{\prime}\,|\,w^ {\prime}_{l},[a^{\prime\prime}],w^{\prime}_{r}\,|\,[q^{\prime}]\rangle)\\ R_{\varepsilon}^{q^{\prime},a^{\prime}}&:=&\lambda x.\lambda k^{\prime}.\lambda i^{ \prime}.\lambda n^{\prime}.\mathtt{append}^{a^{\prime}}((\lambda d.\lambda w^{ \prime}_{l}.xk^{\prime}\langle i^{\prime},n^{\prime}\,|\,w^{\prime}_{l},[ \Box],d\,[q^{\prime}]\rangle)\overline{\varepsilon})\end{array}\] ## 3 Time Correctness of the Encoding Turning the input string into the initial configuration.The following lemma provides the term \(\mathtt{init}\) that builds the initial configuration. **Lemma 3.1** (Turning the input string into the initial configuration).: _Let \(\mathcal{M}=(Q,q_{in},q_{T},q_{F},\delta)\) be a Turing machine. There is a term \(\mathtt{init}^{\mathcal{M}}\), or simply \(\mathtt{init}\), such that for every continuation term \(k\) and for every input string \(i\in\mathbb{B}_{\mathsf{i}}^{*}\) (where \(i=\mathsf{L}\cdot s\cdot\mathsf{R}\) and \(s\in\mathbb{B}^{*}\)):_ \[\mathtt{init}\,k\,\overline{i}\quad\rightarrow_{det}^{\Theta(1)}\quad k\, \overline{C_{\mathtt{in}}(i)}\] _where \(C_{\mathtt{in}}(i)\) is the initial configuration of \(\mathcal{M}\) for \(i\)._ Proof.: Define \[\mathtt{init}:=(\lambda d.\lambda e.\lambda f.\lambda k^{\prime}.\lambda i^{ \prime}.k^{\prime}\langle i^{\prime},d\,|\,e,\lceil\Box\rceil^{\mathbb{B}_{ \mathsf{W}}},f\,|\,\lceil q_{in}\rceil^{Q}\rangle)\overline{[0]\varepsilon}^{ \mathbb{B}_{\mathsf{W}}^{*}}\overline{e}^{\mathbb{B}_{\mathsf{W}}^{*}}_{\mathsf{W}}\] Please note that the term is not in normal form. This is for technical reasons that will be clear next. Then \[\begin{array}{rcl}\mathtt{init}\,k^{\mathtt{a}^{\mathtt{b}^{*}}_{1}}&=&( \lambda d.\lambda e.\lambda f.\lambda k^{\prime}.\lambda i^{\prime}.k^{\prime} \langle i^{\prime},d\,|\,e,[\square]^{\mathtt{Bw}},f\,|\,[q_{in}]^{Q}))\overline {[0]}\overline{\varepsilon}^{\mathtt{b}^{*}_{\mathtt{w}}}\overline{\varepsilon}^ {\mathtt{b}^{*}_{\mathtt{w}}}k^{\mathtt{a}^{*}_{1}}_{1}\\ &\rightarrow^{5}_{det}&k\,\langle\overline{i}^{\mathtt{b}^{*}_{1}},\overline{[ 0]}\,|\,\overline{\varepsilon}^{\mathtt{b}^{*}_{\mathtt{w}}},[\square]^{ \mathtt{Bw}},\overline{\varepsilon}^{\mathtt{b}^{*}_{\mathtt{w}}}\,|\,[q_{in} ]^{Q}\rangle\\ &=&k\,\overline{(i,0\,|\,e,\square,\varepsilon\,|\,q_{in})}\\ &=&k\,\overline{C_{\mathtt{in}}(i)}\end{array}\] **Extracting the output from the final configuration.** **Lemma 3.2** (Extracting the output from the final configuration).: _Let \(\mathcal{M}=(Q,q_{in},q_{T},q_{F},\delta)\) be a Turing machine. 
There is a term \(\mathtt{final}^{\mathcal{M}}\), or simply \(\mathtt{final}\), such that for every continuation term \(k\)and for every final configuration \(C\) of state \(q\in Q_{\mathit{fin}}\):_ \[\mathtt{final}\,k\,\overline{C}\rightarrow^{\Theta(|Q|)}_{det}\begin{cases}k( \lambda x.\lambda y.x)&\text{ if }q=q_{T}\\ k(\lambda x.\lambda y.y)&\text{ if }q=q_{F}\end{cases}\] Proof.: Define \[\mathtt{final}:=\lambda k^{\prime}.\lambda C^{\prime}.C^{\prime}(\lambda i^{ \prime}.\lambda n^{\prime}.\lambda w^{\prime}_{l}.\lambda a^{\prime}.\lambda w ^{\prime}_{r}.\lambda q^{\prime}.q^{\prime}N_{1}\ldots N_{|Q|}k^{\prime})\] where: \[N_{i}:=\begin{cases}\lambda k^{\prime}.k^{\prime}(\lambda x.\lambda y.x)&\text { if }q_{i}=q_{T}\\ \lambda k^{\prime}.k^{\prime}(\lambda x.\lambda y.y)&\text{ if }q_{i}=q_{F}\\ \text{ whatever closed term (say, the identity)}&\text{ otherwise}\end{cases}\] Then: \[\begin{array}{rcl}&\mathtt{final}\,k\,\overline{C}\\ =&(\lambda k^{\prime}.\lambda C^{\prime}.C^{\prime}(\lambda i^{\prime}.\lambda n ^{\prime}.\lambda w^{\prime}_{l}.\lambda a^{\prime}.\lambda w^{\prime}_{r}. \lambda q^{\prime}.q^{\prime}N_{1}\ldots N_{|Q|}k^{\prime}))k\overline{C}\\ \rightarrow^{2}_{det}&\overline{C}(\lambda i^{\prime}.\lambda n^{\prime}. \lambda w^{\prime}_{l}.\lambda a^{\prime}.\lambda w^{\prime}_{r}.\lambda q^{ \prime}.q^{\prime}N_{1}\ldots N_{|Q|}k)\\ =&\overline{(i,n\,|\,w_{l},a,w_{r}\,|\,q)}(\lambda i^{\prime}.\lambda n^{ \prime}.\lambda w^{\prime}_{l}.\lambda a^{\prime}.\lambda w^{\prime}_{r}. \lambda q^{\prime}.q^{\prime}N_{1}\ldots N_{|Q|}k)\\ =&(\lambda x.x\overline{i}^{\mathtt{b}^{*}}\,\overline{[n]}\,\overline{w}^{ \mathtt{b}^{*}_{\mathtt{w}}}_{1}\,[\alpha]^{\mathtt{Bw}}\,\,\overline{w}^{ \mathtt{b}^{*}_{\mathtt{w}}}_{r}\,[q]^{Q})(\lambda i^{\prime}.\lambda n^{ \prime}.\lambda w^{\prime}_{l}.\lambda a^{\prime}.\lambda w^{\prime}_{r}. \lambda q^{\prime}.q^{\prime}N_{1}\ldots N_{|Q|}k)\\ \rightarrow^{4}_{det}&(\lambda i^{\prime}.\lambda n^{\prime}.\lambda w^{ \prime}_{l}.\lambda a^{\prime}.\lambda w^{\prime}_{r}.\lambda q^{\prime}.q^{ \prime}N_{1}\ldots N_{|Q|}k)^{\mathtt{a}^{*}_{\mathtt{w}}}\,[n]\,\,w^{ \mathtt{b}^{*}_{\mathtt{w}}}_{l}\,[a]^{\mathtt{Bw}}\,\,\overline{w}^{ \mathtt{b}^{*}_{\mathtt{w}}}_{r}\,[q]^{Q}\\ \rightarrow^{6}_{det}&[q]^{Q}N_{1}\ldots N_{|Q|}k\\ =&(\lambda x_{1}\ldots x_{|Q|}.x_{j})N_{1}\ldots N_{|Q|}k\\ \rightarrow^{|Q|}_{det}&N_{j}k\end{array}\] If \(q=q_{T}\), then: \[\begin{array}{rcl}N_{j}k&=&(\lambda k^{\prime}.k^{\prime}(\lambda x.\lambda y.x))k\\ &\rightarrow_{det}&k(\lambda x.\lambda y.x)\end{array}\] If \(q=q_{F}\), then: \[\begin{array}{rcl}N_{j}k&=&(\lambda k^{\prime}.k^{\prime}(\lambda x.\lambda y.y))k\\ &\rightarrow_{det}&k(\lambda x.\lambda y.y)\end{array}\] Simulation of a machine transition.Now we show that the given encoding of the transition function \(\delta\) of a Turing machine as a \(\lambda\)-term simulates every single transition in \(\mathcal{O}(|i|\log|i|)\) time, where \(i\) is the input string. This is the heart of the encoding, and the most involved proof. **Lemma 3.3** (Simulation of a machine transition).: _Let \(\mathcal{M}=(Q,q_{in},q_{T},q_{F},\delta)\) be a Turing machine. 
The term \(\mathtt{trans}^{\mathcal{M}}\) is such that for every continuation term \(k\) and for every configuration \(C\) of input string \(i\in\mathbb{B}^{+}\):_ * _Final configuration: if_ \(C\) _is a final configuration then_ \(\mathtt{trans}\,k\,\overline{C}\to_{det}^{\mathcal{O}(|i|\log|i|)}k\,\overline{C}\)_;_ * _Non-final configuration: if_ \(C\to_{\mathcal{M}}D\) _then_ \(\mathtt{trans}\,k\,\overline{C}\to_{det}^{\mathcal{O}(|i|\log|i|)}\mathtt{ trans}\,k\,\overline{D}\)_._ Proof.: Let \(C=(i,n\,|\,w_{l},a,w_{r}\,|\,q)\). We are now going to show the details of how the \(\lambda\)-calculus simulates the transition function. At the level of the number of steps, the main cost is payed at the beginning, by the lookup function that looks up the \(n\)-th character of the input string \(i\). The cost of one such call is \(\mathcal{O}(n\log n)\), but since \(n\) can vary and \(n\leq|i|\), such a cost is bound by \(\mathcal{O}(|i|\log|i|)\). The cases of transition where the position on the input tape does not change have a constant cost. Those where the input position changes require to change the counter \(n\) via pred or succ, which requires \(\mathcal{O}(\log n)\), itself bound by the cost \(\mathcal{O}(|i|\log|i|)\) of the previous look-up. Now, if \(\textsc{lookup}\,[n]\,i=b\) then: \(\texttt{trans}\,k\,\overline{C}\) \(=\) \(\texttt{fixtransaux}k\overline{C}\) \(\rightarrow^{2}_{det}\) \(\texttt{transaux}(\lambda z.\texttt{fix}\,\texttt{transaux}z)k\overline{C}\) \(=\) \(=\) \((\lambda x.\lambda k^{\prime}.\lambda C^{\prime}.C^{\prime}(\lambda i^{\prime}. \lambda n^{\prime}.\lambda w^{\prime}_{l}.\lambda a^{\prime}.\lambda w^{ \prime}_{r}.\lambda q^{\prime}.\texttt{lookup}\,Ti^{\prime}n^{\prime}))( \lambda z.\texttt{trans}\,z)k\overline{C}\) \(\rightarrow^{3}_{det}\) \(\overline{C}(\lambda i^{\prime}.\lambda n^{\prime}.\lambda w^{\prime}_{l}. \lambda a^{\prime}.\lambda w^{\prime}_{r}.\lambda q^{\prime}.\texttt{lookup}\,T\{x \leftarrow\lambda z.\texttt{trans}\,z\}i^{\prime}n^{\prime})\) \(=\) \((i,n\,|\,w_{l},a,w_{r}\,|\,q)(\lambda i^{\prime}.\lambda n^{\prime}.\lambda w^ {\prime}_{l}.\lambda a^{\prime}.\lambda w^{\prime}_{r}.\lambda q^{\prime}. \texttt{lookup}\,T\{x\leftarrow\lambda z.\texttt{trans}\,z\}i^{\prime}n^{ \prime})\) \(=\) \((\lambda x.\vec{x}\overline{\{n\,|\,w^{\rm R}_{l}\,}}\,\lceil a\rceil\,\, \overline{w_{r}\,}\,\lceil q\rceil)(\lambda i^{\prime}.\lambda n^{\prime}. \lambda w^{\prime}_{l}.\lambda a^{\prime}.\lambda w^{\prime}_{r}.\lambda q^{ \prime}.\texttt{lookup}\,T\{x\leftarrow\lambda z.\texttt{trans}\,z\}i^{ \prime}n^{\prime})\) \(=\) \((\lambda i^{\prime}.\lambda n^{\prime}.\lambda w^{\prime}_{l}.\lambda a^{ \prime}.\lambda w^{\prime}_{r}.\lambda q^{\prime}.\texttt{lookup}\,T\{x \leftarrow\lambda z.\texttt{trans}\,z\}i^{\prime}n^{\prime})\overline{\{i\,| \,w^{\rm R}_{l}\,}}\,\lceil a\rceil\,\,\overline{w_{r}\,}\,\lceil q\rceil\) \(\rightarrow^{6}_{det}\) \(\texttt{lookup}\,(\lambda b^{\prime}.b^{\prime}\,A_{0}A_{1}A_{4}A_{8}\lceil a \rceil\,\lceil q\rceil(\lambda z.\texttt{trans}\,z)k^{\prime}\overline{\{n \,|\,w^{\rm R}_{l}\,}}\overline{w_{r}\})\overline{\{i\,|\,n\}}\) \(L.\)\(2.5\rightarrow^{\mathcal{O}(n\log n)}_{det}\) \((\lambda b^{\prime}.b^{\prime}\,A_{0}A_{1}A_{4}A_{8}\lceil a\rceil\,\lceil q \rceil(\lambda z.\texttt{trans}\,z)k^{\prime}\overline{\{n\,|\,w^{\rm R}_{l} \,}}\overline{w_{r}})\lceil b\rceil\) \(\rightarrow_{det}\) \(\lceil b\rceil\,A_{0}A_{1}A_{4}A_{8}\lceil a\rceil\,\lceil q\rceil(\lambda z. 
\texttt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\)
\[\begin{array}{rl}
\rightarrow_{det}^{4}&A_{b}\,\lceil a\rceil\,\lceil q\rceil(\lambda z.\texttt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda a'.a'B_{b,0}B_{b,1}B_{b,\Box})\lceil a\rceil\,\lceil q\rceil(\lambda z.\texttt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
\rightarrow_{det}&\lceil a\rceil B_{b,0}B_{b,1}B_{b,\Box}\lceil q\rceil(\lambda z.\texttt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
\rightarrow_{det}^{3}&B_{b,a}\lceil q\rceil(\lambda z.\texttt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda q'.q'C_{b,a,q_{1}}\ldots C_{b,a,q_{|Q|}})\lceil q\rceil(\lambda z.\texttt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
\rightarrow_{det}&\lceil q\rceil C_{b,a,q_{1}}\ldots C_{b,a,q_{|Q|}}(\lambda z.\texttt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
\rightarrow_{det}^{|Q|}&C_{b,a,q}(\lambda z.\texttt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}
\end{array}\]
Now, consider the following cases, depending on the value of \(\delta(b,a,q)\):
1. _Final state_: if \(\delta(b,a,q)\) is undefined, then \(q\in Q_{\mathit{fin}}\) and replacing \(C_{b,a,q}\) with the corresponding \(\lambda\)-term we obtain:
\[\begin{array}{rl}
&C_{b,a,q}(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda x.\lambda k'.\lambda i'.\lambda n'.\lambda w'_{l}.\lambda w'_{r}.k'\langle i',n'\,|\,w'_{l},\lceil a\rceil,w'_{r}\,|\,\lceil q\rceil\rangle)(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
\rightarrow_{det}^{6}&k\langle\overline{i},\overline{\lfloor n\rfloor}\,|\,\overline{w_{l}^{\mathrm{R}}},\lceil a\rceil,\overline{w_{r}}\,|\,\lceil q\rceil\rangle\\
=&k\,\overline{(i,n\,|\,w_{l},a,w_{r}\,|\,q)}\\
=&k\,\overline{C}
\end{array}\]
2. _The heads do not move_: if \(\delta(b,a,q)=(0\,|\,a',\downarrow\,|\,q')\), then \(D=(i,n\,|\,w_{l},a',w_{r},q')\). The simulation continues as follows:
\[\begin{array}{rl}
&C_{b,a,q}(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda x.\lambda k'.\lambda i'.\lambda n'.\lambda w'_{l}.\lambda w'_{r}.Sn')(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda x.\lambda k'.\lambda i'.\lambda n'.\lambda w'_{l}.\lambda w'_{r}.(\lambda n''.xk'\langle i',n''\,|\,w'_{l},\lceil a'\rceil,w'_{r}\,|\,\lceil q'\rceil\rangle)n')(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
\rightarrow_{det}^{6}&(\lambda n''.(\lambda z.\mathtt{trans}\,z)k\langle\overline{i},n''\,|\,\overline{w_{l}^{\mathrm{R}}},\lceil a'\rceil,\overline{w_{r}}\,|\,\lceil q'\rceil\rangle)\overline{\lfloor n\rfloor}\\
\rightarrow_{det}^{2}&\mathtt{trans}\,k\langle\overline{i},\overline{\lfloor n\rfloor}\,|\,\overline{w_{l}^{\mathrm{R}}},\lceil a'\rceil,\overline{w_{r}}\,|\,\lceil q'\rceil\rangle\\
=&\mathtt{trans}\,k\,\overline{(i,n\,|\,w_{l},a',w_{r}\,|\,q')}\\
=&\mathtt{trans}\,k\,\overline{D}
\end{array}\]
3.
_The input head does not move and the work head moves left_: if \(\delta(b,a,q)=(0\,|\,a',\leftarrow\,|\,q')\) then:
\[\begin{array}{rl}
&C_{b,a,q}(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda x.\lambda k'.\lambda i'.\lambda n'.\lambda w'_{l}.\lambda w'_{r}.Ln')(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda x.\lambda k'.\lambda i'.\lambda n'.\lambda w'_{l}.\lambda w'_{r}.(\lambda n''.w'_{l}L_{0}^{q',a'}L_{1}^{q',a'}L_{\Box}^{q',a'}L_{\varepsilon}^{q',a'}xk'i'n''w'_{r})n')(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
\rightarrow_{det}^{6}&(\lambda n''.\overline{w_{l}^{\mathrm{R}}}L_{0}^{q',a'}L_{1}^{q',a'}L_{\Box}^{q',a'}L_{\varepsilon}^{q',a'}(\lambda z.\mathtt{trans}\,z)k\overline{i}n''\overline{w_{r}})\overline{\lfloor n\rfloor}\\
\rightarrow_{det}&\overline{w_{l}^{\mathrm{R}}}L_{0}^{q',a'}L_{1}^{q',a'}L_{\Box}^{q',a'}L_{\varepsilon}^{q',a'}(\lambda z.\mathtt{trans}\,z)k\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{r}}
\end{array}\]
Two sub-cases, depending on whether \(w_{l}\) is an empty or a compound string.
1. \(w_{l}\) _is the compound string_ \(w\cdot a''\). Then \(w_{l}^{\mathrm{R}}=a''\cdot w^{\mathrm{R}}\) and \(D=(i,n\,|\,w,a'',a'w_{r}\,|\,q')\). The simulation continues as follows:
\[\begin{array}{rl}
&\overline{a''\cdot w^{\mathrm{R}}}L_{0}^{q',a'}L_{1}^{q',a'}L_{\Box}^{q',a'}L_{\varepsilon}^{q',a'}(\lambda z.\mathtt{trans}\,z)k\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{r}}\\
\rightarrow_{det}^{4}&L_{a''}^{q',a'}\overline{w^{\mathrm{R}}}(\lambda z.\mathtt{trans}\,z)k\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{r}}\\
=&(\lambda w'_{l}.\lambda x.\lambda k'.\lambda i'.\lambda n'.\mathtt{append}^{a'}(\lambda w'_{r}.xk'\langle i',n'\,|\,w'_{l},\lceil a''\rceil,w'_{r}\,|\,\lceil q'\rceil\rangle))\overline{w^{\mathrm{R}}}(\lambda z.\mathtt{trans}\,z)k\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{r}}\\
\rightarrow_{det}^{5}&\mathtt{append}^{a'}(\lambda w'_{r}.(\lambda z.\mathtt{trans}\,z)k\langle\overline{i},\overline{\lfloor n\rfloor}\,|\,\overline{w^{\mathrm{R}}},\lceil a''\rceil,w'_{r}\,|\,\lceil q'\rceil\rangle)\overline{w_{r}}\\
(L.\ 2.2)\ \rightarrow_{det}^{\mathcal{O}(1)}&(\lambda w'_{r}.(\lambda z.\mathtt{trans}\,z)k\langle\overline{i},\overline{\lfloor n\rfloor}\,|\,\overline{w^{\mathrm{R}}},\lceil a''\rceil,w'_{r}\,|\,\lceil q'\rceil\rangle)\overline{a'\cdot w_{r}}\\
\rightarrow_{det}^{2}&\mathtt{trans}\,k\langle\overline{i},\overline{\lfloor n\rfloor}\,|\,\overline{w^{\mathrm{R}}},\lceil a''\rceil,\overline{a'\cdot w_{r}}\,|\,\lceil q'\rceil\rangle\\
=&\mathtt{trans}\,k\,\overline{(i,n\,|\,w,a'',a'w_{r}\,|\,q')}\\
=&\mathtt{trans}\,k\,\overline{D}
\end{array}\]
2. \(w_{l}\) _is the empty string_ \(\varepsilon\). Then \(D=(i,n\,|\,\varepsilon,\Box,a'w_{r}\,|\,q')\). The simulation continues as follows:
\[\begin{array}{rl}
&\overline{\varepsilon}L_{0}^{q',a'}L_{1}^{q',a'}L_{\Box}^{q',a'}L_{\varepsilon}^{q',a'}(\lambda z.\mathtt{trans}\,z)k\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{r}}\\
\rightarrow_{det}^{4}&L_{\varepsilon}^{q',a'}(\lambda z.\mathtt{trans}\,z)k\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{r}}\\
=&(\lambda x.\lambda k'.\lambda i'.\lambda n'.\mathtt{append}^{a'}((\lambda d.\lambda w'_{r}.xk'\langle i',n'\,|\,d,\lceil\Box\rceil,w'_{r}\,|\,\lceil q'\rceil\rangle)\overline{\varepsilon}))(\lambda z.\mathtt{trans}\,z)k\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{r}}\\
\rightarrow_{det}^{4}&\mathtt{append}^{a'}((\lambda d.\lambda w'_{r}.(\lambda z.\mathtt{trans}\,z)k\langle\overline{i},\overline{\lfloor n\rfloor}\,|\,d,\lceil\Box\rceil,w'_{r}\,|\,\lceil q'\rceil\rangle)\overline{\varepsilon})\overline{w_{r}}\\
(L.\ 2.2)\ \rightarrow_{det}^{\mathcal{O}(1)}&((\lambda d.\lambda w'_{r}.(\lambda z.\mathtt{trans}\,z)k\langle\overline{i},\overline{\lfloor n\rfloor}\,|\,d,\lceil\Box\rceil,w'_{r}\,|\,\lceil q'\rceil\rangle)\overline{\varepsilon})\overline{a'\cdot w_{r}}\\
\rightarrow_{det}^{3}&\mathtt{trans}\,k\langle\overline{i},\overline{\lfloor n\rfloor}\,|\,\overline{\varepsilon},\lceil\Box\rceil,\overline{a'\cdot w_{r}}\,|\,\lceil q'\rceil\rangle\\
=&\mathtt{trans}\,k\,\overline{(i,n\,|\,\varepsilon,\Box,a'w_{r}\,|\,q')}\\
=&\mathtt{trans}\,k\,\overline{D}
\end{array}\]
4. _The input head does not move and the work head moves right_: if \(\delta(b,a,q)=(0\,|\,a',\rightarrow\,|\,q')\) then:
\[\begin{array}{rl}
&C_{b,a,q}(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda x.\lambda k'.\lambda i'.\lambda n'.\lambda w'_{l}.\lambda w'_{r}.Rn')(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda x.\lambda k'.\lambda i'.\lambda n'.\lambda w'_{l}.\lambda w'_{r}.(\lambda n''.w'_{r}R_{0}^{q',a'}R_{1}^{q',a'}R_{\Box}^{q',a'}R_{\varepsilon}^{q',a'}xk'i'n''w'_{l})n')(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
\rightarrow_{det}^{6}&(\lambda n''.\overline{w_{r}}R_{0}^{q',a'}R_{1}^{q',a'}R_{\Box}^{q',a'}R_{\varepsilon}^{q',a'}(\lambda z.\mathtt{trans}\,z)k\overline{i}n''\overline{w_{l}^{\mathrm{R}}})\overline{\lfloor n\rfloor}\\
\rightarrow_{det}&\overline{w_{r}}R_{0}^{q',a'}R_{1}^{q',a'}R_{\Box}^{q',a'}R_{\varepsilon}^{q',a'}(\lambda z.\mathtt{trans}\,z)k\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}
\end{array}\]
Two sub-cases, depending on whether \(w_{r}\) is an empty or a compound string.
1. \(w_{r}\) _is the compound string_ \(a''\cdot w\). Then \(D=(i,n\,|\,w_{l}a',a'',w\,|\,q')\).
The simulation continues as follows: \(=\) \(\overline{a^{\prime\prime}w}R_{0}^{q^{\prime},a^{\prime}}R_{1}^{q^{\prime},a^{ \prime}}R_{\Box}^{q^{\prime},a^{\prime}}R_{\varepsilon}^{q^{\prime},a^{ \prime}}(\lambda z.\texttt{trans}\,z)k\overline{i}[\overline{n}]\overline{w_{l} ^{k}}\) \(\rightarrow^{4}_{det}\) \(R_{a^{\prime\prime}}^{q^{\prime},a^{\prime}}\overline{w}(\lambda z.\texttt{ trans}\,z)k\overline{i}[\overline{n}]w_{l}^{k}\) \(=\) \((\lambda w_{r}^{\prime}.\lambda x.\lambda k^{\prime}.\lambda i^{\prime}.\lambda n ^{\prime}.\texttt{append}^{a^{\prime}}(\lambda w_{l}^{\prime}.xk^{\prime} \langle i^{\prime},n^{\prime}\,|\,w_{l}^{\prime},\lceil a^{\prime\prime} \rceil,w_{r}^{\prime}\,|\,|q^{\prime}|)))\overline{w}(\lambda z.\texttt{trans }\,z)k\overline{i}[\overline{n}]w_{l}^{k}\) \(\rightarrow^{5}_{det}\) \(\texttt{append}^{a^{\prime}}(\lambda w_{l}^{\prime}.(\lambda z.\texttt{trans }\,z)k\overline{i},\overline{[n]}\,|\,w_{l}^{\prime},\lceil a^{\prime\prime} \rceil,w\,|\,\lceil q^{\prime}\rceil))\overline{w_{l}^{k}}\) \(L.\)\(2.2\rightarrow^{O(1)}_{det}(\lambda w_{l}^{\prime}.(\lambda z.\texttt{trans }\,z)k\overline{i},\overline{[n]}\,|\,w_{l}^{\prime},\lceil a^{\prime\prime} \rceil,w\,|\,\lceil q^{\prime}\rceil))\overline{a^{\prime}w_{l}^{k}}\) \(\rightarrow^{2}_{det}\) \(\texttt{trans}\,k\overline{i},\overline{[n]}\,|\,\overline{a^{\prime\prime}w_{l} ^{k}},\lceil a^{\prime\prime}\rceil,w\,|\,\lceil q^{\prime}\rceil)\) \(=\) \(\texttt{trans}\,k\overline{i},\overline{[n]}\,|\,\overline{(w_{l}a^{\prime})^{k}}, \lceil a^{\prime\prime}\rceil,w\,|\,\lceil q^{\prime}\rceil)\) \(=\) \(\texttt{trans}\,k\overline{i},\overline{[n]}\,|\,\overline{(w_{l}a^{\prime})^{k}}, \lceil a^{\prime\prime}\rceil,w\,|\,\lceil q^{\prime}\rceil)\) \(=\) \(\texttt{trans}\,k\overline{(i,n\,|\,w_{l}a^{\prime},a^{\prime\prime},w\,|\,q^{ \prime})}\) \(=\) \(\texttt{trans}\,k\overline{D}\) 2. \(w_{r}\) _is the empty string_ \(\varepsilon\). Then \(D=(i,n\,|\,w_{l}a^{\prime},\Box,\varepsilon\,|\,q^{\prime})\). 
The simulation continues as follows:_ \(=\) \(\overline{\varepsilon}R_{0}^{q^{\prime},a^{\prime}}R_{1}^{q^{\prime},a^{\prime}}R_{ \square}^{q^{\prime},a^{\prime}}R_{\varepsilon}^{q^{\prime},a^{\prime}}(\lambda z.\mathtt{trans}\,z)k\overline{i[n]w_{l}^{\mathrm{R}}}\) \(\rightarrow_{det}^{\rightarrow}\) \(R_{\varepsilon}^{q^{\prime},a^{\prime}}(\lambda z.\mathtt{trans}\,z)k\overline{i[n]w _{l}^{\mathrm{R}}}\) \(=\) \((\lambda x.\lambda k^{\prime}.\lambda i^{\prime}.\lambda n^{\prime}.\mathtt{append }^{a^{\prime}}((\lambda d.\lambda w_{l^{\prime}}^{\prime}.xk^{\prime}\langle i^ {\prime},n^{\prime}\,|\,w_{l}^{\prime},[\square],d\,|\,\lceil q^{\prime}\rceil) \overline{\varepsilon}))(\lambda z.\mathtt{trans}\,z)k\overline{i[n]w_{l}^{ \mathrm{R}}}\) \(\rightarrow_{det}^{\rightarrow}\) \(\mathtt{append}^{a^{\prime}}((\lambda d.\lambda w_{l^{\prime}}^{\prime}.( \lambda z.\mathtt{trans}\,z)k\overline{i},\overline{n]}\,|\,w_{l}^{\prime},[ \square],d\,|\,\lceil q^{\prime}\rceil)\overline{\varepsilon})\overline{w_{l }^{\mathrm{R}}}\) \(L.\) \(2.2\rightarrow_{det}^{O(1)}\) \(((\lambda d.\lambda w_{l^{\prime}}^{\prime}.(\lambda z.\mathtt{trans}\,z)k \overline{i},\overline{n]}\,|\,w_{l}^{\prime},[\square],d\,|\,\lceil q^{\prime} \rceil)\overline{\varepsilon})\overline{a^{\prime}w_{l}^{\mathrm{R}}}\) \(\rightarrow_{det}^{3}\) \(\mathtt{trans}\,k(\overline{i},\overline{[n]}\,|\,\overline{a^{\prime}w_{l}^{ \mathrm{R}}},[\square],\overline{\varepsilon}\,|\,\lceil q^{\prime}\rceil))\) \(=\) \(\mathtt{trans}\,k\langle\overline{i},\overline{[n]}\,|\,\overline{(w_{l}a^{\prime })^{\mathrm{R}}},[\square],\overline{\varepsilon}\,|\,\lceil q^{\prime}\rceil))\) \(=\) \(\mathtt{trans}\,k\overline{(i,n\,|\,w_{l}a^{\prime},\square,\varepsilon\,|\,q^{ \prime})}\) \(=\) \(\mathtt{trans}\,k\overline{D}\) 5. _The input head moves left and the work head does not move_: if \(\delta(b,a,q)=(-1\,|\,a^{\prime},+\,|\,q^{\prime})\), then \(D=(i,n-1\,|\,w_{l},a^{\prime},w_{r},q^{\prime})\). The simulation continues as follows: \(C_{b,a,q}(\lambda z.\mathtt{trans}\,z)k\overline{i[n]w_{l}^{\mathrm{R}}} \overline{w_{r}}\) \(=\) \((\lambda x.\lambda k^{\prime}.\lambda i^{\prime}.\lambda n^{\prime}.\lambda w_{l ^{\prime}}^{\prime}.\lambda w_{r}^{\prime}.\mathtt{pred}\,S(\lambda z.\mathtt{ trans}\,z)k\overline{i[n]w_{l}^{\mathrm{R}}}\overline{w_{r}}\) \(=\) \((\lambda x.\lambda k^{\prime}.\lambda i^{\prime}.\lambda n^{\prime}.\lambda w_{l ^{\prime}}^{\prime}.\lambda w_{r}^{\prime}.\mathtt{pred}\,(\lambda n^{\prime \prime}.xk^{\prime}\langle i^{\prime},n^{\prime\prime}\,|\,w_{l}^{\prime},[a^{ \prime}],w_{r}^{\prime}\,|\,\lceil q^{\prime}\rceil))n^{\prime})(\lambda z. 
\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\)
\[\begin{array}{rl}
\rightarrow_{det}^{6}&\mathtt{pred}(\lambda n''.(\lambda z.\mathtt{trans}\,z)k\langle\overline{i},n''\,|\,\overline{w_{l}^{\mathrm{R}}},\lceil a'\rceil,\overline{w_{r}}\,|\,\lceil q'\rceil\rangle)\overline{\lfloor n\rfloor}\\
(L.\ 2.4)\ \rightarrow_{det}^{\mathcal{O}(\log n)}&(\lambda n''.(\lambda z.\mathtt{trans}\,z)k\langle\overline{i},n''\,|\,\overline{w_{l}^{\mathrm{R}}},\lceil a'\rceil,\overline{w_{r}}\,|\,\lceil q'\rceil\rangle)\overline{\lfloor n-1\rfloor}\\
\rightarrow_{det}^{2}&\mathtt{trans}\,k\langle\overline{i},\overline{\lfloor n-1\rfloor}\,|\,\overline{w_{l}^{\mathrm{R}}},\lceil a'\rceil,\overline{w_{r}}\,|\,\lceil q'\rceil\rangle\\
=&\mathtt{trans}\,k\,\overline{(i,n-1\,|\,w_{l},a',w_{r}\,|\,q')}\\
=&\mathtt{trans}\,k\,\overline{D}
\end{array}\]
6. _The input head moves left and the work head moves left_: if \(\delta(b,a,q)=(-1\,|\,a',\leftarrow\,|\,q')\) then:
\[\begin{array}{rl}
&C_{b,a,q}(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda x.\lambda k'.\lambda i'.\lambda n'.\lambda w'_{l}.\lambda w'_{r}.\mathtt{pred}\,Ln')(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
=&(\lambda x.\lambda k'.\lambda i'.\lambda n'.\lambda w'_{l}.\lambda w'_{r}.\mathtt{pred}\,(\lambda n''.w'_{l}L_{0}^{q',a'}L_{1}^{q',a'}L_{\Box}^{q',a'}L_{\varepsilon}^{q',a'}xk'i'n''w'_{r})n')(\lambda z.\mathtt{trans}\,z)k\,\overline{i}\,\overline{\lfloor n\rfloor}\,\overline{w_{l}^{\mathrm{R}}}\,\overline{w_{r}}\\
\rightarrow_{det}^{6}&\mathtt{pred}\,(\lambda n''.\overline{w_{l}^{\mathrm{R}}}L_{0}^{q',a'}L_{1}^{q',a'}L_{\Box}^{q',a'}L_{\varepsilon}^{q',a'}(\lambda z.\mathtt{trans}\,z)k\overline{i}n''\overline{w_{r}})\overline{\lfloor n\rfloor}\\
(L.\ 2.4)\ \rightarrow_{det}^{\mathcal{O}(\log n)}&(\lambda n''.\overline{w_{l}^{\mathrm{R}}}L_{0}^{q',a'}L_{1}^{q',a'}L_{\Box}^{q',a'}L_{\varepsilon}^{q',a'}(\lambda z.\mathtt{trans}\,z)k\overline{i}n''\overline{w_{r}})\overline{\lfloor n-1\rfloor}\\
\rightarrow_{det}&\overline{w_{l}^{\mathrm{R}}}L_{0}^{q',a'}L_{1}^{q',a'}L_{\Box}^{q',a'}L_{\varepsilon}^{q',a'}(\lambda z.\mathtt{trans}\,z)k\overline{i}\,\overline{\lfloor n-1\rfloor}\,\overline{w_{r}}
\end{array}\]
And then the case continues with the two sub-cases of case 3 (input head does not move and work head moves left), with the only difference that \(\overline{\lfloor n\rfloor}\) is replaced by \(\overline{\lfloor n-1\rfloor}\).
7. _The input head moves left and the work head moves right_: if \(\delta(b,a,q)=(-1\,|\,a',\rightarrow\,|\,q')\).
The simulation continues as follows: \(C_{b,a,q}(\lambda z.\mathtt{trans}\,z)k\overline{i[n]}w_{l}^{\overline{k}}w_{r}\) \(=\) \((\lambda x.\lambda k^{\prime}.\lambda i^{\prime}.\lambda n^{\prime}.\lambda w_{l }^{\prime}.\lambda w_{r}^{\prime}.\mathtt{pred}\,Rn^{\prime})(\lambda z. \mathtt{trans}\,z)k\overline{i[n]}w_{l}^{\overline{k}}w_{r}\) \(=\) \((\lambda x.\lambda k^{\prime}.\lambda i^{\prime}.\lambda n^{\prime}.\lambda w_{l }^{\prime}.\lambda w_{r}^{\prime}.\mathtt{pred}\,Rn^{\prime})(\lambda z. \mathtt{trans}\,z)k\overline{i[n]}w_{l}^{\overline{k}}w_{r}\) \(=\) \((\lambda x.\lambda k^{\prime}.\lambda i^{\prime}.\lambda n^{\prime}.\lambda w_{l }^{\prime}.\lambda w_{r}^{\prime}.\mathtt{pred}(\lambda n^{\prime\prime}.w_{r} ^{\prime}R_{0}^{q^{\prime},a^{\prime}}R_{1}^{q^{\prime},a^{\prime}}R_{ \square}^{q^{\prime},a^{\prime}}R_{\varepsilon}^{q^{\prime},a^{\prime}}xk^{ \prime}i^{\prime}n^{\prime\prime}w_{l}^{\prime})n^{\prime})(\lambda z. \mathtt{trans}\,z)k\overline{i[n]}w_{l}^{\overline{k}}w_{r}\) \(\rightarrow^{6}_{det}\) \(\mathtt{pred}(\lambda n^{\prime\prime}.\overline{w_{r}}R_{0}^{q^{\prime},a^{ \prime}}R_{1}^{q^{\prime},a^{\prime}}R_{\square}^{q^{\prime},a^{\prime}}R_{ \varepsilon}^{q^{\prime},a^{\prime}}(\lambda z.\mathtt{trans}\,z)k\overline{ i}n^{\prime\prime}\overline{w_{l}^{\overline{k}}})\overline{[n]}\) \(\mathtt{pred}\,(\lambda n^{\prime\prime}.\overline{w_{r}}R_{0}^{q^{\prime},a^{ \prime}}R_{1}^{q^{\prime},a^{\prime}}R_{\square}^{q^{\prime},a^{\prime}}( \lambda z.\mathtt{trans}\,z)k\overline{i}n^{\prime\prime}\overline{w_{l}^{ \overline{k}}})\overline{[n]}\) by \(L.\ 2.4\rightarrow^{\mathcal{O}(\log n)}_{det}\) \((\lambda n^{\prime\prime}.\overline{w_{r}}R_{0}^{q^{\prime},a^{\prime}}R_{1}^{ q^{\prime},a^{\prime}}R_{\square}^{q^{\prime},a^{\prime}}R_{ \varepsilon}^{q^{\prime},a^{\prime}}(\lambda z.\mathtt{trans}\,z)k\overline{i}n^{ \prime\prime}\overline{w_{l}^{\overline{k}}})\overline{[n-1]}\) \(\rightarrow_{det}\) \(\lambda n^{\prime\prime}.\overline{w_{r}}R_{0}^{q^{\prime},a^{\prime}}R_{1}^{ q^{\prime},a^{\prime}}R_{\square}^{q^{\prime},a^{\prime}}R_{\varepsilon}^{q^{ \prime},a^{\prime}}(\lambda z.\mathtt{trans}\,z)k\overline{i[n-1]}w_{l}^{ \overline{k}}\) And then the case continues with the two sub-cases of case 4 (input head does not move and work head moves right), with the only difference that \(\overline{[n]}\) is replaced by \(\overline{[n-1]}\). 8. _The input head moves right and the work head does not move_: exactly as case 5 (input head _left_, work head does not move) just replacing \(\mathtt{pred}\) with \(\mathtt{succ}\) \(\mathtt{succ}\) and using Lemma 2.3 instead of Lemma 2.4. 9. _The input head moves right and the work head moves left_: exactly as case 6 (input head _left_, work head left) just replacing \(\mathtt{pred}\) with \(\mathtt{succ}\) and using Lemma 2.3 instead of Lemma 2.4. 10. _The input head moves right and the work head moves right_: exactly as case 7 (input head _right_, work head right) just replacing \(\mathtt{pred}\) with \(\mathtt{succ}\) and using Lemma 2.3 instead of Lemma 2.4. Straightforward inductions on the length of executions provide the following corollaries. **Corollary 3.4** (Executions).: _Let \(\mathcal{M}\) be a Turing machine. Then there exist a term \(\mathtt{trans}\) encoding \(\mathcal{M}\) as given by Lemma 3.3 such that for every configuration \(C\) of input string \(i\in\mathbb{B}^{+}\)_ 1. 
_Finite computation_: if \(D\) is a final configuration reachable from \(C\) in \(n\) transition steps then there exists a derivation \(\rho\) such that \(\rho:\mathtt{trans}\,k\lceil C\rceil\rightarrow^{\mathcal{O}((n+1)|i|\log|i|)}_{det}k\lceil D\rceil\); 2. _Diverging computation_: if there is no final configuration reachable from \(C\) then \(\mathtt{trans}\,k\lceil C\rceil\) diverges. The Simulation Theorem. We now have all the ingredients for the final theorem of this note. **Theorem 3.5** (Simulation).: _Let \(f:\mathbb{B}^{*}\rightarrow\mathbb{B}\) be a function computed by a Turing machine \(\mathcal{M}\) in time \(T_{\mathcal{M}}\). Then there is an encoding \(\lceil\cdot\rceil\) into \(\Lambda_{\mathtt{det}}\) of \(\mathbb{B}\), strings, and Turing machines over \(\mathbb{B}\) such that for every \(i\in\mathbb{B}^{+}\), there exists \(\rho\) such that \(\rho:\lceil\mathcal{M}\rceil\lceil i\rceil\rightarrow^{n}_{det}\lceil f(i)\rceil\) where \(n=\Theta((T_{\mathcal{M}}(|i|)+1)\cdot|i|\cdot\log|i|)\)._ Proof.: Intuitively, the term is simply \[\overline{\mathcal{M}}:=\mathtt{init}\,(\mathtt{trans}\,\mathtt{final})\] where the identity \(\mathfrak{l}\) plays the role of the initial continuation. Such a term however does not belong to the deterministic \(\lambda\)-calculus, because the right subterms of applications are not always values. The solution is simple: it is enough to \(\eta\)-expand the arguments. Thus, define \[\overline{\mathcal{M}}:=\mathtt{init}(\lambda y.\mathtt{trans}(\lambda x.\mathtt{final}\,x)y)\] Then \[\begin{array}{llll}\overline{\mathcal{M}}\lceil i\rceil&=&\\ \mathtt{init}(\lambda y.\mathtt{trans}(\lambda x.\mathtt{final}\,x)y)\lceil i\rceil&\rightarrow^{\Theta(1)}_{det}&(\text{by }L.\ 3.1)\\ (\lambda y.\mathtt{trans}(\lambda x.\mathtt{final}\,x)y)\lceil C^{\mathcal{M}}_{\mathtt{in}}(i)\rceil&\rightarrow^{\Theta((T_{\mathcal{M}}(|i|)+1)\cdot|i|\cdot\log|i|)}_{det}&(\text{by }Cor.\ 3.4)\\ (\lambda x.\mathtt{final}\,x)\lceil C_{\mathtt{fin}}(f(i))\rceil&\rightarrow^{\Theta(|Q|)}_{det}&(\text{by }L.\ 3.2)\\ \mathfrak{l}\lceil f(i)\rceil&\rightarrow_{det}&\\ \lceil f(i)\rceil&&\end{array}\]
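To see where the \(\log|i|\) factor of Theorem 3.5 comes from in more familiar terms, the following sketch (not from the note, and in plain Python rather than \(\lambda\)-terms) mimics what \(\mathtt{succ}\) and \(\mathtt{pred}\) do in Lemmas 2.3 and 2.4, under the assumption suggested by their \(\mathcal{O}(\log n)\) bounds that the input-head position \(n\) is kept in a binary-style encoding \(\overline{[n]}\): each simulated transition only touches \(\mathcal{O}(\log n)\) digits of the position.

```python
# Illustrative only: positions stored LSB-first as 0/1 lists; `touched` counts
# the digits visited, the quantity that stays O(log n) per call.

def succ(bits):
    bits, touched = bits[:], 0
    for j in range(len(bits)):
        touched += 1
        if bits[j] == 0:          # absorb the carry and stop
            bits[j] = 1
            return bits, touched
        bits[j] = 0               # carry propagates to the next digit
    return bits + [1], touched + 1

def pred(bits):
    bits, touched = bits[:], 0
    for j in range(len(bits)):
        touched += 1
        if bits[j] == 1:          # absorb the borrow and stop
            bits[j] = 0
            return bits, touched
        bits[j] = 1               # borrow propagates to the next digit
    raise ValueError("pred of zero is undefined")

n, worst = [0], 0
for _ in range(1024):             # sweep the head across an input of length 2**10
    n, t = succ(n)
    worst = max(worst, t)
print(worst)                      # 11 digits touched at most, i.e. O(log n)
```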
2304.11859
Scattering from Time-modulated Transmission Line Loads: Theory and Experiments in Acoustics
Scattering wave systems that are periodically modulated in time offer many new degrees of freedom to control waves both in spatial and frequency domains. Such systems, albeit linear, do not conserve frequency and require the adaptation of the usual theories and methods. In this paper, we provide a general extension of transmission line or telegraph equations to periodically time-modulated systems. As a by-product of the theory, we obtain a general approach to compute and measure the complete scattering matrix of such systems. Finally, the proposed theory and methods are applied and validated on a concrete practical example in the realm of airborne acoustics: a time-modulated actively controlled loudspeaker membrane terminating a monomode waveguide. Different modulation functions and parameters are tested. The experimental results are compared to both numerical simulation and an analytical model based on a two time-scale method.
Matthieu Malléjac, Romain Fleury
2023-04-24T07:15:02Z
http://arxiv.org/abs/2304.11859v1
# Scattering from Time-modulated Transmission Line Loads: Theory and Experiments in Acoustics ###### Abstract Scattering wave systems that are periodically modulated in time offer many new degrees of freedom to control waves both in spatial and frequency domains. Such systems, albeit linear, do not conserve frequency and require the adaptation of the usual theories and methods. In this paper, we provide a general extension of transmission line or telegraph equations to periodically time-modulated systems. As a by-product of the theory, we obtain a general approach to compute and measure the complete scattering matrix of such systems. Finally, the proposed theory and methods are applied and validated on a concrete practical example in the realm of airborne acoustics: a time-modulated actively controlled loudspeaker membrane terminating a monomode waveguide. Different modulation functions and parameters are tested. The experimental results are compared to both numerical simulation and an analytical model based on a two time-scale method. Time-modulation, Active control, Scattering characterization, ## I Introduction Time-varying wave media have attracted a great level of interest over the past few decades and have opened new perspectives in the field of metamaterials by adding a new degree of freedom in wave manipulation and engineering possibilities [1]. The first studies on wave propagation in time-varying media date back to the mid-nighties, with the work of Morgenthaler [2], Felsen _et al._[3], or Fante [4] on spatially homogeneous but time-varying dielectric and dispersive media. Scattering from temporal boundary conditions and discontinuities gives rise to intriguing phenomena. The temporal dual of wave scattering on a planar interface between two media cannot, due to causality, involve a reflection to negative times, thus leading to different Fresnel coefficients. Another important contrast between spatial and temporal crystals or metamaterials, _i.e._, slab of (locally resonant) medium varying periodically in time, is the fact that they can generate not only frequency band gaps, but also wavenumber gaps, corresponding to linearly unstable regimes [5]. In particular, systems that are periodically modulated in time, or time-Floquet systems, have the ability to alleviate some of the constraints of simple static media [6], such as the breaking of time-reversal symmetry and of reciprocity [7; 8; 9]. As a result, exciting wave control possibilities open, such as magnet-free circulators and temporal aiming [10; 11], Floquet topological insulators [12; 13; 14; 15], unidirectional and parametric amplification [16; 17; 18; 19; 20; 21; 22; 23; 24; 25], frequency conversion [25; 26; 27], holography [28], near zero index enabled behaviors (negative refraction, high harmonic generation, time-reversal, broadband, and controllable frequency shift) [29; 30; 31; 32; 33], or strong non-linear behavior [34] allowing the observation of Floquet solitons [35] or the development of wave-based neuromorphic computing [36]. One of the main characteristics of Floquet metamaterials is the generation of harmonics at integer multiples of the modulation frequency, thus requiring an adaptation of classical experimental or theoretical methods where natural hypotheses such as linearity, reciprocity and frequency conservation are assumed. 
Many efforts have already been made to theoretically describe time-varying systems [37; 38] and their components [39] as well as to extend theoretical tools such as the generalization of Kramer-Kronigs relations [40], of particle's dipolar polarizability [41], of the transfer matrix methods [42], or of the T matrix [43], among others. Transmission line is another key concept fundamental to wave engineering, as it allows for the characterization of wave systems involving the scattering of guided waves. From this theory, one can describe and define the scattering of N-port systems, and easily connect several scatterers together to compose more complex systems. The growing interest for time-varying media requires to extend this theory to multi-harmonic systems, both from theoretical and experimental points of view. In this paper, we provide a comprehensive framework to explore the guided-wave scattering of time-modulated loads, from theory to experiments. In particular, the measurement of the complete scattering matrix of such systems, including potential Floquet harmonics, remains an experimental challenge, for which we propose a solution. The paper is structured as follows. In a first section, we derive and expose the extended theory of transmission line, scattering and reduced impedance matrices, for time modulated systems. In a second section, we present a method for extracting the complete scattering matrix based on a multiload technique. These two sections are general and can be applied to several domains of wave physics (electrical, acoustic or mechanical transmission lines for example). At last, we take an acoustic example to apply the theory on a concrete case, and demonstrate experimentally our S matrix extraction method: an actively controlled loudspeaker with an assigned input impedance modulated in time. Analytical modeling based on a two time-scale method and numerical simulations are used to confirm the experimental results obtained for different modulation functions and parameters. ## II Time-modulated transmission line theory We consider here a very general one-dimensional transmission line where \(x\) and \(y\) can represent any quantities such that, in the static case, both are related by the impedance \(Z(\omega)\) \[x(z,\omega)=Z(z,\omega)\cdot y(z,\omega). \tag{1}\] For electrical, acoustic or mechanical circuits, \(x\) is respectively the voltage \(U\), the pressure \(P\), or the force \(F\) while \(y\) is the current intensity \(I\), the particle velocity \(V\), or the velocity \(V\). The transmission line is terminated by a time-varying load, \(Z_{t}(t)\) as shown in Fig. 1. ### Constant load We first recall some well-known generalities about transmission lines where the termination load does not vary with time [44]. In this case, the scalar fields at position \(z\) can be written as the superposition of the incident \(x_{i}\) and reflected \(x_{r}\) waves. Noting \(Z_{0}\) the characteristic impedance of the line, one has \[x(z,\omega) =x_{i}(z,\omega)+x_{r}(z,\omega), \tag{2}\] \[y(z,\omega) =\frac{x_{i}(z,\omega)-x_{r}(z,\omega)}{Z_{0}}, \tag{3}\] which can be related by a scalar scattering coefficient at \(z=0\) \[x_{r}(z=0,\omega)=S_{t}\cdot x_{i}(z=0,\omega) \tag{4}\] _i.e._, the complex reflection coefficient \(S_{t}=R\). 
Alternatively, the scalar impedance of the load can be related to the reflection coefficient as \[S_{t}=R=\frac{Z_{t}/Z_{0}-1}{Z_{t}/Z_{0}+1}, \tag{5}\] or, equivalently, \[Z_{t}=Z(z=0)=Z_{0}\frac{1+R}{1-R} \tag{6}\] ### Periodically time-modulated load If the load is now modulated periodically in time with a circular frequency \(\omega_{m}\), Floquet harmonics at \(\omega\pm n\omega_{m}\) (\(n\in\mathbb{N}\)) are generated around the excitation frequency \(\omega\) and will be reflected from the load. Moreover, since in the general case the source is not impedance matched, the multi-harmonic reflection from the load will also be reflected back from the generator side. Both the incident and the reflected waves can therefore be developed in Fourier series. Since the amplitudes of the harmonics must decrease at large \(n\), we can always truncate their summation to a given harmonic order \(N\), approximating the signals as follows, \[x(z,t) =\sum_{n=-N}^{N}\mathbf{x}[n](z)\mathrm{e}^{\mathrm{i}(\omega+n \omega_{m})t}, \tag{7}\] \[y(z,t) =\sum_{n=-N}^{N}\mathbf{y}[n](z)\mathrm{e}^{\mathrm{i}(\omega+n \omega_{m})t}, \tag{8}\] where \(\mathbf{x}[n](z)\) and \(\mathbf{y}[n](z)\) are the \(n^{\text{th}}\) element of the complex amplitude vectors \(\mathbf{x}(z)\) and \(\mathbf{y}(z)\) of length \(2N+1\), defined at each position \(z\). As a generalization of eqs. (2), (3), and (4), these complex amplitudes can be expressed as \[\mathbf{x}(z) =\mathbf{x_{i}}(z)+\mathbf{x_{r}}(z)=[\mathbb{1}+\mathbf{S}(z)] \cdot\mathbf{x_{i}}(z), \tag{9}\] \[\mathbf{y}(z) =\frac{\mathbf{x_{i}}(z)-\mathbf{x_{r}}(z)}{Z_{0}}=[\mathbb{1}- \mathbf{S}(z)]\cdot\frac{\mathbf{x_{i}}(z)}{Z_{0}}. \tag{10}\] using the matrix generalization of the scattering coefficient, eq. (4). The incident and reflected complex amplitude vectors can be expressed as the element wise multiplication of a magnitude vector \(\mathbf{x_{i,r}^{\text{abs}}}\) and a phase vector \(\mathbf{d}(z)\) \[\mathbf{x_{i,r}}(z)=\mathbf{x_{i,r}^{\text{abs}}}\odot\mathbf{d}(z), \tag{11}\] where the magnitude vector is given by \(\mathbf{x_{i,r}^{\text{abs}}}=[|x_{i,r}^{(-N)}|,...,|x_{i,r}^{(0)}|,...,|x_{i,r }^{(+N)}|]^{T}\). The phase vector is defined as \(\mathbf{d}(z)=[\mathrm{e}^{\mathrm{i}k(-N)z},...,\mathrm{e}^{\mathrm{i}k(0)z },...,\mathrm{e}^{\mathrm{i}k(+N)z}]^{T}\), with the Figure 1: Transmission line circuit terminated by a time-modulated load (a), and reduced impedance circuit (b). wavenumbers \(k^{(n)}=(\omega\pm n\omega_{m})/c_{0}\). The wave celerity in the line is \(c_{0}=\text{const}\). The operator \(\odot\) is the Hadamard product, _i.e._, the element wise multiplication operator. Due to the multi-harmonic content, eqs. (5) and (6) also need to be generalized. What used to be the scalar scattering coefficient and impedance now become (\(2N+1\)) by (\(2N+1\)) matrices, related by \[\mathbf{Z_{t}} =\mathbf{Z}(z=0)=Z_{0}\left[\mathbb{1}+\mathbf{S_{t}}\right] \cdot\left[\mathbb{1}-\mathbf{S_{t}}\right]^{-1}, \tag{12}\] \[\mathbf{S_{t}} =\left[\mathbb{1}+\frac{\mathbf{Z_{t}}}{Z_{0}}\right]^{-1}\cdot \left[\frac{\mathbf{Z_{t}}}{Z_{0}}-\mathbb{1}\right]. \tag{13}\] These relations can be easily derived by introducing eqs. (9) and (10), in the matrix extended definition of the impedance and scattering coefficient at the load position \[\mathbf{x}(z=0) =\mathbf{Z_{t}}\cdot\mathbf{y}(z=0), \tag{14}\] \[\mathbf{x_{r}}(z=0) =\mathbf{S_{t}}\cdot\mathbf{x_{i}}(z=0). 
\tag{15}\] We can then determine the scattering and impedance matrices at any \(z\) position along the transmission line from the ones at the termination. The reduced impedance can be expressed as \[\mathbf{Z}(z)= Z_{0}\left[\mathbb{1}+\left[\mathbb{1}+\frac{\mathbf{Z_{t}}}{Z_{0 }}\right]^{-1}\cdot\left[\frac{\mathbf{Z_{t}}}{Z_{0}}-\mathbb{1}\right]\odot \left(\mathbf{d}(z)\cdot\mathbf{d}(z)^{T}\right)\right.\] \[\cdot\left[\mathbb{1}-\left[\mathbb{1}+\frac{\mathbf{Z_{t}}}{Z_{0 }}\right]^{-1}\cdot\left[\frac{\mathbf{Z_{t}}}{Z_{0}}-\mathbb{1}\right]\odot \left(\mathbf{d}(z)\cdot\mathbf{d}(z)^{T}\right)\right]^{-1}, \tag{16}\] and the scattering matrix reduced to the \(z\) position is given by \(\mathbf{S}(z)=\mathbf{S_{t}}\odot\left(\mathbf{d}(z)\cdot\mathbf{d}(z)^{T}\right)\). Finally, we can compute the average power along the transmission line \[\mathscr{P}(z)=\frac{1}{2}\Re\{\mathbf{x}\cdot\mathbf{y}^{*}\}= \left[\mathbb{1}+\mathbf{S}(z)\right]\cdot\left[\mathbf{x_{i}^{ \text{abs}}\odot\mathbf{d}^{*}(z)}\right]\] \[\cdot\left[\mathbb{1}-\mathbf{S}(z)\right]^{*}\cdot\frac{\mathbf{ x_{i}^{\text{abs}}\odot\mathbf{d}(z)}}{Z_{0}}. \tag{17}\] In terms of incident \(\mathscr{P}_{i}\) and reflected \(\mathscr{P}_{r}\) power, we have \[\mathscr{P}(z)=\mathscr{P}_{i}(z)+\mathscr{P}_{r}(z)=\frac{1}{2Z_{0}}|\mathbf{ x_{i}}|^{2}-\frac{1}{2Z_{0}}|\mathbf{S}(z)\cdot\mathbf{x_{i}}|^{2}. \tag{18}\] According to Fig. 1(b), the incident wave can also be defined with respect to the generator voltage, pressure or force through the load impedance reduced to the generator position \[\mathbf{x_{in}} =\mathbf{Z_{in}}\cdot\left[\mathbf{Z_{s}}+\mathbf{Z_{in}}\right]^ {-1}\cdot\mathbf{x_{g}}\] \[=\left[\mathbb{1}+\mathbf{S}(-L)\right]\cdot\mathbf{x_{i}^{\text {abs}}\odot\mathbf{d}^{*}(-L)}, \tag{19}\] with the input impedance \(\mathbf{Z_{in}}\) being defined from eq. (16) \[\mathbf{Z_{in}}=\mathbf{Z}(-L). \tag{20}\] These equations provide, from the measured time signals \(x(t)\) and/or \(y(t)\) developed in Fourier series, the total field at each Floquet harmonic, \(\mathbf{x}\) and \(\mathbf{y}\), and allow to fully characterize a time-modulated load in terms of scattering, impedance and power. ## III Extraction of the scattering matrix Extracting the complete scattering matrix of a time-modulated system remains a challenge due to the multi-harmonic content. One needs to measure \(2N+1\) linearly independent data sets from which one can extract any of the elements of the \(\mathbf{S_{t}}\) matrix. Here, we propose a general methodology based on multi-load techniques [45] to measure the \((2N+1)\) by \((2N+1)\) scattering matrix while limiting the number of sensors required. It is worth noting here that another method would consist in multiplying the number of sensors in the system. Two extraction methods can be employed. First, \(\mathbf{S_{t}}\) can be extracted by probing \(x(t)\) at two positions in the transmission line, which allows differentiating the incident \(\mathbf{x_{i}}\) and reflected \(\mathbf{x_{r}}\) fields for each harmonic that are related to \(\mathbf{S_{t}}\) through eq. (15). For the second method, we can use eq. (12) to obtain \(\mathbf{S_{t}}\) from the \(\mathbf{Z_{t}}\) matrix that relates the harmonic content of the \(x(t)\) and \(y(t)\) fields measured at the load position (eq. (14)). ### Multiload technique Both methods involve inverting equations with vectors. 
To do this, we expand the vectors \(\mathbf{x_{i}}\), \(\mathbf{x_{r}}\) or \(\mathbf{x}\) and \(\mathbf{y}\) into square matrices, multiplying the number of measurements, thus multiplying the number of equations, to solve the \((2N+1)^{2}\) unknowns. In order to invert these matrices, the equations must be linearly independent in the frequency range of interest. We can then vary the length of the transmission line between the load to be characterized and the probe position for each measurement. By doing so, we can change the phase of the reflected field at the microphone position. The length should be chosen so that each configuration gives independent data sets over the frequency range for \(\mathbf{x_{i}}\) and \(\mathbf{x_{r}}\), or \(\mathbf{x}(z=0)\) and \(\mathbf{y}(z=0)\). In addition, these methods assume plane wave propagation. The excitation frequency as well as all generated Floquet harmonics must be lower than the waveguide cutoff frequency. Equations (15) and (14) can then be rewritten as \[\mathbf{S_{t}}=\begin{bmatrix}x_{r_{L_{1}}}^{-N}&\cdots&x_{r_{L_{2N+1}}}^{-N} \\ \cdot&\cdots&\cdot\\ \cdot&\cdots&\cdot\\ x_{r_{L_{1}}}^{0}&\cdots&x_{r_{L_{2N+1}}}^{0}\\ \cdot&\cdots&\cdot\\ x_{r_{L_{1}}}^{+N}&\cdots&x_{r_{L_{2N+1}}}^{+N}\end{bmatrix}\cdot\begin{bmatrix}x_{i_{L_{1}}}^{ -N}&\cdots&x_{i_{L_{2N+1}}}^{-N}\\ \cdot&\cdots&\cdot\\ \cdot&\cdots&\cdot\\ x_{i_{L_{1}}}^{0}&\cdots&x_{i_{L_{2N+1}}}^{0}\\ \cdot&\cdots&\cdot\\ x_{i_{L_{1}}}^{+N}&\cdots&x_{i_{L_{2N+1}}}^{+N}\end{bmatrix}, \tag{21}\] \[\mathbf{Z_{t}}=\begin{bmatrix}x_{L_{1}}^{-N}&\cdots&x_{L_{2N+1}}^{-N}\\ \cdot&\cdots&\cdot\\ x_{L_{1}}^{0}&\cdots&x_{L_{2N+1}}^{0}\\ \cdot&\cdots&\cdot\\ \cdot&\cdots&\cdot\\ x_{L_{1}}^{+N}&\cdots&x_{L_{2N+1}}^{+N}\end{bmatrix}_{z=0}\quad\begin{bmatrix}y_{ L_{1}}^{-N}&\cdots&y_{L_{2N+1}}^{-N}\\ \cdot&\cdots&\cdot\\ y_{L_{1}}^{0}&\cdots&y_{L_{2N+1}}^{0}\\ \cdot&\cdots&\cdot\\ y_{L_{1}}^{+N}&\cdots&y_{L_{2N+1}}^{+N}\end{bmatrix}_{z=0}^{-1}\quad. \tag{22}\] The inversion of these square matrices is sensitive to the independence of the data sets, _i.e._, to the choice of the load lengths. Computing the condition number of the matrix to be inverted allows to select correctly the adequate lengths. Initially, the different lengths can be chosen so that, for the mid-range frequency, each produces a phase shift linearly distributed in the range \([0\ \pi]\). To increase the accuracy for each frequency in the band, one can also overdetermine the system by increasing the number of configurations to be measured to \(M\), _i.e._, \(M\) of transmission line lengths. One can then solve the pseudo-inverse of a matrix \((2N+1)\) by \((M)\) with a least mean squares procedure, thus minimizing the uncertainties in the inversion process due to matrices that may be close to singularity at some frequencies, _i.e._, data sets that are not sufficiently independent. ### S matrix from \(\mathbf{x}_{i}\) and \(\mathbf{x}_{r}\) discrimination For each of the \(2N+1\) charges, the incident \(x_{i}^{(\pm n)}\) and reflected \(x_{r}^{(\pm n)}\) complex amplitudes can be discriminated from the total field \(x^{(\pm n)}(z)\) at each harmonic using two carefully calibrated probes positioned at \(z_{1}\) and \(z_{2}\), \[\begin{bmatrix}x_{i}^{(\pm n)}&x_{r}^{(\pm n)}\end{bmatrix}=\begin{bmatrix}x ^{(\pm n)}(z_{1})&x^{(\pm n)}(z_{2})\end{bmatrix}\cdot\begin{bmatrix}x^{(\pm n )}(z_{2})\end{bmatrix}\cdot\begin{bmatrix}x^{(\pm n)}(z_{1})&x^{(\pm n)}(z_{2} )\end{bmatrix}^{-1}. 
\tag{23}\] One can then access \(\mathbf{S_{t}}\) from the \(2N+1\) vectors \(\mathbf{x_{r}}\) and \(\mathbf{x_{i}}\) concatenated in matrices, as detailed in eq. (21). This approach assumes linearity along the transmission line, and requires that individual harmonics do not interact with each other during propagation. The sole interactions should occur at the load. ### S matrix from Z matrix The impedance matrix can be obtained directly by simultaneously probing, at the load position, the total fields \(x^{(\pm n)}\) and \(y^{(\pm n)}\) at each harmonic and for the \(2N+1\) charges \[\mathbf{x}(z=0)=\begin{bmatrix}x_{z=0}^{(-N)}\\ \cdot\\ x_{z=0}^{(0)}\\ \cdot\\ \cdot\\ x_{z=0}^{(+N)}\end{bmatrix}=\mathbf{Z_{t}}\cdot\begin{bmatrix}y_{z=0}^{(-N)} \\ \cdot\\ y_{z=0}^{(0)}\\ \cdot\\ y_{z=0}^{(+N)}\end{bmatrix}=\mathbf{Z_{t}}\cdot\mathbf{y}(z=0). \tag{24}\] The scattering matrix can then be deduced from the impedance matrix by simply solving eq. (12). ## IV Application to a time-modulated actively controlled loudspeaker We now apply the general theory and the extraction procedure of the scattering matrix to an acoustic example. ### Experimental set-up We consider a one dimensional circular cross-section acoustic waveguide of diameter \(d=7.18\) cm terminated by an actively controlled loudspeaker enclosed in a cavity of volume \(V_{b}=1081.6\) cm\({}^{3}\) as depicted in Fig. 2(a,b). The waveguide is instrumented with two microphones to measure the incident and reflected pressures, and is excited from the left by a monochromatic wave of circular frequency \(\omega\). With active control, the characteristics of a resonator, such as the electrodynamic loudspeaker, can differ completely from its natural properties, e.g., modified resonant frequency, stiffness, impedance [46], and nonlinearity, paving the way for a plethora of applications such as nonreciprocal behavior [47], gain and loss control [48; 49] or enhanced broadband absorption [50; 51; 52; 53; 54; 55] among others. Here, we periodically modulate in time the impedance of the loudspeaker, so that it responds with a target impedance \(Z_{\text{targ}}(t)\). The control part illustrated in Fig. 2(c) and performed using an FPGA based Speed-goat Performance Real-Time controller (I/O 131), consists first in measuring the pressure \(p_{f}\) in front of the loudspeaker and then applying a feedback loop that assigns a given current \(i(t)\) to the speaker, based on a given control law \(\Theta\) \[i=\Theta p_{f}=\frac{S_{d}}{Bl}\left(1-\frac{Z_{ms}(\omega_{c})}{Z_{\text{targ }}(t)}\right)p_{f}, \tag{25}\] where \(S_{d}\) is the speaker effective cross-section, \(Bl\) is the Force factor, and \(Z_{ms}\) is the specific impedance of the loudspeaker. In the following, we will first consider a narrowband control, allowing the response of the resonator to time modulation to be carefully studied while decoupling the effect of the control at other frequencies. Apart from the narrow frequency range where the modulation occurs, no Floquet harmonics are generated. Only the middle column of the scattering matrix is thus meaningful, i.e., the reflection at the different harmonics for an incidence at the excitation frequency. Finally, we will demonstrate the experimental extraction of the full scattering matrix using a broadband control law, allowing the generation of harmonics for any excitation frequency and making the characterization of the complete matrix relevant. 
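Before moving to the measurements, the short sketch below summarises how eqs. (21), (22) and (13) are used in practice. It is only an illustration (not the authors' code): the measured harmonic amplitudes are assumed to be stacked column-wise for the \(M\geq 2N+1\) line lengths, so the overdetermined case discussed in Sec. III is handled by a least-squares pseudo-inverse.

```python
import numpy as np

def extract_St_from_fields(x_r, x_i):
    """Eq. (21): S_t = X_r . X_i^{-1}, solved in the least-squares sense.

    x_r, x_i: complex arrays of shape (2N+1, M); rows are Floquet orders
    -N..+N, columns are the M measured load configurations."""
    return x_r @ np.linalg.pinv(x_i)

def extract_St_from_impedance(x0, y0, Z0=1.0):
    """Eq. (22) then eq. (13): Z_t from the total fields at z = 0, then S_t."""
    Zt = x0 @ np.linalg.pinv(y0)
    I = np.eye(Zt.shape[0])
    St = np.linalg.solve(I + Zt / Z0, Zt / Z0 - I)
    return Zt, St
```

With \(M=2N+1\) well-chosen lengths the pseudo-inverse reduces to an ordinary inverse; adding more lengths simply averages out frequencies where the data sets are nearly dependent.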
### Narrow band control To limit the control only over a given bandwidth \(B_{c}\) around the control frequency \(f_{c}\), a complex envelope technique based on a second order Bessel function is adopted. More details on this technique and the control efficiency can be found in [46]. Two different periodic modulation functions are investigated in Fig. 3(a,b,c), a cosine and a positive/negative circular modulation respectively such that \[Z_{targ}(t) = \tilde{Z}_{t}\left(1+A_{m}\cos(\omega_{m}t+\phi_{m})\right), \tag{26}\] \[\text{or }Z_{targ}(t) = \tilde{Z}_{t}\left(1+A_{m}\exp(\pm\mathrm{i}(\omega_{m}t+\phi_{m} ))\right), \tag{27}\] where \(A_{m}\), \(\phi_{m}\), and \(\omega_{m}=2\pi f_{m}\) are the modulation depth, phase, and circular frequency respectively, and \(\tilde{Z}_{t}=Z_{t}/Z_{0}\) is the amplitude of the target normalized impedance. To characterize the effect of the modulation functions, the second column of the scattering matrix is extracted experimentally following the procedure detailed in Section III. In the following examples, the control characteristics are fixed as follows: \(f_{c}=220\) Hz, \(B_{c}=2\) Hz, \(Z_{t}=0.8\), \(A_{m}=0.2\), and \(f_{m}=50\) Hz. A prior pressure measurement along the transmission line ended by the time-modulated load, showed that only the first Floquet harmonic is measurable in the system. Thus, the truncation in the Fourier series is set as \(N=1\). Due to the control, the reflection at \(\omega\) for an incidence at \(\omega\) exhibits that of the natural loudspeaker, except in the control bandwidth around \(f_{c}=220\) Hz, where the reflection reaches the value given by the target impedance \(|R|=|(\tilde{Z}_{t}-1)/(1+\tilde{Z}_{t})|\), as shown in Figs. 3(a,b,c-2). This is also the case for a time invariant control, except that all the extra-diagonal terms of the matrix \(\mathbf{S_{t}}\) are in this case null. Non-zero off-diagonal terms appear only when time modulation is enabled. It generates, in addition to the change in the reflection \(|R^{(0,0)}|\) in the control range, some reflection at the \(\pm 1\) Floquet harmonics. For example, two reflection peaks centered on \(f_{c}\) are now visible in Fig. 3(a-1) and (a-3), corresponding respectively to a reflection at \(\omega\pm\omega_{m}\) in response to an incidence at \(\omega\), _i.e._, \(|\mathbf{S}_{32}|=|R^{(+1,0)}|\) and \(|\mathbf{S}_{12}|=|R^{(-1,0)}|\). Cosine modulation then generates both positive and negative Floquet harmonics (Figs. 3(a-1,3)) while a complex function, e.g. a positive (resp. negative) complex exponential, generates only positive (resp. negative) Floquet harmonics as evidenced in Fig. 3(a-1) (resp. Fig. 3(c-3)). We compare the experimental results (circle symbols) with numerical simulation based on a Finite Difference Time Domain approach, FDTD, (dashed lines) and an analytical model based on a two-time scale approach (solid lines). More details on the analytical and numerical modeling can be found in Appendix A. It is noteworthy here that these three methods require prior and accurate characterization of the loudspeaker and evaluation of its mechanical parameters (see Appendix A). We can note a slight discrepancy just before the control frequency between the analytical model and the numerical and experimental results. This can be explained by the way the control bandwidth is applied. 
Indeed, both numerically and experimentally, a complex envelope technique involving 2nd order Bessel filtering is used, whereas the analytical modeling involves only a generalized normal distribution window (see Appendix A). Nevertheless, it is worth noting that the analytical model matches well with the expected numerical and experimental reflection value at \(f_{c}\), both at the fundamental and at the first positive and negative Floquet harmonics, thus validating our analytical modeling and the extraction procedure. A detailed analysis of the effect of the modulation depth, target impedance, and modulation frequency can be found in Appendix B. Figure 2: Photograph (a) and schematic (b) of the experimental set-up used to measure the entire scattering matrix of a time-modulated acoustic system, and block diagram of the active control strategy (c). ### Broadband control Extracting the full scattering matrix is only relevant if the time-modulated load is effective also at the frequency of the harmonics. Then the generated harmonic, for example at \(\omega\pm\omega_{m}\), generates back a Floquet harmonic that contributes to the reflection at \(\omega\), making the measurement more challenging. A substantial incident amplitude at \(\omega\) and \(\omega\pm\omega_{m}\) is required to measure these backscatter coefficients, which correspond to the first and third columns of the scattering matrix in our case. Floquet harmonics then need to be generated by the time-modulated load for any incident frequency. In other words, the control has to be broadband and has to generate large-amplitude harmonics. To do so, we change the control law applied to the system and no longer modulate the magnitude of the load impedance but rather its compliance \(C_{ms}\), _i.e._, its resonance frequency, \[Z_{\text{targ}}(\omega,t)=Z_{ms}(\omega)+(\text{i}\omega C_{ms})^{-1}A_{m}\cos(\omega_{m}t+\phi_{m}). \tag{28}\] This new broadband control law still generates mostly one positive and one negative Floquet harmonic, but its stronger effect increases their amplitude, allowing the extraction of the complete \(\mathbf{S_{t}}\) matrix. Figure 4 shows two complete 3-by-3 scattering matrices obtained for two different modulation frequencies \(f_{m}=50\) Hz (blue) and \(f_{m}=25\) Hz (red). The experimental results, represented by the circle symbols, are compared to the scattering coefficients extracted using the 2-probe multiload technique applied to the FDTD experiment. To complement these two methods, a finite element model based on a Fourier expansion of the target impedance is also developed to validate the second column of the scattering matrix. Only the reflections for an incidence at the excitation frequency, _i.e._, the 2nd column, are accessible with the FEM model (dashed line, see Appendix A for more details). The first, second and third columns of the scattering matrix, see Figs. 4(a,b,c), correspond respectively to the reflection from an incidence at \(\omega-\omega_{m}\), \(\omega\), and \(\omega+\omega_{m}\), towards \(\omega-\omega_{m}\), \(\omega\), and \(\omega+\omega_{m}\) for the first, second, and third rows, Figs. 4(1,2,3). The reflection at the excitation frequency is again that of the speaker with a drop in reflection at the natural resonant frequency, _i.e._, 200 Hz for \(|\mathbf{S}_{22}|\) in Fig. 4(b-2). 
Since the first and third columns refer to what happens at \(\omega\pm\omega_{m}\), the reflection curve and the drop are therefore delocalized to 200 Hz \(\pm f_{m}\) for \(|\mathbf{S}_{33}|\) and \(|\mathbf{S}_{11}|\), shown in Fig. 4(c-3, a-1) respectively. The scattering coefficients of these two columns rely mainly on the incident pressures measured at \(\omega\pm\omega_{m}\), which are solely due to the generated harmonics and are thus more sensitive to noise, hence the larger variance of the experimental data. Another source of discrepancies comes from the condition number of the matrix to be inverted, which depends on the chosen lengths of the multi-loads and is optimal only for the centre of the frequency range but not necessarily for all frequencies of the bandwidth. The difference in amplitude of the different off-diagonal terms is explained by the frequency dispersion of the speaker. Apart from slight discrepancies at some frequencies, the overall agreement of the measured data with the simulation of the full experimental set-up (including the dispersion and mechanical damping of the source) demonstrates the ability to measure the full scattering matrix of time-modulated systems. Figure 3: **Effect of the modulation function:** second column of the full scattering matrix for a cosine modulation (a), positive circular modulation (b), and negative circular modulation (c). The first, second, and third rows correspond to \(|\mathbf{S}_{12}|=|R^{(-1,0)}|\), \(|\mathbf{S}_{22}|=|R^{(0,0)}|\), and \(|\mathbf{S}_{32}|=|R^{(+1,0)}|\), respectively. Analytical, numerical, and experimental results are given respectively by the solid and dashed lines and the symbols. ## V Conclusions In conclusion, we have studied the effect of a time-modulated load on a typical transmission line, and we have extended the classical theory to include the Floquet harmonics generated by the system. We have tackled the challenge of experimentally characterizing the scattering of such structures by implementing a multi-load measurement technique. The characterization of time-modulated building blocks is an essential element for the design of more complex devices like space-time varying metamaterials. The extended transmission line theory and scattering extraction methodology are verified and applied to a one-dimensional acoustic transmission line terminated by a time-modulated load. To do so, we periodically modulated in time the input impedance of an actively controlled loudspeaker. The experimental scattering is compared with both time-domain numerical simulations and analytical modeling based on a two-time-scale model of the controlled loudspeaker. The agreement of the three is very good for the different functions and modulation parameters tested. An interesting feature is the possibility to force the generation of either positive and/or negative Floquet harmonics solely by the choice of the modulation function applied in the control law. This unique behavior could be used to improve sound absorption by transferring low-frequency acoustic energy only to the higher harmonics, which can be absorbed more easily. In addition to the extended telegrapher's equations, which could also be applied to the characterization of nonlinear loads, we also present an analytically well-described time-modulated controllable acoustic system which can be used in various applications of time-varying phenomena such as non-reciprocal devices, acoustic circulators, and non-Hermitian systems, among others. 
Figure 4: **Full scattering matrix of the time modulated loudspeaker** for a modulation \(f_{m}=50\) Hz (blue) or \(f_{m}=25\) Hz (red), \(\phi_{m}=0\), and \(Z_{\text{arg}}=Z_{ms}(\omega)+0.2\text{(i}\omega\mathcal{C}_{ms})^{-1}\cos( \omega_{m}t+\phi_{m})\). Reflection coefficient towards \(\omega-\omega_{m}\) (1), towards \(\omega\) (2), and towards \(\omega+\omega_{m}\) (3) for an incidence at \(\omega-\omega_{m}\) (a), at \(\omega\) (b), and at \(\omega+\omega_{m}\) (c). FEM (only for the 2nd column) and FDTD simulations are given respectively by the dashed and dashed-dotted lines. Experimental results are given by the circle symbols. ## Appendix A Details on the modelling ### Analytical model - 2 time-scale method The actively controlled loudspeaker follows an integro-differential equation relating the velocity of the loudspeaker diaphragm \(v(t)\) to the pressure in front of the loudspeaker \(p_{f}(t)\) \[\left(M_{ms}\mathrm{d}_{tt}^{2}\right. + R_{ms}\mathrm{d}_{t}+\left[C_{ms}\right]^{-1}\right)v(t) \tag{10}\] \[= S_{d}\mathrm{d}_{t}p_{f}(t)-Bl\mathrm{d}_{t}i(t)W(\omega),\] where \(R_{ms}\), \(M_{ms}\) and \(C_{ms}\) are the Thiele and Small characteristics of the loudspeaker, _i.e._, acoustic resistance, mass and compliance, respectively, and \(W(\omega)\) is a frequency generalized normal distribution window centered at \(f_{c}\). Inserting eq. (25) into eq. (10), imposing the change of variable \(\tau=\omega_{\infty}t\) to obtain a dimensionless time variable, and rearranging the terms, we obtain \[\left(\mathrm{d}_{\tau\tau}^{2}+\frac{R_{ms}}{M_{ms}\omega_{ \infty}}\mathrm{d}_{\tau}+1\right)v(\tau)=\frac{S_{d}}{M_{ms}\omega_{\infty}} \tag{11}\] \[\times\mathrm{d}_{\tau}\left[1-\left(1-\frac{Z_{ms}(\omega_{c}) }{Z_{t}\left(1+A\cos\left(\frac{\omega_{m}}{\omega_{\infty}}\tau+\phi_{m} \right)\right)}\right)W(\omega)\right]p_{f}(\tau),\] with \(\omega_{\infty}^{2}=[C_{ms}M_{ms}]^{-1}\) the natural resonance circular frequency of the loudspeaker. The system is excited by a source delivering a pressure \(p(\tau)=P_{inc}\exp\left(\mathrm{i}\omega/\omega_{\infty}\tau\right)\). At \(\tau=0\) (initial condition), the driven loudspeaker has zero acceleration \(d_{\tau}v(\tau)_{\mathrm{i}=0}=0\) and has a velocity equal to \(S_{d}v(\tau=0)=p_{f}(\tau=0)/Z_{ms}=P_{inc}/Z_{ms}\). We assume that the system can be described using two different time scales. One related to the excitation at \(\omega\), \(T_{0}\approx\tau\), and the other related to the slow modulation at \(\omega_{m}\), \(T_{1}\approx\epsilon\tau\), such that \(T_{0}>>T_{1}\). We define as a small parameter for the derivations, \(\epsilon\approx\omega_{m}/\omega_{\infty}<<1\). We also note that the prefactor of the right-hand side of eq. (11) is of the same order as \(\epsilon\), so \(S_{d}/M_{ms}\omega_{\infty}\) can be replaced by \(\epsilon S_{d}/M_{ms}\omega_{m}\). The velocity field can then be extended according to these two scales \[v(\tau)\approx v_{0}(T_{0},T_{1})+\epsilon v_{1}(T_{0},T_{1}). \tag{12}\] The governing equation can then be rewritten as \[\left(\partial_{T_{0}T_{0}}^{2}+2\epsilon\partial_{T_{0}T_{1}}^{2}\right. 
+ \epsilon^{2}\partial_{T_{1}T_{1}}^{2}+\frac{R_{ms}}{M_{ms}\omega_{ \infty}}\partial_{T_{0}}+\frac{R_{ms}}{M_{ms}\omega_{\infty}}\epsilon\partial _{T_{1}}+1\right)\left(v_{0}(T_{0},T_{1})+\epsilon v_{1}(T_{0},T_{1})\right) \tag{13}\] \[=\epsilon\frac{S_{d}}{M_{ms}\omega_{m}}(\partial_{T_{0}}+ \epsilon\partial_{T_{1}})\left[1-\left(1-\frac{Z_{ms}(\omega_{c})}{\tilde{Z}_ {t}\left(1+A\cos\left(T_{1}+\phi_{m}\right)\right)}\right)W(\omega)\right]p_ {f}(T_{0}).\] Separating the different orders in \(\epsilon\), we get \[\mathcal{O}(\epsilon^{0})\rightarrow\left(\partial_{T_{0}T_{0}}^{2} +\frac{R_{ms}}{M_{ms}\omega_{\infty}}\partial_{T_{0}}+1\right)v_{0}= 0, \tag{14}\] \[\mathcal{O}(\epsilon^{1})\rightarrow\left(\partial_{T_{0}T_{0}}^{2} +\frac{R_{ms}}{M_{ms}\omega_{\infty}}\partial_{T_{0}}+1\right)v_{1}= \frac{S_{d}}{M_{ms}\omega_{m}}\partial_{T_{0}}\left[1-\left(1- \frac{Z_{ms}(\omega_{c})}{Z_{t}\left(1+A\cos\left(T_{1}+\phi_{m}\right) \right)}\right)W(\omega)\right]p_{f}(T_{0})\] \[-(2\partial_{T_{0}T_{1}}^{2}+\frac{R_{ms}}{M_{ms}\omega_{\infty}} \partial_{T_{1}})v_{0}. \tag{15}\] To solve the governing equation of the loudspeaker, we now have to solve each of the two separated partial differential equations giving the solution at order 0 and 1, \(v_{0}\) and \(v_{1}\) respectively. We first solve the partial differential equation at order 0, eq. (14), using the initial conditions. We then reinject the solution \(v_{0}\) into eq. (15) and cancel the secular terms, to obtain the solution \(v_{1}\). Finally, using the definition of the velocity field expansion eq. (12), remembering that the two time scales are \(T_{0}\approx\tau\), \(T_{1}\approx\epsilon\tau\), and replacing \(\epsilon\) and \(\tau\) by their definitions, we can derive the total solution to the initial governing equation, eq. (10) \[v(t)=\frac{P_{inc}}{S_{d}Z_{ms}}\mathrm{e}^{-\frac{R_{ms}}{Z_{t}M _{ms}}t}\cos\left(\sqrt{1-(\frac{R_{ms}}{2M_{ms}\omega_{\infty}})^{2}}\omega_{ \infty}t\right) \tag{16}\] \[+\frac{P_{inc}}{Z_{ms}}\left[1-\left(1-\frac{Z_{ms}(\omega_{c})} {Z_{t}(1+A\cos(\omega_{m}t+\phi_{m}))}\right)W(\omega)\right]\mathrm{e}^{ \mathrm{i}\omega t}.\] The first term corresponds to the transient field and decays exponentially with the speaker dissipation con stant \(R_{ms}/2M_{ms}\). At the control frequency, the loudspeaker responds effectively to the target impedance, \(v(t)=P_{inc}/\left(Z_{t}\left(1+A\cos\left(\omega_{m}t+\phi_{m}\right)\right)\right)\). A constant fitting parameter is introduced such that \(A=A_{m}/2\) To extract the impedance matrix \(\mathbf{Z}\) from the analytical model, we have to use the superposition principle and solve the system for an incident pressure at \(\omega\), \(\omega-\omega_{m}\), and \(\omega+\omega_{m}\), and for \(2N+1\) charges. Then, the Fourier transform of the pressure and velocity fields allows to solve eq. (22) and thus to derive the scattering matrix eq. (13). ### Numerical FDTD model Numerical results are obtained using SIMULINK modelling of the entire experimental setup, based on a finite difference time step approach using a time step of \(1.10^{-5}\). The scattering and impedance matrices can then be extracted from the incident and reflected pressure, using the 2-probe multi-load technique applied to the numerical experiment. The results obtained via the extraction procedure performed as in the experimental set-up are consistent with those obtained from the direct access to the reflected and transmitted pressures allowed by the simulation. 
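As a lightweight cross-check between the analytical and time-domain approaches, the snippet below (our own illustration; the parameter values are placeholders rather than the calibrated ones of Appendix A) evaluates only the steady-state part of eq. (16) at the control frequency, where \(W(\omega)\simeq 1\), and lists the spectral lines it produces at \(f_{c}\) and \(f_{c}\pm f_{m}\).

```python
import numpy as np

# Placeholder parameters (not the calibrated values)
fc, fm, A, phi = 220.0, 50.0, 0.1, 0.0        # control / modulation freqs [Hz], A = A_m/2, phase
Zt, Pinc = 0.8, 1.0                           # normalised target impedance, unit incident drive
fs, T = 20_000.0, 1.0                         # sampling rate [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)

# Steady-state response at omega = omega_c, where W(omega) ~ 1 and eq. (16)
# reduces to v(t) = P_inc / (Z_t (1 + A cos(omega_m t + phi))) e^{i omega t}
v = Pinc / (Zt * (1.0 + A * np.cos(2 * np.pi * fm * t + phi))) * np.exp(2j * np.pi * fc * t)

spec = np.abs(np.fft.fft(v)) / t.size
freqs = np.fft.fftfreq(t.size, 1.0 / fs)
for f0 in (fc - fm, fc, fc + fm):
    k = np.argmin(np.abs(freqs - f0))
    print(f"line at {freqs[k]:6.1f} Hz, amplitude {spec[k]:.3e}")
```

Increasing \(A\) raises the side lines relative to the carrier, consistent with the modulation-depth trend reported in Appendix B.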
### Numerical FEM model The numerical FEM experiment is performed using the frequency domain solver of the commercial software COMSOL Multiphysics, following the methodology proposed in [10]. The time-modulated loudspeaker is modeled as an impedance \[Z =Z_{ms}(\omega)+A_{m}[\mathrm{i}\omega C_{ms}]^{-1}/S_{d}\cos( \mathrm{i}\omega_{m}t+\phi_{m}) \tag{10}\] \[=Z_{ms}(\omega)+\delta Z(\omega)(e^{\mathrm{i}\omega_{m}t}e^{ \mathrm{i}\phi_{m}}+e^{\mathrm{i}\omega_{m}t}e^{\mathrm{i}\phi_{m}}). \tag{11}\] An impedance condition is implemented as follows \[-\mathbf{n}\frac{\nabla p}{\rho_{0}}=p_{f}\frac{-\mathrm{i}\omega}{Z}=-\mathbf{ v}.\mathbf{n}\mathrm{i}\omega. \tag{12}\] Expanding \(p_{f}\) and \(v\) (normal particle velocity) in Fourier series, we end up after some algebra to \[p_{f_{n}} =Z(\omega_{n})v_{n}+\delta Z(\omega_{n})\left(v_{n-1}e^{-\mathrm{ i}\phi_{m}}+v_{n+1}e^{\mathrm{i}\phi_{m}}+...\right), \tag{13}\] \[\text{or equivalently}\] \[v_{n} =\frac{p_{f_{n}}-\delta(\omega_{n})\big{(}v_{n-1}e^{-\mathrm{i} \phi_{m}}+v_{n+1}e^{\mathrm{i}\phi_{m}}+...\big{)}}{Z(\omega_{n})}, \tag{14}\] where \(\omega_{n}=\omega+n\omega_{m}\). We then put eq. (14) into weak forms and solve for \(n=(-2,-1,0,1,2)\) simultaneously for any incident frequency \(\omega\). ### Characterization of the resonator to control A fitting procedure of the transfer function of the loudspeaker terminated by an open circuit, a short circuit, or a load \(R=100.8\)\(\Omega\), allows to obtain the following Thiele and Small parameters [56]: \(Bl=3.63\) T\(\cdot\)m, \(M_{ms}=2.9\) g, \(C_{ms}=0.214\) mm/N, and \(R_{ms}=0.54\) N\(\cdot\)s/m. The natural resonance of the speaker occurs close to 200 Hz. We thus choose to apply our control around the resonance to take advantage of the stable response in this range ## Appendix B Effect and limitation of the control characteristics ### Change of the control frequency Figure 5 shows three configurations with positive circular modulation and control frequency on either side of the natural resonance frequency, _i.e._, \(f_{c}=180\) Hz (a), \(f_{c}=220\) Hz (b) and \(f_{c}=260\) Hz (c). It can be seen that the active control does capture the reflection value \(|R^{(0,0)}|=0.11\) for each of the control frequencies and that a +1 Floquet harmonic is generated. The amplitude of the latter differs depending on the control frequency, again, due to the dispersion of the electrodynamic speaker. ### Change of the target impedance \(\tilde{Z}_{t}\) We then test in Fig. 6 three different target impedance values, respectively \(\tilde{Z}_{t}=0.4\) (a), \(\tilde{Z}_{t}=0.6\) (b), and Figure 5: **Control frequency \(f_{c}\) variation:** (a) \(f_{c}=180\) Hz, (b) \(f_{c}=220\) Hz, and (c) \(f_{c}=260\) Hz for the fundamental \(|R^{(0,0)}|\) (1) and the 1st Floquet harmonic \(|R^{(+1,0)}|\) (2). Analytical, numerical, and experimental results are given respectively by the solid and dashed lines and the symbols. \(\tilde{Z}_{t}=1\) (c). Here again, the agreement between the three methods is rather good. The reflection at \(\omega\) falls to \(|R^{(0,0)}|=0.28\), \(|R^{(0,0)}|=0.25\), and \(|R^{(0,0)}_{r}|=0\), respectively for \(\tilde{Z}_{t}=0.4\), \(\tilde{Z}_{t}=0.6\), and \(\tilde{Z}_{t}=1\). The smaller the impedance, the larger the reflection at both \(\omega\) and \(\omega+\omega_{m}\). Furthermore, it should be noted here that even though for \(\tilde{Z}_{t}=1\), we have an impedance matching and thus \(|R^{(0,0)}|=0\), a reflection at \(\omega+\omega_{m}\) exists, \(|R^{(+1,0)}|\neq 0\). 
The termination load is only impedance matched at the control frequency \(f_{c}\) but not at \(f_{c}+f_{m}\), in other words, \(\mathbf{\tilde{Z}_{t}}(1,2)\neq Z_{0}\). Note that the control may be limited by instabilities if the loudspeaker is asked to respond with an impedance too different from its natural impedance at a given frequency. ### Change of the modulation depth \(A_{m}\) Finally, we study the effect of changing the modulation amplitude, also called modulation depth. We assign three different values \(A_{m}=0\), \(A_{m}=0.4\), and \(A_{m}=0.6\) as illustrated in Figs. 7(a,b,c) respectively. As anticipated, the modulation depth has almost no impact on the reflection at the fundamental frequency. For zero modulation depth, Fig. 7(a), _i.e._, a time-invariant control, no Floquet harmonics are generated. Furthermore, the higher the modulation depth, the higher the magnitude of the reflection at the 1st Floquet harmonic \(|R^{(+1,0)}|\). A point of caution with the variation of \(A_{m}\): a high modulation depth can also generate higher order harmonics. It is therefore important to check the magnitude of the reflection at \(\omega+n\omega_{m}\), and if necessary, adapt the dimension of the \(\mathbf{S}\) matrix to account for the additional harmonics in the calculation. Figure 6: **Target impedance \(\tilde{Z}_{t}\) variation:** (a) \(\tilde{Z}_{t}=0.4\), (b) \(\tilde{Z}_{t}=0.6\), and (c) \(\tilde{Z}_{t}=1\) for the fundamental \(|R^{(0,0)}|\) (1) and the 1st Floquet harmonic \(|R^{(+1,0)}|\) (2). Analytical, numerical, and experimental results are given respectively by the solid and dashed lines and the symbols. Figure 7: **Modulation depth \(A_{m}\) variation:** (a) \(A_{m}=0\), (b) \(A_{m}=0.4\), and (c) \(A_{m}=0.6\) for the fundamental \(|R^{(0,0)}|\) (1) and the 1st Floquet harmonic \(|R^{(+1,0)}|\) (2). Analytical, numerical, and experimental results are given respectively by the solid and dashed lines and the symbols. ## Appendix C Details on the experimental set-up The acoustic apparatus used for the characterization of \(\mathbf{S}\) and illustrated in Fig. 2 consists of an acoustic waveguide made up of removable portions of a 7.18 cm diameter circular duct, terminated on one side by an electrodynamic loudspeaker (Monacor SPX-30 M, 3 inches) acting as the source, and on the other side, the time-modulated load. To consider only the propagation of plane waves, we take care to work only below the 1st cutoff frequency of the waveguide (\(f_{c}=1.8412\,c_{0}/(2\pi a)=1400\) Hz). The time-modulated load is an actively controlled electrodynamic loudspeaker (Monacor SPX-30 M, 3 inch) enclosed in a cavity of volume \(V_{b}=1081.6\) cm\({}^{3}\) and instrumented with an ICP microphone (PCB 130F20, 1/4 inch) placed just in front of the loudspeaker diaphragm. The active control scheme as well as the excitation signal generation and data acquisition are performed with an FPGA-based Speed-goat Performance real-time controller (I/O 131) controlled by the MATLAB/SIMULINK xPC target environment. The controller's output voltage is converted by a homemade voltage-to-current converter (0.2083 A/V) based on a Howland pump circuit and fed into the controlled speaker. The incident and reflected pressure are derived from the pressure measured by two ICP microphones (PCB 130F20, 1/4 inch) 24 cm apart and 5 cm from the excitation source. 
To characterize the full scattering matrix, a multiload technique is used, with \(L_{load}=50\), 62.5, and 87.5 cm respectively, thus ensuring a consistent difference in phase delay due to the round-trip propagation distance between the microphones and the time-modulated load for each configuration. It is important to note that the distance should be chosen according to the center frequency of the bandwidth under consideration. Note that a further extension to multimode characterization is possible, but will require an increased number of sensors.
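As a complement to the length choice above, the following sketch (our own; the speed of sound, Floquet order and lengths are indicative only) implements the a-priori check suggested in Sec. III: assuming unit incident amplitudes, so that the measurement matrix of eq. (21) differs between configurations only through the round-trip phase factors, it tracks the condition number of that matrix across the band and flags frequencies where the chosen \(L_{load}\) give nearly dependent data sets.

```python
import numpy as np

c0, fm, N = 343.0, 50.0, 1                       # m/s, Hz, highest Floquet order
lengths = np.array([0.50, 0.625, 0.875])         # candidate L_load in metres
freqs = np.linspace(150.0, 300.0, 151)           # band of interest in Hz

def cond_over_band(lengths, freqs):
    """Condition number of the (2N+1) x M phase-factor matrix at each frequency.

    A-priori proxy for eq. (21): unit incident amplitudes, so the columns differ
    only through the round-trip phases exp(2i k^(n) L_m)."""
    conds = np.empty_like(freqs)
    for j, f in enumerate(freqs):
        kn = 2 * np.pi * (f + np.arange(-N, N + 1) * fm) / c0   # k^(n), shape (2N+1,)
        A = np.exp(2j * np.outer(kn, lengths))                  # (2N+1, M)
        conds[j] = np.linalg.cond(A)
    return conds

conds = cond_over_band(lengths, freqs)
print(f"worst conditioning: {conds.max():.1f} at {freqs[conds.argmax()]:.0f} Hz")
```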
2308.11554
Multistability of elasto-inertial two-dimensional channel flow
Elasto-inertial turbulence (EIT) is a recently discovered two-dimensional chaotic flow state observed in dilute polymer solutions. It has been hypothesised that the dynamical origins of EIT are linked to a center-mode instability, whose nonlinear evolution leads to a travelling wave with an 'arrowhead' structure in the polymer conformation, a structure also observed instantaneously in simulations of EIT. In this work we conduct a suite of two-dimensional direct numerical simulations spanning a wide range of polymeric flow parameters to examine the possible dynamical connection between the arrowhead and EIT. Our calculations reveal (up to) four co-existent attractors: the laminar state and a steady arrowhead, along with EIT and a 'chaotic arrowhead'. The steady arrowhead is stable for all parameters considered here, while the final pair of (chaotic) flow states are visually very similar and can be distinguished only by the presence of a weak polymer arrowhead structure in the 'chaotic arrowhead' regime. Analysis of energy transfers between the flow and the polymer indicates that both chaotic regimes are maintained by an identical near-wall mechanism and that the weak arrowhead does not play a role. Our results suggest that the arrowhead is a benign flow structure that is disconnected from the self-sustaining mechanics of EIT.
Miguel Beneitez, Jacob Page, Yves Dubief, Rich R. Kerswell
2023-08-22T16:34:48Z
http://arxiv.org/abs/2308.11554v1
# Multistability of elasto-inertial two-dimensional channel flow ###### Abstract Elasto-inertial turbulence (EIT) is a recently discovered two-dimensional chaotic flow state observed in dilute polymer solutions. It has been hypothesised that the dynamical origins of EIT are linked to a center-mode instability, whose nonlinear evolution leads to a travelling wave with an 'arrowhead' structure in the polymer conformation, a structure also observed instantaneously in simulations of EIT. In this work we conduct a suite of two-dimensional direct numerical simulations spanning a wide range of polymeric flow parameters to examine the possible dynamical connection between the arrowhead and EIT. Our calculations reveal (up to) four co-existent attractors: the laminar state and a steady arrowhead, along with EIT and a 'chaotic arrowhead'. The steady arrowhead is stable for all parameters considered here, while the final pair of (chaotic) flow states are visually very similar and can be distinguished only by the presence of a weak polymer arrowhead structure in the 'chaotic arrowhead' regime. Analysis of energy transfers between the flow and the polymer indicates that both chaotic regimes are maintained by an identical near-wall mechanism and that the weak arrowhead does not play a role. Our results suggest that the arrowhead is a benign flow structure that is disconnected from the self-sustaining mechanics of EIT. keywords: + Footnote †: Email address for correspondence: [email protected] ## 1 Introduction It has been more than 70 years since the phenomenon of polymer drag reduction in wall-bounded turbulence was first observed experimentally (Toms 1948; Mysels U.S. Patent 2492173A, June 1949). Following this discovery, great efforts have been directed towards understanding how inertial turbulence (IT) is altered by the addition of polymers to the flow (e.g. see the reviews Lumley 1969; White & Mungal 2008). Polymeric fluids also exhibit counter-intuitive chaotic behaviour in very small scale, inertialess flows. This 'elastic' turbulence (ET) was also first discovered experimentally (Groisman & Steinberg 2000, 2004) at vanishing Reynolds numbers and is thought to rely on finite-amplitude curvature in the streamlines (Shaqfeh 1996). In contrast to polymer-modified IT, ET is associated with an increased drag relative to the laminar Newtonian state (Varshney & Steinberg 2018). It can be exploited to promote heat transfer (Traore _et al._ 2015) and to efficiently mix at very small scales (Squires & Quake 2005). In addition to these distinct phenomena, a third chaotic flow state was recently identified (Samanta _et al._ 2013; Dubief _et al._ 2013) where both inertial and elastic effects are relevant, and was named 'elasto-inertial' turbulence (EIT). EIT can be sustained for Reynolds numbers \(Re=O(1000)\), and is potentially linked to the 'early turbulence' reported in a range of experimental studies (Jones & Maddock 1966; Goldstein _et al._ 1969; Draad _et al._ 1998; Choueiri _et al._ 2018; Chandra _et al._ 2018). EIT differs from both IT and ET in that it can be sustained in a purely two-dimensional planar flow (Sid _et al._ 2018), and is dominated by highly extended'sheets' of polymer stress (e.g. see Dubief _et al._ 2023). A connection has been sought between EIT and the so-called'maximum drag reduction' state in IT (Zhu & Xi 2021; Zhang _et al._ 2021), though the mechanisms underpinning both of these flow types remain to be clarified. 
Despite much progress in our statistical understanding of the various chaotic viscoelastic flows (Datta _et al._ 2022; Sanchez _et al._ 2022; Dubief _et al._ 2023), the dynamical origins and connections between polymer-perturbed IT, EIT and ET remain largely unknown. The exception here is ET in curved geometries, which is associated with a linear instability driven by viscoelastic hoop stresses (Larson _et al._ 1990; Shaqfeh 1996). In parallel flows there has been some indication that self-sustaining ET can be triggered by a finite amplitude perturbation to generate the curvature necessary for a hoop-stress instability (Meulenbroek _et al._ 2004; Morozov & van Saarloos 2007; Pan _et al._ 2013), but the exact requirements and dynamical connection to the linear instabilities in a curved geometry has not been demonstrated and there is also the possibility of a direct connection to EIT. The situation in a planar pressure-driven channel flow is ripe for investigation due to the presence of a pair of linear instabilities. One is the viscoelastic analogue of the Newtonian Tollmien-Schlichting (TS) waves and exists at high \(Re\) (Zhang _et al._ 2013). It has been observed that the polymer conformation field associated with a saturated TS wave and the polymer conformation for the weakly chaotic edge state for (subcritical) EIT have a similar appearance (Shekar _et al._ 2019, 2021) though the TS branch turns around prior to the emergence of EIT as the Weissenberg number \(Wi\) is increased and a clear dynamical connection has yet to be established. The other instability was discovered only very recently, and is a 'centre mode' found in both pipes (Garg _et al._ 2018) and channels (Khalid _et al._ 2021_a_) at modest Weissenberg numbers \(Wi\sim 20\). Most intriguingly, the unstable centre mode in a channel remains unstable even in the inertialess limit (Khalid _et al._ 2021_b_), although only for very high \(Wi\) and vanishing polymer concentration (more realistic values of \(Wi\) are found with the introduction of a more realistic polymer model, see Buza _et al._ 2022_b_). The existence of a linear instability in areas of the parameter space relevant to _both_ ET and EIT could provide a plausible direct connection between these states. The nonlinear evolution of the viscoelastic centre mode leads to a saturated 'arrowhead' travelling wave (Page _et al._ 2020) which is strongly subcritical (Wan _et al._ 2021; Buza _et al._ 2022_b_). The arrowhead can be continued down to the inertialess limit where it is found to exist at experimentally realisable values of the Weissenberg number (Buza _et al._ 2022\(a\); Morozov 2022). Finite amplitude structures which are similar in appearance to the exact arrowhead travelling waves have been observed in experiments at low \(Re\) (Choueiri _et al._ 2021) and have also been seen intermittently in numerical simulations of EIT at high \(Re\) (Page _et al._ 2020; Dubief _et al._ 2022). However - much like the TS waves - a direct route to chaos from this structure (e.g. in a sequence of successive bifurcations) has yet to be found. The possible importance of the arrowhead in sustaining EIT was suggested by the simulations of Dubief _et al._ (2022), who performed DNS of viscoelastic flows using the FENE-P model for \(Re=1000\), \(Wi\in[50,200]\) and \(0.5\leqslant\beta\leqslant 1\). Their study identified several regimes in different areas of the parameter space: a stable travelling wave arrowhead, EIT, a chaotic arrowhead and an intermittent arrowhead. 
Motivated by these results, we conduct a systematic study of the state space of a two-dimensional viscoelastic channel flow for a wide range of polymeric parameters in the FENE-P model, in an effort to directly connect the arrowhead to EIT. Surprisingly, we find that the arrowhead is a benign flow structure - it can be maintained on top of a background EIT, but does not play a role in the self-sustaining mechanism which is driven by near-wall behaviour. Our search reveals that the steady arrowhead travelling wave is always stable for the parameters we consider, and we also find a large region of multistability with up to four attractors - the laminar state, a steady arrowhead, EIT and a chaotic arrowhead. The final regime is nearly identical to EIT apart from a weak arrowhead in the centre of the domain. The rest of this paper is structured as follows: In SS2 we present the governing equations and describe the numerical simulations to be conducted. In SS3 we present evidence for the four distinct attractors and draw connections to the results of Dubief _et al._ (2022). In SS4 we look for dynamical connections between the attractors and compute various edge states between them. Finally, conclusions are presented in SS5. ## 2 Formulation and computational details We consider two-dimensional streamwise-periodic flow between two infinite, stationary, rigid walls, separated by a distance \(2h\) and driven by a time-varying pressure-gradient so that the mass flux is constant. The viscoelastic flow is governed by the finite-extensibility nonlinear elastic-Peterlin (FENE-P) model with governing equations \[\partial_{t}\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}+\nabla p =\frac{\beta}{Re}\Delta\mathbf{u}+\frac{(1-\beta)}{Re}\nabla\cdot \mathbf{T}(\mathbf{C}), \tag{1}\] \[\partial_{t}\mathbf{C}+(\mathbf{u}\cdot\nabla)\mathbf{C}+\mathbf{ T}(\mathbf{C}) =\mathbf{C}\cdot\nabla\mathbf{u}+(\nabla\mathbf{u})^{T}\cdot \mathbf{C}+\frac{1}{Re\ Sc}\Delta\mathbf{C},\] (2) \[\nabla\cdot\mathbf{u} =0, \tag{3}\] where \[\mathbf{T}(\mathbf{C}):=\frac{1}{Wi}\left(f(\mathrm{tr}\mathbf{C})\mathbf{C}- \mathbf{I}\right),\quad\text{and}\quad f(x):=\left(1-\frac{x-3}{L_{\max}^{2}} \right)^{-1}. \tag{4}\] We consider two-dimensional flows with \(\mathbf{u}=(u,v)\) denoting the streamwise and wall-normal velocity components, \(p\) the pressure and \(\mathbf{C}\) the positive-definite conformation tensor which represents the ensemble average of the product of the end-to-end vector of the polymer molecules. The parameter \(\beta:=\nu_{s}/(\nu_{s}+\nu_{p})\) denotes the viscosity ratio, with \(\nu_{s}\) and \(\nu_{p}\) the solvent and polymer contributions to the total kinematic viscosity, \(\nu=\nu_{s}+\nu_{p}\) The parameter \(L_{\max}\) is the maximum extensibility of the polymer chains. The equations are made non-dimensional with the half-distance between the plates \(h\) and the bulk velocity \[U_{b}:=\frac{1}{2h}\int_{-h}^{h}u\ dy. \tag{5}\] The non-dimensional Reynolds, \(Re\), and Weissenberg, \(Wi\) numbers are defined as \[Re:=\frac{U_{b}h}{\nu}\quad\text{and}\quad Wi:=\frac{\tau U_{b}}{h}, \tag{6}\] where \(\tau\) denotes the polymer relaxation time. Equation (2.2) has a stress diffusion term which for a realistic polymer solution would take a value \(Sc=O(10^{6})\). However, numerical simulations are typically restricted to much smaller values, \(Sc=O(10^{3})\)(Sid _et al._, 2018; Page _et al._, 2020), and the term itself is treated as regulariser to help maintain positive-definite \(\mathbf{C}\). 
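For concreteness, the constitutive relation (4) and the non-dimensional groups (6) translate directly into a few lines of code. The sketch below is purely illustrative (it is not part of the solver used in this work) and the function and variable names are our own; it evaluates the Peterlin function, the polymer stress \(\mathbf{T}(\mathbf{C})\) for a single conformation tensor, and \(Re\) and \(Wi\).

```python
import numpy as np

def polymer_stress(C, Wi, L_max):
    """FENE-P stress T(C) = (f(tr C) C - I) / Wi with f(x) = (1 - (x - 3)/L_max^2)^(-1)."""
    trC = np.trace(C)
    f = 1.0 / (1.0 - (trC - 3.0) / L_max**2)   # Peterlin function
    return (f * C - np.eye(C.shape[0])) / Wi

def reynolds(U_b, h, nu):
    """Re = U_b h / nu, with nu the total kinematic viscosity."""
    return U_b * h / nu

def weissenberg(tau, U_b, h):
    """Wi = tau U_b / h, with tau the polymer relaxation time."""
    return tau * U_b / h

# Example with an illustrative, mildly stretched conformation tensor
C = np.diag([10.0, 2.0, 1.0])
print(polymer_stress(C, Wi=50.0, L_max=70.0))
```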
With non-zero polymer diffusion we must specify boundary conditions on the polymer conformation. We apply the following equation at the wall: \[\partial_{t}\mathbf{C}+\mathbf{T}(\mathbf{C})=\mathbf{C}\cdot\nabla\mathbf{u}+(\nabla\mathbf{u})^{T}\cdot\mathbf{C}+\frac{1}{Re\;Sc}\partial_{xx}\mathbf{C}, \tag{2.7}\] as previously used in the literature (Dubief _et al._, 2022; Page _et al._, 2020; Buza _et al._, 2022). Note that equation (2.2) does not require boundary conditions in the limit \(Sc\to\infty\) (Sid _et al._, 2018); the boundary conditions (2.7) are chosen so that the distance from the \(Sc\to\infty\) limit is minimized (Sid _et al._, 2018; Dubief _et al._, 2022). ### Numerics The spectral codebase Dedalus (Burns _et al._, 2020) is used to perform direct numerical simulations of equations (2.1)-(2.3). We consider a computational domain of fixed size \([L_{x},L_{y}]=[2\pi,2]\) in units of \(h\). The quantities \(\mathbf{C}\) and \(\mathbf{u}\) are expanded in \(N_{x}\) Fourier modes in the \(x\) direction and \(N_{y}\) Chebyshev modes in the \(y\) direction. Time integration is performed with a 3rd-order semi-implicit BDF scheme (Wang & Ruuth, 2008) with fixed time step. We fix \(Sc=500\) for the majority of simulations unless otherwise indicated. The different numerical solutions have various requirements in terms of resolution. We typically use \([N_{x},N_{y}]=[512,600]\) to simulate travelling waves while higher values of \([N_{x},N_{y}]=[600,800]\) are used to simulate chaotic states, although for some of the higher Weissenberg and higher \(L_{\max}\) cases we have considered \([N_{x},N_{y}]=[800,1024]\). Further increasing \(Sc\) can cause \(\mathbf{C}\) to lose positive definiteness in several locations of the domain, as previously reported by Dubief _et al._ (2022). However, reducing the computational timestep and increasing the resolution can alleviate this. We have checked those results which temporarily lose positive definiteness in certain regions by reducing the time step below \(10^{-5}\) and increasing the resolution to at least \([N_{x},N_{y}]=[2048,2048]\). The increase in spatio-temporal resolution ensured that positive definiteness is recovered while the reported dynamics remain unaltered. ### Elasto-inertial attractors Dubief _et al._ (2022) identified various statistically-steady states in the same channel geometry: the laminar state (L), steady arrowhead (SAR), EIT and a chaotic arrowhead (CAR). Note that Dubief _et al._ (2022) also discuss an intermittent arrowhead state (IAR) which we now believe is actually a weaker version of CAR and not a distinct state. Examples of the three non-trivial attractors are reported in figure 1, where we show contours of the polymer trace \(\mathrm{tr}(\mathbf{C})/L_{\max}^{2}\). The SAR state features a pair of symmetric sheets of polymer extension, which sit close to the channel centreline, bending to meet at \(y=0\). A highly stretched central sheet then extends downstream along the centreline for almost half of the computational domain. Both EIT and CAR show intense polymer stretch in near-wall regions, with many wavy sheets of polymer extension layered on top of one another. The states are visually very similar, though CAR features a weak, distorted arrowhead structure near the centre of the domain. We trigger each of the states discussed above and shown in figure 1 by time-stepping appropriate initial conditions.
The SAR attractor was initially found via nonlinear saturation of the linear centre mode instability as described in Page _et al._ (2020). We found the SAR to always be stable, and were able to obtain this state at other parameter settings by supplying a converged arrowhead obtained nearby in parameter space as an initial condition. We triggered the chaotic states CAR and EIT by applying blowing and suction at the wall, starting from either SAR (to obtain CAR) or the laminar state (to obtain EIT). The blowing and suction is similar to that used in previous studies (Samanta _et al._, 2013; Dubief _et al._, 2013) and takes the form \[v(y=\pm 1)=\mp A\sin\,(2\pi x/L_{x}), \tag{8}\] with \(A=2\times 10^{-3}\). The forcing is active for \(0\leqslant t<3\). Perturbations from the wall were found to be necessary to trigger the self-sustained chaotic states. In contrast, arbitrary perturbations applied in the core of the domain did not trigger chaotic behaviour. ## 3 Multistability of two-dimensional viscoelastic channels In this section we summarise our computations and map out regions of multistability. We also explore the impact of changing the flow parameters on the appearance and statistical properties of the various attractors. Figure 1: Example snapshots of \(\mathrm{tr}(\mathbf{C})/L_{\mathrm{max}}^{2}\) for the states initially explored in Dubief _et al._ (2022). (a) Steady arrowhead regime (SAR) at \(Re=1000\), \(Wi=50\), \(\beta=0.9\), \(L_{\mathrm{max}}=90\), \(Sc=500\), (b) Elasto-inertial turbulence (EIT) at \(Re=1000\), \(Wi=50\), \(\beta=0.9\), \(L_{\mathrm{max}}=70\), \(Sc=500\), (c) Chaotic arrowhead regime (CAR) at \(Re=1000\), \(Wi=50\), \(\beta=0.9\), \(L_{\mathrm{max}}=70\), \(Sc=500\). We will show that these state do not succeed each other but coexist in parameter space. ### Coexistence of attractors in parameter space The parameter space here is 5-dimensional and so a systematic search was impractical. However, a preliminary investigation indicated that \(Wi\) and \(L_{\max}\) were the most important parameters (yielding the most qualitative changes) so they were the focus of the search: see figure 2. Over the range \((Wi,L_{\max})\in[20,70]\times[50,1000]\) at \(Sc=500\), \(\beta=0.9\) and \(Re=1000\), the laminar state is linearly stable. This is consistent with the centre-mode mode instability appearing at slightly higher \(Wi\) (see figure 2 in Page _et al._ (2020) where \(Sc=1000\) was used). However, the consequence of the centre-mode instability - the SAR state - is seen at lower \(Wi\) as the instability is subcritical. The SAR state was found to be an attractor for \(L_{\max}\geqslant 70\) and \(Wi\geqslant 30\) consistent with Page _et al._ (2020). Interestingly, the chaotic arrowhead state (CAR) was only found where SAR also exists and is stable (basically for \(Wi\geqslant 50\) and \(L_{\max}\in[70,120]\)) excluding the possibility of a SAR-to-CAR bifurcation in this (\(Wi,L_{\max}\)) range. EIT was found for \(Wi\in[30,100]\), \(L_{\max}\in[50,130]\) with \(\beta\in[0.9,0.97]\), \(Re\in[900,1200]\) and \(Sc\geqslant 500\). In terms of fig 2, CAR and EIT coexist when \(L_{\max}\) is as low as 50 where only EIT and the laminar state were simulated. Attempts to simulate SAR and CAR for this value of \(L_{\max}\) were prohibitively expensive computationally due to the loss of positive definiteness and high spatio-temporal resolution required. 
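As an aside, the loss of positive definiteness mentioned above is simple to monitor pointwise. The following is an illustrative check (not the diagnostic used in our code), assuming the four stored components \(C_{xx}\), \(C_{yy}\), \(C_{zz}\) and \(C_{xy}\) of the conformation tensor on a two-dimensional grid; it applies Sylvester's criterion to the block-diagonal tensor.

```python
import numpy as np

def positive_definite_fraction(Cxx, Cyy, Czz, Cxy):
    """Fraction of grid points where [[Cxx, Cxy, 0], [Cxy, Cyy, 0], [0, 0, Czz]]
    is positive definite (Sylvester's criterion on the leading principal minors)."""
    ok = (Cxx > 0) & (Cxx * Cyy - Cxy**2 > 0) & (Czz > 0)
    return ok.mean()

# Illustrative fields on an Nx x Ny grid
rng = np.random.default_rng(0)
Nx, Ny = 600, 800
Cxx = 1.0 + rng.random((Nx, Ny))
Cyy = 1.0 + rng.random((Nx, Ny))
Czz = 1.0 + rng.random((Nx, Ny))
Cxy = 0.1 * rng.standard_normal((Nx, Ny))
print(positive_definite_fraction(Cxx, Cyy, Czz, Cxy))
```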
The general conclusion from figure 2 is that the nonlinear states first reported in (Dubief _et al._, 2022) - SAR, CAR & EIT - coexist in parameter space rather than succeeding each other as attractors. The latter scenario would suggest dynamical connections between the Figure 2: Summary of computations and the attractors found over parameter space. Blue circles indicate that only the laminar (L) state was found as an attractor, orange stars - the steady arrowhead (SAR) and L coexist as attractors, light blue squares - L and EIT co-exist as attractors, and red triangles - L, EIT, SAR and the chaotic arrowhead (CAR) all co-exist as attractors. For the main plot, \(Sc=500\), \(\beta=0.9\) and \(Re=1000\) while for the inset \(Wi=50\), \(\beta=0.9\) and \(Sc=500\) again. At \(L_{\max}=50\) only EIT and L were explored as attractors as SAR/CAR become prohibitively expensive computationally. states in which one loses stability to another but this seems not to be the case at least in the parameter ranges considered. The fact that SAR and CAR coexist as attractors for the parameters considered is particularly surprising as CAR plausibly looks like the indirect result of a bifurcation off SAR. ### Distinguishing between CAR and EIT Figure 1 shows that the CAR and EIT states look very similar, so developing a quantitative measure to distinguish them is important. Another issue is whether either state is just a long-lived transient. For example, does CAR eventually evolve into the EIT state? This latter question is impossible to answer definitively with finite-time computations but what can be said is that over the course of our simulations (some of duration over 1000 \(h/U_{b}\)), CAR never collapsed. The defining feature of CAR is the mixture of an arrowhead structure at the midplane with the chaotic stretched polymer sheets towards the walls which characterise EIT. A quantity well suited to picking the former feature out is \[C_{\text{grad}}:=\frac{1}{L_{x}}\int\,|\partial_{x}C^{\prime}_{kk}(x,y=0)|dx, \tag{1}\] which is the streamwise-averaged gradient magnitude of the perturbation over the laminar state trace along the centreline. We also use the \(L_{2}\)-norm of the velocity difference from the Figure 3: (a) \(C_{\text{grad}}/L_{\text{max}}^{2}\)_vs._\(TKE_{L}\) as defined in the main text for EIT (red) and CAR (orange) identified at \(Re\)=1000, \(Wi\)=50, \(L_{\text{max}}=70\), \(\beta=0.9\), \(Sc=500\) for a finite time interval \(T\approx 1000\). (b) Projection of the same EIT trajectory (red) and CAR (orange) onto the TS mode _vs._ the projection onto the centermode. The figures present observables to show that EIT and CAR are two separate attractors. (c) trace of the centermode for the aforementioned parameters and \(k_{x}=1\). (d) idem for the TS mode which becomes unstable at sufficiently large \(Re\). Note that the projection of the TS mode is much smaller than that of the centermode due to the smaller spatial extension of \(\text{tr}(\mathbf{C})\), which is the largest term in the corresponding eigenmode. laminar flow - a turbulent kinetic energy \[\mathrm{TKE}_{L}\coloneqq\frac{1}{2L_{x}}\int_{\Omega}\left(\mathbf{u}-\mathbf{u}_{\mathbf{L}}\right)^{2}d\Omega, \tag{3}\] to compare CAR and EIT. Figure 3 (a) shows the two-dimensional probability density function (PDF) over these two quantities for the CAR and EIT states collected over a 1000 \(h/U_{b}\) time period.
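Both observables are straightforward to evaluate from snapshot data. The sketch below is illustrative only: it assumes a uniform streamwise grid, a simple quadrature and arrays shaped (Nx, Ny); the function names are ours.

```python
import numpy as np

def c_grad(Ckk_centre, Ckk_lam_centre, dx, Lx):
    """Streamwise-averaged |d/dx| of the perturbation trace on the centreline (C_grad above)."""
    pert = Ckk_centre - Ckk_lam_centre
    return np.sum(np.abs(np.gradient(pert, dx))) * dx / Lx

def tke_l(u, v, u_lam, dx, dy, Lx):
    """(1 / 2Lx) times the domain integral of |u - u_L|^2 (TKE_L above).
    u, v have shape (Nx, Ny); u_lam is the laminar profile u_L(y) of shape (Ny,)."""
    du = u - u_lam                       # broadcast the laminar profile over x
    return 0.5 * np.sum(du**2 + v**2) * dx * dy / Lx
```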
The turbulent kinetic energy of EIT and CAR are very similar but, as expected, \(C_{\mathrm{grad}}\) is much larger for CAR than EIT. Another differentiator between EIT and CAR is the result of projecting onto the eigenmodes of the symmetric centre-mode (CM) and the antisymmetric Tollmien-Schlichting (TS) mode as follows \[\langle\,\phi_{j}^{\dagger},\phi\,\rangle=\frac{1}{2}\int_{-1}^{1}\phi_{j}^{ \dagger*}\phi\,dy, \tag{4}\] with \[\phi(y)\coloneqq\frac{1}{L_{x}}\int_{0}^{L_{x}}\varphi(x,y)e^{ix}dx, \tag{5}\] where \(j=\{\mathrm{CM,TS}\}\), \(\varphi=[u^{\prime},v^{\prime},p^{\prime},C_{xx}^{\prime},C_{yy}^{\prime},C_{ zz}^{\prime},C_{xy}^{\prime}]\) is the perturbation to the laminar state, and \(\phi\) denotes the projection onto the \(k_{x}=1\) mode (\({}^{*}\) denotes complex conjugate and \({}^{\dagger}\) the adjoint). Figure 3 (b) shows that this projection for the same perturbation trajectories used in figure 3 (a) produces the same desired separation. The projection onto the centre-mode is much larger than the TS mode one for both chaotic states. This is caused by the fact that the trace of the conformation tensor \(\mathbf{C}\) in the TS eigenmodes has a much larger amplitude than the other components, but its spatial extension is significantly smaller: see figure 3 (c) and (d). ### Effect of varying \(Wi\) and \(L_{\mathrm{max}}\) The kinetic energy of the steady arrowhead state, SAR, increases for increasing \(L_{\mathrm{max}}\), in line with the results previously reported (Dubief _et al._, 2022; Buza _et al._, 2022). Figure 4 (a) shows the time series of the volume-averaged trace for CAR corresponding to \(L_{\mathrm{max}}=\{70,90,110\}\) at \((Wi,Re,Sc,\beta)=(50,1000,500,0.9)\) and \(L_{\mathrm{max}}=130\) at \((Wi,Re,Sc,\beta)=(50,1000,1000,0.9)\) where the change in \(Sc\) was necessary to maintain chaotic dynamics (ditto for the corresponding SAR). The figure shows that the CAR states undergo periods of calmer, less energetic dynamics alternating with more active periods. The duration of the calmer events increases with \(L_{\mathrm{max}}\) as shown by the increasing distance between peaks of the time series. This behaviour indicates a continuous transition between the previously reported chaotic arrowhead and intermittent arrowhead regimes, leading to the conclusion that these states are two ends of the same attractor, hereafter labelled CAR. The intermittent arrowhead state discussed in Dubief _et al._ (2022) is simply a CAR state where the calm phases dominate the chaotic dynamics which occurs as \(L_{\mathrm{max}}\) gets large for example. The effect of varying \(L_{\mathrm{max}}\) on the EIT states can be seen in figure 5 where the lengthscales in instantaneous snapshots increase with \(L_{\mathrm{max}}\). This is further supported by considering \(\mathrm{tr}(\mathbf{C})\) at any arbitrary horizontal line (\(y=-0.6\) in this case), which is shown in Figure 5(d) and its Fourier transform in Fig 5(e). The latter figure shows that for increasing \(L_{\mathrm{max}}\), the energy content in the larger scales (smaller wavenumbers) is increased. The effect of increasing \(Wi\) on the EIT state is to make the polymer sheets more undulating spatially and temporally: see figure 6. Increasing \(Wi\) also intensifies the polymer layers which reach closer to the centerline. The same trends were also observed for decreasing \(L_{\mathrm{max}}\) and are found also for the CAR state. 
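The wavenumber content shown in figure 5(e) can be extracted with a one-dimensional Fourier transform of the trace along a fixed wall-normal location. A minimal, illustrative sketch (assuming a uniform streamwise grid; names are ours):

```python
import numpy as np

def streamwise_spectrum(trC_line):
    """Amplitude spectrum of tr(C) sampled along a line y = const.
    Returns integer wavenumbers (multiples of 2*pi/Lx) and normalised |FFT| amplitudes."""
    n = trC_line.size
    amps = np.abs(np.fft.rfft(trC_line)) / n
    return np.arange(amps.size), amps

# Example: pick the grid line closest to y = -0.6 from a snapshot trC of shape (Nx, Ny)
# j = np.argmin(np.abs(y + 0.6))
# k, amps = streamwise_spectrum(trC[:, j])
```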
A discussion about the evolution of the SAR states with \(Wi\) can be found in Buza _et al._ (2022). Figure 4: (a) Timeseries corresponding to several chaotic arrowheads (solid) and the corresponding SAR (dashed) at _Re_= 1000, _Wi_= 50, \(\beta=0.9\) with \(Sc=500\) for \(L_{\text{max}}=\{110,90,70\}\) (second top to bottom) and \(Sc=1000\) for \(L_{\text{max}}=130\) (top). The figure shows how the duration of the calm-active phases becomes longer with increasing \(L_{\text{max}}\), i.e. the peaks of \(\text{tr}(\mathbf{C})\) become more separated in time. This shows that the intermittent arrowhead regime (IAR) and CAR reported in Dubief _et al._ (2022) are smoothly connected and so correspond to the same attractor. Figure 5: Left: Snapshots of \(\text{tr}(\mathbf{C})/L_{\text{max}}^{2}\) of EIT with varying \(L_{\text{max}}\) for fixed _Re_= 1000, _Wi_= 50, \(\beta=0.9\), (a) \(L_{\text{max}}=50\). (b) \(L_{\text{max}}=70\). (c) \(L_{\text{max}}=90\) and \(Sc=500\) for all cases. Right: (d) \(\text{tr}(\mathbf{C})/L_{\text{max}}^{2}\) along the arbitrarily chosen line \(y=-0.6\) for \(L_{\text{max}}=50\) (red), \(L_{\text{max}}=70\) (orange), \(L_{\text{max}}=90\) (blue). (e) Fourier transform of \(L_{\text{max}}\) for the lines in the top right figure illustrating how the lengthscales in the flow increase with \(L_{\text{max}}\), i.e. smaller \(L_{\text{max}}\) shows greater amplitudes in the lower wavenumber modes. The vertical lines indicate the wavenumbers corresponding to the two smallest wavenumbers (apart from 0) for each \(L_{\text{max}}\) above. ### Effect of varying \(Re\), \(\beta\) and \(Sc\) EIT and CAR remain robust as the Reynolds number is increased away from where they first appear in parameter space. As an example, figure 7(right) shows the CAR for three different \(Re=\{900,1100,1200\}\), while keeping the remaining parameters fixed. The intensity of the dynamics increases while the arrowhead persists at the centreline, consistent with Dubief _et al._ (2022). Increasing the polymer concentration, \(\beta\), also intensifies the chaotic dynamics present. Figure 7 (left) shows CAR for \(\beta=\{0.9,0.95,0.97\}\) for fixed \(Re=1000,\ Wi=50,\ L_{\text{max}}=70,\ \text{and}\ Sc=500\). A larger \(\beta\) leads to more active chaotic dynamics. Steady arrowhead states (SAR) have been observed at values as low as \(\beta\approx 0.5\)(Dubief _et al._ 2022; Morozov 2022; Buza _et al._ 2022_a_), whereas we have found that chaotic states (EIT and CAR) cannot be sustained for values below \(\beta\approx 0.8\) (using \(Re=1000\), \(L_{\rm max}=70\), \(Wi=50\) and \(Sc=500\)). The majority of the results presented here were computed using \(Sc=500\) as a compromise between including a vanishingly small real diffusion (see e.g. El-Kareh & Leal 1989) and enough diffusion to numerically stabilise the time-stepping spectral code at the resolutions used. The value of \(Sc=500\) was also selected as the best match to the previous finite-difference computations reported in Dubief _et al._ (2022) and Page _et al._ (2020) where a value of \(Sc=1000\) was taken (finite difference codes already have some implicit numerical diffusion so less needs to be added explicitly to stabilise time-stepping compared to a spectral code). Even then, the EIT reported in fig 2 of Page _et al._ (2020) (the red square at \(Wi=20\), \(\beta=0.9\), \(L_{\rm max}=500\) and \(Re=1000\)) could only be recovered by using neighbouring parameter values \(Wi=30\), \(\beta=0.9\), \(L_{\rm max}=120\) and \(Re=1000\). 
Runs were also carried out with \(Sc=150\) and \(1000\) which confirmed that all 4 states (EIT, CAR, SAR and L) as well as their coexistence are robust. The exact parameter limits for their coexistence, however, do depend on \(Sc\) and taking \(Sc\leqslant 50\) killed the chaotic states. ## 4 Dynamic connections between attractors The goal of this section is to explore how the various attractors - L, SAR, CAR and EIT - are organised in state space. Of primary concern is identifying which states share basin boundaries and which do not. The physical features present in the different states, such as the presence of a polymer sheet across the midplane or the undulations of polymer sheets closer to the wall, are common to several of the states identified. It is therefore natural to ask how transitions can occur between them and how they come into existence as the parameters are varied. As an initial check, we first examined the linear stability of the SAR state which results from the centre-mode instability found by Garg _et al._ (2018) in a pipe and Khalid _et al._ (2021_a_) in a channel. This bifurcation is generally subcritical in both \(Re\) and \(Wi\)(Wan _et al._ 2021; Buza _et al._ 2022_b_) with the steady arrowhead solution (SAR) emerging as the upper branch solution (Page _et al._ 2020; Buza _et al._ 2022\(a\); Morozov 2022). We examined the two-dimensional linear stability of the SAR states performing a global stability analysis using an implicitly-restarted Arnoldi method (Sorensen 1992; Bagheri _et al._ 2009). The linear stability analysis was carried out in the frame travelling with the speed of the SAR, where the state corresponds to a fixed point (the perturbation was represented by \(N_{x}=64\) streamwise and \(N_{y}=512\) wall-normal modes). All the SAR states tested were found to be linearly stable to 2-dimensional perturbations consistent with the time-stepping numerics. Interestingly, while this work was being performed, another group have found that the SAR state is, however, linearly unstable to 3-dimensional perturbations where there is a non-vanishing spanwise wavenumber (Lellep _et al._ 2023). As the laminar state is also linearly stable over the parameter space being considered, the transition between the SAR, L and the other chaotic states must then be through finite amplitude perturbations. To shed some light on this, the saddle states lying in the boundaries between the basins of attraction of the aforementioned attractors, i.e. the edge states. are considered below. ### Edge states Edge states are attracting states on the edge manifold, a codimension one manifold lying in the boundary between different basins of attraction. Edge states are thus helpful to shed light on the global structure of the state space (Skufca _et al._ 2006; Schneider & Eckhardt 2006; Duguet _et al._ 2008). These states can be identified by the so-called classical edge tracking algorithm based on threshold attainment of a key observable of the flow (Itano & Toh 2001; Skufca _et al._ 2006). The choice of an observable to uniquely label trajectories as lying within a certain basin of attraction is not straightforward in viscoelastic flows as discussed above in SS3.2. The choice used here is the \(L_{2}\)-norm of the vertical velocity, \(||v||^{2}\), which is zero for the laminar state L. Edge tracking was then performed between (i) EIT and L (shown in figure 8 as a blue line) and (ii) EIT and SAR (shown in figure 8 as a red line). 
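The bracketing step at the heart of the edge-tracking algorithm amounts to a bisection in the amplitude of an interpolation between a relaminarising and a chaos-seeding initial condition. The sketch below is schematic: `evolve` stands in for the DNS time-stepper, `observable` for \(||v||^{2}\), and the thresholds are placeholders; in practice undecided trajectories are integrated further rather than classified immediately.

```python
import numpy as np

def edge_bisection(state_lam, state_turb, evolve, observable,
                   T, lo_thresh, hi_thresh, tol=1e-6):
    """One bracketing stage of classical edge tracking (Itano & Toh 2001; Skufca et al. 2006).

    state_lam / state_turb : flattened field vectors that relaminarise / become chaotic
    evolve(state, T)       : advances a state by time T (placeholder for the DNS)
    observable(state)      : scalar used to classify the outcome, e.g. ||v||^2
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        trial = (1.0 - lam) * state_lam + lam * state_turb   # linear interpolation
        value = observable(evolve(trial, T))
        if value < lo_thresh:
            lo = lam                # trial relaminarised: below the edge
        else:
            hi = lam                # trial became (or stayed) chaotic: above the edge
    return lo, hi                   # trajectories started in (lo, hi) shadow the edge
```

Restarting the bracketing from the endpoints of the two closest trajectories, and repeating, keeps the computation shadowing the edge trajectory for long times.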
The use of \(||v||^{2}\) was not able to distinguish between trajectories belonging to CAR or to EIT, as discussed in Figure 3. The existence of an edge manifold between CAR and EIT can still be explored by probing the state space with specific trajectories and assessing whether an arrowhead structure survives or not after sufficiently long time but this is a laborious process. The simplest edge state identified is the 'lower branch' unstable SAR between the 'upper branch' stable SAR and L (Buza _et al._ 2022_a_). Figure 9(a) shows the time series of the edge tracking between EIT and L and Figure 9(c) shows a snapshot of this trajectory, a weakly chaotic state characterised by polymer layers located at \(y\approx\pm[0.75,0.85]\). The edge state reveals the significance of the polymer layers located close to the walls, as they are responsible for the self-sustained chaotic dynamics within the edge. Furthermore, these layers have been observed during the calm phases of both EIT and CAR, suggesting that they related to the driving mechanism for elasto-inertial turbulence. The edge state between CAR and L can be compared with the edge state found between EIT and SAR, figure 9(b) and figure 9(d), which also corresponds to a weakly chaotic state characterised by polymer layers located at \(y\pm[0.75,0.85]\) with the presence of an arrowhead structure in the centre of the channel. The results of the edge tracking suggest an organisation of state space sketched in Figure 8 Figure 8: Sketch of the state space configuration. The four quadrants represent the basins of attraction corresponding to the states EIT, CAR, SAR, L. The solid lines emanating from the states represent trajectories approaching and departing different regions of the state space. The thick lines indicate the edge tracking carried out: between EIT and L (blue), between EIT and SAR (red) (see figure 9). The edge states resulting from the bisection algorithm are framed with the same colour. The chaotic attractors undergo calm and active phases (see figure 4) and approach the edge states during the calm phase. over a 2D plane of wall activity against centre-mode projection. Here each state is shown within its basin of attraction and the chaotic states are shown approaching their corresponding edge states in their calm phases. Our calculations suggest that there could be an intersection between the basins of attraction of EIT, CAR, SAR and L but the saddle state residing here is not computable using bisection since it must have two unstable directions. ### Kinetic-to-elastic energy transfer Our calculations of chaotic states highlight the importance of polymer activity at the walls. Here we identify the key location where kinetic energy is transferred to elastic polymer energy. The energy transfer flux from the perturbation kinetic energy to the perturbation elastic energy is, \[\Pi^{\prime}_{e}\coloneqq\frac{1-\beta}{Re}T^{\prime}_{ij}S^{\prime}_{ij} \tag{10}\] (e.g. see equations (9)-(12) in Dubief _et al._2022) where primed variables indicate perturbations from the mean turbulent state, and the dissipation rate of TKE, \(\varepsilon:=\frac{\beta}{Re}\partial_{j}u^{\prime}_{i}\partial_{j}u^{\prime}_ {i}\), defines the Kolmogorov lengthscale, \[\eta_{K}:=\left[\frac{(\beta/Re)^{3}}{\bar{\varepsilon}}\right]^{1/4}. 
\tag{11}\] Figure 10 shows the instantaneous cospectra of \(\Pi^{\prime}_{e}\) and the corresponding instantaneous Figure 9: Top edge tracking for \(Re=1000\), \(Wi=50\), \(\beta=0.9\), \(L_{\max}=70\), \(Sc=500\): (a) between EIT and L (the green edge trajectory is bracketed by red trajectories approaching EIT and blue trajectories relaminarising to L); and (b) between EIT and SAR (the green edge trajectory is bracketed by red trajectories approaching CAR instead of EIT and blue trajectories approaching SAR. Bottom: (c) snapshot of \(\mathrm{tr}(\mathbf{C})/L_{\max}^{2}\) of the edge trajectory in (a) at \(t=400\) which shows a strong polymer layer at \(y\approx\pm\{0.75,0.85\}\). Plot (d) repeats this for (b). Figure 8 explains how it is possible to reach the CAR edge state starting from a bisection between EIT and SAR. field of \(\text{tr}(\mathbf{C})/L_{\text{max}}^{2}\) for each of EIT and CAR when they are on the verge of a high-activity phase. These cospectra highlight that as the trajectories depart from their calm phases, the largest rate of energy exchange from the kinetic energy to the elastic energy, i.e. maximal \(\Pi_{e}^{\prime}\), takes place at a location \(y\approx[0.75,0.85]\) in (a) and \(y\approx[-0.75,-0.85]\) in (b). This corresponds to the location of the polymer layers harvesting kinetic energy to build self-sustained chaotic dynamics. It is also interesting to note that the main exchange from elastic energy to kinetic energy happens in the dark regions of figure 10. As can be observed throughout the various figures in this work, this region supports the larger scale motions during the observed self-sustained chaotic process. Moreover, these polymer sheets experience the same kind of undulation for the edge states as for the complex chaotic attractors when departing from the calm phases. The time-averaged cospectra (shown in figure 10 (e) and (f)) confirm the importance of the energy exchange in the neighbourhood of the wall \(y\approx[0.75,0.85]\) As expected, the time-averaged cospectra for EIT and CAR are very similar as the main energy exchange driving the chaotic dynamics is located in the same region (Dubief _et al._, 2022). ## 5 Discussion In this paper we have carried out a suite of 2-dimensional simulations of viscoelastic channel flow to explore where the various states described in Dubief _et al._ (2022) exist in \((Wi,Re,\beta,L_{\text{max}},Sc)\) parameter space. A fully spectral code using the FENE-P model has been used to confirm the existence of 4 distinct states: the laminar state, L, the steady arrowhead solution, SAR, a chaotic arrowhead, CAR, and elasto-inertial turbulence, EIT Figure 10: (a) Cospectra between the perturbation kinetic energy to the perturbation elastic energy for EIT at \(Re=1000,\;Wi=50,\;\beta=0.9,\;L_{\text{max}}=70,\;Sc=500\) as a function of the wall-normal coordinate \(y\) just before an active phase. The streamwise wavenumber \(k_{x}\) is normalised with the minimum mean Kolmogorov lengthscale. The white dashed line is at \(y=0.75\). (c) Instantaneous snapshots of \(\text{tr}(\mathbf{C})/L_{\text{max}}^{2}\) corresponding to the cospectra in (a). (b) same as (a) but for a snapshots of CAR at the same parameters. (d) Instantaneous snapshot of of \(\text{tr}(\mathbf{C})/L_{\text{max}}^{2}\) corresponding to the cospectra in (b). (e) Mean cospectra for same EIT as (a). (f) Idem for CAR in (b). 
The figure illustrates how the energy exchange ahead of an active phase occurs at polymer layers located at \(y\approx\pm[0.75,0.85]\). (the intermediate arrowhead state, IAR, of Dubief _et al._ (2022) has been clarified as a CAR state where calm periods dominate over the chaotic dynamics). EIT has been found for \((Wi,Re,\beta,L_{\max},Sc)\in[30,100]\times[900,1200]\times[0.9,0.97]\times[50,130] \times[500,\infty)\) with increasing \(Wi\), \(Re\) and \(\beta\) and decreasing \(L_{\max}\) intensifying the chaotic behaviour. Small \(Sc\) values of \(\approx 50\) suppress the chaotic dynamics, while larger values of \(Sc\) allow the chaos to exist in a greater region of parameter space. The most significant finding, however, is that there is a substantial set of parameter values (shown in figure 2) where all 4 states co-exist as attractors. This contrasts with the classic'supercritical' scenario where a succession of unique attractors appear of increasing complexity as parameters are changed to make the flow more unstable (e.g. increasing \(Wi\) or decreasing \(L_{\max}\)). In particular, no evidence has been found that a bifurcation off the SAR leads ultimately to either CAR or EIT (at least in 2D) as hypothesized after the recent discovery of the centre-mode instability (e.g. see Garg _et al._ 2018; Page _et al._ 2020; Khalid _et al._ 2021\(a\); Shankar & Subramanian 2022; Datta _et al._ 2022). It may well be that such a subcritical bifurcation sequence exists at, for example, higher \(Wi\) or lower \(L_{\max}\) beyond the region of multistability. Our results do not go high enough in \(Wi\) nor low enough in \(L_{\max}\) to see this. In terms of polymer concentration, SAR has been found as low as \(\beta=0.5\) but remains stable even when chaotic dynamics emerges for \(\beta\geqslant 0.9\). To further probe the connection between the various states, various edge states were identified between pairs of attractors, and used to sketch the relative locations of the states in phase space. As expected, the edge state between SAR and L is the unstable 'lower branch' SAR found in Buza _et al._ (2022\(b\),_a_) while the edge states between CAR and L, and between EIT and SAR correspond to weakly chaotic states. The chaotic edge states reveal the presence of unstable polymer layers at \(y\approx\pm[0.75,0.85]\), qualitatively similar to the edge states between CAR and L, and between EIT and SAR. By examining the energy transfer flux, these near-wall polymer layers were found to be where the dominant energy transfer occurs from the velocity field to the polymers which seems fundamental for the self-sustained chaotic dynamics. In contrast the chaotic flow appeared to be insensitive to the arrowhead structure populating the centerline region. This then further suggests that the chaotic dynamics is not related to the centre-mode instability or its arrowhead manifestation but is more a wall-focussed phenomenon. The conclusion of the present study is then that the 2D linear instability discovered by Garg _et al._ (2018) in pipe flow and Khalid _et al._ (2021_a_) in a channel, and the resulting arrowhead structure (Page _et al._ 2020; Morozov 2022; Buza _et al._ 2022_a_), appear dynamically disconnected from EIT at least in the 2 dimensions studied here. Instead, our study suggests that to trigger any chaotic motion, it is necessary to excite polymer layers located at \(y\approx\pm[0.75,0.85]\) from the wall. 
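For reference, the transfer flux \(\Pi^{\prime}_{e}\) defined above is easy to evaluate once the fluctuating polymer stress and rate of strain are available. The sketch below is illustrative: it assumes the primed fields have already been formed by subtracting the mean turbulent state, uses finite differences on a uniform grid in place of the spectral derivatives of the actual code, and the names are ours.

```python
import numpy as np

def energy_transfer_flux(Txx, Tyy, Txy, u, v, dx, dy, beta, Re):
    """Pi'_e = (1 - beta)/Re * T'_ij S'_ij on a 2D grid, with S'_ij the
    fluctuating rate-of-strain tensor formed from the fluctuating velocity (u, v)."""
    dudx = np.gradient(u, dx, axis=0)
    dudy = np.gradient(u, dy, axis=1)
    dvdx = np.gradient(v, dx, axis=0)
    dvdy = np.gradient(v, dy, axis=1)
    Sxx, Syy = dudx, dvdy
    Sxy = 0.5 * (dudy + dvdx)
    return (1.0 - beta) / Re * (Txx * Sxx + Tyy * Syy + 2.0 * Txy * Sxy)
```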
Recent work discussing viscoelastic TS waves (Shekar _et al._ 2020, 2021) suggests a plausible mechanism as polymer stretch is found localized at the near-wall critical layer of the TS waves. Another possibility is the very recent discovery of a wall-localised linear instability (Beneitez _et al._ 2022). Clearly, further efforts are needed to untangle the mechanism leading to EIT but now this can be focussed on near-wall processes. Declaration of Interests. The authors report no conflict of interest. Acknowledgements. The authors are grateful to EPSRC for supporting this work via grant EP/V027247/1. YD also thanks the support of the National Science Foundation CBET (Chemical, Bioengineering, Environmental and Transport Systems) through award 1805636.
2306.07842
PSSTRNet: Progressive Segmentation-guided Scene Text Removal Network
Scene text removal (STR) is a challenging task due to the complex text fonts, colors, sizes, and background textures in scene images. However, most previous methods learn both text location and background inpainting implicitly within a single network, which weakens the text localization mechanism and makes a lossy background. To tackle these problems, we propose a simple Progressive Segmentation-guided Scene Text Removal Network(PSSTRNet) to remove the text in the image iteratively. It contains two decoder branches, a text segmentation branch, and a text removal branch, with a shared encoder. The text segmentation branch generates text mask maps as the guidance for the regional removal branch. In each iteration, the original image, previous text removal result, and text mask are input to the network to extract the rest part of the text segments and cleaner text removal result. To get a more accurate text mask map, an update module is developed to merge the mask map in the current and previous stages. The final text removal result is obtained by adaptive fusion of results from all previous stages. A sufficient number of experiments and ablation studies conducted on the real and synthetic public datasets demonstrate our proposed method achieves state-of-the-art performance. The source code of our work is available at: \href{https://github.com/GuangtaoLyu/PSSTRNet}{https://github.com/GuangtaoLyu/PSSTRNet.}
Guangtao Lyu, Anna Zhu
2023-06-13T15:20:37Z
http://arxiv.org/abs/2306.07842v1
# PSSTRNET: Progressive Segmentation-Guided Scene Text Removal Network ###### Abstract Scene text removal (STR) is a challenging task due to the complex text fonts, colors, sizes, and background textures in scene images. However, most previous methods learn both text location and background inpainting implicitly within a single network, which weakens the text localization mechanism and makes a lossy background. To tackle these problems, we propose a simple Progressive Segmentation-guided Scene Text Removal Network(PSSTRNet) to remove the text in the image iteratively. It contains two decoder branches, a text segmentation branch, and a text removal branch, with a shared encoder. The text segmentation branch generates text mask maps as the guidance for the regional removal branch. In each iteration, the original image, previous text removal result, and text mask are input to the network to extract the rest part of the text segments and cleaner text removal result. To get a more accurate text mask map, an update module is developed to merge the mask map in the current and previous stages. The final text removal result is obtained by adaptive fusion of results from all previous stages. A sufficient number of experiments and ablation studies conducted on the real and synthetic public datasets demonstrate our proposed method achieves state-of-the-art performance. The source code of our work is available at: [https://github.com/GuangtaoLyu/PSSTRNet](https://github.com/GuangtaoLyu/PSSTRNet). Guangtao Lyu, Anna Zhu*+School of Computer Science and Artificial Intelligence, Wuhan University of Technology, China [email protected] Footnote †: Corresponding author Scene text removal, segmentation, image inpainting, progressive process ## 1 Introduction Scene text contains quite a lot of sensitive and private information. To prevent the private information in images from being used illegally, scene text removal(STR) is proposed to address this issue. The well-known Pix2Pix[1] which used patch-GAN for image translation can be applied to the STR task. So, Scene Text Eraser(STE)[2] adopted its idea and used a single-scaled sliding window to remove the text in each patch independently. This method processed STR locally without considering the global context information. EnsNet[3] developed several designed loss functions and a lateral connection to further enhance the STR performance. However, these single stage-based STR methods may modify non-text pixels in images and result in excessive or inadequate inpainting results. MTRNet[4] employed conditional GAN and used the text segmentation results for inpainting region guidance. EraseNet[5] used a text detection branch to locate text regions and remove text from coarse to fine. However, the STR performance of those methods relay heavily on only one-shot text segmentation results. PERT[6] performed multi-stage text erasure in a progressive way[7] with explicit text region guidance. However, it could not get more accurate text regions in iteration stages, and the network was difficult to train. In this paper, we propose a Progressive Segmentation-guided Scene Text Removal Network (PSSTRNet) with very low computation costs. It is built on a very simple and small network, which has one feature-sharing encoder and two decoders to generate text segmentation and removal results individually. However, we find that single forward computing generates very coarse STR results. So, we input the text removal image to the network again to yield refined results progressively. 
A Mask Update module is added in the text segmentation branch for generating more precise text segmentation results. Additionally, we design an adaptive fusion method to make full use of the results of different iterations. We conducted sufficient experiments on two datasets: SCUT-EnsText[5] and SCUT-syn[3]. Both the qualitative and quantitative results indicate that PSSTRNet can outperform previous state-of-the-art STR methods. We summarize the contributions of this work as follows: * We propose a novel STR network termed PSSTRNet. It decomposes the challenging STR task into two simple subtasks and processes text segmentation and background inpainting progressively. * We design a Mask Update module and an adaptive fusion strategy to make full use of results from different iterations. * Our proposed PSSTRNet is light-weighted and achieves SOTA quantitative and qualitative results on public synthetic and real scene datasets. ## 2 Proposed Method ### Overall pipeline As shown in Fig.1, the pipeline of our model consists of two branches: the text segmentation branch and the text removal branch. They share a lightweight encoder with 5 residual convolutional layers. PSSTRNet implements text segmentation and erasing process on the previous results iteratively, and merges all the results in each iteration as final output adaptively. ### Text Segmentation Branch The text segmentation branch contains a Text Region Positioning module (i.e., TRPM), an upsampling process, and a Mask Updating module(i.e. MUM). To reduce the computational cost, TRPM is designed to locate the unremoved text regions in text removal images from the output of the previous iteration. It outputs the expansion text mask with 1/4 size of the original image. Then, this mask goes through the upsampling process by two bilinear interpolations to get the mask owning the same size as the origin image. The size recovered expansion text mask is denoted as \(M^{i}_{temp}\) (\(M^{i}_{temp}\in[0,1]\), 1 for text region and 0 for non-text region). With the input of the previous text mask \(M^{i-1}\) and \(M^{i}_{temp}\), MUM updates \(M^{i-1}\) and outputs the final text mask map(\(M^{i}\)) in \(i_{th}\) iteration. It includes a Merging block and a Correcting block. The Merging block merges \(M^{i}_{temp}\) and \(M^{i-1}\) through Eq.(1) to get a more complete text mask map \(M^{i}_{comp}\). \[M^{i}_{comp}=max(M^{i}_{temp},M^{i-1}) \tag{1}\] In Correcting block[8], \(M^{i}_{comp}\) is first multiplied with the origin text image \(I_{in}\), to generate the text-attentive features \(I_{t}\) and the background-attentive features \(I_{b}\), respectively. Then, we feed these two types of features into two parallel context exploration (CE) blocks to perform contextual reasoning for discovering the false-positive distractions \(I_{fp}\) and the false-negative distractions \(I_{fn}\), respectively. The CE block consists of four dilation convolutions with different dilation rates of 1, 2, 3, 5. The outputs of all the four dilation convolutions are concatenated and then fused via a 1\(\times\)1 convolution. Using such a design, the CE block gets the capability of exploring abundant context over a wide range of scales and thus can be used for context reasoning. 
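A minimal PyTorch sketch of a context exploration (CE) block as just described is given below. It is illustrative rather than the authors' implementation: the 3x3 kernel size, per-branch channel width and activation placement are our assumptions, while the four dilation rates (1, 2, 3, 5), the concatenation and the 1x1 fusion follow the text.

```python
import torch
import torch.nn as nn

class CEBlock(nn.Module):
    """Context exploration block: four parallel dilated convolutions
    (dilation rates 1, 2, 3, 5) whose outputs are concatenated and
    fused by a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in (1, 2, 3, 5)
        ])
        self.fuse = nn.Conv2d(4 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

# Example: y = CEBlock(64)(torch.randn(1, 64, 64, 64))  # spatial size preserved
```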
After context reasoning, we can correct the mask in the following way: \[\begin{split} I_{in}&=\mathrm{NR}(I_{in}-\alpha I _{fp}),\\ I_{in}&=\mathrm{NR}(I_{in}+\beta I_{fn})),\\ M^{i}&=\sigma(I_{in}).\end{split} \tag{2}\] where \(\alpha\) and \(\beta\) are the learnable parameters that are initialized as 1. NR is batch normalization and ReLU operation. \(\sigma\) is the sigmoid function. We use the element-wise subtraction operation to restrain the ambiguous backgrounds (i.e., false-positive distractions) Figure 1: The overall architecture of PSSTRNet. TSB is the text segmentation branch that outputs a mask with \(\frac{1}{4}\) size of input text image \(I_{in}\). The mask is further corrected in the Mask-Update block. TRB is the text removal branch, and it outputs the temporary text removed result \(I_{temp}\). Then, \(I_{temp}\) and \(I_{in}\) are merged in the Merge block to get correct text removed result \(I^{i}_{out}\) in current iteration. The final result is the adaptive fusion of results from all iterations. and the element-wise addition operation to enhance the missing text regions (i.e., false-negative distractions). Then, we apply a sigmoid function to get a more precise text mask map \(M^{i}\). ### Text Removal Branch Similarly, the text removal branch and the shared encoder are built on a simplified residual U-Net structure. The encoder has five convolution layers with kernel size k\(\times\)k (k=7,5,3,3,3 in each layer in order). Each layer contains a batch normalization, a ReLU, and a residual block after convolution operation. With inputting the previous result \(I_{out}^{i-1}\) in \(i_{th}\) iteration, the output is defined as \(I_{out}^{i}\). The goal of STR is to remove the text areas while keeping the background areas unchanged. So, we merge the text regions of \(I_{out}^{i}\), and non-text regions of \(I_{in}\) as the final output \(I_{out}^{i}\) of \(i_{th}\) iteration as in Eq.(3). \[I_{out}^{i}=I_{in}^{i}*(1-M^{i})+I_{out}*M^{i}, \tag{3}\] Where \(I_{in}\) is the original text image. Finally, after a specific number of iterations, the text regions can be extracted more accurately and the text erased cleaner. However, since the process of mapping RGB images to the latent features and mapping them back to the RGB space occurs in each iteration, it results in information distortion in every recurrence. To solve these problems, we merge the intermediate iteration outputs in an adaptive merging way as formulated in Eq.(4). The final output is \(I_{out}\). \[\begin{split} M^{{}^{\prime}}=\frac{\sum_{1}^{n}M^{i}}{n}\\ I_{out}^{{}^{\prime}}=\frac{\sum_{1}^{n}I_{out}^{i}*M^{i}}{n}\\ I_{out}^{{}^{\prime}}=(I_{out}^{{}^{\prime}}+\epsilon)/(M^{{}^{ \prime}}+\epsilon)\\ I_{out}=I_{in}^{{}^{\prime}}*(1-M^{{}^{\prime}})+I_{out}*M^{{}^{ \prime}}\end{split} \tag{4}\] where \(\epsilon\) is a smoothing factor and is set to be \(1e^{-8}\) ### Loss Functions We introduce several loss functions for PSSTRNet learning, including region content loss \(L_{rc}\), perceptual loss \(L_{p}\), style loss \(L_{s}\) and segmentation loss \(L_{seg}\). Given the origin text image \(I_{in}\), text-removed ground truth (gt) \(I_{gt}\) and the binary text gt mask \(M_{gt}\), the text removal output of PSSTRNet in each iteration \(i_{th}\) is denoted as \(I_{out}^{i}\) and text segmentation result as \(M^{i}\). 
**Region content Loss.** We use \(L_{1}\) loss as the region content loss for text and non-text region reconstruction: \[\begin{split} L_{rc}=\gamma_{1}*\sum_{i}||M_{gt}\odot(I_{out}^{i}-I_{gt})||_{1}+\\ \gamma_{2}*\sum_{i}||(1-M_{gt})\odot(I_{out}^{i}-I_{gt})||_{1}.\end{split} \tag{5}\] where \(\gamma_{1}\) and \(\gamma_{2}\) are set to 50 and 10, respectively. **Perceptual Loss.** We employ the perceptual loss[9] in Eq.(6). \(\Phi_{i}\) is the activation map of the \(i\)-th layer of the VGG-16 backbone. \(H_{n}\), \(W_{n}\), and \(C_{n}\) denote the height, width and channel numbers of the feature maps output from the \(n_{th}\) layer of VGG-16. \[L_{p}=\sum_{i}\sum_{n}\frac{1}{H_{n}W_{n}C_{n}}||\Phi_{i}(I_{out}^{i})-\Phi_{i}(I_{gt})||_{1} \tag{6}\] **Style Loss.** We compute the style loss [10] as Eq.(7), where \(G_{n}\) is the Gram matrix constructed from the selected activation maps. \[L_{s}=\sum_{i}\sum_{n}\frac{1}{H_{n}W_{n}C_{n}}||G_{n}(I_{out}^{i})^{T}\cdot G_{n}(I_{out}^{i})-G_{n}(I_{gt})^{T}\cdot G_{n}(I_{gt})||_{1} \tag{7}\] **Segmentation Loss.** For learning of the text segmentation module, \(L_{seg}\) in Eq.8 is formulated as dice loss [11]. \[L_{seg}=\sum_{i}1-\frac{2\sum_{x,y}(M^{i}(x,y)\times M_{gt}(x,y))}{\sum_{x,y}(M^{i}(x,y)^{2}+M_{gt}(x,y)^{2})}*\gamma_{i} \tag{8}\] where \(\gamma_{i}\) is set to 1, 2 and 3 for the successive iterations. (\(x\), \(y\)) denotes each pixel coordinate in the image. In summary, the total loss for training PSSTRNet is the weighted combination of all the above loss functions. \[L_{total}=200*L_{s}+0.1*L_{p}+L_{rc}+L_{seg} \tag{9}\] ## 3 Experiments and Results ### Datasets and Evaluation Metrics **SCUT-Syn.** This synthetic dataset only includes English text instances, including 8,000 images for training and 800 images for testing. More details can be found in [12]. **SCUT-EnsText**. It contains 2,749 training images and 813 test images which are collected in real scenes. More details can be found in [5]. **Evaluation Metrics:** For detecting text on the output images, we employ the text detector CRAFT[13] to calculate recall and F-score. The lower, the better. Six alternative metrics are adopted to measure the quality of the output images, i.e., PSNR, MSE, MSSIM, AGE, pEPs, and pCEPS[3]. A higher MSSIM and PSNR, and a lower AGE, pEPs, pCEPS, and MSE indicate better results. Figure 3: The results of different iterations on SCUT-EnsText and SCUT-Syn datasets. It\({}_{i}\) is the \(M_{temp}\) of \(i_{th}\) iteration. Final represents the final STR results and final fused mask. Figure 2: Comparison results with other SOTA methods on SCUT-EnsText and SCUT-Syn datasets. ### Implementation Details We train PSSTRNet on the training sets of SCUT-EnsText and SCUT-Syn and evaluate it on their respective testing sets. The masks are generated by subtracting the labels from the input images. We use a dilation process to cover as much of the text area as possible. We follow [5] to apply data augmentation during the training stage. The model is optimized with the Adam optimizer. Experimentally, we set the iteration number to be 3. The learning rate is set to 0.001 and decays by 50% every 10 epochs. PSSTRNet is trained on a single NVIDIA GPU with a batch size of 6 and input image size of 256\(\times\)256. ### Ablation Study In this section, we study the effect of the number of iterations and the adaptive fusion method on the SCUT-EnsText dataset.
In total, we conduct seven experiments by designing the network with 1) one iteration (1It.), 2) two iterations (2It.), 3) three iterations (3It.), 4) four iterations (4It.), 5) two iterations with adaptive fusion(2It.+AF), 6) three iterations with adaptive fusion(3It.+AF), 7) four iterations with adaptive fusion(4It.+AF). All experiments use the same training and test settings. Qualitative and quantitative results are illustrated in Fig.4 and table1, respectively. We can see that the network generates the best STR results with two iterations if only considering iteration times (i.e., comparing results in the first four experiments). This arises from that the information is lost in increasing iterations using encoder-decoder architecture. By adding an adaptive fusion strategy, the model with three iterations (3It.+AF) gets the best results. It is because adaptive fusion utilizes previous removal results and could also get more erased regions on text when increasing iterations. As shown in (b)(c)(d) of Fig.3, our method gets a roughly segmentation result at \(1_{st}\) iteration and extracts the rest part of the text segments, and cleaner text removal results in the following iterations. However, We find that the style of intermediate result is distorted when increasing the iterative times to 4 or larger. The decreasing qualitative results of \(7_{th}\) experiment 4It.+AF in table1 reflect this point. ### Comparison with State-of-the-Art Approaches We compare our proposed PSSTRNet with five state-of-the-art methods: Pix2pix[1], STE[2], EnsNet[3], EraseNet[5] and PERT [6], on both SCUT-EnsText and SCUT-Syn datasets. We retrain all the models with the setting as official reported, but input the image of size 256\(\times\)256. The source code of PERT is not released currently, so we do not show its qualitative results. **Qualitative Comparison**. As shown in the 1st row of Fig.2, our model can preserve more information in non-text areas while erasing text regions cleaner. Compared with other state-of-the-art methods, the results of our proposed PSSTRNet have significantly fewer color discrepancies and bluriness, especially in 1st, 2nd, and 4th lines. It demonstrates our model could generate more semantically elegant results on text removal and background inpainting results. **Quantitative Comparison**. As shown in Table 2 and 3, our method produces the best scores on most text removal \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline iterations & PSNR & MSSIM & MSE & AGE & pEPs & pCEPs \\ \hline 1It. & 32.97 & 96.41 & 0.0017 & 2.0742 & 0.0180 & 0.0105 \\ \hline 2It. & 34.09 & 96.40 & **0.0014** & 1.7788 & 0.0144 & 0.0077 \\ \hline 3It. & 32.44 & 95.56 & 0.0028 & 2.4506 & 0.0209 & 0.0125 \\ \hline 4It. & 32.15 & 95.69 & 0.0020 & 2.1221 & 0.0184 & 0.0100 \\ \hline 2It.+AF & 34.13 & 96.42 & **0.0014** & 1.7388 & 0.0142 & 0.0075 \\ \hline 3It.+AF & **34.65** & **96.75** & **0.0014** & **1.7161** & **0.0135** & **0.0074** \\ \hline 4It.+AF & 33.02 & 96.46 & 0.0017 & 2.0084 & 0.0177 & 0.0098 \\ \hline \end{tabular} \end{table} Table 1: Ablation study results of different modules effect on SCUT-Text. Figure 4: The results of ablation study on SCUT-EnsText dataset. evaluation protocols for both SCUT-EnsText and SCUT-Syn datasets. Furthermore, our model has the minimum number of parameters, which only has about one-third of the parameters of PERT that also implements STR in a progressive way. 
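For reference, the simpler image-quality metrics reported in the tables (PSNR, MSE and AGE) can be computed as in the sketch below. This is our own illustration, not the evaluation code of the paper; images are assumed to be float arrays in [0, 1] with a trailing channel axis, and the gray conversion is a naive channel average.

```python
import numpy as np

def mse(pred, gt):
    """Mean squared error between prediction and ground truth."""
    return np.mean((pred - gt) ** 2)

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    err = mse(pred, gt)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val**2 / err)

def age(pred, gt):
    """Average gray-level absolute error on a 0-255 scale (lower is better)."""
    gray_p = pred.mean(axis=-1) * 255.0
    gray_g = gt.mean(axis=-1) * 255.0
    return np.mean(np.abs(gray_p - gray_g))
```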
## 4 Limitation As shown in the 3rd row of Fig.2 and Fig.4, there are still some texts that are not be removed. Besides, our model's inference time is longer than others since we apply iterative processes. Hence, there is still some improvement space of our method in terms of text detection and inference time. Combining our work with a better scene text detector may lead to better results. ## 5 Conclusion In this paper, we present a light-weighted progressive network PSSTRNet for scene text removal in images. It is based on an encoder-decoder structure with a shared encoder and two decoder branches for progressive text segmentation and text removal respectively. A Mask Updated module is developed to gradually acquire more and more complete and accurate text masks for better guidance. Instead of using the output from the final iteration, we aggregate the results in each iteration by adaptive fusion. Experimental results indicate that the proposed method achieves state-of-the-art performance on both synthetic and real-world datasets while maintaining low complexity.
2310.14666
SeLeP: Learning Based Semantic Prefetching for Exploratory Database Workloads
Prefetching is a crucial technique employed in traditional databases to enhance interactivity, particularly in the context of data exploration. Data exploration is a query processing paradigm in which users search for insights buried in the data, often not knowing what exactly they are looking for. Data exploratory tools deal with multiple challenges such as the need for interactivity with no a priori knowledge being present to help with the system tuning. The state-of-the-art prefetchers are specifically designed for navigational workloads only, where the number of possible actions is limited. The prefetchers that work with SQL-based workloads, on the other hand, mainly rely on data logical addresses rather than the data semantics. They fail to predict complex access patterns in cases where the database size is substantial, resulting in an extensive address space, or when there is frequent co-accessing of data. In this paper, we propose SeLeP, a semantic prefetcher that makes prefetching decisions for both types of workloads, based on the encoding of the data values contained inside the accessed blocks. Following the popular path of using machine learning approaches to automatically learn the hidden patterns, we formulate the prefetching task as a time-series forecasting problem and use an encoder-decoder LSTM architecture to learn the data access pattern. Our extensive experiments, across real-life exploratory workloads, demonstrate that SeLeP improves the hit ratio up to 40% and reduces I/O time up to 45% compared to the state-of-the-art, attaining impressive 95% hit ratio and 80% I/O reduction on average.
Farzaneh Zirak, Farhana Choudhury, Renata Borovica-Gajic
2023-10-23T08:01:58Z
http://arxiv.org/abs/2310.14666v2
# SeLeP: Learning Based Semantic Prefetching for Exploratory Database Workloads ###### Abstract. Prefetching is a crucial technique employed in traditional databases to enhance interactivity, particularly in the context of _data exploration_. Data exploration is a query processing paradigm in which users search for insights buried in the data, often not knowing what exactly they are looking for. Data exploratory tools deal with multiple challenges such as the need for interactivity with no a priori knowledge being present to help with the system tuning. The state-of-the-art prefetchers are specifically designed for navigational workloads only, where the number of possible actions is limited. The prefetchers that work with SQL-based workloads, on the other hand, mainly rely on data logical addresses rather than the data semantics. They fail to predict complex access patterns in cases where the database size is substantial, resulting in an extensive address space, or when there is frequent co-accessing of data. In this paper, we propose SeLeP, a semantic prefetcher that makes prefetching decisions for both types of workloads, based on the encoding of the data values contained inside the accessed blocks. Following the popular path of using machine learning approaches to automatically learn the hidden patterns, we formulate the prefetching task as a time-series forecasting problem and use an encoder-decoder LSTM architecture to learn the data access pattern. Our extensive experiments, across real-life exploratory workloads, demonstrate that SeLeP improves the hit ratio up to 40% and reduces I/O time up to 45% compared to the state-of-the-art, attaining impressive 95% hit ratio and 80% I/O reduction on average.
## 1. Introduction Exploring massive amounts of data to extract (unknown) information is a query processing paradigm called _data exploration_(Cowards, 2001; D'Alessandro, 2001; D'Alessandro, 2001). The growth in data collection ability in recent decades has led to providing larger and more detailed datasets in both sciences and businesses(D'Alessandro, 2001). Consequently, the popularity of data exploration has significantly increased, giving rise to the need for database systems tailored to its specific requirements, such as responding interactively and adapting to the shifts in the users' workload(D'Alessandro, 2001; D'Alessandro, 2001). It has been shown that during exploratory browsing, the interaction response times should be bounded within 500 ms, since additional delays drastically reduce the rate by which users make observations, draw generalizations, and generate hypotheses(D'Alessandro, 2001). However, given the exponential growth in the amounts of generated data, responding to queries over such large data sets with a subsecond latency has become a tall order for the traditional database management systems (DBMS)
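As a rough illustration of the formulation summarized in the abstract above, i.e., prefetching cast as time-series forecasting over encodings of the accessed blocks and learned with an encoder-decoder LSTM, the sketch below shows one possible model skeleton. The layer sizes, the encoding dimensionality, the prediction horizon, and the multi-label output over candidate partitions are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SeqPrefetcher(nn.Module):
    """Sketch: encoder-decoder LSTM mapping a window of past access
    encodings to scores over candidate partitions to prefetch next."""

    def __init__(self, enc_dim=64, hidden=128, num_partitions=1024, horizon=4):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(enc_dim, hidden, batch_first=True)
        self.in_proj = nn.Linear(enc_dim, hidden)      # lift block encodings into decoder space
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_partitions)  # multi-label scores per step

    def forward(self, history):
        # history: (B, T, enc_dim) -- encodings of the last T accessed blocks
        _, state = self.encoder(history)
        step = self.in_proj(history[:, -1:, :])        # first decoder input
        scores = []
        for _ in range(self.horizon):
            out, state = self.decoder(step, state)
            scores.append(self.head(out[:, -1]))       # (B, num_partitions)
            step = out[:, -1:, :]                      # naive feedback of the decoder output
        return torch.stack(scores, dim=1)              # (B, horizon, num_partitions)

model = SeqPrefetcher()
logits = model(torch.randn(2, 16, 64))                 # scores for the next 4 access steps
```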
2305.01963
Non-Gaussian reconciliation for continuous-variable quantum key distribution
Non-Gaussian modulation can improve the performance of continuous-variable quantum key distribution (CV-QKD). For Gaussian modulated coherent state CV-QKD, photon subtraction can realize non-Gaussian modulation, which can be equivalently implemented by non-Gaussian postselection. However, non-Gaussian reconciliation has not been deeply researched, which is one of the key technologies in CV-QKD. In this paper, we propose a non-Gaussian reconciliation method to obtain identical keys from non-Gaussian data. Multidimensional reconciliation and multi-edge type low density parity check codes (MET-LDPC) are used in non-Gaussian reconciliation scheme, where the layered belief propagation decoding algorithm of MET-LDPC codes is used to reduce the decoding complexity. Furthermore, we compare the error correction performance of Gaussian data and non-Gaussian data. The results show that the error correction performance of non-Gaussian data is better than Gaussian data, where the frame error rate can be reduced by 50% for code rate 0.1 at SNR of 0.1554 and the average iteration number can be reduced by 25%.
Xiangyu Wang, Menghao Xu, Yin Zhao, Ziyang Chen, Song Yu, Hong Guo
2023-05-03T08:24:26Z
http://arxiv.org/abs/2305.01963v1
# Non-Gaussian reconciliation for continuous-variable quantum key distribution ###### Abstract Non-Gaussian modulation can improve the performance of continuous-variable quantum key distribution (CV-QKD). For Gaussian modulated coherent state CV-QKD, photon subtraction can realize non-Gaussian modulation, which can be equivalently implemented by non-Gaussian post-selection. However, non-Gaussian reconciliation has not been deeply researched, which is one of the key technologies in CV-QKD. In this paper, we propose a non-Gaussian reconciliation method to obtain identical keys from non-Gaussian data. Multidimensional reconciliation and multi-edge type low density parity check codes (MET-LDPC) are used in non-Gaussian reconciliation scheme, where the layered belief propagation decoding algorithm of MET-LDPC codes is used to reduce the decoding complexity. Furthermore, we compare the error correction performance of Gaussian data and non-Gaussian data. The results show that the error correction performance of non-Gaussian data is better than Gaussian data, where the frame error rate can be reduced by 50% for code rate 0.1 at SNR of 0.1554 and the average iteration number can be reduced by 25%. ## I Introduction Secure communication needs secret keys. However, the classical key generation algorithm based on computational complexity is seriously threatened by quantum computer and new mathematical algorithm. In order to solve this problem, scholars have proposed quantum key distribution (QKD) protocols based on the basic principles of quantum physics, which is one of the most mature quantum information technology [1; 2; 3]. QKD allows two separate parties (Alice and Bob) to establish unconditional secure keys through an unsecure quantum channel which maybe controlled by potential eavesdroppers (Eve). According to the encoding dimension of quantum states, QKD is divided into two branches, discrete-variable (DV) QKD [4; 5] and continuous-variable (CV) QKD [6; 7; 8; 9]. These two kinds of protocols both have unconditional security [10; 11; 12; 13], in which Gaussian modulated coherent states CV-QKD protocols have the advantages of being compatible with classical coherent optical communication technology [14; 15]. However, due to the imperfection of the practical system devices and the reconciliation efficiency of postprocessing, the transmission distance of CV-QKD system is limited [16; 17; 18]. Therefore, improving the secret key rate at given distance [19; 20] and extending the transmission distance of the system are two important development trends in the field of CV-QKD [9; 21; 22]. Recently, CV-QKD has made great progress in theory [23; 24; 25] and experiment [26; 27]. The transmission distance has been significantly improved due to the optimization of experiment setups and the improvement of reconciliation efficiency [21; 28]. Some new protocols have been proposed to improve the performance of CV-QKD system, such as the noiseless linear amplification [29] and photon subtraction [30; 31]. These two quantum operation can extend the transmission distance. However, due to the imperfection of actual devices and other factors, it is difficult to implement in physical system. Thus, it is hard to achieve the ideal effect. To solve these problems, some postselection protocols are proposed. Through Gaussian postselection, virtual noiseless linear amplification can be realized [32], it has been demonstrated experimentally [33]. 
The physical realization of photon subtraction operation requires an ideal single photon detector. Therefore, the implementation cost will be increased, the effect will be reduced due to the imperfection of the actual devices. Non-Gaussian postselection is proposed to implement virtual photon subtraction, which avoids the use of single photon detector and complex physical operation [34]. The raw data after Gaussian postselection is still following Gaussian distribution, while for the virtual photon subtraction inside Alice the raw data of Alice after non-Gaussian postselection is no longer following Gaussian distribution. Postselection algorithm can realize virtual physical operation, which greatly reduces the implementation complexity. However, the corresponding classical postprocessing part has not been deeply studied. There are still errors between Alice's and Bob's raw data after postselection. Therefore, it is necessary to correct the errors by using channel coding and decoding technology to obtain consistent keys. The raw data are continuous variables in CV-QKD. Therefore, it needs to be transformed into binary data that can be encoded through mapping transformation firstly. Generally, there are mainly two methods, slice reconciliation [35] and multidimensional reconciliation [36]. The former is usually used in the case of high signal-to-noise ratio (SNR), while the latter is used in the case of low SNR. The common error correction codes used in CV-QKD are Raptor codes [37], Polar codes [38], and low density parity check codes (LDPC) [39] etc. Multi-edge type (MET) LDPC codes can achieve good error correction performance under extremely low SNRs. In this paper, we mainly focus on the non-Gaussian reconciliation in CV-QKD. The raw data after photon subtraction no longer follow Gaussian distribution, the distribution varies with the number of photon subtraction. Firstly, we give the method which realizes the transformation from Gaussian distribution data to non Gaussian data through postselection filter function. This process is very important, which affects the amount of raw data saved after non-Gaussian postselection, it has an important impact on the secret key rate of CV-QKD system. Multidimensional reconciliation and MET-LDPC codes are used for the non-Gaussian reconciliation in reverse reconciliation system. We introduce layered belief propagation decoding algorithm [40] into non-Gaussian data error correction, which greatly reduces the complexity of the postprocessing decoding process and does not increase the frame error rate (FER) of decoding. Furthermore, we test the error correction performance of the non-Gaussian reconciliation. Although the noise and Bob's raw data still obey Gaussian distribution, Alice's data converge to medium amplitude after non-Gaussian postselection, so the anti-noise performance of the system is improved. It can be seen from the results that the FER of non-Gaussian data error correction is obviously lower than that of Gaussian data under the same conditions, the average number of iterations is also significantly reduced by using layered decoding algorithm. The rest of the paper is organized as follows. In Sec. II, we introduce some basics of the non-Gaussian postselection, information reconciliation of CV-QKD, presenting a postselection method to convert Gaussian distribution data to non-Gaussian distribution data, proposing the non-Gaussian reconciliation algorithm based on multidimensional reconciliation and MET-LDPC codes. In Sec. 
III, we present the data distribution under different virtual photon subtraction numbers, the performance tests of information reconciliation on the non-Gaussian data after postselection under different virtual photon subtraction numbers, and the error correction performance comparation with Gaussian data. In Sec. IV, we draw the conclusion of this paper. ## II Non-Gaussian reconciliation in CV-QKD Non-Gaussian operation can increase the entanglement of the Gaussian entangled states. As a non-Gaussian operation, it has been proposed that photon subtraction in CV-QKD can increase the transmission distance. It has been proved that the entanglement-based scheme and the corresponding prepare-and-measure scheme is secure [31]. Simultaneously, the feasibility and security of virtual photon subtraction scheme through non-Gaussian postselection have also been proved [34]. In this section, we first introduce the basic of the non-Gaussian postselection, then propose the postselection method to convert Gaussian distribution data to non-Gaussian distribution data, finally present the non-Gaussian reconciliation scheme based on multidimensional reconciliation and MET-LDPC codes. ### Non-Gaussian postselection in CV-QKD The security of entanglement-based model CV-QKD photon subtraction protocol has been proved, the security of the corresponding prepare-and-measure (PM) Figure 1: The PM scheme of CV-QKD with non-Gaussian postselection in Alice. \(P(k|x_{A},p_{A})\) is the postselection filter function. \(\gamma\) are Alice’s raw data. \(|\alpha\rangle\) is output quantum state. \(T_{C}\) is the transmittance of quantum channel. \(\epsilon\) is excess noise. Hom: homodyne detection. Het: heterodyne detection. The side information transmitted from the classical authentication channel includes the selection results of Alice in postselection process and the information used in postprocessing process. model with equivalent postselection as virtual photon subtraction has also been proved [34]. Different from Gaussian quantum state protocol, non-Gaussian quantum state protocol can not have a symplectic covariance matrix, so we can't use von-Neumann entropy to derive the Holevo boundary for secret key rate directly. However, we can estimate the secret key rate through the physical model equivalent to virtual photon subtraction. This equivalent physical model has been completed in our previous work [34]. The main idea is based on the optimality of Gaussian attacks [10; 41; 42]. The eavesdropper will not get more information from the non-Gaussian quantum state than the Gaussian quantum state, so the secret key rate of the non-Gaussian protocol is less than that of the corresponding Gaussian protocol. To ensure unconditional security, we can use the secret key rate of the corresponding Gaussian protocol as the lower bound of the secret key rate of the non-Gaussian protocol. In addition, the probability of photon subtraction success should also be considered when estimating the secret key rate of non-Gaussian protocol. Most of the implementation of CV-QKD is based on PM scheme, which does not need to prepare entangled states, so it is easy to implement in experiment. The PM scheme of CV-QKD with non-Gaussian postselection in Alice is shown in Fig. 1. In the PM scheme, Alice generates coherent states \(|\alpha\rangle\), where \(\alpha=\sqrt{2T\lambda^{2}}\gamma\), \(\gamma=x_{A}+ip_{A}\). \(x_{A}\) and \(p_{A}\) are both randomly selected from a Gaussian distribution data set with mean 0 and variance \(V_{A}\). 
\(T\) is related to the transmittance of photon subtraction, \(\lambda^{2}=\dfrac{V-1}{V+1}\), and \(V\) is variance of the two-mode squeezed vacuum state. Then after the coherent states are prepared, they are sent to Bob through quantum channels. Bob performs homodyne or heterodyne detection according to the type of protocol after receiving the quantum states, the measurement results are recorded as \(x_{B}\) and \(p_{B}\). If Bob performs homodyne detection, he informs Alice the quadratures (\(x\) or \(p\)) he measures. Alice keeps the same quadratures with Bob. While in the case of heterodyne detection, both quadratures of \(x\) and \(p\) are kept. After the quantum measurement and base sifting step, Alice chooses to accept part of the raw data according to the non-Gaussian postselection filter function with probability \(P(k|x_{A},p_{A})\), the probability function of subtracting \(k\) photons is described by \[\begin{split} P(k|x_{A},p_{A})=\dfrac{1}{k!}[\dfrac{(1-T)\lambda ^{2}}{2}(x_{A}^{2}+p_{A}^{2})]^{k}\times\\ \exp[-\dfrac{(1-T)\lambda^{2}}{2}(x_{A}^{2}+p_{A}^{2})].\end{split} \tag{1}\] Then she reveals the selection results to Bob. Bob keeps the corresponding raw data. After the non-Gaussian postselection, Alice's raw data no longer follow Gaussian distribution. But the distribution of Bob's raw data remains unchanged. The probability density function of the raw data before and after non-Gaussian postselection is shown in Fig. 2. The black solid line represents original Gaussian distribution function of Alice's raw data. The red, green, blue solid lines represent non-Gaussian distribution function of Alice's data after virtual photon substraction with \(k=1,2,3\) respectively. As shown in Fig. 2, for the raw data of Alice, the probability of some data in Gaussian distribution is lower than that in non-Gaussian distribution. Therefore, it is impossible to directly sample non-Gaussian distribution data from Gaussian distribution data in this situation. To solve this problem, the acceptance-rejection sampling method is used to convert the Gaussian distribution data to non-Gaussian distribution data, which is a sampling method from probability density function. This method is to extract subsequence of random variables from a sequence with a specific distribution according to a rule and make them meet the given probability distribution. Suppose the random variable is \(x\), its value range is \([a,b]\), its probability density function \(f(x)\) of non-Gaussian distribution is bounded, which satisfies \(\max\{f(x)|a\leqslant x\leqslant b\}=A.\) For effective sampling, the probability density function of Gaussian distribution is required to be greater than that of non-Gaussian distribution for any \(x\). However, as shown in Fig. 2, the probability density function of raw data \(g(x)\) does not satisfy the condition that it is always greater than \(f(x)\) (\(f_{1}(x)\), \(f_{2}(x),f_{3}(x)\)) for any \(x\). In order to satisfy the condition, we construct a new probability density function which satisfies \[cg(x)\geqslant f_{k}(x),x\in[a,b], \tag{2}\] where \(c\) is a constant. The purple solid line represents the probability density function of \(cg(x)\), it can be seen that the probability is greater than that of \(f(x)\) for any \(x\). Figure 2: The probability density function of Gaussian distribution and Non-Gaussian distribution after postselection. The X-axis is the value of raw data, the Y-axis is the corresponding density. 
\(g(x)\) is the probability density function of Gaussian distribution. \(c\) is a constant for effective sampling. \(f_{1}(x)\), \(f_{2}(x)\), \(f_{3}(x)\) are the probability density function of non-Gaussian distribution after virtual photon subtraction with \(k=1,2,3\) separately. The modulation variance is set to 20. The value of \(c\) should be as small as possible to improve sampling efficiency which can be defined as \[E=\frac{1}{m}. \tag{3}\] It represents the average number of original distribution random variables \(m\) required to obtain a random variable with a specific distribution. In our case, it is equivalent to the success probability of the overall non-Gaussian postselection. For a raw data \(x\) of Alice, she randomly generates a random number \(d\) that obeys a uniform distribution in the interval of \([0,cg(x)]\), where \(d=cg(x)\xi,\xi\in U[0,1]\). Then, she compares \(d\) and \(f_{k}(x)\), if \(d\leqslant f_{k}(x)\), she accepts \(x\) as a non-Gaussian distribution data after virtual subtraction of \(k\) photons. Otherwise, she rejects it and restarts the above process until all the raw data are completed. After completing the non-Gaussian postselection process, they use the saved data to perform postprocessing process through classical authentication channel, including information reconciliation, parameter estimation, and privacy amplification. The secret key rate against collective attacks for reverse reconciliation of the \(k\) photons subtraction is described by \[K_{PS}^{k}=P(k)[\beta I(A:B)-S(E:B)], \tag{4}\] where \(P(k)\) is the success probability of overall virtual \(k\) photons subtraction (the success probability of the non-Gaussian postselection), \(\beta\) is reconciliation efficiency, \(I(A:B)\) is classical mutual information between Alice and Bob, \(S(E:B)\) is Von Neumann entropy between Eve and Bob. ### Non-Gaussian reconciliation in CV-QKD Suppose that the variables of Alice and Bob are \(X\) and \(Y\) after non-Gaussian postselection. \(X\) follows non-Gaussian distribution and \(Y\) still follows Gaussian distribution. The quantum channel can be seen as an additive white Gaussian noise (AWGN) channel. Then we have \(Y=tX+Z\), where \(t\) is related to the channel transmittance and detection efficiency, \(Z\) is channel noise which follows Gaussian distribution. For information reconciliation, signal to noise ratio (SNR) is the main parameter of concern. Thus for simplicity, we can fit \(t=1\). In addition, direct reconciliation is limited by 3dB loss. Therefore, reverse reconciliation is used in our scheme, which can break this limit. Alice performs the error correction to obtain the identical keys with Bob. Alice and Bob first convert the AWGN channel to a virtual binary input AWGN channel. Then they can use channel coding and decoding technology to correct errors between them. Multidimensional reconciliation is used to finish the first step. Bob normalizes the data \(Y\) after postselection according to the dimension of multi-dimensional reconciliation. Then he randomly chooses a uniform distribution vector \(U\) which is generated by a quantum random number generator. Next, a mapping function \(M(Y^{\prime},U)\) from normalized variable \(Y^{\prime}\) to \(U\) is constructed by orthogonal transformation matrix. 
\(Y^{\prime}\) and \(U\) satisfy the following relationship \[M(Y^{\prime},U)Y^{\prime}=U, \tag{5}\] He sends the mapping function to Alice through a classical authentication channel which means that eavesdropper Eve can get all the information but she can not change the information without the knowledge of both sides of the legal communication. Alice receives the mapping function \(M(Y^{\prime},U)\) and normalizes her non-Gaussian distribution data \(X\) after postselection. Then she rotates her normalized data \(X^{\prime}\) to \(V\) through the mapping function, which can be calculated by \[M(Y^{\prime},U)X^{\prime}=V, \tag{6}\] where \(V\) is the noise form of \(U\). They repeat the above process until all the data are converted. Finally, they select the appropriate channel codes according to the channel parameters to perform the error correction process. MET-LDPC codes are the generalization form of LDPC codes, which is very suitable for error correction under extremely low SNRs. They can achieve performance close to the Shannon's limit. Thus, we choose MET-LDPC code as the channel coding technology to correct the errors between Alice and Bob. Firstly, The code rate of MET-LDPC code is calculated according to the estimated quantum channel characteristics. Secondly, the degree distribution of the code rate is obtained by density evolutionary algorithm. Then select a suitable construction method to generate the parity check matrix. Finally, Alice and Bob use the matrix for encoding and decoding to correct the errors between them to get completely consistent data. The encoding process in CV-QKD system is very different from that in classical communication. It does not need generation matrix. The parity check matrix of error correction code is directly multiplied by the binary data after reconciliation to obtain the syndromes. Then Bob sends the syndromes to Alice through a classical authentication channel which will be errorless in the transmission process but Eve can get all the information. This is different from classical communication. After getting the syndromes, Alice initializes the information according to the data after reconciliation and uses the same parity check matrix to correct errors. In the message iteration process of error correction, the syndromes sent by Bob needs to be used and compared with Alice's temporary syndromes to judge whether the decoding is successful. This is another big difference from classical communication. The error correction codes used in CV-QKD are MET-LDPC codes, which have many decoding algorithms, such as probability domain belief propagation algorithm (BP), log likelihood ratio belief propagation algorithm (LLR-BP) and so on, in which BP algorithm is a commonly used decoding algorithm in CV-QKD system. Compared with other decoding algorithms, it has higher accuracy and can ensure the success rate of decoding in CV-QKD system. In the conventional BP decoding algorithm, after information initialization, it is necessary to traverse the check nodes and variable nodes to update the edge information in the parity check matrix, then updating the information of each node accordingly, finally comparing the syndromes. If the decision is successful, end the decoding, otherwise continue the iteration until the maximum number of iterations is reached. Although this algorithm has high accuracy, the updated nodes are in a waiting state before the next update, the utilization rate is low. 
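Before turning to the layered schedule introduced next, a minimal sketch of the syndrome exchange described above may help: Bob computes the syndrome of his bit string with the shared parity-check matrix and sends it over the authenticated classical channel, and Alice compares the syndrome of her current estimate against it at each decoding iteration. The tiny dense matrix and hard-decision bits are purely illustrative; an actual CV-QKD implementation uses large sparse MET-LDPC matrices and soft-decision belief propagation.

```python
import numpy as np

# Toy parity-check matrix H (illustrative; real MET-LDPC matrices are large and sparse).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def syndrome(H, bits):
    """Syndrome s = H x (mod 2)."""
    return (H @ bits) % 2

# Bob's bit string obtained after multidimensional reconciliation and mapping to bits.
bob_bits = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
s_bob = syndrome(H, bob_bits)            # transmitted to Alice (Eve may read it, not modify it)

# Alice's noisy version of the same string; during decoding she repeatedly compares
# the temporary syndrome of her estimate with Bob's to decide whether to stop iterating.
alice_bits = np.array([1, 0, 0, 1, 0, 1], dtype=np.uint8)        # one bit differs
decoding_done = np.array_equal(syndrome(H, alice_bits), s_bob)   # False here -> keep iterating
```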
Thus, a layered BP (LBP) decoding algorithm [40] is introduced to the postprocessing of CV-QKD system, which can make faster use of the updated node information and improve the decoding efficiency. In LBP decoding algorithm, each layer will update the variable node, taking the updated variable node as the input of the next layer to participate in the operation of the next layer in the same iteration. In this way, the updated information will be immediately used in this iteration, which can reduce the iteration number. Generally, it only needs half of the iteration number to achieve the same effect as the BP decoding algorithm. Thus, it can speed up the decoding process and it is suitable for application in high-speed postprocessing for CV-QKD system. ## III Performance of the Protocol In this section, we first present the performance of the non-Gaussian postselection in terms of sampling efficiency from Gaussian distribution data to Non-Gaussian distribution data. Then we present the performance of non-Gaussian reconciliation for CV-QKD system in terms of reconciliation efficiency, frame error rate of error correction, and average iteration number. ### Non-Gaussian postselection performance As described in Sec. II, for the virtual \(k\)-photon subtraction, we cannot directly sample non-Gaussian data from the raw data of Gaussian distribution due to the probability of Gaussian distribution may be higher than that of non-Gaussian distribution. We present the acceptance-rejection sampling method to solve this problem, in which the sampling efficiency is a very important parameter to evaluate sampling performance. In addition to Eq. 3, the sampling efficiency can also be expressed by geometric interpretation. It refers to the probability that the data conforming to \(c_{k}g(x)\) falls under \(f_{k}(x)\). The selection of \(c\) in Eq. 2 has an important impact on sampling efficiency. First, it needs to satisfy that \(f_{k}(x)\) is completely below \(cg(x)\) in order to sample correctly. Secondly, the sampling efficiency should be as high as possible. That is, the value of \(c_{k}\) should be as small as possible when condition 1 is met. Therefore, its value can be calculated by \[c_{k}=\max_{x}[\frac{f_{k}(x)}{g(x)}],x\in R. \tag{7}\] As shown in Fig. 3, we give the Gaussian distribution probability density function of actual data and the corresponding non-Gaussian probability density function with virtual subtraction of one photon. We can obtain that the optimal value of \(c_{1}\) is about 1.32 calculated by Eq. 7 when \(T\) is set to 0.8. The corresponding sampling efficiency is about 75.4%, which is much higher than the previous results of sampling with uniformly distributed data. Similarly, we can get the results of virtual \(k\)-photon subtraction. ### Non-Gaussian reconciliation performance Information reconciliation has an important impact on the performance of CV-QKD system. Reconciliation efficiency not only affects that whether secret keys can be extracted, but also affects the transmission distance of CV-QKD system. On the other hand, the success rate of reconciliation and processing speed also have an important impact on the secret key rate of the system. Thus, we test the error correction performance of both Gaussian and non-Gaussian data, including reconciliation efficiency, FER and average iteration number (AIN). The error correction performance of four types data of distribution is tested under two MET-LDPC codes. 
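As a numerical illustration of Eq. (7) and of the resulting sampling efficiency, the sketch below evaluates the constant \(c_k\) on a grid. The marginal non-Gaussian density \(f_k(x)\) is obtained here by numerically integrating the Gaussian density weighted by the filter function of Eq. (1) over the conjugate quadrature; the modulation variance, the transmittance \(T\), and the assumed relation between \(\lambda^2\) and the modulation variance are illustrative choices, so the printed numbers need not reproduce the quoted \(c_1\approx 1.32\) exactly.

```python
import numpy as np
from math import factorial, sqrt, pi

VA, T, k = 20.0, 0.8, 1          # illustrative modulation variance, transmittance, photon number
lam2 = VA / (VA + 2.0)           # assumes lambda^2 = (V - 1)/(V + 1) with V = VA + 1
a = (1.0 - T) * lam2 / 2.0

x = np.linspace(-25.0, 25.0, 801)
p = np.linspace(-25.0, 25.0, 801)
dx, dp = x[1] - x[0], p[1] - p[0]
g = lambda u: np.exp(-u**2 / (2.0 * VA)) / sqrt(2.0 * pi * VA)   # Gaussian density of a quadrature

def filter_prob(r2, k):
    """Postselection filter P(k | x_A, p_A) of Eq. (1), with r2 = x_A^2 + p_A^2."""
    z = a * r2
    return z**k * np.exp(-z) / factorial(k)

r2 = x[:, None]**2 + p[None, :]**2
joint = g(x)[:, None] * g(p)[None, :] * filter_prob(r2, k)   # unnormalised accepted density
Pk = joint.sum() * dx * dp                                   # overall success probability P(k)
fk = joint.sum(axis=1) * dp / Pk                             # normalised marginal f_k(x)

ck = np.max(fk / g(x))           # Eq. (7): c_k = max_x f_k(x) / g(x)
efficiency = 1.0 / ck            # average acceptance rate of the rejection sampler, cf. Eq. (3)
print(f"c_{k} ~ {ck:.3f}, sampling efficiency ~ {efficiency:.1%}")
```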
The Figure 3: The Gaussian distribution probability density function of actual data and non-Gaussian probability density function with virtual subtraction of one photon. The thin black line represents the probability density of raw data actually generated. The thin red line represents the probability density of postselected data after virtual subtraction of one photon. \(g(x)\) and \(f_{1}(x)\) are fitted probability density functions of raw data and postselected data (virtual subtraction of one photon) separately. The pink dot represents the point where \(c\) gets the optimal value when virtual subtracting one photon, recording as \(c_{1}\). The purple line represents the probability density function after \(g(x)\) is enlarged by \(c_{1}\) times. The modulation variance is set to 20. data of these four types of distribution contains Gaussian distribution data and three non-Gaussian distribution data which includes the data of virtual 1-photon subtraction, virtual 2-photon subtraction, virtual 3-photon subtraction. The two MET-LDPC codes are rate of 0.1 and 0.05. For each rate of MET-LDPC code and each type of data, seven sets of data with different SNR are tested. The test results are shown in Fig. 4 and Fig. 5. We have tested more than 100 data block for each case. The size of each data block is \(10^{6}\). As can be seen from the results of Fig. 4 and Fig. 5, the error correction performance of non-Gaussian case is higher than that of Gaussian case both in FER and AIN. LBP decoding algorithm is used to the error correction process, which has little effect on FER of decoding. However it can reduce about half of the AIN, the decoding speed can Figure 4: Performance comparison of error correction between Gaussian and non-Gaussian data. The error correction code used for Gaussian and non-Gaussian case is MET-LDPC code, whose rate is 0.1. (a) FER of Gaussian and non-Gaussian data after error correction under different SNR. (b) Average iteration number (AIN) of decoding corresponding to Gaussian and non-Gaussian data in (a). The blue line represents the FER/AIN of Gaussian data after error correction. The rest lines represent the FER/AIN of non-Gaussian data, where red line/yellow line/purple line respectively represents virtual 1-photon/2-photon/3-photon subtraction (VPS-1/VPS21/VPS-3). The dots represent the error correction performance of actual data, reconciliation efficiency from right to left is 93%, 93.5%, 94%, 94.5%, 95%, 95.5%, 96% respectively. The maximum iteration number is set to 150. Figure 5: Performance comparison of error correction between Gaussian and non-Gaussian data under the code rate of 0.05. (a) FER of Gaussian and non-Gaussian data after error correction under different SNR. (b) Average iteration number (AIN) of decoding corresponding to Gaussian and non-Gaussian data in (a). The dots represent the error correction performance of actual data, reconciliation efficiency from right to left is 93.5%, 94%, 94.5%, 95%, 95.5%, 96%, 96.5% respectively. The maximum iteration number is set to 150. be greatly increased to improve the secret key rate of CV-QKD system. FER is related to the set maximum number of iterations, the FER can be reduced by increasing the maximum number of iterations to a certain extent. But it will also increase the decoding delay simultaneously, thus there is a trade-off between FER and AIN. In order to compare the error correction performance under different conditions, we set the maximum number of iterations to 150 for all the tests. 
In actual applications, the maximum number of iterations can be reasonably set according to the average number of iterations for different cases. Fig. 4 and Fig. 5 show the test results for two MET-LPDC codes, the code rate of 0.1 and 0.05 separately. Although the error correction performance of non-Gaussian data is better than that of Gaussian data at both codes, it also have some different characteristics. For the code rate of 0.1, the error correction performance of non-Gaussian data is obviously better than that of Gaussian data both in FER and AIN, the error correction performance is better with the increase of the number of virtual photon subtraction, especially at a relatively low SNR. This is mainly due to the fact that non-Gaussian postselection diffuses the data originally concentrated around 0 to middle values, the larger the values diffusion with the increase of the number of virtual photon subtraction. Simultaneously, the large value will also concentrate to the middle value. Therefore, the error correction performance will not continue to increase. At a relatively high SNR, error correction is relatively easy at this time, so FER of error correction is very low (close to 0) for both Gaussian data and non-Gaussian data, continuing to increase the SNR cannot reflect the advantages of non-Gaussian data. However, non-Gaussian data error correction can converge faster, so the AIV is less than that of Gaussian data which will increase the error correction speed and the secret key rate of CV-QKD system. For the code rate of 0.05, the error correction performance of non-Gaussian data is also higher than that of Gaussian data, but its advantage is lower than that of 0.1 code rate matrix. This is because the SNR of error correction data corresponding to 0.05 code rate is low, the power of signal is much smaller than that of the noise. In this case, the signal is completely submerged in the noise, so it is difficult to correct the errors. Therefore, the advantages of non-Gaussian postselection are limited, with the increase of the number of photon subtraction, its advantage is not as obvious as that in 0.1 code rate. Although the error correction gain caused by different photon subtraction is not very obvious, the non-Gaussian postselection data still makes the decoding converge faster than that of Gaussian data due to the reduction of the values near 0. Therefore, the FER and AIN performance of decoding processing is still improved. We can use rate-adaptive method [17] or non-fixed rate error correction codes [28] to expand the applicable range of SNR. We have studied these two methods in the preamble work, combining these two methods, the non-Gaussian error correction method proposed in this paper can play an advantage in a large range of SNR. Fig. 6 shows the secret key rate and transmission distance comparison of the original protocol (Gaussian case) and 1-photon subtraction protocol (non-Gaussian case). Fig. 6 shows that non-Gaussian protocols can still achieve high secret key rate over long distance range, which greatly expanding the maximal transmission distance. However, for short distance range, the secret key rate is worth than the original protocols. The main reason is that the probability of photon subtraction success is low. In other words, for non-Gaussian postselection, after selecting the original Gaussian data, since the amount of selected data is reduced, the average secret key rate is reduced. 
In practical applications, we are more concerned about the amount of secret keys obtained per unit time when the secret key rate is greater than 0. For non-Gaussian postselection, the reduction of data will lead to a reduction of the secret key rate per pulse, but it will greatly improve the data processing speed. The reconciliation efficiency and decoding success rate of non-Gaussian data are higher than those of Gaussian data, thus the secret key rate per unit time is not necessarily lower than that in the Gaussian case, even at short distances. This can be further studied in a subsequent high-speed implementation. ## IV Conclusion In this paper, we proposed a non-Gaussian reconciliation method for CV-QKD protocols by non-Gaussian postselection at Alice's side, which can reduce the FER and AIN of decoding. Figure 6: The secret key rate and transmission distance comparison of the original protocols (Gaussian case) and 1-photon subtraction (non-Gaussian case). The modulation variance is 20. Reconciliation efficiency is 95%. We propose an effective postselection method to obtain non-Gaussian data following a specific distribution from Gaussian data, which greatly improves the success rate of virtual photon subtraction in the CV-QKD system. Multidimensional reconciliation and MET-LDPC codes are used to perform the information reconciliation in the postprocessing of the CV-QKD system. The layered belief propagation decoding algorithm is introduced to the error correction, which can greatly reduce the decoding complexity and improve the decoding speed. We test the error correction performance of Gaussian data and of non-Gaussian data after our proposed postselection under two representative codes with rates of 0.1 and 0.05. We test seven sets of data for each case. The corresponding reconciliation efficiency ranges from 93% to 96.5%. The results show that the FER and AIN decoding performance of non-Gaussian data is significantly better than that of Gaussian data, which greatly improves the secret key rate of the CV-QKD system. ###### Acknowledgements. This work was supported by National Natural Science Foundation of China under Grant No. 62001041, No.62201012, the Fundamental Research Funds of BUPT under Grant No. 2022RC08, and the Fund of State Key Laboratory of Information Photonics and Optical Communications under Grant No. IPOC2022ZT09.
2304.13096
Real-time Autonomous Glider Navigation Software
Underwater gliders are widely utilized for ocean sampling, surveillance, and various other oceanic applications. In the context of complex ocean environments, gliders may yield poor navigation performance due to strong ocean currents, thus requiring substantial human effort during the manual piloting process. To enhance navigation accuracy, we developed a real-time autonomous glider navigation software, named GENIoS Python, which generates waypoints based on flow predictions to assist human piloting. The software is designed to closely check glider status, provide customizable experiment settings, utilize lightweight computing resources, communicate stably with dockservers, robustly run for extended operation time, and quantitatively compare flow estimates, which add to its value as an autonomous tool for underwater glider navigation.
Ruochu Yang, Mengxue Hou, Chad Lembke, Catherine Edwards, Fumin Zhang
2023-04-25T19:00:41Z
http://arxiv.org/abs/2304.13096v2
# Real-time Autonomous Glider Navigation Software ###### Abstract Underwater gliders are widely utilized for ocean sampling, surveillance, and other various oceanic applications. In the context of complex ocean environments, gliders may yield poor navigation performance due to strong ocean currents, thus requiring substantial human effort during the manual piloting process. To enhance navigation accuracy, we developed a real-time autonomous glider navigation software, named GENIoS_Python, which generates waypoints based on flow predictions to assist human piloting. The software is designed to closely check glider status, provide customizable experiment settings, utilize lightweight computing resources, offer stably communicate with dockservers, robustly run for extended operation time, and quantitatively compare flow estimates, which add to its value as an autonomous tool for underwater glider navigation. autonomous underwater glider navigation, ocean flow prediction ## I Introduction Underwater gliders have been widely used in ocean fields for ocean sampling, surveillance, and many other applications [1, 2, 3, 4, 5]. Usually, the glider is navigated through waypoints manually generated by glider pilots. Nonetheless, manual navigation performance might not be optimal, as the manual waypoints fail to consider the strong and erratic ocean flow. Worse, the glider can drift to an unexpected area where localization and rescue are extremely tough. Additionally, manual glider navigation requires tremendous human labor. In order to circumvent trajectory deviation or abort issues, the glider needs to be closely monitored during the whole mission. Considering the long duration of mission, up to ten glider pilots may take shifts each day, including at midnight. Therefore, there is a pressing need for the development of a glider navigation software that can generate real-time waypoints based on flow predictions to improve navigation accuracy. The software should also achieve autonomous functionality to significantly reduce the heavy human labor of manual piloting. Generally, underwater gliders use the global positioning system (GPS) to localize themselves on the ocean surface [6]. However, considering that GPS signals can not transmit in the deep ocean, the next waypoint of glider can only be decided at surfacing events with GPS update, which makes it hard to perform underwater glider navigation. Controlled by buoyancy and mass, low-speed gliders are strongly affected by ocean currents, so this type of navigation may lead to huge trajectory deviation as the mission is going on. Some gliders try to estimate the flow between the last and current surfacing events and utilize it to promote navigation accuracy [7]. This navigation method based on glider-derived flow estimate may work well in a calm water environment, but in the deep ocean with intense temporal and spatial flow changes, the glider-derived flow estimate may differ from the actual flow a lot, causing poor underwater navigation. So far, it is obvious that one main limitation of glider navigation is insufficient knowledge of the ocean environment, especially ocean currents. Recently in the robotics literature, there has been active research on navigation of underwater gliders based on flow predictions [8, 9, 10, 11, 12]. Some of them incorporated flow data from ocean models to increase prediction accuracy [13, 14, 15, 16, 17, 18, 19, 20]. 
However, none of the authors have practically implemented a navigation system for real underwater gliders or experimentally validated their navigation algorithms in real-world scenarios. The main contribution of this paper is to develop a real-time autonomous glider navigation software based on the Glider-Environment Network Information System (GENIoS) [21]. The software can generate waypoints in real time based on flow predictions to better navigate gliders in the field of highly variable ocean flow. The operation is meant to be autonomous in order to relieve a part of heavy labor of glider pilots. Additionally, the software possesses the following features * continuous monitoring of glider status. * customizable settings of gliders and deployments. * lightweight requirements of computing resources. * stable communication with dockservers. * robust operation for extended time. * quantitative comparison of flow predictions. Herein, the software is officially named as **GENIoS_Python**. Please check the official website [https://www.geniosppython.com/](https://www.geniosppython.com/). We will release our code soon. ## II System Design The prototype of GENIoS_Python is GCCS (Glider Coordinated Control System) [2], which generates waypoints for a fleet of gliders so that gliders can sample optimally distributed measurements, improving collective survey performance. The navigation performance of GCCS was verified in the Adaptive Sampling and Prediction (ASAP) project [22] in Monterey Bay, California (2004 - 2006). To deal with highly variable flow, we extended GCCS into GENIoS [21] by incorporating real-time ocean flow data and glider surfacing data, which supported the Long Bay missions in South Carolina (2012 - 2013). Building off the success of GENIoS, here comes the newest software GENIoS_Python. As shown in Fig. 1, GENIoS_Python consists of four main modules: glider simulator (gsim), glider planner (gplan), environmental input manager, and dockserver handler. The gsim module supports two running modes: simulated and remote. In the simulated mode, the module simulates the glider's underwater trajectory and surfacing positions according to the glider's 3D kinematics model. Rather than wait a long time (e.g., four hours) for gliders to surface in the real-time deployment, the surfacing interval in the simulated mode can be adjusted as short as possible (e.g., one minute). Therefore, it is effective to verify any underwater navigation algorithm, which may not be practical for validation in real life, with customized glider and deployment settings in an extremely fast manner. In the remote mode, the module simply obtains the glider surfacing positions through the dockserver handler module in real time. The gplan module is composed of two classes: path tracking and path planning. The path tracking class provides two tracking algorithms: virtual mooring and line control. The virtual mooring algorithm keeps the glider moving towards one single target position, while the line control algorithm keeps the glider moving back and forth between multiple target positions. The core of the tracking algorithms is a flow-cancelling controller, which computes the desired glider heading to cancel out the predicted flow at the current glider location. Then, by assuming the glider as a Newtonian particle [2] and integrating its motion with the heading control, the gplan module can predict the glider trajectory in the next 12 or 24 hours under the influence of flow. 
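To illustrate the flow-cancelling idea described above, the sketch below computes a heading that cancels the predicted current and integrates a simple Newtonian-particle model forward in time. The glider speed, the flow-prediction function, the time step, and the fallback behavior when the flow exceeds the glider speed are placeholder assumptions, not values or logic taken from GENIoS_Python.

```python
import numpy as np

def flow_cancelling_heading(target, position, flow, glider_speed):
    """Pick a unit heading so that glider velocity plus flow points at the target."""
    desired_dir = target - position
    desired_dir /= np.linalg.norm(desired_dir)
    # Through-water velocity must cancel the flow component across the desired track.
    cross_flow = flow - np.dot(flow, desired_dir) * desired_dir
    along_speed_sq = glider_speed**2 - np.dot(cross_flow, cross_flow)
    if along_speed_sq <= 0.0:          # flow stronger than the glider can cancel (toy fallback)
        return -cross_flow / np.linalg.norm(cross_flow)
    v_through_water = np.sqrt(along_speed_sq) * desired_dir - cross_flow
    return v_through_water / glider_speed

def predict_trajectory(start, target, flow_at, glider_speed=0.3, dt=600.0, horizon=12 * 3600):
    """Newtonian-particle rollout under heading control and a predicted flow field."""
    pos, path = np.array(start, float), [np.array(start, float)]
    for t in np.arange(0.0, horizon, dt):
        flow = flow_at(pos, t)
        heading = flow_cancelling_heading(np.array(target, float), pos, flow, glider_speed)
        pos = pos + (glider_speed * heading + flow) * dt
        path.append(pos.copy())
    return np.array(path)

# Example with a hypothetical uniform 0.1 m/s eastward current (positions in meters).
traj = predict_trajectory(start=(0.0, 0.0), target=(5000.0, 2000.0),
                          flow_at=lambda p, t: np.array([0.1, 0.0]), glider_speed=0.3)
```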
Based on the predicted trajectory, the waypoints are computed for the next 12 or 24 hours as well. By following these waypoints, the glider can track the expected trajectory. The path planning class offers a modified \(A^{*}\) algorithm [23] that considers the entire flow field, as opposed to only one single location in the path tracking class. The algorithm plans an optimal path for the glider to avoid the strong flow field in advance, thus consuming much more time or energy. The planned path can be viewed as a high-level path for the path tracking algorithm to follow. The environmental input manager module generates flow predictions for path tracking class to compute waypoints or for path planning class to compute the optimal path. The module supports two modes of predicting flow: simulated and remote. In the simulated mode, the module incorporates the ADCIRC ocean flow model to enable the simulation of glider movement in any flow field. In the remote mode, the module utilizes predictive flow data from the oceanic data model Advanced Circulation (ADCIRC) [14] in real time. A hybrid model GliADCIRC, based on the ADCIRC model, is developed to produce more accurate flow predictions by combining glider-estimated flow with ADCIRC predictive flow. The dockserver handler module interfaces with an onshore computer (dockserver) to obtain the latest glider surfacing data and send waypoints to gliders in real time. When the glider surfaces, it communicates with the dockserver through the SFMC (Slocum Fleet Mission Control) terminal that records the glider-transmitted data as log files. The module continuously monitors the log files to check the latest surfacing event and grab the navigation information for the gplan module to compute waypoints. Relied on fast performance of Python SSH and SFTP packages, the module can check the SFMC terminal every 10 seconds, thus capturing every single surfacing event, even though the glider surfaces for an extremely short time or uses mixed surfacing modes. Then, the module sends the computed waypoints to the dockserver. It takes only 30 seconds out of 10-15 minute surfacing interval to accomplish the entire process of checking surfacing information and transmitting waypoints. Moreover, the module utilizes Linux grep packages to parse glider log files and handle glider data without human interaction, enabling autonomous glider status check. ## III Experiments The performance of GENIoS_Python was validated in both simulated and real experiments. All the experiments were implemented in a Lenovo ThinkCentre Desktop with Ubuntu 16.04 LTS, Intel Core i7-6700 CPU @ 3.40GHz x 8, 32 GB Memory, and Intel HD Graphics 530 (Skylake GT2). ### _Simulated Experiments_ The simulated experiments were implemented in the simulated mode of gsim module. The simulated glider setting was adjusted from the Slocum G3 glider Franklin of Skidaway Institute of Oceanography, University of Georgia. The deployment was set in Gray's Reef near Savannah, Georgia, United States. The ocean flow model was set as ADCIRC. We tested two tracking algorithms of the gplan module: virtual mooring and line control. #### Iii-A1 Virtual Mooring In the virtual mooring mode, both the starting point and the target point were pre-set for the glider Franklin. As seen from the video [https://www.youtube.com/watch?v=5KFjQU2V7rU](https://www.youtube.com/watch?v=5KFjQU2V7rU), GENIoS_Python successfully navigated the glider Franklin to move directly towards the red target point. Fig. 
1: System design of GENIoS_Python #### Iii-A2 Line Control In the line control mode, two target points were given and the starting point was set as one of them. As seen from the video [https://www.youtube.com/watch?v=ObHONQhCt04](https://www.youtube.com/watch?v=ObHONQhCt04), GENIoS_Python successfully navigated the glider to move back and forth between two red target points. ### _Real Experiment_ The real experiment was carried out from March in the remote mode of gsim module. The Slocum G1 glider USF-SAM was provided by the glider team of University of South Florida. The deployment field was in Gray's Reef near Savannah, Georgia, United States. The ocean flow model was set as GliADCIRC. The tracking algorithm was set as virtual mooring. GENIoS_Python was set to check the glider status (header) every ten seconds. During the whole experiment, GENIoS_Python successfully detected every surfacing event and transmitted the waypoints (goto file) to the dockserver. The experiment was divided into two journeys with two different target points, respectively. #### Iii-B1 Journey 1 The first journey happened from March 2nd, 2023 to March 4th, 2023 UTC. The target point was 3118.0N, -8008.0E. As shown in Fig. 2, the real-time trajectory of the glider USF-SAM was directly approaching the target point under the navigation of GENIoS_Python. In terms of the full experimental result, it can be seen from the video [https://www.youtube.com/watch?v=AccpquPaL5Ik](https://www.youtube.com/watch?v=AccpquPaL5Ik) that GENIoS_Python successfully navigated the glider USF-SAM to achieve the red target point without twists and turns, saving considerable glider power and mission time. #### Iii-B2 Journey 2 The second journey happened from March 4th, 2023 to March 6th, 2023 UTC. The target point was 3110.0N, -8000.00E. It is obvious from Fig. 3 that the glider USF-SAM was approaching the target point under the navigation of GENIoS_Python. In terms of the full experimental result, it can be seen from the video [https://www.youtube.com/watch?v=R75oJJ1-U7Q](https://www.youtube.com/watch?v=R75oJJ1-U7Q) that GENIoS_Python successfully navigated the glider USF-SAM to move towards the red target point almost in a straight line. In order to assist glider pilots with flow analysis, GENIoS_Python provides the real-time comparison of glider-estimated flow, ADCIRC flow predictions and GliADCIRC flow predictions. If both the glider-estimated flow and flow predictions are larger than \(0.3m/s\), it is most likely that the glider has stepped into the strong flow field where the flow speed may exceed the maximum glider speed, causing significant trajectory deviation. The flow comparison of the whole USF-SAM deployment was shown in Fig. 4. For better visualization, the flow is divided into alongshore component and crossshore component. Considering that the gplan module relies on the GliADCIRC model to compute waypoints for the next 12 or 24 hours in real life, the flow compassion function offers GliADCIRC flow predictions for the upcoming 12 hours following the end of deployment (green line behind the black dashed line of timestamp). ## IV Conclusion In summary, underwater gliders play an important role in diverse oceanic applications. Nevertheless, their navigation performance can be hindered by strong ocean currents, leading to unexpected movement or severe abort. To address this issue, the GENIoS_Python software has been developed as a real-time autonomous glider navigation system. 
By employing flow predictions to generate waypoints, the software improves navigation accuracy and reduces the need for heavy intervention by glider pilots. This software displays multiple advantageous features, including close monitoring of glider status, customizable experiment settings, light computing requirements, stable communication with dockservers, extended operation time, and quantitative flow estimate comparisons. Future work will focus on data visualization to better interact with human pilots, such as plotting the offset between the expected surfacing position and the actual one in real time. We will also pay attention to incorporating more ocean flow models such as HFRadar and WERA.
Fig. 2: Real-time trajectory of glider USF-SAM in SFMC terminal. The red lines represent the depth-averaged ocean current. The yellow dots represent the real-time glider positions. The purple star represents the target point.
Fig. 3: Real-time trajectory of glider USF-SAM in SFMC terminal. The red lines represent the depth-averaged ocean current. The yellow dots represent the real-time glider positions. The purple star represents the target point.
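As an aside, the strong-current check used in the flow comparison above (flagging locations where both the glider-estimated and the predicted flow exceed 0.3 m/s) is simple to express in code. The sketch below is only illustrative; the function and variable names are ours and are not part of GENIoS_Python:

```python
import numpy as np

STRONG_FLOW_THRESHOLD = 0.3  # m/s, the threshold discussed in the flow comparison

def flag_strong_flow(glider_flow, predicted_flow, threshold=STRONG_FLOW_THRESHOLD):
    """Return True where both the glider-estimated and the predicted depth-averaged
    flow magnitudes exceed the threshold, i.e. where the glider has most likely
    entered a current stronger than its maximum through-water speed."""
    glider_speed = np.hypot(glider_flow[..., 0], glider_flow[..., 1])
    predicted_speed = np.hypot(predicted_flow[..., 0], predicted_flow[..., 1])
    return (glider_speed > threshold) & (predicted_speed > threshold)

# Example with two (east, north) flow estimates in m/s
glider_flow = np.array([[0.35, 0.10], [0.05, 0.02]])
predicted_flow = np.array([[0.32, 0.15], [0.40, 0.00]])
print(flag_strong_flow(glider_flow, predicted_flow))  # [ True False]
```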
2309.02167
Forgotten treasures in the HST/FOC UV imaging polarimetric archives of active galactic nuclei. I. Pipeline and benchmarking against NGC~1068 and exploring IC~5063
Over its 13 years of operation (1990 -- 2002), the Faint Object Camera (FOC) on board the Hubble Space Telescope (HST) observed 26 individual active galactic nuclei (AGNs) in ultraviolet (UV) imaging polarimetry. However, not all of the observations have been reduced and analyzed or set within a standardized framework. We plan to reduce and analyze the AGN observations that have been neglected in the FOC archives using a consistent, novel, and open-access reduction pipeline of our own. We then extend the method to the full AGN sample, thus leading to potential discoveries in the near future. We developed a new pipeline in Python that will be able to reduce all the FOC observations in imaging polarimetry in a homogeneous way. Most of the previously published reduced observations are dispersed throughout the literature, with the range of different analyses and approaches making it difficult to fully interpret the FOC AGN sample. By standardizing the method, we have enabled a coherent comparison among the different observational sets. In this first paper of a series exploring the full HST/FOC AGN sample, we present an exhaustively detailed account of how to properly reduce the observational data. Current progress in cross-correlation functions, convolution kernels, and a sophisticated merging and smoothing of the various polarization filter images, together with precise propagation of errors, has provided state-of-the-art UV polarimetric maps. We compare our new maps to the benchmark AGN case of NGC~1068 and successfully reproduce the main results previously published, while pushing the polarimetric exploration of this AGN further, thanks to a finer resolution and a higher signal-to-noise ratio (S/N) than previously reported. We also present, for the first time, an optical polarimetric map of the radio-loud AGN IC~5063 and we examine the complex interactions between the AGN outflows and the surrounding interstellar medium (ISM).
Thibault Barnouin, Frédéric Marin, Enrique Lopez-Rodriguez, Léo Huber, Makoto Kishimoto
2023-09-05T12:08:54Z
http://arxiv.org/abs/2309.02167v1
# Forgotten treasures in the HST/FOC UV imaging polarimetric archives of active galactic nuclei ###### Abstract Context:Over its 13 years of operation (1990 - 2002), the Faint Object Camera (FOC) on board the Hubble Space Telescope (HST) observed 26 individual active galactic nuclei (AGNs) in ultraviolet (UV) imaging polarimetry. However, not all of the observations have been reduced and analyzed or set within a standardized framework. Aims:We plan to reduce and analyze the AGN observations that have been neglected in the FOC archives using a consistent, novel, and open-access reduction pipeline of our own. We then extend the method to the full AGN sample, thus leading to potential discoveries in the near future. Methods:We developed a new pipeline in Python that will be able to reduce all the FOC observations in imaging polarimetry in a homogeneous way. Most of the previously published reduced observations are dispersed throughout the literature, with the range of different analyses and approaches making it difficult to fully interpret the FOC AGN sample. By standardizing the method, we have enabled a coherent comparison among the different observational sets. Results:In this first paper of a series exploring the full HST/FOC AGN sample, we present an exhaustively detailed account of how to properly reduce the observational data. Current progress in cross-correlation functions, convolution kernels, and a sophisticated merging and smoothing of the various polarization filter images, together with precise propagation of errors, has provided state-of-the-art UV polarimetric maps. We compare our new maps to the benchmark AGN case of NGC 1068 and successfully reproduce the main results previously published, while pushing the polarimetric exploration of this AGN futher, thanks to a finer resolution and a higher signal-to-noise ratio (S/N) than previously reported. We also present, for the first time, an optical polarimetric map of the radio-loud AGN IC 5063 and we examine the complex interactions between the AGN outflows and the surrounding interstellar medium (ISM). Conclusions:Thanks to our newly and standardized reduction pipeline, we were able to explore the full HST/FOC AGN sample, starting with observations that had not been previously published (e.g., IC 5063 here). This pipeline will allow us to make a complete atlas of UV polarimetric images of the 26 unique AGNs observed by the FOC, highlighting the importance and necessity of (imaging) polarimeters for the upcoming new generation of 30-m class telescopes. ## 1 Introduction Polarimetry has proven to be one of the most resourceful observational methods in astronomy (Hildebrand, 2005; Hough, 2006). From stars to planets, supernovae remnants to gamma-ray bursts, polarimetry has brought a wealth of information about the geometry and composition of cosmic sources, including magnetic field intensity and topology in both small- and large-scale structures. However, it is probably the field of active galactic nuclei (AGNs) that polarimetry has contributed the most (Marin, 2019, and references therein), starting with the proposition of a unified model for AGNs from optical and ultraviolet (UV) polarimetry (Antonucci & Miller, 1985), followed by the uncovering of a near-infrared (NIR) polarized accretion disk spectrum in quasars (Kishimoto et al., 2008), or, most recently, the dichotomy of radio-loud and radio-quiet quasars in far-infrared (FIR) polarimetry (Lopez Rodriguez, 2023). 
One of the major challenges with AGNs is that they are extragalactic, compact objects that cannot be resolved using conventional imaging techniques. As the typical scale of the internal disk of an AGN is \(\sim\)2pc, for NGC 1068 (most studied Type 2 AGN at \(\sim\)13.5Mpc), it would require a spatial resolution of less than 10 milliarcseconds to resolve. Even interferometry is only able to resolve a couple of the closest or most massive sources (e.g., Event Horizon Telescope Collaboration et al., 2019; Gamez, Rosas et al., 2022; Isbell et al., 2022). This method recently was able to get to milliarcseconds resolution with VLTI/MIDI and VLTI/MATISSE in the mid-IR and 100 microarcseconds resolution with CHARA array in the near-IR that has allowed for the resolution the nuclei of NGC 1068 (Lopez-Gonzaga et al., 2014; Gamez Rosas et al., 2022), Circinus galaxy (Tristram et al., 2014; Isbell et al., 2022), and NGC4151 (Kishimoto et al., 2022). Polarimetry has the unique advantage of transcending spatial constraints: polarimetric techniques can provide physical information well below the beam of the observations because only the polarized source is measured, while the unpolarized light does not contribute to the total polarized light. A disk's morphology can be easily told apart from a spherical morphology, even if the region is beyond the resolving power of the best imaging telescope. Using this pivotal characteristic, it is strikingly evident why polarimetry is the key to improving our understanding of the compact and luminous regions at the center of host galaxies known as AGNs. Their characteristics indicate that their intrinsic luminosity is not produced by stars. Instead, a supermassive black hole (\(10^{5}-10^{10}\) solar masses, Shankar et al., 2004; Inayoshi & Haiman, 2016) is thought to accrete matter at the center of the AGN, releasing thermal radiation that peaks in the blue-UV band of the electromagnetic spectrum (Shakura & Sunyaev, 1973). Surrounding this sub-parsec-scale central engine, there are outflows, relativistic jets, clouds of gas with various ionization stages, and a reservoir of dust and molecular gas and stars. The global picture of AGNs is certainly not easy to decipher and even the most up-to-date models struggle with establishing their true locations, compositions, and generation mechanisms of their broad variety of structures. The most enigmatic of all structures lies in the accreting region surrounding the central supermassive black hole (Antonucci, 1993; Urry & Padovani, 1995; Elvis, 2000; Netzer, 2015). This is an area where polarimetry can prove especially handy. In addition, since the accretion engine emits the most in the UV band, where the starlight polluting contribution is weak, it can be most efficient to observe AGNs in this specific waveband. Today, there are no longer any far- or mid-UV polarimeters available for AGN observations. A few telescopes mounted with spectropolarimetric instruments reaching the near-UV band still exist (such as the VLT/FORS2 or the HST/WFPC2) but it would be necessary to observe high-redshift sources in order to probe the far-UV band1. This is highly limiting since high-redshift objects are often fainter than AGNs from the nearby Universe and the required amount of time to reach a minimal signal-to-noise ratio (S/N\(\geq 3\)) in polarization is unfeasible. 
To examine the UV polarization of AGNs, we must rely on past instruments, namely the Wisconsin Ultraviolet Photoplarimeter Experiment (WUPPE, Nordsieck & Code, 1982; Stanford et al., 1985; Code & Nordsieck, 1989) and various instruments aboard the Hubble Space Telescope (HST). WUPPE provided the first UV polarized spectra of 5 AGNs (NGC 4151, NGC 1068, 3C 273, Centaurus A, and Mrk 421; see Marin et al., 2018). On the other hand, a total of 2,000 datasets were acquired by the various UV polarimeters that equipped the HST from Cycle 0 to Cycle 22. This corresponds to several dozens of AGNs. Among those observations, for each of these instruments, a fraction of the observations lack any associated publication. As an example, 19% of AGN proposals in the FOC archives are yet to be published, implying a deep potential pool of scientific discoveries. Footnote 1: For a galaxy at redshift \(z=3\), the Lyman break will appear to be at wavelengths of about 3600 Å The FOC served as a particularly interesting polarimeter. It was one of HST's five instruments at launch and consisted of a long-focal-ratio, photon-counting imager capable of taking high-resolution images in the wavelength range 1150 - 6500 A. When corrected by COSTAR, the field-of-view (FoV) and pixel size of the f/96 camera were \(7"\times 7"\) (512 \(\times\) 512 pixel\({}^{2}\) format) and 0.014\("\times 0.014"\), respectively. Other configurations were also possible thanks to a different optical relay (f/48). The excellent spatial resolution offered by the FOC, coupled to the very low instrumental polarization and excellent polarizing efficiencies of the polarizers in the f/96 relay made the FOC a unique instrument. No other polarimeter (or any non-polarimetric instruments for that matter) have fully used the spatial resolution capabilities of the Optical Telescope Assembly (OTA) of the HST. Unfortunately, the FOC was replaced by the ACS during Servicing Mission 3B (March 7, 2002). Because the FOC was ahead of its time and was one of the most promising instruments to achieve great discoveries in the field of AGNs, it is a pity that 19% of the AGN proposals in the FOC archives lack any exploitation (5/26 AGNs observed with the FOC were never published). Therefore, in this series of papers, we have decided to propose a rigorous, systematically complete, and consistent re-analysis of all raw HST imaging polarimetric AGN observations from the FOC in the HST archive to enable science deferred or unachieved by many approved programs. In 2005, the Canadian Astronomy Data Center (CADC), in collaboration with STScI, decided to produce the final calibration files for the science observations and the whole FOC dataset was re-calibrated accordingly (Kamp et al., 2006). From this data reprocessing, CADC noticed a sensible modification of the science data due to a new "best geometric reference." Since this re-calibration and the release of consistently processed data, no re-analysis of the observation has been published for the AGN dataset. In addition, most of the old reduced observations are dispersed throughout the literature, with different analyses and approaches, making it difficult to fully interpret the FOC AGN sample. This is why we decided to create a consistent, novel and open-access reduction pipeline of our own to produce high-level, science-ready, polarimetric products for the scientific community as well as polarimetric data reduction packages. 
Ultimately, we aim to explore, download, reduce, and present all the polarimetric images taken with the FOC in a standardized way. A large sample of radio-loud and radio-quiet AGNs is indeed necessary to investigate whether all the differences between pole-on and edge-on objects can be explained by an inclination effect (Antonucci, 1993; Marin, 2016) or whether morphological differences of the circumnuclear region must also be taken into account (Ramos Almeida et al., 2009; Alonso-Herrero et al., 2011; Ichikawa et al., 2015; Lopez Rodriguez, 2023). Using a large AGN sample also allows us to study their thermal and non-thermal physical components (Antonucci, 2012, 2015), which, in turn, enables us to put physical constrains in the AGN components. In this first paper of the series, we present a detailed description of the new reduction pipeline in Sect. 2. We test our methodology against a well-known, previously published FOC polarimetric image of NGC 1068 in Sect. 3. We then proceed with the reduction of a forgotten FOC observation of the Seyfert-2 galaxy IC 5063 (PKS 2048-572) in Sect. 4. We discuss our results Sect. 5 and present our conclusions in Sect. 6. ## 2 Reduction pipeline In this section, we present the general reduction pipeline methodology and our choice of various algorithms to extract as much information as possible from the raw data. We created our new reduction pipeline in python language, making use of this easy-to-read tool for optimized reduction methods. The pipeline is already available to grab on the author's Git2. We focused our work on the FOC instrument but it was written to be modular so that it is relatively easy to add other instruments to the pipeline. The overall data reduction steps is summarized in a diagram in Fig. 1. The FOC instrument measures polarization state by performing three consecutive observations of the same target through three polarizer filters with complementary polarization axis angles, \(\theta_{1}=0^{\circ}\), \(\theta_{2}=60^{\circ}\), and \(\theta_{3}=120^{\circ}\), usually referred to as POL0, POL60, and POL120 respectively. The reduction procedures of polarization observations with the FOC instrument call for at least three rounds of observations with the same instrument, but through three different filters with different properties (see Keyes 1998, Section 8.7). The FOC instrument itself has some photometric uncertainties that also ought to be taken into account (see Keyes 1998, Section 8.3) as well as specific issues to the filter wheel, the uncertainty in the polarizer axis directions, PSF differences, and throughput issues (see Nota & et al. 1996, Sections 4.4.3 and 11.2.6) that induce relative uncertainties between observations. To better implement these uncertainties into our data reduction, we chose to go back to the most generic description of the flux through a polarizer, as described in the appendix of Kishimoto (1999). The three polarized exposures obtained are then combined into the Stokes parameters to compute the polarization state, as described in Section 2.7. For the remainder of this section, actual examples will be provided using the Feb 28, 1995 (5:33AM) FOC observation of NGC 1068 (ID: 5144), which we will explore in further detail in Sect. 3. Footnote 2: git.unistra.fr/t.barnouin/FOC ̇Reduction ### Data importing and selection of the region of interest The data were imported from astrophysics standard FITS files, making use of information in headers to optimize the whole pipeline without requiring user inputs. 
The required FITS files were calibrated data products that can be retrieved from the MAST HST Legacy Archive3. Footnote 3: archive.stsci.edu/missions-and-data/hst A "query" utility that depends on astroquery Python package allows us to download FOC's _c0f files from the terminal, given a target name (and possibly a proposal id). Otherwise, the user can feed its own FITS files to the pipeline, as long as their HEADER contain the identifying keywords defined in the HST/FOC Data Handbook Chapter 5.2 (Keyes 1998). We made use of the Calibrated exposure FITS files, whose suffix is _c0f in the MAST archives. We immediately translated each observation count as count rates, using the EXPTIME header keyword containing the exposure time of the data set. For a better handling of the data these count rates are conserved as such during the whole pipeline and only translated into physical units when displaying the relevant data through plots. This is done using the PHOTFLAM header keyword containing the inverse sensitivity conversion factor (Nota & et al. 1996). In our case, count rates were transformed as fluxes in erg.cm\({}^{-2}\).s\({}^{-1}\).A\({}^{-1}\). For 2D images (e.g., FOC outputs), the observational data are processed through a Graham's scan algorithm (Graham 1972) for a better selection of the region of interest (ROI). This algorithm finds the convex hull of a set of \(n\) points in the plane with a complexity \(O\left(n\log n\right)\), cropping out non-exploitable values from the data matrix (infinite numbers, zeros,...). An automatized function finds the optimal rectangle-shaped image that contains only valuable data from the observation and removes the unusable empty borders, artifacts from the finite size of the detector and non-physical calibration procedures. The implemented version of this algorithm takes the full set of observations and concurrently crop out the undesired edges on every observation. To do so, it takes as the parameters the pixel step to create the image shell (allows us to reduce the number of points to be considered to run the algorithm), the value to be discarded, and the choice of whether the final crop should be the intersection or the union of all individual crops. The obtained shell is then intersected with the ones obtained for each observations (and each half wave plate) to get an uniform cropping across the whole dataset. An example of such preliminary data selection can be seen in Fig. 2. On average, this procedure remove 15 - 18% of the original raw image of 512 \(\times\) 512 pixels. ### Deconvolution. Before the installation of COSTAR, the FOC point spread function (PSF) suffered from severe spherical aberration, which meant that a circular aperture of 0.1" radius contained only 15 - 18% of the light from a star instead of the expected 70%. COSTAR has restored much of the OTA capabilities, in the sense that the COSTAR-corrected PSF contains more than 75% of the light within a radius of 0.1" at visible wavelengths. The FOC PSF typically measures \(1.6-1.8\) pixels in the visible (\(\sim 0.08^{\circ}\) at full width at half maximum, FWHM), as in Nota & et al. (1996). To recover the underlying fine structures that has been blurred by the photo-diodes of the detector, our pipeline implements several deconvolution algorithms that can be used to treat the raw images before any reduction. A linear regularized deconvolution method was implemented using a standard Wiener filter (Wiener 1949). 
It is a low-pass filter with some neighboring pixel regularization constraint. Its simplicity is optimal for stationary and Gaussian noise, but for spatially localized features such as singularities or edges, it comes with drawbacks; namely, it creates oscillations along the sharp contours and degrades the resolution. Several iterative regularized methods were also implemented, which enforce a set of additional constraints of positivity, support, and band limitation on a given object, \(O\). Because this process investigates the maximum likelihood in the case of Poisson-noise induced by a PSF \(P\), it has no closed-form solution and requires an iterative approach. The algorithm evaluates the most probable pixel in which a photon should be detected given the raw image, \(I\). The Van-Cittert method finds underlying structures in residual solutions at each iteration and put them in the restored image: \(O^{n+1}=O^{n}+\alpha(I-(P*O^{n}))\), where \(\alpha\) is a convergence parameter generally taken to be equal to 1 (van Cittert 1931). The one-step gradient method replaces this convergence parameter with a convolution of the residual with the inverted PSF : \(O^{n+1}=O^{n}+P^{n}*(I-(P*O^{n}))\). The Richardson-Lucy method multiplies the previously computed deconvolved image with a weighted image made up of a convolution of the data with the known PSF of the detector : \(O^{n+1}=O^{n}(\frac{I}{P*O^{n}}*P^{*})\) (Richardson 1972). Finally, the conjugate gradient iterative method solves the inverse problem with PSF convolution and regularization constraint in an optimized way: the search direction for the solution \(O^{n}\) is orthogonal to the direction of the gradient of the residual function \(R^{n}(x,y)=(I-P*O^{n})(x,y)\). In our pipeline, we let the user decides if a deconvolution should be applied and how many times any iterative algorithm should be run. This is critical for pre-COSTAR observation but less important in the case of post-COSTAR data, as explained previously. In our example of NGC 1068, no deconvolution was performed on the data. ### Error computation and propagation. The background noise is estimated on the calibrated data, before being processed through data alignment and resampling. A very basic first method searches for a common region in all observations and of user defined pixel size (basically, \(\sim\) 10% of the image size) with the least integrated flux. We assume this sub-image to be background dominated and we estimate the background by taking the root mean square of the selected sub-image (see Fig. 3). The user can check for the evolution of the background flux during each observation (see Fig. 4) and verify that there is no transient source involved. Another more robust method takes into account the intensity histogram of each image and assume that the background is the most represented intensity bin (see Fig. 5). The binning is done with a logarithmic range, in such a way that lower intensities get more precise binning than high intensities, and the number of bins is given by the Freedman-Diaconis rule for a sample, \(x\), of size, \(n\): \[N_{bins}=\frac{\max(x)-\min(x)}{2\cdot\frac{IQR(x)}{\sqrt{n}}},\qquad\text{ with }IQR\text{ the interquartile range. } \tag{1}\] Figure 1: Diagram representing the pipeline’s reduction operations from the raw data obtained from the HST Legacy Archives to the obtained polarization maps. 
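To make the histogram-based background estimate more tangible, a minimal sketch of this step is given below. It is only an illustration, not the pipeline's actual code: the function and variable names are ours, and the bin count uses the standard cube-root form of the Freedman-Diaconis rule (with a square-root-rule fallback), which may differ in detail from Eq. (1).

```python
import numpy as np

def estimate_background(image):
    """Estimate the background count rate as the most represented bin of the
    log-spaced intensity histogram, as described in Sect. 2.3."""
    data = image[np.isfinite(image) & (image > 0)].ravel()
    iqr = np.subtract(*np.percentile(data, [75, 25]))
    if iqr > 0:
        width = 2.0 * iqr / np.cbrt(data.size)      # Freedman-Diaconis bin width
        n_bins = int(np.ceil((data.max() - data.min()) / width))
    else:
        n_bins = int(np.sqrt(data.size))            # square-root rule fallback
    # Logarithmic binning: faint (background-dominated) intensities get finer bins
    edges = np.logspace(np.log10(data.min()), np.log10(data.max()), n_bins + 1)
    counts, _ = np.histogram(data, bins=edges)
    i = np.argmax(counts)                           # most populated intensity bin
    return 0.5 * (edges[i] + edges[i + 1])          # its centre = background level
```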
If it is required by other flux statistics during the observations, the user can also choose for the number of bins to be computed by the following rules: square-root (\(N_{bins}=\sqrt{(}n)\)), Sturges (\(N_{bins}=\log_{2}(n)+1\)), Rice (\(N_{bins}=2\sqrt{n}\)), and Scott (\(N_{bins}=\frac{\max(x)-\min(0.02)}{3.5\sqrt{2}}\)). This second method has little dependence on transient sources as it looks for an intensity plateau rather than an image location. It can also better estimate observation-dependent levels, as can be seen in the deviation of estimated background values in Table 1. These differences in intensity levels can come from different parameters of observation or calibration. At this point, the background value is subtracted to the whole image. From there are computed and quadratically summed the uncertainties in the "correction factors" as a percentage of the observed flux in each pixel. Following Kishimoto (1999), the wavelength dependence of the transmittance of each polarizer filter \(\sigma_{wav}\) is taken to be 1%, the differences in PSFs through each polarizer filter, \(\sigma_{prf}\), is taken to be 3% and the heavily smoothed flat-fielding uncertainty, \(\sigma_{flat}\), is taken to be 3%. ### Data alignment. The polarized data come from multiple observations with different polarizer filter. To extract the polarization information in the Stokes convention, we must sum the datasets, but it is only possible if the data have been previously aligned. To do so, our method implements a 2D image alignment to sub-pixel precision using a factor of 10 oversampling, allowing us to align our images to a precision up to 0.1 pixel precision, corresponding to an alignment precision between observations of 0.0014 arcsec. This is done through cross-correlation of the phase-space of the misaligned images, as described in Guizar-Siciaros et al. (2008). Each image is then linearly shifted accordingly, using the relation in Eq. 2 to compute the value in each pixel. Once aligned using the imaged large scale structures, the uncertainty coming from the different observations shifts is computed from the displacement with respect to the reference dataset. This uncertainty is computed for each pixel in the resulting image, as half of the difference of the values in this pixel before and after shifting the data (see Eq. 3). This uncertainty is quadratically summed to the global uncertainty inside the pixel: \begin{table} \begin{tabular}{c|c|c|c} Filter name & Observation date \& time & \(N_{bins}\) & Estimated background (\(10^{-4}s^{-1}\)) \\ \hline POL0 & 1995+02-28 05:37:16 & 6348 & 6.43 \\ POL0 & 1995+02-28 06:07:33 & 2141 & 25.7 \\ POL0 & 1995+02-28 07:07:54 & 5747 & 7.78 \\ POL0 & 1995+02-28 07:37:16 & 4319 & 13.1 \\ POL0 & 1995+02-28 08:44:26 & 6091 & 10.3 \\ POL0 & 1995+02-28 09:06:15 & 6205 & 8.57 \\ POL0 & 1995+02-28 10:20:257 & 4727 & 16.3 \\ POL120 & 1995+02-28 10:37:59 & 8324 & 7.06 \\ POL120 & 1995+02-28 12:05:37 & 8696 & 6.49 \\ \end{tabular} \end{table} Table 1: Histogram binning and resulting estimation of the background (here in count rates) for each observation of NGC 1068. Figure 4: Background flux and error for each NGC 1068 dataset as a function of the observation time. The different colors represent the various polarizer filters (polarization axis angle of \(0^{\circ}\), \(60^{\circ}\) and \(120^{\circ}\) for the FOC instrument). Figure 3: Image of NGC 1068 for the first observation. The red rectangle delimits the region considered for background noise. 
The dark red pixels are considered to be below the background intensity value. Figure 2: Image of the POL0 polarizer filter for NGC 1068. The total exposure time is 1,796 seconds. The blue rectangle delimits the cropped region of interest after Graham’s scan, removing the borders containing non-permitant data for the analysis. \[I_{x,y}^{x\text{b/fred}}(\Delta x,\Delta y)= uv\cdot I_{x\times[\Delta x],y\times[\Delta y]}\] \[+u(1\!-\!v)\cdot I_{x\times[\Delta x],y\times[\Delta y]}\] \[+(1\!-\!u)v\cdot I_{x\times[\Delta x],y\times[\Delta y]}\] \[+(1\!-\!u)(1\!-\!v)\cdot I_{x\times[\Delta x],y\times[\Delta y]}\] \[\sigma_{x,y}^{\text{b/fred}}(\Delta x,\Delta y)=\left|\frac{I_{x,y} -I_{x,y}^{x\text{b/fred}}(\Delta x,\Delta y)}{2}\right|, \tag{3}\] with \(\left\{\Delta x,\Delta y\right.\) are the shifts along the x and y axis, with \(\left\{\Delta x,\lceil\Delta x\rceil\right.\) the floor and ceiling integers of \(\Delta x\), \(u=\Delta x-\lfloor\Delta x\rfloor,v=\Delta y-\lfloor\Delta y\rfloor\). ### Data binning We propose several methods to re-sample the data. Assuming the target pixel size is larger than the original pixel size, the user can resample the data in _arcsec_ or _pixel_ units and can choose to do it by averaging or summing the resampled data. This is done by re-binning the data matrix to a smaller shape using matrix products with rows and columns compressors. This resampling, while reducing the spatial resolution, allows us to get better statistics in the resized pixel that now sums or averages the events from each sub-pixels. This is mandatory to study polarization as polarized fluxes require high statistics to become meaningful. In the extreme case scenario, the user can integrate the whole image down to one pixel to simulate what a polarimetric instrument without imaging capabilities would have observed. Uncertainties computed from previous alignment and background subtraction procedures are propagated through re-sampling as the quadratic sum of the errors of the bin. This uncertainty is then quadratically summed to the root mean square (RMS) of the flux of the sub-pixels of the bin, accounting for some baseline noise: \[\sigma_{X,Y}^{re-sampling}=\sqrt{RMS_{X,Y}^{f}}^{2}+\sigma_{X,Y}^{propage 2}, \tag{4}\] with \(\left\{RMS_{X,Y}^{f}=\frac{\sqrt{\sum_{(x,y)\in(X,Y)}f_{x,y}^{2}}}{\sum_{(x,y )\in(X,Y)}1},\right.\) \(\left.\sigma_{X,Y}^{propage}=\sqrt{\frac{\sum_{(x,y)\in(X,Y)}\sigma_{x,y}^{2} \cdot f_{x,y}}{\sum_{(x,y)\in(X,Y)}f_{x,y}}},\right.\) where each pixel of the re-binned image at coordinates, \(X\), \(Y\), correspond to a subset of pixels \((x,y)\in(X,Y)\) in the original image of flux, \(f_{x,y}\), and associated error, \(\sigma_{x,y}\). ### Data smoothing Several options are available for smoothing the data. The idea behind data smoothing is to reduce noise from a data set to allow important patterns to more clearly stand out. A user-defined function can be convolved to the prepared data, before summing observations that were obtained through the same polarizer filter. The same convolution procedure can also be done after data combination. This convolution can be applied to a weighted dataset whose weights, \(w_{i,j}\), are the inverse square of the error for each pixel \(x,y\): \[S_{xy}=(s*g)_{xy}=\sum_{i,j}^{N_{pixels}}s_{ij}\cdot w_{ij}\cdot g(x\!-\!i,y\!- \!j), \tag{5}\] where \(g\) is some user-defined kernel to which the data should be convoluted. 
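Before turning to the error of the smoothed pixel, the weighted convolution of Eq. (5) can be sketched as follows. This is a simplified illustration rather than the pipeline's exact implementation (the names are ours), and the result is normalised by the summed weights so that flux levels are preserved:

```python
import numpy as np
from scipy.ndimage import convolve

def weighted_gaussian_smooth(signal, error, fwhm_pix):
    """Inverse-variance-weighted Gaussian smoothing of an image, in the spirit
    of Eq. (5): each pixel contributes with weight w = 1/error**2."""
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    half = int(np.ceil(3 * sigma))
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))   # Gaussian kernel

    w = 1.0 / error**2
    num = convolve(signal * w, g, mode="nearest")
    den = convolve(w, g, mode="nearest")
    smoothed = num / den
    # Propagated error: square root of the weighted convolution of the variances
    err = np.sqrt(convolve((error * w)**2, g**2, mode="nearest")) / den
    return smoothed, err
```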
The error of the smoothed pixel is computed by convoluting the square of the errors to the convolution kernel squared: \[\sigma_{xy}^{smoothing}=\sqrt{(\sigma^{2}*g^{2})_{xy}}=\sqrt{\sum_{i,j}^{N_{ pixels}}\sigma_{ij}^{2}\cdot w_{ij}^{2}\cdot g(x\!-\!i,y\!-\!j)^{2}}. \tag{6}\] Another finer data smoothing combines and smooth the data from multiple observation sets at the same time using a Gaussian kernel with user-defined FWHM. Given \(N\) observations through a given polarizer filter, the obtained combined and smoothed pixel at coordinates \((x,y)\) is given by: \[S_{xy}=\frac{\sum_{k}^{N}\sum_{i,j}s_{ij}^{k}\cdot w_{ij}^{k}\cdot g(x\!-\!i, y\!-\!j)}{\sum_{k}^{N}\sum_{i,j}w_{ij}^{k}\cdot g(x\!-\!i,y\!-\!j)}, \tag{7}\] where \(s_{ij}^{k}\) is the signal of the pixel at \((i,j)\) for observation \(k\), \(g(x\!-\!i,y\!-\!j)=e^{-\frac{(x\!-\!i,y\!-\!j)^{2}}{2\cdot x}}\) is a Gaussian kernel with \(\sigma=\text{FWHM}/(2\sqrt{2\sqrightharpoonup 2})\), and \(w_{ij}^{k}=1/{\varepsilon_{ij}^{k}}^{2}\) is the weight given by the inverse-squared error of this same pixel. The error on the combined pixel is obtained by taking the weighted root mean square of the errors: \[\sigma_{xy}^{smoothing}=\frac{\sqrt{\sum_{k}^{N}\sum_{i,j}{\varepsilon_{ij}^{k }}^{2}\cdot{w_{ij}^{k}}^{2}\cdot g(x\!-\!i,y\!-\!j)^{2}}}{\sum_{k}^{N}\sum_{i,j }w_{ij}^{k}\cdot g(x\!-\!i,y\!-\!j)}. \tag{8}\] ### Stokes parameters and polarization components Here, we describe how we computed the Stokes parameters and the uncertainties arising from the polarization measurement. We refer to Kishimoto (1999) and Clarke et al. (1972, p38, p100, p171) for more details. The FOC instrument is not equipped with a circular polarization analyzer that would allow us to characterize the ellipticity of the observed polarization. In the following, we assume a linear polarization, with a Stokes parameter of \(V=0\). We computed the remaining _1, Q, U_ Stokes parameters from the addition and subtraction of the theoretical flux through three polarizer filters with complementary polarization axis angles (\(\theta_{1}=0\)deg, \(\theta_{2}=60\)deg, \(\theta_{3}=120\)deg for the FOC). We defined the Stokes vector as \(\mathbf{S}=(I,Q,U)\). We call \(F_{i}\) the theoretical polarized flux that is not attenuated by the polarizer filter with axis angle \(\theta_{i}\). We define the theoretical polarized flux vector by Figure 5: Intensity histograms on which the background (vertical dashed line) is estimated for each observation. \[\mathbf{F}=\left(\begin{array}{cc}\frac{2\rho_{h_{1}}}{h_{1}},\frac{2\rho_{h_{2}}}{h_ {2}},\frac{2\rho_{h_{3}}}{h_{3}}\end{array}\right)\text{, where }t_{i}\text{ is the transmittance of the polarizer filter with axis oriented to angle \(\theta_{i}\). 
The general formula is the following: \(\mathbf{S}=A\cdot\mathbf{F}\) and the transformation matrix, \(A\), is given by : \[A=\frac{1}{N}\left[\begin{array}{cc}\frac{k_{h_{1}}\sin{(-2 \theta_{1}+2\theta_{2})}}{\left[\begin{array}{cc}k_{h_{1}}\sin{(-2\theta_{1} +2\theta_{2})}&k_{h_{1}}\sin{(-2\theta_{1}+2\theta_{2})}\\ -k_{h_{1}}\sin{2\theta_{1}+2\theta_{2}}\sin{2\theta_{1}+2\theta_{2}}&-k_{h_{1} }\sin{2\theta_{1}+2\theta_{2}}\\ k_{h_{1}}\cos{2\theta_{1}+2\theta_{2}}\cos{2\theta_{1}+2\theta_{2}}&k_{h_{1} }\cos{2\theta_{1}+2\theta_{2}}\end{array}\right]\text{,}} \tag{9}\] where \(N=\)\(k_{2}k_{3}\sin{(-2\theta_{2}+2\theta_{3})}+k_{3}k_{1}\sin{(-2\theta_{3}+2\theta_{1})}\) \[+k_{1}k_{2}\sin{(-2\theta_{1}+2\theta_{2})}\text{.}\] We then define \(A^{\prime}\) such that \(\mathbf{S}=A^{\prime}\cdot\mathbf{f}\) where \(\mathbf{f}=(f_{\theta_{1}},f_{\theta_{1}},f_{\theta_{2}})\) is the observed polarization flux vector. The error is propagated through the transformation of the variance-covariance matrix of the polarization flux, \(\mathbf{f}\) (\(V^{\mathbf{f}}\)), to that of the Stokes parameters \(\mathbf{S}\) (\(V^{\mathbf{S}}\)), where we assumed no correlation between the flux obtained through the different polarization filters: \[V^{\mathbf{S}}=A^{\prime}V^{\mathbf{f}}A^{\prime T}\text{,} \tag{10}\] \[\text{with }V^{\mathbf{S}}=\left[\begin{array}{ccc}\sigma_{\theta_{1}}^{2}& \sigma_{\theta_{2}}&\sigma_{\theta_{3}}\\ \sigma_{\theta_{1}}&\sigma_{\theta_{2}}^{2}&\sigma_{\theta_{3}}\\ \sigma_{\theta_{1}}&\sigma_{\theta_{2}}^{2}&\sigma_{\theta_{3}}\end{array} \right]\text{, }V^{\mathbf{f}}=\left[\begin{array}{ccc}\sigma_{\theta_{1}}^{2}&0&0\\ 0&\sigma_{\theta_{2}}^{2}&0\\ 0&0&\sigma_{\theta_{2}}^{2}\end{array}\right]\text{.}\] The statistical uncertainty is computed after the combination of the observed polarized flux for each Stokes parameter. The uncertainty on the polarized flux \(\sigma_{\theta_{j}}\) is calculated assuming Poisson noise in the counts and \(\forall k\neq j\), \(\sigma_{\theta_{j}/\theta_{k}}=0\), as they arise from different observations: \[\sigma_{S_{i}}^{stat2} =\sum_{j=1}^{3}\left|\frac{\partial S_{i}}{\partial f_{\theta_{j} }}\right|^{2}\cdot\sigma_{\theta_{j}}^{2}\text{ for }S_{i}\in[I,Q,U]\text{,} \tag{11}\] \[\sigma_{\theta_{j}} =\sqrt{\frac{r_{j}}{t_{j}}}\text{ for }f_{\theta_{j}}\in\left[f_{ \theta_{1}},f_{\theta_{1}},f_{\theta_{1}}\right]\text{,} \tag{12}\] with \(\forall j\), \(r_{j}\) represent the rate and \(t_{j}\) is the exposure time for the polarized flux, \(f_{\theta_{j}}\). We compute the partial derivative of \(S_{i}\) with respect to \(f_{\theta_{j}}\), knowing that \(\forall j\), \(\frac{\partial A^{\prime}}{\partial f_{\theta_{j}}}=0\): \[\frac{\partial S_{i}}{\partial f_{\theta_{j}}}=\sum_{k=1}^{3}A^{\prime}{}_{ik} \frac{\partial f_{\theta_{k}}}{\partial f_{\theta_{j}}}=A^{\prime}_{ij}\text{.} \tag{13}\] The polarizer filters axis angle is known to an uncertainty of \(3^{\circ}\)(Nota & et al. 1996) and this error comes into account when computing the Stokes parameters as they explicitly depend on \(\theta_{1,2,3}\) through the transformation matrix (Eq. 9). 
Assuming \(\sigma_{\theta}=3^{\circ}\), we compute the uncertainties from the polarizer filters orientation as follows: \[\sigma_{S_{i}}^{axis2}=\sum_{j=1}^{3}\left|\frac{\partial S_{i}}{\partial \theta_{j}}\right|^{2}\cdot\sigma_{\theta_{j}}^{2}\text{ for }S_{i}\in[I,Q,U]\text{,} \tag{14}\] where we compute the partial derivative of \(S_{i}\) with respect to \(\theta_{j}\) assuming \(\forall k\neq j\), \(\frac{\partial f_{\theta_{j}}}{\partial\theta_{j}}=0\): \[\frac{\partial S_{i}}{\partial\theta_{j}}=\frac{1}{N}\left[\sum_{k=1}^{3}\frac{ \partial a^{\prime}_{ik}}{\partial\theta_{j}}f_{\theta_{j}}-S_{i}\frac{ \partial N}{\partial\theta_{j}}\right]\text{ with }a^{\prime}=N\cdot A^{\prime}\text{.} \tag{15}\] These uncertainties are quadratically summed to the previously computed propagated errors. We then rotated Stokes parameters to have north directed up. From the header keyword ORIENTAT providing the angle between north and the image's y axis in the northeast direction, we get \(\alpha\) the rotation angle. We transform the Stokes parameters and covariance matrix in the following way : \[\mathbf{S}_{r}=R_{I}(-2\alpha)\cdot\mathbf{S}\text{,} \tag{16}\] \[V^{\mathbf{S}}_{r}=R_{I}(-2\alpha)\cdot V^{\mathbf{S}}\cdot R_{I} (-2\alpha)^{T}\text{,} \tag{17}\] where \(R_{I}(-2\alpha)=\begin{bmatrix}1&0&0\\ 0&\cos{(2\alpha)}&\sin{(2\alpha)}\\ 0&-\sin{(2\alpha)}&\cos{(2\alpha)}\end{bmatrix}\). The polarization degree and angle are determined from the Stokes parameters by the following well-known equations: \[P =\frac{\sqrt{Q^{2}+U^{2}}}{I}\text{,} \tag{18}\] \[\theta_{P} =\frac{1}{2}\arctan{\left(\frac{U}{Q}\right)}\text{.} \tag{19}\] And the associated errors are propagated as follows: \[\sigma_{P} =\frac{1}{I}\left\{\frac{Q^{2}\sigma_{Q}^{2}+U^{2}\sigma_{U}^{2}+2 QU\sigma_{QU}}{Q^{2}+U^{2}}+\frac{Q^{2}+U^{2}}{I^{2}}\sigma_{I}^{2}\text{,}\right. \tag{20}\] \[\left.-\frac{2Q}{I}\sigma_{IQ}-\frac{2U}{I}\sigma_{IU}\right)^{ \frac{1}{2}}\] \[\sigma_{\theta_{P}} =\frac{1}{2(Q^{2}+U^{2})}\left(U^{2}\sigma_{Q}^{2}+Q^{2}\sigma_{U} ^{2}-2QU\sigma_{QU}\right)^{\frac{1}{2}}\text{.} \tag{21}\] Due to the presence of noise, the normalized Stokes parameters are the only estimates of the true normalized Stokes parameters. To correct this bias, in the following, we refer to the polarization degree as its improved estimator: the debiased polarization degree of \(P_{debiased}=\sqrt{P^{2}-\sigma_{P}^{2}}\)(Simmons & Stewart 1985). ## 3 Benchmarking our pipeline against NGC 1068 In order to test our pipeline, we decided to re-analyze the FOC data of NGC 1068. This is the most archetypal type-2 (edge-on) radio-quiet AGN and thus the best target for benchmarking. It possesses the largest database of radio-to-UV polarization measurements (Marin 2018) and was even part of the original catalog of Carl Seyfert (Seyfert 1943). Its proximity to Earth (\(z\approx 0.00379\), which corresponds to a Hubble distance of \(\sim\) 13.48 Mpc in the standard \(\Lambda\)CDM model) allows us to resolve the first hundreds of parsecs thanks to the spatial resolution of the FOC (at \(z=0.00379\), \(1^{\circ}\) equals 81.5 pc). NGC 1068 was observed by the FOC on Sep 30, 1993 (10:32AM) and on Feb 28, 1995 (5:33AM), with the respective program IDs 3504 and 5144. However, the first observation (1993) was badly saturated along the AGN core direction due to an improper estimation of the expected UV flux (Capetti et al. 1995a). We thus focus our work on the second observation (1995). 
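To fix ideas before the benchmark, the end-to-end step of Sect. 2.7 can be sketched in a few lines. This is a deliberately simplified illustration, not the pipeline itself: it assumes ideal polarizers with equal transmittances (so the matrix of Eq. 9 reduces to the familiar three-polarizer combination, up to the sign convention of U), keeps only the Poisson term of Eq. (12), ignores the covariance and axis-angle terms, and the function names are ours.

```python
import numpy as np

def stokes_from_triplet(f0, f60, f120, t0, t60, t120):
    """Idealised POL0/POL60/POL120 combination into Stokes I, Q, U maps,
    with Poisson errors only (f* are count-rate images, t* exposure times)."""
    I = (2.0 / 3.0) * (f0 + f60 + f120)
    Q = (2.0 / 3.0) * (2.0 * f0 - f60 - f120)
    U = (2.0 / np.sqrt(3.0)) * (f60 - f120)
    # Poisson variance of a count rate r observed for a time t is r / t (Eq. 12)
    v0 = np.clip(f0, 0, None) / t0
    v60 = np.clip(f60, 0, None) / t60
    v120 = np.clip(f120, 0, None) / t120
    sI = (2.0 / 3.0) * np.sqrt(v0 + v60 + v120)
    sQ = (2.0 / 3.0) * np.sqrt(4.0 * v0 + v60 + v120)
    sU = (2.0 / np.sqrt(3.0)) * np.sqrt(v60 + v120)
    return (I, Q, U), (sI, sQ, sU)

def debiased_polarization(I, Q, U, sI, sQ, sU):
    """Polarization degree and angle (Eqs. 18-19), their errors (Eqs. 20-21 with
    covariances set to zero), and the Simmons & Stewart (1985) debiasing."""
    P = np.sqrt(Q**2 + U**2) / I
    theta = 0.5 * np.degrees(np.arctan2(U, Q))       # position angle in degrees
    sP = (1.0 / I) * np.sqrt((Q**2 * sQ**2 + U**2 * sU**2) / (Q**2 + U**2)
                             + (Q**2 + U**2) / I**2 * sI**2)
    s_theta = np.degrees(0.5 / (Q**2 + U**2)
                         * np.sqrt(U**2 * sQ**2 + Q**2 * sU**2))
    P_deb = np.sqrt(np.clip(P**2 - sP**2, 0.0, None))  # debiased estimator
    return P_deb, sP, theta, s_theta
```

In the full pipeline, the complete transformation matrix and the covariance propagation of Eqs. (10)-(17) replace these shortcuts.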
The dataset was obtained through the F253M UV filter centered around 2530 A, together with the polarizing filters POLO, POL60, POL120. The optical relay f/96 was selected to obtain a FoV of \(7^{\circ}\times 7^{\circ}\) and a pixel size of \(0.014^{\circ}\)\(\times\) 0.014\({}^{\circ}\). It results in a \(512\times 512\) pixelated image of the source and its environment. Each polarizing filter acquired \(\sim\) 3 500 seconds worth of observation for a total exposure time of 10 581 seconds. This observation was first published by Capetti et al. (1995b) and further analyzed by Kishimoto (1999). In Fig. 6, we show the total intensity map with the polarization information superimposed to it. We rebinned the data in order to get a pixel size of 0.1", similarly to Capetti et al. (1995b) for a direct comparison. The pixels were smoothed by a Gaussian kernel of standard deviation 0.2". Only the bins with a S/N higher than 30 in total flux and higher than 3 in polarization have their polarization vectors shown. We display in the top-left corner of the figure the total flux, polarization degree, and polarization angle as integrated over the whole FoV. At 2555A pivot wavelength we observe an integrated flux of \((48.63\pm 3.21)\cdot 10^{-15}\) erg.s\({}^{-1}\).cm\({}^{-2}\).A\({}^{-1}\) with a polarization of \(11.4\pm 0.2\) % at position angle \(97.6\pm 0.4^{\circ}\). The total flux image is dominated by a compact region of about \(3\times 3\) pixels (\(0.3"\times 0.3"\)) that is situated at the base of the polar outflows that extend in the northern and southern directions, although the northern part is much more visible in total flux. The southern part suffers from slightly higher reddening from the dust of the host galactic plane (Kishimoto 1999). The polarization vectors allow to highlight the double-conical morphology of the winds, with a much higher contrast than in pure intensity. The polarization pattern seen in the winds is centro-symmetric, pinpointing the source of emission even if it is hidden by an optically thick dusty region. This circumnuclear region, often called the "torus", is not coincident with the brightest spot of total flux but is immediately below it (\(\sim 0.3"\) towards the south), where the total intensity decreases due to heavy absorption by the dust and gas mixture (Kishimoto 1999). Focusing on the regions with the highest S/N, the structure of the bi-polar winds is revealed thanks to polarimetry. The polarization degree in the brightest region is of the order of \(15-20\)% and significantly increases along the winds, up to \(35-40\)%. Figure 6: Total flux F\({}_{1}\) (erg.s\({}^{-1}\).cm\({}^{-2}\).Å\({}^{-1}\), color-coded) of NGC 1068, with the polarization information superimposed to the image using white vectors. The linear polarization degree is proportional to the vector length while the polarization position angle is indicated by the orientation of the vector (a vertical vector indicating a polarization angle of 0”). North is up, east is left. We show the full FoV so that no potential information is lost. A spatial bin corresponds to 0.1”. The contours are displayed from 1% to 99% every 10% of the maximum value of \(4.863\cdot 10^{-14}\) erg.s\({}^{-1}\).cm\({}^{-2}\).Å\({}^{-1}\). The smoothness of the contours is tightly linked to the amount of smoothing done in the reduction process. A direct comparison with the results of Capetti et al. (1995b) validates ours results. 
The centro-symmetric pattern is well reproduced and the differences in morphology observed between the northern and southern outflows coincides between Fig. 2 in Capetti et al. (1995b) and our Fig. 6 with a S/N cut of 30 in total flux. We find similar polarization degrees in various positions in the winds, with the exception of the location of the highest patch of polarization degree. In Capetti et al. (1995b), the authors detect a 65% polarization level immediately west of the brightest spot of total flux, while such this region only displays 20% linear polarization in our results. This is very likely due to the different steps between the two reduction pipelines. While in Capetti et al. (1995b) the errors in the Stokes parameters were computed assuming Poisson statistics, we propagated the errors from the calibrated data from the archive. In addition, the smoothing of the image can play a crucial role in blurring or increasing the polarization of a given pixel. No indication on the smoothing process were given in Capetti et al. (1995b), but we tried using different Gaussian kernels without succeeding to find the same patch of high polarization. On the other hand, we note that Kishimoto (1999) re-analyzed the same data and neither found this 65% polarization spot. It indicates that there might have been a small misalignment effect or a numerical artifact in the reduction method of Capetti et al. (1995b). There are, in fact, no evident reasons why such a high polarization degree should exist outside the polar winds half-opening angle, directly west of the torus location, where the central irradiation should be absorbed by the circumnuclear dust wall. We also note that we present a much wider view of NGC 1068, while Capetti et al. (1995b) cropped their results to a FoV of 3.3" \(\times\) 2.9". Integrating the total flux over the 7" \(\times\) 7" image with a binning of 0.10" and a Gaussian combination smoothing of a FWHM of 0.20" gives us about 4.86 \(\pm\) 0.32 \(\times\) 10\({}^{-14}\) erg.s\({}^{-1}\).cm\({}^{-2}\).A\({}^{-1}\), which agrees with the 4.79 \(\pm\) 0.20 \(\times\) 10\({}^{-14}\) erg.s\({}^{-1}\).cm\({}^{-2}\).A\({}^{-1}\) flux recorded by the International Ultraviolet Explorer (IUE) for a 10" \(\times\) 20" fixed aperture at 2700 A, see Kinney et al. (1993). The integrated polarization degree and polarization angle are 11.4% \(\pm\) 0.2% and 97.6" \(\pm\) 0.4", respectively. These values are in good agreement with the WUPPE polarization measurement of NGC 1068 made by Code et al. (1993), who found 12.9% \(\pm\) 1.9% and 112" \(\pm\) 3.8" for an aperture of 6" \(\times\) 12". Through the 1" aperture of the Faint Object Spectrograph (FOS) centered on the peak of the continuum emission, Antonucci et al. (1994) measured a degree of polarization of 17.2% \(\pm\) 1.1% and a position angle of 91.1" \(\pm\) 1.8" in the range 2460-2760 A. By simulating a 1" diameter aperture on the FOC data, we obtained 10.7% \(\pm\) 0.1% and a position angle of 91.0" \(\pm\) 0.7"(see Fig. 7). Another test was made by comparing our polarization map to the one of Kishimoto (1999). In this case, we used a 10 pixel binning, without smoothing, and taking the intersection of both maps cut at [S/N]\({}_{p}\geq\) 3 and [S/N]\({}_{I}\geq\) 30. 
The results are shown Figure 8: Direct comparison of the polarization map obtained by this pipeline to the one obtained by Kishimoto (1999) for the same 10 pixel binning, without smoothing and taking the intersection of both maps cut at [S/N]\({}_{p}\geq\) 3 and [S/N]\({}_{I}\geq\) 30. Figure 7: Comparison of the integrated polarization obtained by this pipeline through a simulated aperture of 1” diameter (green encircled region) to the one obtained by Antonucci et al. (1994) with the FOS spectropolarimeter. The FOC data was binned to a pixel of 0.1” and smoothed with a Gaussian of a FWHM of 0.2”. The polarization vectors are shown for [S/N]\({}_{p}\geq\) 3 and [S/N]\({}_{I}\geq\) 30. Figure 9: Distribution of the polarization degree obtained by this pipeline to the one obtained by Kishimoto (1999) in the same cut at [S/N]\({}_{p}\geq\) 3 and [S/N]\({}_{I}\geq\) 30. in Fig. 8. The polarization pattern reproduces the shape of the figure from Kishimoto (1999), as the exact same location of the statistically significant pixels. We note, however, a slight difference in the vector length, likely due to a different background estimation. In Kishimoto (1999), the background is estimated by finding a plateau in the outskirt of the radial flux profile as a function of the distance from the center, this plateau value is taken to be the image background. As this sort of technique fails to properly estimate a background value for polluted sources (as we will see with IC 5063 in section 4), we chose to implement a more general approach, hence the possible difference. In Fig. 9, we compare the measured polarization degree from each analysis pipeline using a Gaussian fit. As observed on the superimposed polarization maps, the detected polarization degree is slightly higher from this pipeline with a mean at 23% and a larger distribution than from Kishimoto (1999), which shows a mean at 19% and a more peaked distribution. Hence, the two pipelines give similar results within the uncertainties. The difference most likely comes from the method of background subtraction and how we estimate the uncertainties, as we use the debiased polarization degree. To compare the alignment of the obtained polarization angles in each pixel, we use circular statistics and introduce the metric \(\zeta\)(Clark & Hensley 2019), as defined in Eq. 22, with \(\theta_{1}\) as the polarization angle from Kishimoto (1999) and \(\theta_{2}\) the one from this pipeline. \[\zeta=\cos\left(2\delta\theta\right), \tag{22}\] \[\mbox{with }\delta\theta=\frac{1}{2}\arctan\left[\frac{\sin \left(2\theta_{1}\right)\cos\left(2\theta_{2}\right)-\cos\left(2\theta_{1} \right)\sin\left(2\theta_{2}\right)}{\cos\left(2\theta_{1}\right)\cos\left(2 \theta_{2}\right)+\sin\left(2\theta_{1}\right)\sin\left(2\theta_{2}\right)} \right],\] and \(\theta_{1},\theta_{2}\) the angles to be compared. Here, \(\zeta\) is defined on the range of \([-1,\,1]\), such that two perfectly aligned distributions will have \(\zeta=1\), two perpendicular distributions will have \(\zeta=-1\), and two distributions with no statistical alignment will have \(\zeta=0\). From Fig. 10, we can see that both pipeline get almost identical polarization angles, \(\zeta>0.8\), where the polarized flux is stronger and where the statistics are better. Outside of these regions of high S/N, and where background estimation and uncertainties becomes non-negligible, the alignment is less strong, but still in agreement. 
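The ζ statistic of Eq. (22) is straightforward to evaluate; a minimal sketch (the function name is ours, and the equivalent two-argument arctangent is used so that the half-difference falls in the correct quadrant) is:

```python
import numpy as np

def alignment_zeta(theta1, theta2, degrees=True):
    """Circular alignment statistic of Eq. (22) (Clark & Hensley 2019) between two
    maps of polarization position angles: +1 parallel, -1 perpendicular, ~0 for
    no statistical alignment."""
    if degrees:
        theta1, theta2 = np.radians(theta1), np.radians(theta2)
    delta = 0.5 * np.arctan2(np.sin(2 * theta1 - 2 * theta2),
                             np.cos(2 * theta1 - 2 * theta2))
    return np.cos(2 * delta)

# e.g. alignment_zeta(np.array([10., 45.]), np.array([12., 135.])) -> [~0.998, -1.]
```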
Since the whole HST/FOC dataset was re-processed in 2005 with new geometric references and the latest filter calibrations (Kamp et al. 2006), the archival data are intrinsically different from the science data that were used during the FOC lifetime. This can explain the differences in the polarization degree histogram in Fig. 9 and the polarization angle misalignment in the clouds N-W and S-E of the nucleus location in Fig. 10. Additionally, it became important to align the different observation by cross-correlation and not by using the images shifts provided by Hodge (1996). These were used for the data reduction by Kishimoto (1999) but the new best geometric reference modified these values, as it can be seen in Fig. 11, in which the observations are incorrectly shifted using the pixel shifts values from Hodge (1996). A deeper examination of the flux and polarization pattern of NGC 1068 is presented in the mosaic of Fig. 12. Here we plot zoomed-in maps of the AGN, showing the total flux (top-left), polarized flux (top-right), polarization degree (bottom-left), and polarization position angle (bottom-right). Comparing the total and polarized flux map, we see that the sinus shape of the winds is much more emphasized by the polarized flux, where re-processing is clearly revealed. This makes it possible to identify the geometry of the winds with greater precision, using the fact that polarization offers a better contrast than total flux images. Different types of data smoothing were implemented in the pipeline to allow reproduction of older data reductions and results. A simple Gaussian smoothing with a FWHM determined by a pixel radius allow to reproduce the data reduction as usually done by previous papers (see Capetti et al. 1995b; Antonucci et al. 1994). However, for future works, we prefer to use smoothing weighted on each pixel related error. A comparison between Figure 11: Same as Fig. 8, except that the alignment uses the shift values from Hodge (1996); i.e., there is no cross-correlation, highlighting the fact that the 2005’s re-calibrated dataset have updated geometric properties in comparison to the dataset used in Capetti et al. (1995a,b) and Kishimoto (1999). Figure 10: Degree of alignment of the polarization angle obtained by this pipeline to the one obtained by Kishimoto (1999) in the same cut at \([\mbox{S/N}]_{p}\geq 3\) and \([\mbox{S/N}]_{l}\geq 30\). these smoothing methods can be seen in Fig. 13 and it highlights the better S/N permitted by a simple Gaussian smoothing and the improved definition of spatially resolved structures using weighted methods. ## 4 Uncharted FOC observation of IC 5063 Once we have made sure our pipeline produces valid polarization maps, we can undertake the exploration of one of the unpublished AGN observations in the FOC archives: IC 5063. We leave the remaining unexplored AGNs for the next papers of this series. Figure 12: Four different zoomed-in outputs of the pipeline for NGC 1068 with the polarization map superimposed. The polarization vectors are only displayed for the selected cut of [S/N]\({}_{p}\geq 3\) and [S/N]\({}_{l}\geq 30\). The integrated values are computed on the full FoV (\(P^{out}\) and \(\theta_{P}^{up}\)). _Top-left_: Total flux \(F_{A}\) (erg.s\({}^{-1}\).cm\({}^{-2}\).Å\({}^{-1}\), in log-scale). _Top-right_: Polarized flux \(F_{A}\cdot P\) (erg.s\({}^{-1}\).cm\({}^{-2}\).Å\({}^{-1}\)). _Bottom-left_: Polarization degree \(P\) (%). 
_Bottom-right_: Polarization angle \(\theta_{P}\) (in \({}^{\circ}\), taken in the trigonometric direction with north being 0\({}^{\circ}\)). IC 5063 is a nearby elliptical galaxy (\(z\approx 0.01135\), corresponding to a Hubble distance of \(\sim 48.32\) Mpc in the standard \(\Lambda\)CDM model). It is a radio-loud galaxy, with a bright red nucleus. The latter characteristic can either come from a very steep non-thermal spectrum (with spectral index -4.5) or by re-radiation from hot dust with a black-body color temperature of 650 K (Axon et al. 1982). In 1987, a high polarization degree (\(17.4\pm 1.3\) % at PA \(\approx 4\pm 5^{\circ}\) in \(H\) and \(K\) bands) was measured in near-infrared for a 2.25 arcseconds aperture centered on the nucleus (Hough et al. 1987). This suggests a non-thermal synchrotron source for the near-IR emission from the nucleus. The detection of a strong, broad H\(\alpha\) emission seen in polarized flux Figure 13: Four different zoomed-in outputs of the pipeline for NGC 1068 with the polarization map superimposed. The polarization vectors are only displayed for the selected cut of [S/N]\({}_{p}\geq 3\) and [S/N]\({}_{l}\geq 30\). This juxtaposition shows different smoothing methods, all maps are binned to 0.1\({}^{\circ}\) pixels. _Top-left_: Without smoothing. _Top-right_: Simple Gaussian smoothing of FWHM of 0.2\({}^{\circ}\). _Bottom-left_: Gaussian smoothing with FWHM of 0.2\({}^{\circ}\) where pixels are weighted with their inverse squared error. _Bottom-right_: Combination and weighted Gaussian smoothing where the different observation through a same polarizer filter are both averaged and smoothed at the same time for a better reduction. also suggests the existence of a hidden broad-line region (Inglis et al. 1993), pointing towards a hidden type-1 AGN nested in the dusty heart of an elliptical galaxy. Indeed, a prominent dust lane has been observed along the long axis of IC 5063, mostly concentrated on the northern side. The symmetrical distribution of this dust lane and its continuity outside of the nucleus spanning a few kiloparsecs suggest an external origin, most probably a previous merger, as such structures are unlikely to survive many dynamical timescales (Colina et al. 1991). Finally, it has been observed that this AGN displays strong interactions between the ISM and its radio jets (radio position angle \(\sim 115^{\circ}\), Morganti et al. 1998) that introduce complex emission regions along the jets (Oosterloo et al. 2000). These regions spanning a few hundred parsecs are a perfect observational target for the FOC thanks to its fine spatial resolution (at \(z=0.01135\), \(1^{\circ}\) equals to 241.7 pc). IC 5063 was observed by the FOC on Feb 25, 1998 (program ID 5918). The observation used the F502M filter centered around 4985 A, for an exposure time of 5 261 seconds through each POLO, POL60 and POL120 filters. This adds up to a total observation time of 15 783 seconds. The observation was reduced in total flux by Dasyra et al. (2015), in particular, their Fig. 8, and allowed for the identification of discrete gas-outflow starting points along the radio jets. However, no polarization study has ever been published despite the rather good quality of the data. Some concern may arise about the potential contamination (dilution) from the extended [O III] polar emission that could impact our resulting maps. Indeed, we can see from Venturi et al. (2021) Fig. 1 (c) that the HST/FOC FoV is totally embedded in the [O III] emission region. 
These emission lines are formed when atoms from the polar region are photo-ionized by the continuum radiation from the central source. Photo-ionization produces unpolarized photons (Lee et al. 1994; Lee 1994), so our whole map is subject to polarization dilution by the [O III] emission. As such, it likely decreases the observed polarization degree (\(P\)), but it should not change the polarization angle (\(\theta_{P}\)), and it should not create misleading polarization patterns.

### Characteristics of the optical polarization map

We processed the observation of IC 5063 through our pipeline and we present the resulting 4985 Å-centered polarization map in Figs. 14 and 15. We show the total intensity map with the polarization information superimposed on it. We rebinned the data to get individual spatial bins of \(0.1^{\prime\prime}\) and the pixels were smoothed by a Gaussian kernel of standard deviation \(0.2^{\prime\prime}\). Only the bins with [S/N]\({}_{I}\geq 30\) and [S/N]\({}_{P}\geq 3\) have their polarization vectors shown. We lowered the cut to \(3\sigma\) in polarization degree due to the lower polarized flux coming from the AGN, compared to the \(5\sigma\) cut used for the observation of NGC 1068. The total flux image is dominated by a croissant-shaped region that is situated near the base of the jets (RA 20h52m02.4s, DEC -57\({}^{\circ}\)04\({}^{\prime}\)08\({}^{\prime\prime}\)), but it does not necessarily correspond to the location of the hidden nucleus. We remind the reader that the central engine is obscured by a circumnuclear reservoir of dust, so it is not directly visible in total flux. This bright croissant most likely results from: a) the re-emission by the ISM due to the interaction of the jet with the host galaxy and/or b) the scattering of photons thermally emitted from the central accretion disk. The opposite jets, invisible in UV/optical light due to Doppler beaming outside our line-of-sight, extend towards the southeast and northwest directions. The lobes of the jets, detected in radio and seen superimposed on the polarization map in Fig. 17, match two regions with a higher intensity than the background light, together with specific morphologies. The northwest lobe region shows a peculiar \(V\) shape that could result from the interaction of the northern jet with the material expelled by the central engine before the onset of jet activity. This material is likely to be the narrow line region (NLR) across the host galaxy that we observe in many radio-quiet AGNs (see the case of NGC 1068). In the southern part, a less intense, diffuse clumpy emission can be seen. This can be due to the fact that the southeast jet and lobe are obscured by the dusty disc of the host galaxy, and the medium they interact with is responsible for the observed optical/UV radiation.

#### 4.2.1 Region 1: Polar winds

With respect to the radio jets (115\({}^{\circ}\)), the polarization angle appears perpendicular. This, together with the high degree of polarization, indicates that scattering prevails in this region. The light we observe directly comes from the obscured central engine through perpendicular scattering onto the winds that act like a mirror. In addition, this result agrees with the polarization angle and degree observed in the optical by Inglis et al. (1993) (\(\sim\) 10-30% at \(\sim\) 30-50\({}^{\circ}\)). Any spectropolarimetric attempt to measure broadened Balmer lines in polarized flux should then focus on this region, to be free from the turbulence and emission caused by the northern jet and lobe.
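As an aside on the map construction itself, a minimal sketch of how Stokes maps could be rebinned, smoothed with an error-weighted Gaussian kernel, and masked with the [S/N] cuts quoted at the beginning of this section is given below. The array names, the use of numpy/scipy, and the simplified error propagation are illustrative assumptions, not the actual pipeline code.

```python
# Minimal, illustrative sketch (not the actual pipeline): given Stokes maps
# I, Q, U and their 1-sigma error maps, rebin, smooth with an error-weighted
# Gaussian kernel, then form P, theta_P and the S/N masks used for plotting.
import numpy as np
from scipy.ndimage import gaussian_filter

def rebin(a, n):
    """Sum n x n pixel blocks (any remainder at the edges is cropped)."""
    h, w = (a.shape[0] // n) * n, (a.shape[1] // n) * n
    return a[:h, :w].reshape(h // n, n, w // n, n).sum(axis=(1, 3))

def weighted_smooth(a, err, sigma_pix):
    """Gaussian smoothing in which each pixel is weighted by 1/err**2."""
    w = 1.0 / err**2
    return gaussian_filter(a * w, sigma_pix) / gaussian_filter(w, sigma_pix)

def polarization_maps(I, Q, U, dI, dQ, dU, snr_I=30.0, snr_P=3.0):
    """Return P (fraction), theta_P (deg) and the [S/N] selection mask."""
    pol = np.hypot(Q, U)
    P = pol / I
    theta_P = 0.5 * np.degrees(np.arctan2(U, Q)) % 180.0
    # First-order error propagation, neglecting covariances and debiasing.
    dP = np.sqrt((Q * dQ)**2 + (U * dU)**2) / (I * pol)
    dP = np.sqrt(dP**2 + (P * dI / I)**2)
    mask = (I / dI >= snr_I) & (P / dP >= snr_P)
    return P, theta_P, mask
```

In practice the errors would also need to be propagated through the rebinning (variances add when pixels are summed) and through the smoothing kernel, and a debiasing of \(P\) should be considered at low S/N.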
Interestingly, our independent detection of the undisturbed NLR allows us to say that IC 5063 did have polar winds before the onset of jet activity, but the ionization cone and the NLR structures have been almost completely wiped out by the jet within the first arc-seconds around the core. #### 4.2.2 Region 2: Southern lobe The second brightest spot in flux at the 4985 A-centered optical waveband is situated on the southwestern part of the AGN and corresponds to a zone where the counter jet seem to interact with the local medium. This region of high flux is almost completely depolarized. We can see from the superposition of the radio map to the optical data (Fig. 17) that this depolarized region also corresponds to the southern lobe. This depolarization seen in near-UV may originate from a scattering medium perturbed by the jet and lobe, resulting in a complex reprocessing environment producing polarized photons with various polarization angles and, thus, resulting in a net depolarization of the photon flux. We note, however, the presence of a circular polarization pattern where the lobe encounters the ISM. In these region, the medium might be kinetically aligned by the jet's lobe pushing the medium out of its way, but the statistical significance of the Figure 14: Total flux F\({}_{\rm i}\) (erg.s\({}^{-1}\).cm\({}^{-2}\).Å\({}^{-1}\), color-coded) of IC 5063, with the polarization information superimposed to the image using white vectors. The data is re-sampled to have bins of 0.1” and smoothed with a Gaussian kernel of FWHM of 0.2”. The polarization vectors are displayed for [S/N]\({}_{\rm i}\)\(\geq\) 30 and [S/N]\({}_{p}\)\(\geq\) 3 in white, for [S/N]\({}_{p}\)\(\geq\) 2 in red and for [S/N]\({}_{p}\)\(\geq\) 1 in blue. Their lengths are proportional to \(P\). The contours are displayed from 1% to 99% every 10% of the maximum value of \(1.727\cdot 10^{-15}\) erg.s\({}^{-1}\).cm\({}^{-2}\).Å\({}^{-1}\). detection per pixel is below 3\(\sigma\). However, Table 2 shows that integrating the whole lobe region gives a polarization \(\sim\)1% with a [S/N]\({}_{P}\) of 6.5. This is a clear detection of a polarized source at a PA \(\sim 15^{\circ}\), almost perpendicular to the jet direction. #### 4.2.3 Region 3: Northern lobe From Morganti et al. (1998), we know that this AGN is characterized by fast gas outflows fueled from the central core. In this region located between the hidden nucleus and the unperturbed wind, where the radio map highlight the presence of the northern Figure 16: Optical polarization map for IC 5063 obtained through the pipeline. The polarization vectors are displayed for [S/N]\({}_{P}\geq 30\) and [S/N]\({}_{P}\geq 3\) in white, for [S/N]\({}_{P}\geq 2\) in red and for [S/N]\({}_{P}\geq 1\) in blue and the overlaid regions are listed in Tab. 2 and detailed in the text. Figure 17: 18 GHz ATCA map from Morganti et al. (1998) superimposed onto our HST/FOC map. Both maps have been aligned on the supposed nucleus location. The optical polarization vector have a fixed length to better highlight their orientation. Figure 15: Two different outputs of the pipeline for IC 5063 with the polarization map superimposed. The polarization vectors are only displayed for the selected cut of [S/N]\({}_{P}\geq 30\) and [S/N]\({}_{P}\geq 3\) in white, for [S/N]\({}_{P}\geq 2\) in red and for [S/N]\({}_{P}\geq 1\) in blue. The integrated values are computed on the full FoV (\(P^{\rm{in}}\) and \(\theta_{P}^{\rm{in}}\)). _Left_: Polarization degree \(P\) (%). 
_Right_: Polarized flux \(F_{A}\cdot P\) (ergs\({}^{-1}\).cm\({}^{-2}\).Å\({}^{-1}\)). lobe, we detected a low degree of polarization (\(\sim 2.6\pm 0.5\)%), probably for the same reason as stated above for the southern lobe. Also, similarly to the southern lobe, the associated PA \(\sim 10^{\circ}\) corresponds to a direction that is perpendicular to the radio jet. Furthermore, the V-shape seen in the 4985 A-centered optical waveband most likely indicate a highly interactive region where the radio jet, the polar wind, and the ISM meet. #### 4.2.4 Region 4: Dust lane On the northern part of the HST/FOC map, we can see a clear cut in logarithmic scale on the flux intensity map. We know from Morganti et al. (1998), in particular, their Fig. 4, that this obscured region in optical corresponds to the extension of the dust lane, which is confirmed by the WFPC2 infrared map (Fig. 18). Using the spatial capabilities of the FOC, we can isolate and integrate the polarization from this dusty region. Given the low flux from this specific region, we get higher errors but we do observe a high polarization degree, around 18% at \(\sim 84^{\circ}\), exactly along the position angle of the dust lane. The high polarization degree and the polarization angle parallel to the dust lane strongly suggest dichroic absorption from starlight in the foreground dust lane. The integrated PA could then highlight the large-scale ordered magnetic field in the dust lane. #### 4.2.5 Region 5: Diffuse medium The map of IC 5063 can essentially be divided in three parts : the northern section that is dominated by the dust lane, the southeast to northwest diagonal that corresponds to the highly asymmetric AGN, and the southern part that is essentially the ISM, a diffuse medium where Maksym et al. (2020) identified "dark rays." That is to say, the projected shadow of the circumnuclear dusty torus. For completeness, we investigated this region and found a low polarization degree (\(\sim 1\)%) and with a polarization position angle \(\sim 123^{\circ}\), namely, it is parallel to the jet radio axis. Because the southern region of the map is not as much obscured as the northern region, the observed polarization either results from dichroic absorption and re-emission of host starlight passing through the diffuse medium, or it could be the imprint of the magnetic field of the galaxy itself. Deeper (and longer) observations would be needed to assess the correct interpretation. #### 4.2.6 Region 6: Highly polarized knot This region is characterized by a high S/N associated to a strong polarized flux. It is just north of the estimated position of the nuclei and goes across the croissant-shaped region of highest flux. We measured a high polarization degree at \(6.1\pm 0.4\)% associated with a polarization angle of \(151.2\pm 1.7^{\circ}\). We learn from Lopez-Rodriguez et al. (2013) that the central 1.2 arcsec of IC 5063 in J, H, and K\({}_{\rm{\tiny R}}\) band was measured to be \(2.0\pm 0.7\)%, \(2.5\pm 0.9\)%, and \(7.8\pm 0.5\)%, respectively, and that the PA of polarization is wavelength-independent (within the error bars) and measured to be \(3\pm 6^{\circ}\) in the three filters. In this HST/FOC observation we get a high polarization degree because we are less diluted by the host starlight in the UV than in the IR, but the different PA we get with respect to Lopez-Rodriguez et al. (2013) indicates some pollution by either the jet or by the dust lane. 
As the offset between the polarization PA and the radio PA is similar to that of region 4, we estimate that we get a mix of signals from the dust lane and, additionally, from a region situated further away from the torus height. We can confirm that this is not the jet polarization, as otherwise we would see it all along the jet structure (also, due to Doppler boosting outside of our line-of-sight, this would not be a reasonable explanation). It is more likely that the polarization observed in the UV comes from dichroic absorption, by the dust lane, of AGN core photons scattered into our line of sight by the jet base or the ionized wind base.

## 5 Discussion

We went on to test our new reduction pipeline against NGC 1068 and applied it to IC 5063, with the results detailed above. Here, we discuss this process, along with two more points related to our IC 5063 analysis and to the use of a different background noise estimator than usual.

### Past polarimetry of IC 5063

Contrary to, for instance, NGC 1068 or NGC 4151 (Marin et al. 2018, 2020), IC 5063 has not been extensively observed in optical polarimetry. This is due to the fact that its optical polarization is strongly affected by both interstellar polarization and dilution by the host galaxy, but also because of the presence of the dust lane that hides a significant fraction of the AGN. From the archives, we were able to retrieve four papers that present optical and/or near-infrared polarimetric measurements of IC 5063. Martin et al. (1983) were probably the first to measure the optical linear polarization of IC 5063, using Pockels cell polarimeters at the Steward Observatory. The observations were made with a Corning 4-96 filter (\(3800-5600\) Å) and a 4" aperture. The authors found a polarization degree of \(1.28\%\pm 0.14\%\) at a polarization position angle of \(10.1^{\circ}\pm 3.2^{\circ}\). Broadband optical and near-infrared polarization measurements were also undertaken by Hough et al. (1987) with the Hatfield Optical-IR polarimeter on the 3.9m Anglo-Australian Telescope. The authors noted that the polarization decreases from B to J and then rises towards longer wavelengths, while the polarization angle is wavelength-independent.

Figure 18: Total flux of IC 5063 at 5997 Å observed by HST/WFPC2 in 1995. The superimposed polarization vectors, shown with a constant length for better visualization, are taken from the polarization map at 4985 Å from this paper. They are displayed for the selected cut of [S/N]\({}_{\rm{\tiny I}}\geq 30\) and [S/N]\({}_{\rm{\tiny P}}\geq 3\) in white, for [S/N]\({}_{\rm{\tiny P}}\geq 2\) in red and for [S/N]\({}_{\rm{\tiny P}}\geq 1\) in blue (less significant). Both maps have been aligned on the croissant-shaped high flux region.

Inglis et al. (1993) were the first to provide spectropolarimetric data, with the same telescope as Hough et al. (1987) mounted with a spectrograph and a rotating Thomson CCD. Strong, broad H\(\alpha\) emission was discovered in polarized flux. The polarization obtained by Inglis et al. (1993) between 4500 and 7000 Å is approximately constant: \(\sim 1.7\%\) at 3\({}^{\circ}\). Finally, Lopez-Rodriguez et al. (2013) used the infrared polarimeter built by the University of Hertfordshire for the Anglo-Australian Telescope and measured the polarization in the J, H and Kn bands with four different aperture sizes: 1.2, 2.0, 3.0, and 4.0 arcsec. They found a larger and aperture-dependent polarization degree compared to the optical, with a constant polarization position angle. All archival polarimetric data are plotted in Fig. 19.
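Because these literature values are aperture measurements, comparing them with an imaging polarimetry map requires summing the Stokes parameters inside a synthetic aperture before forming \(P\) and \(\theta_{P}\), since polarization degrees do not average linearly. A minimal sketch of such a synthetic-aperture measurement (the function and argument names are illustrative, and the pixel scale is left as an input) could be:

```python
# Illustrative sketch: integrate Stokes maps inside a circular synthetic
# aperture, then derive the polarization degree and angle from the sums.
import numpy as np

def aperture_polarization(I, Q, U, x0, y0, radius_arcsec, pix_scale):
    """Integrated P (fraction) and theta_P (deg) within a circular aperture.
    (x0, y0) is the aperture centre in pixels; pix_scale is in arcsec/pixel."""
    ny, nx = I.shape
    y, x = np.mgrid[0:ny, 0:nx]
    inside = np.hypot(x - x0, y - y0) * pix_scale <= radius_arcsec
    I_sum, Q_sum, U_sum = I[inside].sum(), Q[inside].sum(), U[inside].sum()
    P = np.hypot(Q_sum, U_sum) / I_sum
    theta_P = 0.5 * np.degrees(np.arctan2(U_sum, Q_sum)) % 180.0
    return P, theta_P
```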
We indicated the contribution of the interstellar polarization in gray, following the standard Serkowski law (Serkowski et al. 1975). We used a synthetic 1.5\({}^{\prime\prime}\) radius aperture on our IC 5063 map and checked whether we could reproduce the observations. The data we obtain are strongly position-dependent. Targeting the brightest spot in ultraviolet light, we only probe the jet base region, while displacing the aperture along the jet direction or placing it inside the dust lane region dramatically alters the observed polarization, as demonstrated in Sect. 4. Because we do not know the exact pointing of the various telescopes that obtained polarimetric data in the past, we made use of the fact that Inglis et al. (1993) measured both the narrow emission lines in total flux and the broad emission line in polarized flux to estimate that the pointing encompassed the region of the map that is dominated by the polar winds (region 1 in Fig. 16). We thus extracted the continuum linear polarization from this region and obtained \(1.3\%\pm 0.3\%\) at \(173.2^{\circ}\pm 4.3^{\circ}\). This value is consistent with the previous measurement (see Fig. 19). Playing with aperture values and position centers, we easily see that a variation of half an arcsecond can completely change the observed polarization in a source as complicated as IC 5063. Polarization imaging at high resolution is thus needed to determine the polarization of each separate region, as integrating over too large an aperture would ultimately result in a mix of several emission and/or scattering mechanisms, thus resulting in an erroneous explanation.

### Background noise estimation

Estimating the background for point-like sources is rather simple: a circular region is used for the source and a co-located, disjoint annulus is used for the background. Generally, the background should be extracted from a region near the source. This is a good estimation of the background as long as the annulus does not intersect other sources, as it is independent of the instrument biases. This technique cannot be applied to extended sources such as IC 5063. What is usually done for imagery of AGNs is to study the evolution of the flux along a virtual line that starts from the SMBH location and extends in a direction perpendicular to the winds and jets axis. When this flux reaches a plateau, the value of this plateau is assumed to be the background flux. This requires that the source is extended, but not to the point of reaching the borders of the FoV. In order to generalize this process to complex structures with a potentially polluted background (such as the dust lane in IC 5063), we used a method that is not common in astrophysics (see e.g. Almoznino et al. 1993) but that is frequently used in astrophotography (Bijaoui 1980). The method, as presented in Sect. 2.3, is based on the counts histogram of the stacked pictures. This mode calculation assumes the sky radiation is normally distributed around a typical sky value. Thus, if the distribution of the image intensities is constructed, the sky pixels will practically form a Gaussian centered on the sky level. Other radiating sources in the frame, which represent additional photon contributions on top of the sky, will form a tail on the bright side of the sky distribution (see Fig. 5). The advantage of this approach is that, no matter how bright these sources are, they will not affect the location of the peak of this histogram and hence the measured sky value.
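A minimal sketch of this mode-based sky estimate could look as follows; the logarithmic binning, the choice of binning rule and the parabolic refinement of the peak are illustrative options rather than the pipeline's actual implementation.

```python
# Illustrative sketch: estimate the sky background as the mode of the pixel
# intensity histogram, refined by a parabola fitted around the peak.
import numpy as np

def sky_background(image, nbins=None):
    data = image[np.isfinite(image) & (image > 0)].ravel()
    if nbins is None:
        # One possible statistical binning rule (Sturges' rule).
        nbins = int(np.ceil(np.log2(data.size) + 1))
    counts, edges = np.histogram(np.log10(data), bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    k = int(np.argmax(counts))              # histogram mode (peak bin)
    lo, hi = max(k - 2, 0), min(k + 3, len(centers))
    a, b, c = np.polyfit(centers[lo:hi], counts[lo:hi], 2)
    peak = -b / (2.0 * a) if a != 0 else centers[k]
    return 10.0 ** peak                     # back to linear intensity units
```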
Other "bad" pixels, such as cosmic ray hits and defects of the CCD chip, that are either fainter or brighter than the sky will (for the most part) not affect the peak location. The sky value can be taken either as the exact location of the histogram peak (mode) or as the peak of a Gaussian or parabola fitted to the histogram in its peak region. To estimate the background noise, we constructed the distribution of the image intensities using a logarithmic scale and a statistical rule to bin the intensities. The background value is then chosen to be the peak of the obtained histogram (mode).

Figure 19: Compilation of published IC 5063 polarimetric measurements, prior to interstellar and starlight dilution correction. Top: Polarization degree. Bottom: Polarization angle minus the radio position angle of the jet structure. The aperture used for each measurement is color-coded. The interstellar polarization (ISP) contribution is highlighted in gray. We simulated an aperture of radius 1 arcsec centered on the WCS reference point, situated on the NW part of our region 3, right at the beginning of the V-shape. We obtained a \(1.0\pm 0.4\%\) polarization degree with an associated \(7.9\pm 10.5^{\circ}\) polarization angle. Our measurement is shown using a green cross. See text for details and references.

## 6 Conclusion

In this first paper of a series, we present a general pipeline for the reduction of UV and optical polarimetric data from the final re-calibrated archive of the HST/FOC legacy instrument. This code, written in the Python language, is designed to bring polarimetric analysis to the community. We checked and validated our tool by re-reducing NGC 1068's data with even slightly better precision than what was done a few decades ago. Moreover, it allowed us to begin the analysis of the UV polarization map of IC 5063. We were able to use the full power of the spatially resolved polarization to divide the analysis into regions of different characteristics. With this, the pipeline proves to be a powerful tool to make use of the still unmatched polarimetric capabilities of the FOC. Its ease and promptness of use will allow us to homogeneously reduce and analyse the full AGN sample among the FOC archive, to get a better understanding of their overall properties, and to present them in the subsequent papers of the series. However, because the FOC is the last mid- to far-UV polarimeter to have been in operation, this study can only be completed or taken further with future instruments. This study thus also aims at preparing for future spectropolarimetric instruments, such as POLLUX, planned to be mounted on the Habitable Worlds Observatory by NASA.

###### Acknowledgements. The authors would like to acknowledge the referee for their comments that helped to improve this paper. TB and FM would also like to acknowledge the great help of Drs. Morganti and Maksym, who kindly shared their ATCA and HST images of IC 5063, respectively. The authors would also like to deeply thank Dr. Antonucci for his comments and help throughout the pipeline creation and results analysis.
2302.00402
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video
Recent years have witnessed a big convergence of language, vision, and multi-modal pretraining. In this work, we present mPLUG-2, a new unified paradigm with modularized design for multi-modal pretraining, which can benefit from modality collaboration while addressing the problem of modality entanglement. In contrast to predominant paradigms of solely relying on sequence-to-sequence generation or encoder-based instance discrimination, mPLUG-2 introduces a multi-module composition network by sharing common universal modules for modality collaboration and disentangling different modality modules to deal with modality entanglement. It is flexible to select different modules for different understanding and generation tasks across all modalities including text, image, and video. Empirical study shows that mPLUG-2 achieves state-of-the-art or competitive results on a broad range of over 30 downstream tasks, spanning multi-modal tasks of image-text and video-text understanding and generation, and uni-modal tasks of text-only, image-only, and video-only understanding. Notably, mPLUG-2 shows new state-of-the-art results of 48.0 top-1 accuracy and 80.3 CIDEr on the challenging MSRVTT video QA and video caption tasks with a far smaller model size and data scale. It also demonstrates strong zero-shot transferability on vision-language and video-language tasks. Code and models will be released in https://github.com/alibaba/AliceMind.
Haiyang Xu, Qinghao Ye, Ming Yan, Yaya Shi, Jiabo Ye, Yuanhong Xu, Chenliang Li, Bin Bi, Qi Qian, Wei Wang, Guohai Xu, Ji Zhang, Songfang Huang, Fei Huang, Jingren Zhou
2023-02-01T12:40:03Z
http://arxiv.org/abs/2302.00402v1
# mPLUG-2: A Modularized Multi-modal Foundation Model ###### Abstract Recent years have witnessed a big convergence of language, vision, and multi-modal pretraining. In this work, we present mPLUG-2, a new unified paradigm with modularized design for multi-modal pretraining, which can benefit from modality collaboration while addressing the problem of modality entanglement. In contrast to predominant paradigms of solely relying on sequence-to-sequence generation or encoder-based instance discrimination, mPLUG-2 introduces a multi-module composition network by sharing common universal modules for modality collaboration and disentangling different modality modules to deal with modality entanglement. It is flexible to select different modules for different understanding and generation tasks across all modalities including text, image, and video. Empirical study shows that mPLUG-2 achieves state-of-the-art or competitive results on a broad range of over 30 downstream tasks, spanning multi-modal tasks of image-text and video-text understanding and generation, and uni-modal tasks of text-only, image-only, and video-only understanding. Notably, mPLUG-2 shows new state-of-the-art results of 48.0 top-1 accuracy and 80.3 CIDEr on the challenging MSRVTT video QA and video caption tasks with a far smaller model size and data scale. It also demonstrates strong zero-shot transferability on vision-language and video-language tasks. Code and models will be released in [https://github.com/alibaba/AliceMind](https://github.com/alibaba/AliceMind). ## 1 Introduction Large-scale pre-trained foundation models have been an emerging paradigm for a wide range of artificial intelligence (AI) fields, across language (Devlin et al., 2018; Brown et al., 2020), vision (Dosovitskiy et al., 2020; Liu et al., 2021) and multi-modality (Radford et al., 2021; Yu et al., 2022; Wang et al., 2022). With the broad success of Transformer architecture (Vaswani et al., 2017), recent years have featured a trend toward the big convergence of language, vision and multimodal pre-training (Yu et al., 2022; Wang et al., 2022; Alayrac et al., 2022). One line along this trend proposes to unify the tasks and modalities with a unified sequence-to-sequence generation framework such as T5 (Raffel et al., 2020), OFA (Wang et al., 2022) and Flamingo (Alayrac et al., 2022). On the other hand, BERT (Devlin et al., 2018), Florence (Yuan et al., 2021) and BEIT-3 (Wang et al., 2022) models all the tasks as instance discrimination, and adopt the pure encoder-based architecture. The predominant foundation models propose to share the same single network for multi-modality (Alayrac et al., 2022) to leverage the information from modality collaboration. However, the strategy will suffer from the issue of modality entanglement due to the large variance of different modality tasks. The challenge is that multiple modalities may interfere with each other (Huang et al., 2022), especially when there are many modalities and tasks. It is difficult for a single-module foundation model to balance the gain of modality collaboration and the influence of modality Figure 1: A brief illustration of the new paradigm with modularized design for building multi-modal foundation model. entanglement on a large number of downstream tasks across multiple modalities. To alleviate the challenge, in this work, we introduce a new unified paradigm of multi-modal foundation models, as shown in Figure 1. 
It features a module-based network design considering both the modality collaboration and modality entanglement, where mPLUG-2 designs certain shared functional modules to encourage the modality collaboration, while reserving modality-specific modules to tackle the problem of modality entanglement. Different modules are then jointly trained effectively on both the uni-modal and multi-modal datasets according to the task's module design. As a result, different modules can be flexibly selected and combined for the large number of uni-modal and cross-modal understanding and generation tasks accordingly. The details of the supported downstream tasks are given in Table 1. To the best of our knowledge, the proposed method tackles the largest number of different kinds of downstream tasks across text, image and video. Specifically, we design a unified dual-vision encoder module by disentangling spatial and temporal representations, where video inputs share the standard Transformer module with image inputs for modeling spatial information and an extra local temporal modeling module is used for temporal relation modeling on video-related tasks. Then a novel universal layers module is introduced to serve as a pivot across different modalities, where vision and language modalities are projected to the common language-guided semantic space by sharing self-attention modules. Besides, an extra cross-attention module is used to fuse the universal vision representation with the original fine-grained vision representation. The detailed module design is shown in Figure 2. Finally, different modules of mPLUG-2 are jointly pre-trained with task and modality instructions [20] on both uni-modal and cross-modal tasks. During inference, mPLUG-2 can select different modules for various uni-modal and cross-modal tasks with the modularized Transformer architecture. The selected modules for different tasks can be found in Table 2 in Appendix. We evaluate the new unified paradigm of mPLUG-2 on over 30 challenging uni-modal and cross-modal understanding and generation benchmarks and it achieves state-of-the-art or competitive results with a similar model size and data scale. Equipping with the module-based network design, mPLUG-2 can be also easily extended to additional tasks by selecting and adding modules. Notably, mPLUG-2 shows new state-of-the-art results of 48.0 top-1 accuracy and 80.3 CIDEr on the challenging MSRVTT video QA and video caption tasks, respectively. mPLUG-2 also demonstrates strong zero-shot transferability on vision-language and video-language tasks. ## 2 Related Work **Vision-only Foundation Models** ConvNets [23, 14] have long been the main stream visual architecture before the emergence of vision transformer (a.k.a. ViT) [15]. Due to the superior capacity of Transformer network, ViT stands out in various downstream tasks [13, 12]. Apart from scaling up the naive ViT architecture with large-scale dataset such as JFT-3B [11], SwinV2-G [10] extends the original ViT with hierarchical architectures. In addition, EVA [14] distills the multi-modal knowledge to scale up ViT by leveraging unlabeled images with the large-scale pre-trained image-text model (e.g. CLIP [1]). Recently, InternImage [20] revitalizes the convolutional neural networks with deformable convolution and achieves the state-of-the-art performance on various vision downstream tasks. 
Besides, InternVideo [20] extends to video tasks by assembling two large video models with both generative and discriminative \begin{table} \begin{tabular}{l|c c c c c|c c c c|c c c c c} \hline \hline & \multicolumn{3}{c}{Computer Vision} & \multicolumn{3}{c}{Natural Language Processing} & \multicolumn{3}{c}{Image-Text} & \multicolumn{3}{c}{Video-Text} \\ \cline{2-13} Method & Image Cls. & Video Cls. & Det. & Seg. & Text Cls. & QA & Summarization & Retrieval & QA & Captioning & VG & Retrieval & QA & Captioning \\ \hline BiFT-3 & ✓ & & & ✓ & ✓ & & & ✓ & ✓ & & & & & \\ EVA & ✓ & ✓ & ✓ & ✓ & & & ✓ & & & ✓ & & & \\ CLIP & ✓ & & & & & & ✓ & & ✓ & & & & \\ ALBEF & & & & & & & & ✓ & ✓ & & ✓ & ✓ & \\ BLIP & & & & & & & & & ✓ & & ✓ & ✓ & \\ VATT & ✓ & ✓ & & & & & & & ✓ & & ✓ & & \\ Florence & ✓ & ✓ & ✓ & & & & & ✓ & & ✓ & & \\ CoCa & ✓ & ✓ & & & & & & ✓ & ✓ & & ✓ & & \\ VideoCoCa & & ✓ & & & & & & & ✓ & ✓ & & & ✓ & \\ Paining & & ✓ & & & & & & ✓ & ✓ & & & ✓ & & ✓ \\ GIT2 & ✓ & & & & ✓ & ✓ & & ✓ & ✓ & & & & ✓ \\ FLAVA & ✓ & & & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & & & \\ OFA & ✓ & & & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & & \\ OmniVL & ✓ & ✓ & & & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline mPLUG 2.0 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: **A system-level comparison between mPLUG-2 and existing foundation models in terms of various uni-modal and multi-modal downstream tasks.** “Cls.” denotes the classification. ”Det.” and “Seg.” are the short for ”Detection” and “Segmentation” tasks respectively. ”VG” stands for visual grounding task. Our mPLUG-2 is capable of supporting both uni-modal (i.e., CV and NLP) and multi-modal (i.e., Image-Text and Video-Text) downstream tasks simultaneously with the help of modularization. self-supervised video learning. **Language-only Foundation Models** Inspired by the successful practice of the BERT Devlin et al. (2018) in natural language understanding, a massive large-scale language foundation models are proposed for natural language processing. BART Lewis et al. (2020) is a denoising autoencoder like BERT but with encoder-decoder architecture which shows effectiveness for both text generation and comprehension tasks. Apart from BERT-series methods Devlin et al. (2018); Lewis et al. (2020); Liu et al. (2019), there are numerous other effective architectures and pre-training objectives. T5 Raffel et al. (2020) introduce a unified framework that covers all text-based language tasks into a text-to-text format. GPT-3 Brown et al. (2020) is an auto-regressive language foundation model which includes 175 billion parameters, and shows strong performance on many NLP tasks under the few-shot and zero-shot settings. **Vision-Language Foundation Models** Benefiting from a large number of image/video-text pairs in the Internet, the emergence of vision-language foundation models can subsum vision-language pre-training. The success of CLIP Radford et al. (2021) and ALIGN Jia et al. (2021) indicates that the model pre-trained with simple contrastive objectives on noisy image-text pairs can generate powerful vision-language representation. Moreover, ALBEF Li et al. (2021), BLIP Li et al. (2022) and mPLUG Li et al. (2022) extend the task with multi-modal text completion and text generation for auxiliary learning. On the other hand, some foundation models are built through task unification. For instance, Florence Yuan et al. (2021) unifies the contrastive objectives that can leverage both vision and vision-language data. 
BEiT-3 Wang et al. (2022) ascribe the pre-training task to mask data modeling in terms of text, vision, and vision-language. SimVLM Wang et al. (2021), OFA Wang et al. (2022), and CoCa Yu et al. (2022) perform the generative pre-training for vision-language understanding and generation. Different from predominant foundation models, mPLUG-2 introduces a new modularized transformer framework, which can leverage different compositions of modules for both uni-modal and cross-modal tasks by both sharing common universal modules and disentangling modality-specific ones to address the problem of modality entanglement. ## 3 Method ### Overall Framework As shown in Figure 2, mPLUG-2 consists of a dual-vision encoder module for image and video, a text encoder module, a universal layers module that serves as a multi-modal pivot shared by all tasks, a multi-modal fusion module and a shared decoder module for uni-modal and cross-modal generation. We first use two uni-modal encoders which encode image/video and text separately to represent the inherent information of the individual modality. For image/video, we adopt the dual-vision encoder to encode visual features with spatial modeling and local temporal modeling. Then, the Figure 2: The overall framework and module details of mPLUG-2. visual and linguistic representations are fed into the universal module separately, which consists of multiple universal layers. Each universal layer projects different modalities to shared semantic space for cross-modal alignment while preserving the original representation of different modalities. The output of universal layers is applied to conduct unimodal discrimination tasks. For cross-modal tasks, an additional fusion module will be applied to produce cross-modal representations. Finally, the uni-modal and cross-modal representations can be incorporated as input to a shared Transformer decoder for various generation tasks, which facilitates multi-task pre-training and transfer learning. The modules for different downstream tasks are summarized in Table 2. **Dual-vision Encoder Module** To capture the visual information of various vision modalities, we propose dual-vision encoder to model image and video simultaneously. Specially, we split the image and video frames into a sequence of \(L\) non-overlapping visual tokens. Every sequence of visual tokens with learnable spatial position embeddings and an extra [CLS] token constitute an input visual sequence. However, modeling the completed visual sequences leads to difficulty in spatio-temporal learning without large-scale video pre-training [14, 23, 22]. To alleviate this problem, we decouple the visual representation into the spatial and temporal representation separately by introducing temporal locality. As illustrated in Figure 2(b), we leverage the self-attention (SA) layer and feed-forward layer (FFN) in the Transformer block for spatial modeling, and propose a novel local temporal modeling module (LT) to model the temporal dependency among the spatial representation as: \[V_{LT}^{n} =LN(LT(V^{n-1})+V^{n-1}), \tag{1}\] \[V_{SA}^{n} =LN(SA(V_{LT}^{n-1})+V_{LT}^{n-1}),\] (2) \[V^{n} =LN(FFN(V_{SA}^{n})+V_{SA}^{n}), \tag{3}\] where LN is short for layer normalization. 
The local temporal modeling module captures the correlation among patches with the same spatial locations through multi-group fusion formulated as: \[V_{g}^{n}=ReLU(A_{g}^{n}\phi_{g}^{n}(V^{n-1}))\in\mathbb{R}^{T \times}\mathbb{\xi} \tag{4}\] \[LT(V^{n-1})=\varphi^{n}(Concat[V_{1}^{n};\cdots;V_{G}^{n}]), \tag{5}\] where \(\phi_{g}^{n}(\cdot)\) and \(\varphi^{n}(\cdot)\) are linear transformation functions. \(A_{g}^{n}\) is the learnable temporal relation parameter, which is instantiated as a convolution kernel. \(T\) and \(C\) are number of frames and size of hidden state. \(G\) indicates the number of groups, and \(Concat\) denotes concatenation function. By using multi-group fusion, the model is able to learn rich temporal information from distinctive representation subspaces at different temporal locations. As a result, except the local temporal module, the dual-vision encoder module enables weight sharing for images and videos, which effectively and efficiently learns the spatial and temporal representation. **Text Encoder Module** For the text encoder module, we use BERT [11] as the text encoder, which transforms the input text and an extra [CLS] token into a sequence of text embeddings. The embedding of [CLS] token is used to summarize the input text. **Universal Layers Module** To benefit from modality collaboration, we propose the universal layers to model the vision and language modalities in the shared semantic space while preserving the original representation of the different modalities. Before the universal module, we take a variable number of image or video features \(V^{N}\) from the dual-vision encoders as input to produce a fixed number \(k\) of visual tokens \(\mathcal{V}=\{v_{1},v_{2},...,v_{k}\}\) to reduce the computational complexity of universal layers. In the \(i_{th}\) universal layer, the visual tokens \(\mathcal{V}^{i-1}\) and the text representation \(\mathcal{W}^{i-1}\) are fed to the shared self-attention layers to align semantics, and then the visual tokens are injected into the original visual feature space by the cross-attention layer to keep the original representation. 
\[\mathcal{V}_{SA}^{i}=LN(SA(\mathcal{V}^{i-1})+\mathcal{V}^{i-1}) \tag{6}\] \begin{table} \begin{tabular}{l|c c c|c c c c c c c} \hline \hline & \multicolumn{3}{c}{Inject} & \multicolumn{3}{c}{Modables} \\ \hline \multirow{2}{*}{Tasks} & \multirow{2}{*}{Text} & \multirow{2}{*}{Image} & Video & Text Enc & Image Enc & Video Enc & Universal Layers & Fusion Layers & Text Dec & Image Dec & Video Dec \\ \hline Video-Text Retrieval & ✓ & & ✓ & ✓ & & ✓ & ✓ & ✓ & & \\ Video-Text Question Answering & ✓ & & ✓ & ✓ & & ✓ & ✓ & ✓ & & ✓ \\ Video-Text Captioning & ✓ & & ✓ & ✓ & ✓ & & ✓ & ✓ & & ✓ \\ Image-Text Retrieval & ✓ & ✓ & & ✓ & ✓ & & ✓ & ✓ & & \\ Image-Text Question Answering & ✓ & ✓ & ✓ & ✓ & & ✓ & ✓ & ✓ & ✓ & \\ Image-Text Captioning & ✓ & ✓ & & ✓ & ✓ & & ✓ & ✓ & ✓ & \\ Image-Text Captioning & ✓ & ✓ & & ✓ & ✓ & & ✓ & ✓ & ✓ & \\ Visual Grounding & ✓ & ✓ & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ Video Classification & & & ✓ & & ✓ & ✓ & ✓ & & \\ Image Classification & & ✓ & & ✓ & ✓ & & ✓ & & \\ Image Detection & & ✓ & & & ✓ & ✓ & & & \\ Image Segmentation & & ✓ & & ✓ & & ✓ & & & \\ Text Classification & ✓ & & & ✓ & & & ✓ & & \\ Text Question Answering & ✓ & & & ✓ & & & ✓ & & \\ Text Summarization & ✓ & & & ✓ & & ✓ & ✓ & & ✓ & \\ \hline \hline \end{tabular} \end{table} Table 2: **The modules for downstream tasks.** \[\mathcal{W}^{i}_{SA}=LN(SA(\mathcal{W}^{i-1})+\mathcal{W}^{i-1}) \tag{7}\] \[\mathcal{V}^{i}_{CA}=LN(CA(\mathcal{V}^{i}_{SA},V^{n})+\mathcal{V}^{i}_{SA}) \tag{8}\] \[\mathcal{V}^{i}=LN(FFN(\mathcal{V}^{i}_{CA})+\mathcal{V}^{i}_{CA}) \tag{9}\] \[\mathcal{W}^{i}=LN(FFN(\mathcal{W}^{i}_{SA})+\mathcal{W}^{i}_{SA}) \tag{10}\] Then \([\mathcal{V}^{i},\mathcal{W}^{i}]\) is fed into the next universal layer repeatedly to get the final common image and text representation. Finally, the output of the universal layers \([\mathcal{V}^{S};\mathcal{W}^{S}]\) are combined with the original representations \([V^{N};W^{M}]\) by the cross-attention layer for the text-aware visual and visual-aware text representation, where \(S,N,M\) are the layers of universal module, dual-vision encoder and text encoder respectively. Fusion ModuleTo effectively capture the cross-modal interaction between vision and language modalities, we use the fusion module as in ALBEF (Li et al., 2021), which is composed of a stack of Transformer blocks with cross-attention layers. Specifically, the fusion module takes the text embeddings from the universal layers module as the input. Then, the text-aware vision embedding cross-attends to the visual-aware text embeddings in language-shared common space. By cascading the Transformer blocks with cross-attention layers, fusion module is able to yield multi-modal vision-language representations. Shared Decoder ModuleTo empower the model with the capability of generation, a shared decoder module is introduced to enable the model to generate text with both unimodal and multi-modal information. In detail, the shared decoder module is a Transformer decoder with arbitrary inputs. For example, image captioning only requires the visual features, while the multi-modal features are used for visual question answering. By taking different types of input, our shared decoder module can adapt to a variety of tasks with text generation. The shared decoder module facilitates multi-task pre-training and transfer learning. ### Unified Pre-training Objectives We jointly train the multiple modules of mPLUG-2 with the following three objectives. 
Language LossFor the text encoder module, we use Masked Language Modeling (MLM) as in BERT (Devlin et al., 2018) to learn the text representation. We randomly mask 15% tokens in the text and the model is asked to predict these masked tokens with the context representations. Multi-modal LossFor the cross-modal module, we employ the Cross-modal Matching Losses (CML) as in ALBEF (Li et al., 2021), which consists of Vision-language Matching (VLM) and Vision-language Contrastive Learning (VLC). Instruction-based Language Model LossFollowing Flamingo (Alayrac et al., 2022) and OFA (Wang et al., 2022), we adopt the Instruction-based Language Model Loss to unify various generation tasks. We use handcrafted instructions to discriminate tasks and modalities, which include Video/Image-Text Pairs, Video/Image Captioning, Video/Image Question Answering, Text Generation, etc. ## 4 Experiment ### Training Setup Pre-training DatasetsFollowing previous works (Li et al., 2021; 2022), we pre-train our model with the same popular image-text datasets with 14M images including MS COCO (Lin et al., 2014), Visual Genome (Krishna et al., 2017), Conceptual Captions 3M (Sharma et al., 2018), Conceptual Captions 12M (Changpinyo et al., 2021), and SBU Captions (Ordonez et al., 2011). For video-text datasets, we adopt the web-sourced video dataset WebVid-2M (Bain et al., 2021) with 2.5M video-text pairs. The text datasets consists of WikiCorpus (Devlin et al., 2018) (about 20GB) and cleaned common crawl (about 350GB). The collection and cleaning method of the latter is generally the same as that used in c4 (Raffel et al., 2020). The implementation details of pre-training can be found in the Appendix. ### Main Results We evaluate the new unified paradigm of mPLUG-2 on over 30 benchmarks including vision-language tasks (e.g. multi-modal retrieval, question answering and captioning) (Xu et al., 2016, 2017; Chen and Dolan, 2011), language-only tasks (e.g. text classification, question answering and summarization) (Wang et al., 2018; Rush et al., 2015), and vision-only tasks (e.g. image classification and video action recognition) (Deng et al., 2009; Kay et al., 2017). Specially, the vision-language benchmarks can be categorized as image-text parts and video-text parts. Details of these datasets can be found in the Appendix. #### 4.2.1 Multi-modal Tasks Text-to-video RetrievalWe compare mPLUG-2 with several state-of-the-art methods on MSRVTT (Xu et al., 2016), DiDeMo (Anne Hendricks et al., 2017) and LSMDC (Rohrbach et al., 2015) datasets. The results are summarized in Table 3. We can observe that mPLUG-2 outperforms the previous SoTA methods on most of the datasets. In particular, our method yields 5.7% lift in terms of R@1 on LSMDC datasets compared with HiTeA, which indicates that the proposed model can leverage the temporal information presented in fruitful movie clips through the proposed local temporal modeling module in the dual-vision encoder. Video Question AnsweringTable 4 summarizes the video question answering results on MSRVTT-QA [22], MSVD-QA [22], and TGIF-FrameQA [23]. It can be observed that mPLUG-2 outperforms all the existing foundation models on MSRVTT-QA and TGIF-FrameQA by a large margin, and it also attains the comparable result with big foundation models GIT2 [23] on MSVD-QA even using significantly smaller amount of pre-trained data. In particular, mPLUG-2 achieves absolute improvement 0.6% on MSRVTT and 0.5% on TGIF-FrameQA. 
Furthermore, mPLUG-2Base achieves the comparable results compared to the large models (i.e., VideoCoCa and GIT2) with smaller model size. Video CaptioningTable 55 compares mPLUG-2 with existing methods on video captioning datasets MSRVTT and MSVD. As shown in the table, although pre-trained on less data, mPLUG-2 derives the significant improvement on MSRVTT dataset, and comparable performance on MSVD dataset. On MSRVTT Caption, our method surpasses SoTA method VideoCoCa [23] and GIT2 [23] by 4.4% on CIDEr and 3.0% on BLEU@4. Moreover, we can notice mPLUG-2 outperforms HiTeA with the same amount of pre-training data, which shows that mPLUG-2 is able to generate stronger video-language representation. Visual GroundingWe compare mPLUG-2 with existing state-of-the-art methods on visual grounding datasets including RefCOCO [22], RefCOCO+ [22] and RefCOCOg [24]. Table 7 shows that mPLUG-2 achieves comparable performance to the state-of-the-art methods. Our method achieve 0.97% absolute improvement compared with the second best method on RefCOCO "testB" split without using object detection data for pre-training. Queries in "testB" split can refer to various visual concepts but only people in "testA". The improvement demonstrates that the introduction of universal layers can help model the visual concepts in the image. Image-Text RetrievalWe evaluate mPLUG-2 on image-text retrieval datasets MSCOCO and Flickr30k. As shown in Table 6, both mPLUG-2Base and mPLUG-2 achieves comparable or better performance than state-of-the-art methods. Florence [23] and BLIP [10] use 0.9B and 129M data for pre-train respectively. In contrast, our mPLUG-2 only requires 17M data. It demonstrate that mPLUG-2 is data-efficient. Visual Question AnsweringWe report the performance of mPLUG-2 on visual question answering test sets. mPLUG-2 surpasses state-of-the-art method Florence [23] 0.95% on test-dev and 0.77% on test-std. The scale of the pre-trained data used in our model is 89.11% less than that in Florence. It shows that our mPLUG-2 can learn multi-modal represent efficiently and effectively. Image CaptioningWe compare mPLUG-2 with existing state-of-the-art methods on MSCOCO [10]. Following [10], we train mPLUG-2 on the COCO Caption with cross-entropy loss and test on the same Karpathy split. As shown in Table 9, our mPLUG-2 achieves new SoTA results on COCO Caption. Moreover, our method achieves competitive results with big foundation models, such as LEMON [11] and BLIP [10] which use more than nearly 10x amount of pre-training data. Specifically, our mPLUG-2 outperforms BLIP on COCO caption by an obvious 1.2 point margin on BLEU@4, and 1 point on CIDEr. 
\begin{table} \begin{tabular}{l|c c c|c c c|c c c} \hline \hline & & \multicolumn{3}{c|}{MSRVTT} & \multicolumn{3}{c}{DiDeMo} & \multicolumn{3}{c}{LSMIC} \\ \cline{3-10} Method & \#PT Data & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline Frozen [2] & 5M & 31.0 & 59.5 & 70.5 & 31.0 & 59.8 & 72.4 & 15.0 & 30.8 & 39.8 \\ BridgelFormer [1] & 5M & 37.6 & 64.8 & 75.1 & 37.0 & 62.2 & 73.9 & 17.9 & 35.4 & 44.5 \\ Singularity [10] & 5M & 36.8 & 65.9 & 75.5 & 47.4 & 75.2 & 84.0 & - & - \\ LAVENDER [10] & 30M & 37.8 & 63.8 & 75.0 & 47.4 & 74.7 & 82.4 & 22.2 & 43.8 & 53.5 \\ All-in-one [23] & 283M & 37.9 & 68.1 & 77.1 & 32.7 & 61.4 & 73.5 & - & - \\ Omvi1-[23] & 18M & 47.8 & 74.2 & 83.8 & 52.4 & 79.5 & 85.4 & - & - & - \\ HiTeA [23] & 17M & 46.8 & 71.2 & 81.9 & **86.5** & **81.7** & **89.7** & 28.7 & 50.3 & 59.0 \\ \hline mPLUG-2Base & 17M & 48.3 & 75.0 & 83.2 & 52.3 & 80.8 & 87.5 & 25.5 & 45.8 & 55.8 \\ mPLUG-2 & 17M & **53.1** & **77.6** & **84.7** & 56.4 & 79.1 & 85.2 & **34.4** & **55.2** & **65.1** \\ \hline \hline \end{tabular} \end{table} Table 3: **Performance comparison on text-to-video retrieval. All results are reported on R@1/R@5/R@10.** \begin{table} \begin{tabular}{l|c c c|c c c|c c} \hline \hline & & \multicolumn{3}{c|}{MSRVTT} & \multicolumn{3}{c}{DiDeMo} & \multicolumn{3}{c}{LSMIC} \\ \cline{3-10} Method & \#PT Data & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline Frozen [2] & 5M & 31.0 & 59.5 & 70.5 & 31.0 & 59.8 & 72.4 & 15.0 & 30.8 & 39.8 \\ BridgelFormer [1] & 5M & 37.6 & 64.8 & 75.1 & 37.0 & 62.2 & 73.9 & 17.9 & 35.4 & 44.5 \\ Singularity [10] & 5M & 36.8 & 65.9 & 75.5 & 47.4 & 75.2 & 84.0 & - & - \\ LAVENDER [10] & 30M & 37.8 & 63.8 & 75.0 & 47.4 & 74.7 & 82.4 & 22.2 & 43.8 & 53.5 \\ All-in-one [23] & 283M & 37.9 & 68.1 & 77.1 & 32.7 & 61.4 & 73.5 & - & - \\ Omvi1-[23] & 18M & 47.8 & 74.2 & 83.8 & 52.4 & 79.5 & 85.4 & - & - \\ HiTeA [23] & 17M & 46.8 & 71.2 & 81.9 & **86.5** & **81.7** & **89.7** & 28.7 & 50.3 & 59.0 \\ \hline mPLUG-2Base & 17M & 48.3 & 75.0 & 83.2 & 52.3 & 80.8 & 87.5 & 25.5 & 45.8 & 55.8 \\ mPLUG-2 & 17M & **53.1** & **77.6** & **84.7** & 56.4 & 79.1 & 85.2 & **34.4** & **55.2** & **65.1** \\ \hline \hline \end{tabular} \end{table} Table 4: **Performance comparison on video question answering. Accuracy is reported for evaluation. mPLUG-2 creates a new state-of-the-art video question answering results on MSRVTT-QA and TGIF-FrameQA with open-vocabulary generation.** #### 4.2.2 Language Only Tasks Natural Language UnderstandingWe evaluate mPLUG-2 on 6 tasks of the GLUE benchmark Wang et al. (2018) for natural language understanding. Table 10 shows that mPLUG-2 achieves comparable performance to the state-of-the-art natural language and multimodal pretrained models including RoBERTa Liu et al. (2019), DeBERTa He et al. (2021). Our method with DeBERTa achieves improvement compared with DeBERTa He et al. (2021) on three tasks, which also demonstrate the effectiveness of universal modules for modality collaboration. Natural Language GenerationWe evaluate mPLUG-2 on Gigaword abstractive summarization Rush et al. (2015) for natural language generation. As shown in Table 1111, mPLUG-2 achieves the comparable result with the state-of-the-art models. #### 4.2.3 Vision Only Tasks Video Action RecognitionVideo action recognition is the most representative for video understanding since it requires the model to understand the spatio-temporal cues revealed in the video. 
Table 12 summarizes the performance of different approaches on Kinetics 400, Kinetics 600, and Kinetics 700 datasets. Our mPLUG-2 surpasses the most of SoTA methods. For example, comapred with Florence pre-trained on 900M vision-text pairs, mPLUG-2 improves the Top-1 accuracy by 1.9% on Kinetics 600 and 0.6% on Kinetics 400. Meanwhile, we can notice that the performance of mPLUG-2 is better than OmniVL with similar amount of pre-training data, which shows the effectiveness of the dual-vision encoder module for video representation learning. \begin{table} \begin{tabular}{l l|c c c c|c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c}{MSCOCO (5K test set)} & \multicolumn{6}{c}{Flickr30K (1K test set)} \\ \cline{3-13} & \multicolumn{2}{c|}{TR} & \multicolumn{2}{c|}{IR} & \multicolumn{2}{c}{TR} & \multicolumn{2}{c}{IR} \\ \cline{3-13} Method & \#PT Data & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline E2E-VLP Xu et al. & 4M & - & - & - & - & - & 86.2 & 97.5 & 98.92 & 73.6 & 92.4 & 96.0 \\ UNITER Chen et al. (2020) & 4M & 65.7 & 88.6 & 93.8 & 52.9 & 79.9 & 88.0 & 87.3 & 98.0 & 99.2 & 75.6 & 94.1 & 96.8 \\ OSCAR Li et al. (2020) & 4M & 70.0 & 91.1 & 95.5 & 54.0 & 80.8 & 88.5 & - & - & - & - & - \\ UNIMO Li et al. (2020) & 4M & - & - & - & - & - & 89.4 & 98.9 & 99.8 & 78.0 & 94.2 & 97.1 \\ VLMo Wang et al. (2021) & 4M & 78.2 & 94.4 & 97.4 & 60.6 & 84.4 & 91.0 & 95.3 & 99.8 & **100.0** & 84.5 & 97.3 & 98.6 \\ ALIGN Li et al. (2021) & 1.8B & 77.0 & 93.5 & 96.9 & 89.3 & 83.8 & 98.5 & 93.8 & **100.0** & 84.9 & 97.4 & 98.6 \\ ALBERT Li et al. (2021) & 14M & 77.6 & 94.3 & 97.2 & 60.7 & 84.3 & 90.5 & 95.9 & 99.8 & **100.0** & 85.6 & 97.5 & 98.9 \\ Florence Yuan et al. (2021) & 0.9B & 81.8 & 95.2 & - & 63.2 & 85.7 & - & 97.2 & 99.9 & - & 87.9 & **98.1** & - \\ BLIP Li et al. (2022) & 129M & 82.4 & 95.4 & 97.9 & 65.1 & 86.3 & 91.8 & 97.4 & 99.8 & 99.9 & 87.6 & 97.7 & 99.0 \\ \hline mPLUG-2\({}_{\text{max}}\) & 17M & 81.2 & 95.6 & **98.1** & 65.3 & 86.9 & 92.4 & 96.9 & **100.0** & **100.0** & **88.2** & 97.8 & 99.0 \\ mPLUG-2 & 17M & **82.5** & **95.7** & 98.0 & **65.7** & **87.1** & **92.6** & 97.2 & **100.0** & **100.0** & 88.1 & 97.6 & **99.1** \\ \hline \hline \end{tabular} \end{table} Table 6: **Performance comparison on image-text retrieval.** All results are reported on R@1/R@5/R@10. \begin{table} \begin{tabular}{l l|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{RefCOCO} & \multicolumn{3}{c}{RefCOCOg} \\ & val & testA & testB & val-u & test-u \\ \hline UNITER Chen et al. (2020) & 81.41 & 87.04 & 74.17 & 74.76 & 75.77 \\ VILLA Gan et al. (2020) & 82.39 & 87.48 & 74.84 & 76.18 & 76.71 \\ MDETR Kamath et al. (2021) & 86.75 & 89.58 & 81.41 & 81.64 & 80.89 \\ UNICORN Yang et al. (2021) & 88.29 & 90.42 & 83.06 & 83.44 & 83.93 \\ OFAL\({}_{Large}\) Wang et al. (2022) & 90.05 & **92.93** & 85.26 & 84.54 & **85.20** \\ \hline mPLUG-2 & **90.33** & 92.80 & **86.05** & **84.70** & 85.14 \\ \hline \hline \end{tabular} \end{table} Table 7: **Evaluation results on visual grounding (ReferCOCO and ReferCOCO).** We use the [email protected] (a prediction is right if the IoU between the grounding-truth box and the predicted bounding box is larger than 0.5) to measure model performance. \begin{table} \begin{tabular}{l l|c c c|c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{MSKVIT} & \multicolumn{3}{c}{MSVD} \\ \cline{2-10} & \#PT Data & B@4 & M & R & C & B@4 & M & R & C \\ \hline UniVL Luo et al. 
(2020) & 136M & 42.2 & 28.2 & 61.2 & 49.9 & - & - & - & - \\ SwiBERT Lin et al. (2022) & - & 41.9 & 29.9 & 62.1 & 53.8 & 58.2 & 41.3 & 77.5 & 120.6 \\ CLIPHCGion Tang et al. (2021) & - & 46.1 & 30.7 & 63.7 & 57.7 & - & - & - & - \\ MV-GPT Seo et al. (2022) & 69M & 48.9 & 38.7 & 64.0 & 60.0 & - & - & - \\ LAVENDER Li et al. (2022) & 30M & - & - & - & 60.1 & - & - & - & 150.7 \\ HFTa Ye et al. (2022) & 17M & 49.2 & 30.7 & 65.0 & 65.1 & 71.0 & 45.3 & 81.4 & 146.9 \\ VideoCoCau Yan et al. (2022) & 3B & 53.8 & - & 68.0 & 73.2 & - & - & - \\ GIT Wang et al. (2022c) & 0.8B & 53.8 & 32.9 & 67.7 & 73.9 & 79.5 & 51.1 & 87.3 & 180.2 \\ GIT Wang et al. (2022c) & 12.9B & 54.8 & 33.1 & 68.2 & 75.9 & **82.2** & **52.3** & **88.7** & **185.4** \\ \hline mPLUG-2\({}_{\text{max}}\) & 17M & 52.2 & 32.1 & 66.9 & 72.4 & 69.3 & 45.1 & 81.9 & 148.2 \\ mPLUG-2 & 17M & **57.8** & **34.9** & **70.1** & **80.3** & 75.0 & 48.4 & 85.3 & 165.8 \\ \hline \hline \end{tabular} \end{table} Table 5: **Performance comparison on video captioning.** B@4: BLEU@4; M: METEOR; R: ROUGE-L; C: CIDEr. \begin{table} \begin{tabular}{l l|c c c|c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{offT Data} & \multicolumn{3}{c}{test-dev} & \multicolumn{3}{c}{test-std} \\ \cline{2-10} & UNTFE Chen et al. (2020) & 4M & 72.0 & 72.91 & \multicolumn{3}{c}{tumMO Li et al. (2020)} \\ & E2E-VLP Xu et al. & 4M & 73.25 **Image Classification** We further evaluate the performance of mPLUG-2 in terms of image classification on ImageNet-1K. As we can see in the Table 13, We can see that mPLUG-2 achieves comparable results or even surpass the SoTA methods on ImageNet-1K without using the ImageNet data for pre-training. Besides, to effectively evaluate the robustness and generalization ability of mPLUG-2, we perform the evaluation on 5 ImageNet variants (i.e. IN-V2, IN-Real., IN-Adversarial, IN-Rondition, and IN-Sketch). Following standard evaluation procedure (Fang et al., 2022a), all these models are first fine-tuned on the original ImageNet-1K training set and directly tested on the 6 variants without further fine-tuning. As shown in Table 13, mPLUG-2 not only achieves the highest accuracy on ImageNet-1K validation set but also obtains the relative small gap (i.e., \(\Delta_{\downarrow}\)), which reflects the excellent robustness and generalization capability of mPLUG-2 with the help of the universal layer module by learning language-shared representation. instructions are used to boost the performance. **Impact of Local Temporal Modeling Module** To validate the effectiveness of our proposed local temporal modeling module in the dual-vision encoder, we conduct experiments with the different temporal modeling structures. Specially, we have tried out the temporal self-attention and temporal convolution for comparison. The results are summarized in Table 15. We can notice that the local temporal modeling module outperforms temporal self-attention module by introducing modeling temporal locality. Meanwhile, with the help of the multi-group fusion mechanism, the local temporal modeling module can learn the diverse temporal representations in distinctive representation subspaces while the temporal convolution is restricted in the same temporal representation spaces, thus leading to the better performance. **Impact of Universal Layer** To validate the effectiveness of our proposed universal layer module, we ablation this module for all uni-modal and multi-modal tasks. 
As shown in Table 16 and Table 17, we set Row 1/2/2 as the baseline of the vision/language/vision-language task in this experiment, respectively. We can find that compared with the baseline the shared universal layer is beneficial for all modality tasks by encouraging collaboration between modalities. In Figure 3, we visualize the Grad-CAM on the cross-attention map in the first universal layer. For each sample, we present two cross-attention maps that attend to different visual concepts. The results show that the universal layer can encourage modality collaboration and modality entanglement between visual patch features and language features by attending the areas of various visual concepts in the image. **Universal Layer for Modality Collaboration** Here we investigate the influence of universal layer in terms of modality collaboration. We randomly sample some vision-language pairs, and sketch the UMAP visualization of the generated embeddings from pre-trained mPLUG-2 in the Figure 4. We can observe that with the help of universal layer, the distance between vision and text samples are more closer instead of solely two concentrated clusters. Besides, we quantitatively compute the modality gap \(\|\Delta\|\)[12], where the \(\Delta\) is the difference between the center of vision embeddings and text embeddings. It can be observed that the model with universal layer would encourage the collaboration between vision and language modalities thus yielding lower modality gap compared to the model without universal layer. ## 5 Conclusion This paper presents mPLUG-2, a new unified paradigm with modularized design for building multi-modal foundation models. mPLUG-2 introduces a module-based network design that shares common universal modules for modality collaboration and disentangles modality-specific modules to address the problem of modality entanglement. Experimental results show that the new unified paradigm of mPLUG-2 can achieve strong performances on a broad range of over 30 tasks across the text, image and video modalities. It is also easy to extend mPLUG-2 to more tasks by selecting \begin{table} \begin{tabular}{l|c c c c c c|c} \hline \hline Method & ImageNet & CIFAR10 & CIFAR100 & Cars & DTD & SUN & Food101 & Average \\ \hline CLIP-ViT-L/14 & 86.2 & 98.6 & 92.2 & 91.6 & 81.9 & 80.7 & 94.4 & 89.4 \\ \hline +Universal Layers & 86.6 (+0.4) & 99.3 (+0.7) & 93.1 (+0.9) & 94.4 (+2.8) & 85.1 (+3.2) & 80.4 (-0.4) & 95.4 (+1.0) & **90.6** (+1.2) \\ \hline \hline \end{tabular} \end{table} Table 16: **Evaluation of the impact of universal layer in terms of boosting vision task’s performance.** \begin{table} \begin{tabular}{l|c c c c c c|c} \hline \hline Model & SST-2 & RTE & MRPC & QQP & MNLI & QNL & VQA test-dev \\ \hline BERT\({}_{base}\) & 91.7 & 71.4 & 86.3 & 90.8 & 84.3 & 89.3 & 78.6 \\ +Joint Training & 92.5 & 82.3 & 86.6 & 90.6 & 86.2 & 92.1 & 78.9 \\ +Universal Layers & **93.5** & **85.2** & **87.3** & **91.3** & **87.6** & **93.2** & **79.3** \\ \hline \hline \end{tabular} \end{table} Table 17: **Evaluation of the impact of the universal layer in terms of boosting vision task’s performance.** Figure 4: The UMAP visualization of generated vision and language embeddings from pre-trained mPLUG-2. The black lines refer to vision-language pairs. Figure 3: Grad-CAM visualizations for latent queries in the universal layers. and adding modules.
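As a small numerical illustration of the modality-gap measure \(\|\Delta\|\) discussed above, the sketch below computes the gap between the centers of the vision and text embeddings. This is a minimal reconstruction, not the exact procedure used here; in particular, L2-normalizing the embeddings before averaging is our assumption.

```python
import numpy as np

def modality_gap(vision_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Compute ||Delta||, where Delta is the difference between the center of
    the vision embeddings and the center of the text embeddings.

    vision_emb: (N_v, d) array of vision embeddings.
    text_emb:   (N_t, d) array of text embeddings.
    """
    # Assumption: embeddings are L2-normalized before the centers are compared,
    # as is common for contrastively trained encoders.
    v = vision_emb / np.linalg.norm(vision_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    delta = v.mean(axis=0) - t.mean(axis=0)
    return float(np.linalg.norm(delta))

# Example with random stand-in embeddings (hypothetical data, for shape only).
rng = np.random.default_rng(0)
print(modality_gap(rng.normal(size=(512, 768)), rng.normal(size=(512, 768))))
```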
2303.08377
Control of Impurity Phase Segregation in a PdCrO$_2$/CuCrO$_2$ Heterostructure
PdCrO$_2$ films are synthesized on CuCrO$_2$ buffer layers on Al$_2$O$_3$ substrates. This synthesis is accompanied by impurity phase segregation, which hampers the synthesis of high quality PdCrO$_2$ films. The potential causes of impurity phase segregation were studied by using a combination of experiments and ab initio calculations. X-ray diffraction and scanning transmission electron microscopy experiments revealed impurity phases of Cu$_x$Pd$_{1-x}$ alloy and chromium oxides, Cr$_2$O$_3$ and Cr$_3$O$_4$, in PdCrO$_2$. Calculations determined that oxygen deficiency can cause the impurity phase segregation. Therefore, preventing oxygen release from delafossites could suppress the impurity phase segregation. The amounts of Cr$_2$O$_3$ and Cr$_3$O$_4$ depend differently on temperature and oxygen partial pressure. A reasonable theory-based explanation for this experimental observation is provided.
Tom Ichibha, Sangmoon Yoon, Jong Mok Ok, Mina Yoon, Ho Nyung Lee, Fernando A. Reboredo
2023-03-15T05:32:53Z
http://arxiv.org/abs/2303.08377v1
# Control of Impurity Phase Segregation in a PdCrO\({}_{2}\)/CuCrO\({}_{2}\) Heterostructure ###### Abstract PdCrO\({}_{2}\) films are synthesized on CuCrO\({}_{2}\) buffer layers on Al\({}_{2}\)O\({}_{3}\) substrates. This synthesis is accompanied by impurity phase segregation, which hampers the synthesis of high quality PdCrO\({}_{2}\) films. The potential causes of impurity phase segregation were studied by using a combination of experiments and ab initio calculations. X-ray diffraction and scanning transmission electron microscopy experiments revealed impurity phases of Cu\({}_{x}\)Pd\({}_{1-x}\) alloy and chromium oxides, Cr\({}_{2}\)O\({}_{3}\) and Cr\({}_{3}\)O\({}_{4}\), in PdCrO\({}_{2}\). Calculations determined that oxygen deficiency can cause the impurity phase segregation. Therefore, preventing oxygen release from delafossites could suppress the impurity phase segregation. The amounts of Cr\({}_{2}\)O\({}_{3}\) and Cr\({}_{3}\)O\({}_{4}\) depend differently on temperature and oxygen partial pressure. A reasonable theory-based explanation for this experimental observation is provided. ## I Introduction Delafossites are intriguing materials that can combine 2D electronic conductivity in the cation \(A\) layers and magnetism in slightly distorted octahedra in the \(B\)O\({}_{6}\) layers, which stack alternately [1; 2; 3]. The abundant possible choices of monovalent \(A\) and trivalent \(B\) cations lead to a number of delafossite materials with diverse physical properties [4; 5]. The \(AB\)O\({}_{2}\) delafossites were first reported in 1971 by a group of the DuPont Experimental Station [1; 2; 3; 4]. A quarter of century after delafossites were first reported, they received renewed attention when the transparent \(p\)-type semiconductor CuAlO\({}_{2}\) was discovered [6; 7]. Simultaneously, Tanaka et al.[8] reported the strong anisotropy of electronic conduction for the metallic PdCoO\({}_{2}\) single crystals [3]. One decade later, Takatsu and Maeno et al., working on PdCoO\({}_{2}\) and PdCrO\({}_{2}\), reported the growth of single crystals of PdCrO\({}_{2}\)[9]. These single crystals exhibit intriguing phenomena[4] such as the unconventional anomalous Hall effect in PdCrO\({}_{2}\)[10] and anomalous temperature dependence of specific heat and electrical resistivity that are driven by high-frequency phonons in PdCoO\({}_{2}\)[10]. Their seminal work originated the continuous study of delafossite metals to this day. Delafossite metals have electronic conductivity comparable with the most conductive pure metals [3; 4; 8; 11] owing to their remarkably long electronic mean free paths of up to 20 um[4; 12; 13]. Among delafossite metals, PdCrO\({}_{2}\) is especially interesting because it coexists with a layer-wise non-collinear spin state [14; 15; 16; 17; 18] and exhibits high electronic conductivity [16]. Its topological properties, primarily caused by spin-orbit coupling in Pd, allow for the observation of an unconventional anomalous Hall effect [10; 19] in bulk PdCrO\({}_{2}\). Additionally, PdCrO\({}_{2}\) films and surfaces have been studied. Angle-resolved photoemission spectroscopy experiments showed that Pd-terminated PdCrO\({}_{2}\) has surface ferromagnetism, which may originate from the Stoner-like instability[20; 21]. Experimental studies of PdCrO\({}_{2}\) films established that the antiferromagnetic spin state remains stable down to a thickness of 3.6 nm [22]. 
Hybrid layered heterostructures, composed of PdCrO\({}_{2}\) and other delafossite materials, could exhibit interesting and different phenomena than their parent compounds [23]. However, despite the interest in the material, the epitaxial growth of PdCrO\({}_{2}\) films has not been widely studied [24; 25; 26; 27; 22]. The growth of PdCrO\({}_{2}\) films on Al\({}_{2}\)O\({}_{3}\) is sometimes accompanied by impurity phases (i.e., Cu\({}_{x}\)Pd\({}_{1-x}\) alloy and chromium oxides) [22]. Recent research discovered that a one-monolayer buffer layer of CuCrO\({}_{2}\) on an Al\({}_{2}\)O\({}_{3}\) substrate suppresses this instability [22]. However, a nonnegligible amount of impurity phase is still formed. Understanding the mechanism of the impurity phase segregation and how to suppress it is highly desired for the growth of heterostructures containing PdCrO\({}_{2}\) or other Pd-based delafossites. In this work, the mechanism of impurity phase segregation of a heterostructure of a PdCrO\({}_{2}\) layer with a CuCrO\({}_{2}\) buffer layer on an Al\({}_{2}\)O\({}_{3}\) substrate was studied using a combination of experiments and ab initio calculations. X-ray diffraction (XRD) and scanning transmission electron microscopy (STEM) experiments were performed, and the segregation of Cu\({}_{x}\)Pd\({}_{1-x}\) alloy and chromium oxide (Cr\({}_{2}\)O\({}_{3}\) and Cr\({}_{3}\)O\({}_{4}\)) impurity phases was observed. These experiments revealed that the formation of Cr\({}_{2}\)O\({}_{3}\) negatively correlates with oxygen partial pressure, whereas the formation of Cr\({}_{3}\)O\({}_{4}\) does not correlate with oxygen partial pressure. Moreover, the Cr\({}_{2}\)O\({}_{3}\) (Cr\({}_{3}\)O\({}_{4}\)) formation weakly (strongly) positively correlates with temperature. The segregation of Cu\({}_{x}\)Pd\({}_{1-x}\) alloy and chromium oxide impurity phases must be accompanied by the appearance or disappearance of point defects because the segregation processes are not stoichiometric. In this scenario, calculations revealed that oxygen vacancies can cause the impurity phase segregation. Calculations also revealed that the segregation of Cr\({}_{2}\)O\({}_{3}\) or Cr\({}_{3}\)O\({}_{4}\) is energetically the most favorable among the chromium oxides, agreeing with the experiments described in Section III.1. Finally, the calculations also revealed that the formation of Cr\({}_{2}\)O\({}_{3}\) and Cr\({}_{3}\)O\({}_{4}\) depends on temperature and oxygen partial pressure. ## II Experimental and calculation details ### Experimental details A PdCrO\({}_{2}\) layer with thickness of approximately 10 nm was grown on a one-monolayer (\(\sim\)0.38 nm) CuCrO\({}_{2}\) buffer layer on an Al\({}_{2}\)O\({}_{3}\) substrate via pulsed laser deposition using polycrystalline targets. Before the film growth, commercially available Al\({}_{2}\)O\({}_{3}\) (0001) substrates (CrysTec, Germany) were annealed at 1100 \({}^{\circ}\)C for 1 h to achieve atomically flat surfaces with step-terrace structure. For PdCrO\({}_{2}\) films, the growth conditions were widely varied: temperature (\(T\)) was 500-800 \({}^{\circ}\)C, and oxygen partial pressure (\(P_{\rm O_{2}}\)) was 10-500 mTorr. The repetition rate and fluence of KrF excimer laser (\(\lambda\) = 248 nm) were fixed at 5 Hz and 1.5 J/cm\({}^{2}\), respectively. The cross-sectional STEM specimens were prepared using low-energy ion milling at LN\({}_{2}\) temperature after mechanical polishing. 
High-angle annular dark field (HAADF) STEM measurements were performed on a Nion UltraSTEM200 operated at 200 kV. The microscope is equipped with a cold-field emission gun and a third- and fifth-order aberration corrector for sub-angstrom resolution. The convergence half-angle of 30 mrad was used, and the inner angle of the HAADF STEM was approximately 65 mrad. ### Calculation details Density functional theory (DFT) implemented in the VASP package [28] was used to understand the energetics of competing phases during the experimental growth process. The Perdew-Burke-Ermerhof (PBE)+_U_ method [29; 30] was used. The Hubbard \(U\) correction was applied to the 3\(d\) shell of the Cr atoms. The \(U\) value was 3.3 eV, which was optimized compared with the results of the HSE06 functional [31], as described in the Supporting Information. The core electrons were replaced with pseudopotentials made by the projector-augmented wave method accompanied by the VASP code [32; 33; 34]. The cutoff energy was 520 eV, and _k_-spacing was 0.30 A\({}^{-1}\), which converged the Cr vacancy formation energy in CuCrO\({}_{2}\) within 2 meV. Experimental lattice vectors for CuCrO\({}_{2}\)[35], PdCrO\({}_{2}\)[36], and Al\({}_{2}\)O\({}_{3}\) were used [37]. The lattice vectors reported in the Materials project [38] were used for chromium oxides and chromium metal [39]. The atomic coordinates were relaxed for the functional. The convergence criteria for the self-consistent field and ionic cycles were \(1.0\times 10^{-7}\) eV and \(1.0\times 10^{-6}\) eV, respectively. ## III Results and discussions ### Segregation of impurity phases Impurity phases including Cu\({}_{x}\)Pd\({}_{1-x}\), Cr\({}_{2}\)O\({}_{3}\), and Cr\({}_{3}\)O\({}_{4}\) have been observed experimentally [40]. In Figure 1 we show in addition to 20-0 XRD spectrum, the intensity of the XRD data as a function of the growth conditions. As reported in Ref. [22], the high-quality PdCrO\({}_{2}\) films can be achieved only within a relatively narrow growth window. Outside the growth window, the metallic properties are severely deteriorated by the impurity formation. The resistance could not be measured because of the high resistivity. The rectangular boxes in Figure 1 highlight the main impurities observed in XRD: Cr\({}_{3}\)O\({}_{4}\) and Cr\({}_{2}\)O\({}_{3}\). The bottom two panels of Figure 1 map the XRD intensities of Cr\({}_{3}\)O\({}_{4}\) and Cr\({}_{2}\)O\({}_{3}\) for temperature and oxygen partial pressure. The relative abundances between Cr\({}_{3}\)O\({}_{4}\) and Cr\({}_{2}\)O\({}_{3}\) are difficult to assess quantitatively using the XRD intensities because the XRD reflectivity varies with substances and angles. However, we use the intensities to assess qualitatively how the formation of each substance is affected by growth conditions. The XRD intensity of Cr\({}_{3}\)O\({}_{4}\) strongly positively correlates with temperature (correlation coefficient [41]\(\rho\) = +0.82), whereas the correlation between the XRD intensity of Cr\({}_{2}\)O\({}_{3}\) and temperature is weak (\(\rho\) = +0.19). Moreover, Cr\({}_{3}\)O\({}_{4}\)'s peak strength does not depend on oxygen partial pressure (\(\rho\) = +0.01), but Cr\({}_{2}\)O\({}_{3}\)'s peak strength negatively depends on oxygen partial pressure (\(\rho\) = \(-\)0.48). These results are compared with our calculations in the last paragraph of Section III. 
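To make the correlation analysis above concrete, a minimal sketch of how such correlation coefficients \(\rho\) can be obtained from the grid of growth conditions is given below. The intensity values are hypothetical placeholders, not the measured XRD data.

```python
import numpy as np

# Hypothetical example: XRD peak intensities of Cr3O4 and Cr2O3 collected over
# a grid of growth temperatures (deg C) and oxygen partial pressures (mTorr).
T       = np.array([500, 600, 700, 800, 500, 600, 700, 800], dtype=float)
P_O2    = np.array([ 10,  10,  10,  10, 500, 500, 500, 500], dtype=float)
I_Cr3O4 = np.array([0.1, 0.3, 0.7, 1.0, 0.1, 0.4, 0.6, 1.0])  # placeholder intensities
I_Cr2O3 = np.array([0.6, 0.7, 0.8, 0.7, 0.2, 0.3, 0.4, 0.3])  # placeholder intensities

def pearson(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient rho between two 1D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

print("rho(Cr3O4, T)    =", pearson(I_Cr3O4, T))
print("rho(Cr3O4, P_O2) =", pearson(I_Cr3O4, P_O2))
print("rho(Cr2O3, T)    =", pearson(I_Cr2O3, T))
print("rho(Cr2O3, P_O2) =", pearson(I_Cr2O3, P_O2))
```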
Figure 1: XRD spectrum and mapping of XRD peak strengths of Cr\({}_{3}\)O\({}_{4}\) and Cr\({}_{2}\)O\({}_{3}\) for temperature and oxygen partial pressure. This measurement was performed for an approximately 6 nm PdCrO\({}_{2}\) film grown on a one-monolayer CuCrO\({}_{2}\) buffer layer. ### Point-defect formation energy Because the system segregates oxygen-deficient oxides Cr\({}_{2}\)O\({}_{3}\) or Cr\({}_{3}\)O\({}_{4}\), the segregation process should be accompanied by the appearance or disappearance of point defects. The ratio of O to Cr (O/Cr) in Cr\({}_{2}\)O\({}_{3}\) is O/Cr = 1.5. In Cr\({}_{3}\)O\({}_{4}\), O/Cr is about 1.3. In the delafossite materials, O/Cr is 2.0. Therefore, the formation energies of multiple types of point self-defects in bulk Al\({}_{2}\)O\({}_{3}\), CuCrO\({}_{2}\), and PdCrO\({}_{2}\) were calculated. To simplify the problem, defects in the bulk were calculated, even though samples with different thickness of PdCrO\({}_{2}\) and CuCrO\({}_{2}\) films have been grown in this and other works [22; 40]. The following describes the notation used for the point defects and explains how to evaluate the formation energies. V\({}_{\rm Cr}\), V\({}_{\rm Cu}\), V\({}_{\rm Pd}\), and V\({}_{\rm O}\) indicate vacancies in the Cr, Cu, Pd, and O sites. The Cu (Pd) replacement defects in the Cr sites, or _antisite defects_, are indicated by Cu\({}_{\rm Cr}\) (Pd\({}_{\rm Cr}\)). Larger defect complexes such as Cu\({}_{\rm Cr}\)&V\({}_{\rm Cu}\) (Pd\({}_{\rm Cr}\)&V\({}_{\rm Pd}\)) can form when Cu (Pd) atoms move to a preformed V\({}_{\rm Cr}\), leaving the V\({}_{\rm Cr}\)(\(\eta_{\rm H}\)). The formation energies of these defects are given as follows: for CuCrO\({}_{2}\), \[\Delta E\left(\text{V}_{\alpha}\right) =E\left(\text{CuCrO}_{2}\right)_{\text{V}_{\alpha}}-E\left(\text{ CuCrO}_{2}\right)_{\text{bulk}}\] \[+\text{ }\mu_{\alpha},\text{ \ }\left(\alpha=\text{Cu},\text{ Cr},\text{ or O}\right), \tag{1}\] \[\Delta E\left(\text{Cu}_{\rm Cr}\&\text{V}_{\rm Cu}\right) =E\left(\text{CuCrO}_{2}\right)_{\text{Cu}_{\rm O}\&\text{V}_{ \rm O}}-E\left(\text{CuCrO}_{2}\right)_{\text{bulk}}\] \[+\text{ }\mu_{\rm Cr}, \tag{2}\] and for PdCrO\({}_{2}\), \[\Delta E\left(\text{V}_{\alpha}\right) =E\left(\text{PdCrO}_{2}\right)_{\text{V}_{\alpha}}-E\left(\text{ PdCrO}_{2}\right)_{\text{bulk}}\] \[+\text{ }\mu_{\alpha},\text{ \ }\left(\alpha=\text{Pd},\text{ Cr}\text{ or O}\right), \tag{3}\] \[\Delta E\left(\text{Pd}_{\rm Cr}\&\text{V}_{\rm Pd}\right) =E\left(\text{PdCrO}_{2}\right)_{\text{Pd}_{\text{O}}\&\text{V}_{ \rm Pd}}-E\left(\text{PdCrO}_{2}\right)_{\text{bulk}}\] \[+\text{ }\mu_{\rm Cr}. \tag{4}\] Here, \(E(\text{CuCrO}_{2})_{\text{bulk}}\) and \(E(\text{PdCrO}_{2})_{\text{bulk}}\) are the total energies of pristine delafossite structures. \(E(\text{CuCrO}_{2})_{X}\) and \(E(\text{PdCrO}_{2})_{X}\) are the total energies of structures with type-\(X\) defects. The chemical potential of atomic species \(\alpha\) is \(\mu_{\alpha}\). The oxygen vacancy formation energy in the Al\({}_{2}\)O\({}_{3}\) substrate was also calculated by \[\Delta E\left(\text{V}_{\rm O}\right)=E\left(\text{Al}_{2}\text{O}_{3}\right)_ {\text{V}_{\rm O}}-E\left(\text{Al}_{2}\text{O}_{3}\right)_{\text{bulk}}+\text { }\mu_{\rm O}. \tag{5}\] Our experiments observed the Cu-Pd alloy and Cr oxide impurity phases on the composite sample of Al\({}_{2}\)O\({}_{3}\), CuCrO\({}_{2}\), and PdCrO\({}_{2}\) (see SS III.1). 
The defect formation energies should be evaluated for the experimental conditions: the chemical equilibrium states consisting of Al\({}_{2}\)O\({}_{3}\), CuCrO\({}_{2}\), PdCrO\({}_{2}\), Cu\({}_{\rm Pt}\)Pd\({}_{1-x}\), and a chromium oxide. The exact value of \(x\) in Cu\({}_{x}\)Pd\({}_{1-x}\) is not known experimentally. The ratio \(x\) potentially depends on the volume comparison of CuCrO\({}_{2}\) and PdCrO\({}_{2}\). However, the change in the results is negligible when \(x\) changes from 0.5 to 0.25 or 0.75 (variations of only 0.33 eV were observed), as described in the Supporting Information. Therefore, the results reported below assumed \(x\) = 0.5. Solving the following equations yields the chemical potentials. For example, if Al\({}_{2}\)O\({}_{3}\), CuCrO\({}_{2}\), PdCrO\({}_{2}\), CuPd, and Cr\({}_{3}\)O\({}_{4}\) coexist, then \[2\mu_{\rm Al}+3\mu_{\rm O} =E\left(\text{Al}_{2}\text{O}_{3}\right), \tag{6}\] \[\mu_{\rm Cu}+\mu_{\rm Cr}+2\mu_{\rm O} =E\left(\text{Cu}\text{CrO}_{2}\right),\] (7) \[\mu_{\rm Pd}+\mu_{\rm Cr}+2\mu_{\rm O} =E\left(\text{PdCrO}_{2}\right),\] (8) \[\mu_{\rm Cu}+\mu_{\rm Pd} =E\left(\text{CuPd}\right),\] (9) \[3\mu_{\rm Cr}+4\mu_{\rm O} =E\left(\text{Cr}_{3}\text{O}_{4}\right). \tag{10}\] There exist as many independent linear equations as unknown chemical potentials, so the chemical potentials are trivially determined. ### Formation energies of defects as a function of the chemical potentials The formation energies of point defects in Al\({}_{2}\)O\({}_{3}\), CuCrO\({}_{2}\), and PdCrO\({}_{2}\) were calculated for different chromium oxides, as described in Section III.2. We also considered the Cr metal as the Cr source of the Cr-rich limit. The results are summarized in Figure 2. The point-defect formation energies are all positive, so CuCrO\({}_{2}\) and PdCrO\({}_{2}\) are thermodynamically stable and stoichiometric under the considered chemical conditions. For low values of the oxygen chemical potential (\(\lesssim-8.6\) eV), the V\({}_{\rm O}\) in CuCrO\({}_{2}\) and PdCrO\({}_{2}\) have the lowest formation energies; the V\({}_{\rm O}\) in Al\({}_{2}\)O\({}_{3}\) is much higher. As soon as the oxygen chemical potential increases, the V\({}_{\rm Cu}\) and V\({}_{\rm Pd}\) become the lowest formation energy defects. The Cr vacancies, by contrast, have much higher formation energies. The experimental chemical potentials are not well defined because the system is out of equilibrium, as described in Section III.2. However, each element's stability corresponds to anywhere between the vertical lines that correspond to the oxygen chemical potentials with Cr\({}_{3}\)O\({}_{4}\) and Cr\({}_{2}\)O\({}_{3}\). The V\({}_{\rm Cr}\), Cu\({}_{\rm Cr}\)&V\({}_{\rm Cu}\), and Pd\({}_{\rm Cr}\)&V\({}_{\rm Pd}\) are all Cr-deficient point defects. For CuCrO\({}_{2}\), the formation energy of Cu\({}_{\rm Cr}\)&V\({}_{\rm Cu}\) is lower than that of V\({}_{\rm Cr}\). Therefore, the Cr site does not have a vacancy because a neighboring Cu occupies the Cr site by forming V\({}_{\rm Cu}\) next to Cu\({}_{\rm Cr}\). ### Instability of CuCrO\({}_{2}\) and PdCrO\({}_{2}\) for oxygen-deficient samples Experiments found the segregation of impurity phases of Cu\({}_{x}\)Pd\({}_{1-x}\), Cr\({}_{2}\)O\({}_{3}\), and Cr\({}_{3}\)O\({}_{4}\) on a 10 nm PdCrO\({}_{2}\) layer with a one-monolayer CuCrO\({}_{2}\) buffer layer on an Al\({}_{2}\)O\({}_{3}\) substrate. The samples were grown under low oxygen partial pressures. 
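As an illustration of the chemical-potential bookkeeping above, the sketch below solves the linear system of Eqs. (6)-(10) for the coexistence of Al\({}_{2}\)O\({}_{3}\), CuCrO\({}_{2}\), PdCrO\({}_{2}\), CuPd, and Cr\({}_{3}\)O\({}_{4}\), and then applies Eq. (3) to an oxygen vacancy in PdCrO\({}_{2}\). All total energies are hypothetical placeholders standing in for the DFT (PBE+\(U\)) values; they are not the numbers used in this work.

```python
import numpy as np

# Total energies per formula unit (eV). Hypothetical placeholders only.
E = {
    "Al2O3": -37.0, "CuCrO2": -24.0, "PdCrO2": -22.0,
    "CuPd": -8.0, "Cr3O4": -44.0,
}

# Unknown chemical potentials, ordered as (mu_Al, mu_O, mu_Cu, mu_Cr, mu_Pd).
# Each row implements one of Eqs. (6)-(10).
A = np.array([
    [2, 3, 0, 0, 0],   # 2 mu_Al + 3 mu_O            = E(Al2O3)
    [0, 2, 1, 1, 0],   # mu_Cu + mu_Cr + 2 mu_O      = E(CuCrO2)
    [0, 2, 0, 1, 1],   # mu_Pd + mu_Cr + 2 mu_O      = E(PdCrO2)
    [0, 0, 1, 0, 1],   # mu_Cu + mu_Pd               = E(CuPd)
    [0, 4, 0, 3, 0],   # 3 mu_Cr + 4 mu_O            = E(Cr3O4)
], dtype=float)
b = np.array([E["Al2O3"], E["CuCrO2"], E["PdCrO2"], E["CuPd"], E["Cr3O4"]])
mu_Al, mu_O, mu_Cu, mu_Cr, mu_Pd = np.linalg.solve(A, b)

# Eq. (3): oxygen-vacancy formation energy in PdCrO2 from supercell energies
# (both supercell energies below are placeholders for illustration).
E_PdCrO2_supercell = -704.0   # pristine supercell
E_PdCrO2_VO        = -695.0   # same supercell with one V_O
dE_VO = E_PdCrO2_VO - E_PdCrO2_supercell + mu_O
print(f"mu_O = {mu_O:.3f} eV, Delta_E(V_O in PdCrO2) = {dE_VO:.3f} eV")
```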
The simultaneous presence of seven compounds (CuCrO\({}_{2}\), PdCrO\({}_{2}\), Cu\({}_{x}\)Pd\({}_{1-x}\), Cr\({}_{2}\)O\({}_{3}\), Cr\({}_{3}\)O\({}_{4}\), O\({}_{2}\), and Al\({}_{2}\)O\({}_{3}\)) but only five chemical elements complicates the theoretical analysis. Finding a solution for the chemical potential equations is impossible when the equations outnumber the independent variables. In this case, the system is out of equilibrium. The chemical potentials may not be uniform throughout the sample. For instance, near the surface, the oxygen chemical potential may be a function of temperature and oxygen partial pressure. By contrast, near the regions where Cr\({}_{2}\)O\({}_{3}\) and Cr\({}_{3}\)O\({}_{4}\) coexist, the chemical potentials of Cr and O are uniquely determined by the formation energies of the two solids. Alternatively, near the Al\({}_{2}\)O\({}_{3}\) substrate, the oxygen chemical potential may be determined by temperature and the concentration of oxygen vacancies in Al\({}_{2}\)O\({}_{3}\). The impurity phases are oxygen deficient (i.e., chromium rich) relative to CuCrO\({}_{2}\) and PdCrO\({}_{2}\): the O/Cr ratios of Cr\({}_{2}\)O\({}_{3}\) (1.5) and Cr\({}_{3}\)O\({}_{4}\) (\(\sim\)1.3) are smaller than that of CuCrO\({}_{2}\) and PdCrO\({}_{2}\) (2.0). To elucidate the possible cause of impurity phase segregation, several different possible reactions originating from out-of-equilibrium states were considered. Then their potential to destabilize CuCrO\({}_{2}\) and PdCrO\({}_{2}\) was examined. The analysis revealed that low oxygen partial pressures and high temperatures could explain the segregation of Cr\({}_{2}\)O\({}_{3}\) and Cu\({}_{x}\)Pd\({}_{1-x}\). Preexisting defects as energetic as oxygen vacancies in Al\({}_{2}\)O\({}_{3}\) could enhance the segregation of Cr\({}_{3}\)O\({}_{4}\) and Cu\({}_{x}\)Pd\({}_{1-x}\). ### Thermochemical reactions To simplify the analysis, the Cu\({}_{x}\)Pd\({}_{1-x}\) alloy is assumed to be CuPd, as described in the last paragraph of Section III.2. The theoretical approach shows that the combination of CuCrO\({}_{2}\) and PdCrO\({}_{2}\) is stable against the CrO\({}_{2}\) and CuPd impurity phase segregation, which is a stoichiometric process. The thermochemical equation of this segregation is \[E(\text{CuCrO}_{2})+E(\text{PdCrO}_{2}) =E(\text{CuPd})+2E(\text{CrO}_{2})\] \[+Q\left(\text{CrO}_{2}\right), \tag{11}\] where \(Q\left(\text{CrO}_{2}\right)\) is the energy gained, or lost if negative, to form CrO\({}_{2}\). The value of \(Q\left(\text{CrO}_{2}\right)\) was calculated to be \(-1.102\) eV per two formula units of CrO\({}_{2}\), so this reaction is endothermic. By contrast, the Cr/O ratios of CuCrO\({}_{2}\) and PdCrO\({}_{2}\) vs. Cr\({}_{2}\)O\({}_{3}\) or Cr\({}_{3}\)O\({}_{4}\) are different. Therefore, the impurity phase segregation may be caused by an impurity-absorbing defect. For Cr\({}_{2}\)O\({}_{3}\)+CuPd, the impurity phase segregation may be caused and promoted by an oxygen-adsorbent mechanism because the Cr/O ratios of CuCrO\({}_{2}\) and PdCrO\({}_{2}\) (1/2) and Cr\({}_{2}\)O\({}_{3}\) (2/3) are different. This oxygen deficiency may be the result of low environmental oxygen concentration relative to chromium from either (i) defective CuCrO\({}_{2}\), PdCrO\({}_{2}\), or Al\({}_{2}\)O\({}_{3}\) or (ii) low oxygen content in the vacuum growth chamber [42]. 
For mechanism (i), preexisting V\({}_{0}\) in CuCrO\({}_{2}\), PdCrO\({}_{2}\), or Al\({}_{2}\)O\({}_{3}\) and formation of Cr-deficient defects such as V\({}_{\text{Cr}}\) in CuCrO\({}_{2}\) or PdCrO\({}_{2}\) were considered to keep the Cr/O ratios constant before and after the process [43]. Therefore, the energy gain obtained by the (dis)appearance of point defects in CuCrO\({}_{2}\), PdCrO\({}_{2}\), and Al\({}_{2}\)O\({}_{3}\) was compared with the release of oxygen molecules into the oxygen gas in the growth chamber. All these possibilities were considered as particle exchanges with a particle bath. The energy cost of taking an atom (\(\alpha=\text{O}\) or Cr) from one of these particle baths is defined as \[\nu_{\alpha}\equiv E(\text{bath})_{\text{bulk}}-E(\text{bath})_{\alpha}. \tag{12}\] Here, \(E(\text{bath})_{\text{bulk}}\) and \(E(\text{bath})_{\alpha}\) are the energies of the particle bath without defects and with an \(\alpha=\text{O}\) or Cr vacancy, respectively [44]. The thermochemical equations for the segregation of Cr\({}_{2}\)O\({}_{3}\) when introducing O to or removing Cr from the particle bath are given as follows: \[E(\text{CuCrO}_{2}) +E(\text{PdCrO}_{2})=E(\text{CuPd})+E(\text{Cr}_{2}\text{O}_{3})\] \[+\nu_{\text{O}}+Q\left(\text{Cr}_{2}\text{O}_{3},\text{V}_{\text{ O}}^{\text{em}}\right), \tag{13}\] \[E(\text{CuCrO}_{2}) +E(\text{PdCrO}_{2})=E(\text{CuPd})+(4/3)E(\text{Cr}_{2}\text{O}_{3})\] \[-(2/3)\nu_{\text{Cr}}+Q\left(\text{Cr}_{2}\text{O}_{3},\text{V}_{ \text{Cr}}^{\text{int}}\right). \tag{14}\] In equation (13), the term \(\nu_{\text{O}}\) takes into account the effect of removing an oxygen vacancy in the particle bath, and \(-(2/3)\nu_{\text{Cr}}\) considers the effect of creating a fraction of Cr vacancies in the particle bath. Similarly, the segregation of Cr\({}_{3}\)O\({}_{4}\) could be explained by the following reactions: \[E(\text{CuCrO}_{2}) +E(\text{PdCrO}_{2})=E(\text{CuPd})+(2/3)E(\text{Cr}_{3}\text{O}_{ 4})\] \[+(4/3)\nu_{\text{O}}+Q\left(\text{Cr}_{3}\text{O}_{4},\text{V}_{ \text{O}}^{\text{em}}\right), \tag{15}\] \[E(\text{CuCrO}_{2}) +E(\text{PdCrO}_{2})=E(\text{CuPd})+E(\text{Cr}_{3}\text{O}_{4})\] \[-\nu_{\text{Cr}}+Q\left(\text{Cr}_{3}\text{O}_{4},\text{V}_{ \text{Cr}}^{\text{int}}\right). \tag{16}\] Their derivations are described in detail in Appendix A. The exothermic energies, \(Q\), are shown in eqs (13)-(16), for different values of \(\nu_{\text{O}}\) and \(\nu_{\text{Cr}}\), depending on the particle baths Figure 2: Formation energies of intrinsic point defects in CuCrO\({}_{2}\), PdCrO\({}_{2}\), and Al\({}_{2}\)O\({}_{3}\) calculated by the PBE+_U_ method as a function of the oxygen chemical potential. The chemical potentials are calculated for CuCrO\({}_{2}\), PdCrO\({}_{2}\), Al\({}_{2}\)O\({}_{3}\), CuPd, and different chromium oxides. in Table 1. The table shows that only \(Q\left(\mathrm{Cr_{2}O_{3}},\mathrm{V_{O}^{rem}}\right)\) and \(Q\left(\mathrm{Cr_{3}O_{4}},\mathrm{V_{O}^{rem}}\right)\) can be positive (i.e., exothermic reaction), whereas the reactions involving the formation of Cr-deficient defects are always endothermic. Therefore, the preexisting oxygen vacancies could explain the spontaneous segregation of Cr\({}_{2}\)O\({}_{3}\), Cr\({}_{3}\)O\({}_{4}\), and CuPd impurity phases. Figure 3 shows \(Q\left(\mathrm{Cr_{2}O_{3}},\mathrm{V_{O}^{rem}}\right)\) and \(Q\left(\mathrm{Cr_{3}O_{4}},\mathrm{V_{O}^{rem}}\right)\) in Table 1 for different \(\nu_{\mathrm{O}}\) (i.e., different particle bath). 
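A minimal sketch of the energy bookkeeping in Eqs. (11), (13), and (15) is given below: once the bulk total energies and the bath cost \(\nu_{\rm O}\) of Eq. (12) are known, the exothermic energies \(Q\) follow from simple arithmetic. All numerical inputs are hypothetical placeholders, chosen only so that the signs come out qualitatively like the trend discussed in the text; they are not the values reported in Table 1.

```python
# Exothermic energies Q for impurity segregation, following Eqs. (11), (13), (15).
# All energies in eV per formula unit; values are hypothetical placeholders.
E_CuCrO2, E_PdCrO2 = -24.0, -22.0
E_CuPd, E_CrO2, E_Cr2O3, E_Cr3O4 = -8.0, -18.5, -33.5, -44.0

def Q_CrO2() -> float:
    # Eq. (11): stoichiometric segregation of CuPd + 2 CrO2 (no point defects involved).
    return (E_CuCrO2 + E_PdCrO2) - (E_CuPd + 2.0 * E_CrO2)

def Q_Cr2O3_VO(nu_O: float) -> float:
    # Eq. (13): segregation of CuPd + Cr2O3, consuming one O vacancy of cost nu_O.
    return (E_CuCrO2 + E_PdCrO2) - (E_CuPd + E_Cr2O3 + nu_O)

def Q_Cr3O4_VO(nu_O: float) -> float:
    # Eq. (15): segregation of CuPd + (2/3) Cr3O4, consuming 4/3 O vacancies.
    return (E_CuCrO2 + E_PdCrO2) - (E_CuPd + (2.0 / 3.0) * E_Cr3O4 + (4.0 / 3.0) * nu_O)

# Placeholder for Eq. (12); on a DFT total-energy scale this quantity is
# typically negative (removing a bound O atom costs energy).
nu_O_bath = -7.0
print(Q_CrO2(), Q_Cr2O3_VO(nu_O_bath), Q_Cr3O4_VO(nu_O_bath))
```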
The stability of oxygen atoms in each particle bath negatively correlates with \(\nu_{\mathrm{O}}\): oxygen atoms are the most (least) stable in Al\({}_{2}\)O\({}_{3}\) (O\({}_{2}\) gas). \(Q\left(\mathrm{Cr_{2}O_{3}},\mathrm{V_{O}^{rem}}\right)\) changes depending on \(-\nu_{\mathrm{O}}\), as given in eq (13). Similarly, \(Q\left(\mathrm{Cr_{3}O_{4}},\mathrm{V_{O}^{rem}}\right)\) changes depending on \(-(4/3)\nu_{\mathrm{O}}\), as given in eq (15). The impurity phase segregation is endothermic when O\({}_{2}\) gas is the particle bath and exothermic for the other particle baths. The energetically favored chromium oxide changes from Cr\({}_{2}\)O\({}_{3}\) to Cr\({}_{3}\)O\({}_{4}\) with \(\nu_{\mathrm{O}}\) decreasing from PdCrO\({}_{2}\) to Al\({}_{2}\)O\({}_{3}\). ### Entropy contributions to the formation of \(\mathrm{Cr_{2}O_{3}}\) and \(\mathrm{Cr_{3}O_{4}}\) In this section, entropy contributions to the positive \(Q\left(\mathrm{Cr_{2}O_{3}},\mathrm{V_{O}^{rem}}\right)\) and \(Q\left(\mathrm{Cr_{3}O_{4}},\mathrm{V_{O}^{rem}}\right)\) are considered. For convenience, \(Q(\mathrm{Cr_{2}O_{3}})\equiv Q\left(\mathrm{Cr_{2}O_{3}},\mathrm{V_{O}^{rem}}\right)\) and \(Q(\mathrm{Cr_{3}O_{4}})\equiv Q\left(\mathrm{Cr_{3}O_{4}},\mathrm{V_{O}^{rem}}\right)\). Entropy contributions depend on the temperature, point-defect densities, and oxygen partial pressure. The energies \(E\) were replaced by Helmholtz free energies \(F(T)\) in eqs (13) and (15). For bulk structures, \(F(T)\) was evaluated by \[F(T)=E+F_{\mathrm{vib}}(T). \tag{17}\] Here, \(F_{\mathrm{vib}}(T)\) is the vibrational free energy. For \(\nu_{\mathrm{O}}\) in a defective solid, the vacancy configurational entropy contribution was considered in addition to \(F_{\mathrm{vib}}(T)\). If the vacancy density is \(c_{\mathrm{v}}\), then the free energy change when removing one vacancy is \[\Delta F_{\mathrm{config}}(T,c_{\mathrm{v}})=k_{\mathrm{B}}T[-\ln \left(c_{\mathrm{v}}\right)+\ln\left(1-c_{\mathrm{v}}\right)] \tag{18}\] (details in Appendix B). Here, \(k_{\mathrm{B}}\) is the Boltzmann constant. Therefore, \(Q(\mathrm{Cr_{2}O_{3}})\) and \(Q(\mathrm{Cr_{3}O_{4}})\) depend on vacancy density and temperature when the bath location is a defective solid. When considering the case of O\({}_{2}\) released into the growth chamber, because the experimental oxygen partial pressure is very low and the temperature is high, the translational entropy contribution could significantly stabilize the oxygen gas. This stabilization may change \(Q(\mathrm{Cr_{2}O_{3}})\) and \(Q(\mathrm{Cr_{3}O_{4}})\) from negative to positive. Without entropy contributions, they are negative, as shown in Table 1. The Helmholtz free energy \(F(T)\) of the oxygen gas per molecule is defined as \[F(T,P_{\mathrm{O_{2}}})=E(\mathrm{O_{2}})+F_{\mathrm{vib}}(T)+F_{\mathrm{rot} }(T)+F_{\mathrm{trans}}(T,P_{\mathrm{O_{2}}}). \tag{19}\] Here, \(E(\mathrm{O_{2}})\) is the energy of an isolated oxygen molecule and \(F_{\mathrm{vib}}(T)\), \(F_{\mathrm{rot}}(T)\), and \(F_{\mathrm{trans}}(T,P_{\mathrm{O_{2}}})\) are free energies by vibrational, rotational, and translational entropies, respectively. 
Then \(F_{\mathrm{rot}}(T)\) and \(F_{\mathrm{trans}}(T,P_{\mathrm{O_{2}}})\)[45] are given by \[F_{\mathrm{rot}}(T) =-k_{\mathrm{B}}T\left(1+\ln\frac{8\pi^{2}Ik_{\mathrm{B}}T}{2h^{2} }\right), \tag{20}\] \[F_{\mathrm{trans}}(T,P_{\mathrm{O_{2}}}) =-k_{\mathrm{B}}T\ln\frac{k_{\mathrm{B}}T}{P_{\mathrm{O_{2}}} \Lambda^{3}},\] (21) \[\Lambda \equiv\frac{h}{\sqrt{2\pi mk_{\mathrm{B}}T}} \tag{22}\] Here, \(h\) is the Planck constant, and \(I\) is the moment of inertia of an oxygen molecule. Therefore, \(Q(\mathrm{Cr_{2}O_{3}})\) and \(Q(\mathrm{Cr_{3}O_{4}})\) depend on oxygen partial pressure and temperature when the bath location is the dilute oxygen gas. The entropy contributions in eqs (13) and (15) yield \(Q(\mathrm{Cr_{2}O_{3}})\) and \(Q(\mathrm{Cr_{3}O_{4}})\) for different bath locations and conditions. In these equations, the bulk free energies depend on only the temperature. When the bath is a defected crystal, \(\nu_{\mathrm{O}}\) depends on temperature and vacancy density. When the bath is the oxygen gas, \(\nu_{\mathrm{O}}\) depends on temperature and oxygen partial pressure. When \(\nu_{\mathrm{O}}\)'s entropy contributions are ignored, \(Q(\mathrm{Cr_{2}O_{3}})\) and \(Q(\mathrm{Cr_{3}O_{4}})\) barely depend on the temperature: \(Q(\mathrm{Cr_{2}O_{3}})\) and \(Q(\mathrm{Cr_{3}O_{4}})\) do not change more than 60 meV from 600 to 1000 K, and \(Q(\mathrm{Cr_{2}O_{3}})-Q(\mathrm{Cr_{3}O_{4}})\) does not change more than 1 meV from 600 to 1000 K. Therefore, the dependence of \(Q(\mathrm{Cr_{2}O_{3}})\) and \(Q(\mathrm{Cr_{3}O_{4}})\) on the conditions is almost equivalent to that of \(\nu_{\mathrm{O}}\). To understand the conditions under which different oxides might be generated experimentally, different baths for exchanging oxygen were systematically considered. When the bath is a defected crystal, \(\nu_{\mathrm{O}}\) was calculated for vacancy densities in the range of 10\({}^{-8}\)-10\({}^{-1}\) per site and temperatures in the range of 600-1000 K. When the bath is the oxygen gas, \(\nu_{\mathrm{O}}\) was calculated for oxygen partial pressures in the range of 10\({}^{-6}\)-10\({}^{0}\) atm and temperatures in the range of 600-1000 K. Figure 4 shows a map of the \(\nu_{\rm O}\) calculated for different bath locations. The vertical width of each area indicates the variation width corresponding to vacancy densities in the range of 10\({}^{-8}\)-10\({}^{-1}\) per site or oxygen partial pressures in the range of 10\({}^{-6}\)-10\({}^{0}\) atm. Figure 4 is divided into the three regions I-III according to the corresponding \(Q\)(Cr\({}_{2}\)O\({}_{3}\)) and \(Q\)(Cr\({}_{3}\)O\({}_{4}\)) values. The region I is \(Q\)(Cr\({}_{2}\)O\({}_{3}\), Cr\({}_{3}\)O\({}_{4}\)) \(<\) 0: the impurity phase segregation does not proceed spontaneously. The region II is \(Q\)(Cr\({}_{2}\)O\({}_{3}\)) \(>\) 0 and \(Q\)(Cr\({}_{2}\)O\({}_{3}\)) \(>\) 0 (Cr\({}_{3}\)O\({}_{4}\)): Cr\({}_{2}\)O\({}_{3}\) + Cu\({}_{8}\)Pd\({}_{1-x}\) is spontaneously predominantly formed. The region III is \(Q\)(Cr\({}_{3}\)O\({}_{4}\)) \(>\) 0 (Cr\({}_{2}\)O\({}_{3}\)) \(>\) 0: Cr\({}_{3}\)O\({}_{4}\) + Cu\({}_{8}\)Pd\({}_{1-x}\) is spontaneously predominantly formed. Therefore, when the particle bath is oxygen gas, CuCrO\({}_{2}\), or PdCrO\({}_{2}\), the majority of chromium oxide is Cr\({}_{2}\)O\({}_{3}\). When the particle bath is Al\({}_{2}\)O\({}_{3}\), the majority of chromium oxide is Cr\({}_{3}\)O\({}_{4}\). Furthermore, other bath locations than the above listed are realistically possible. 
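The sketch below implements the temperature- and pressure-dependent terms used above: the configurational free-energy change of Eq. (18) and the rotational/translational free energies of an O\({}_{2}\) molecule from Eqs. (20)-(22). The vibrational term \(F_{\rm vib}\) of Eqs. (17) and (19) is omitted here for brevity, and the O\({}_{2}\) moment of inertia is a standard literature value supplied by us rather than taken from the text.

```python
import numpy as np

# Constants: SI units inside the dimensionless logarithms, eV for the kT prefactor.
kB_eV = 8.617333262e-5            # eV / K
kB_J  = 1.380649e-23              # J / K
h_J   = 6.62607015e-34            # J * s
m_O2  = 31.998 * 1.66053907e-27   # kg, O2 molecular mass
I_O2  = 1.95e-46                  # kg m^2, O2 moment of inertia (literature value)

def dF_config(T: float, c_v: float) -> float:
    """Eq. (18): free-energy change of removing one vacancy at vacancy density c_v."""
    return kB_eV * T * (-np.log(c_v) + np.log(1.0 - c_v))

def F_rot(T: float) -> float:
    """Eq. (20): rotational free energy of an O2 molecule (symmetry number 2)."""
    return -kB_eV * T * (1.0 + np.log(8.0 * np.pi**2 * I_O2 * kB_J * T / (2.0 * h_J**2)))

def F_trans(T: float, P_atm: float) -> float:
    """Eqs. (21)-(22): translational free energy of an O2 molecule at pressure P."""
    P = P_atm * 101325.0                                  # Pa
    Lam = h_J / np.sqrt(2.0 * np.pi * m_O2 * kB_J * T)    # thermal wavelength, m
    return -kB_eV * T * np.log(kB_J * T / (P * Lam**3))

T = 900.0  # K, within the 600-1000 K range considered above
print(dF_config(T, c_v=1e-4), F_rot(T), F_trans(T, P_atm=1e-4))
```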
For example, an oxygen-terminated PdCrO\({}_{2}\) surface could lead to very high \(\nu_{\rm O}\). By contrast, a Pd-terminated surface could lead to very low \(\nu_{\rm O}\). Investigation of such further complicated mechanisms is a possible future work for theory and experiments. These calculations revealed that \(\nu_{\rm O}\) in O\({}_{2}\) gas decreases with decreasing oxygen partial pressure, and \(\nu_{\rm O}\) in defected crystals decreases with increasing \(c_{\rm v}\) (details in supporting information). This result is not surprising because it indicates that gaseous oxygen molecules are more stable under oxygen-poor conditions. In reality, \(c_{\rm v}\) would negatively correlates with oxygen partial pressure, so \(\nu_{\rm O}\) positively correlates with oxygen partial pressure in every bath location: lower oxygen partial pressure facilitates the segregation of impurity phases. This analysis agrees with the experimental finding described in Section III.1: Cr\({}_{2}\)O\({}_{3}\) formation negatively correlates with oxygen partial pressure. However, this analysis does not explain the independence of Cr\({}_{3}\)O\({}_{4}\) formation on oxygen partial pressure. Rather, Cr\({}_{3}\)O\({}_{4}\) formation strongly depends on temperature, unlike Cr\({}_{2}\)O\({}_{3}\). Some hypotheses are considered to explain the Cr\({}_{3}\)O\({}_{4}\) experimental results. (i) Most of the bath locations belong to region II in Figure 4, and the temperature determines how much the metastable Cr\({}_{3}\)O\({}_{4}\) is segregated. (ii) Some of the bath locations belong to region III, but high barrier energy is required to transfer oxygen atoms, moving oxygen vacancies, so the temperature determines the oxygen exchange rate. For example, the barrier energy required to transfer oxygen atoms from Al\({}_{2}\)O\({}_{3}\) to the surface would be higher than from PdCrO\({}_{2}\) to the surface. To verify the hypotheses, saddle state analyses by methods such as the nudged elastic band method, molecular dynamics, and/or modeling the sample's surface should be applied in future works. ## IV Conclusion The mechanism of impurity phase segregation with the epitaxial growth of a PdCrO\({}_{2}\) layer on a CuCrO\({}_{2}\) buffer layer on an Al\({}_{2}\)O\({}_{3}\) substrate was investigated via a combination of experiments and ab initio calculations. XRD experiments revealed the formation of Cu\({}_{x}\)Pd\({}_{1-x}\) alloy and chromium oxide (Cr\({}_{2}\)O\({}_{3}\) and Cr\({}_{3}\)O\({}_{4}\)) impurity phases. Consequently, the impurity phase segregation should be involved with appearance or disappearance of point defects or oxygen migration because the possible segregation processes are not stoichiometric. In this scenario, several possible mechanisms of impurity phase segregation were considered with oxygen vacancy disappearance or chromium vacancy appearance into different particle baths: Al\({}_{2}\)O\({}_{3}\), CuCrO\({}_{2}\), PdCrO\({}_{2}\), and the dilute oxygen gas. Calculations established that the oxygen vacancy consumption processes are energetically favorable and supported ex Figure 4: Plots of the energy of the oxygen sink (\(\nu_{\rm O}\)) using eq (12) for different possible locations in the experimental range of temperatures and estimated concentration. The upper and lower edges of V\({}_{\rm O}^{\rm ren}\) in O\({}_{2}\) gas are \(P_{\rm O_{2}}\) = 1 and 10\({}^{-6}\) atm. The upper and lower edges of the other areas are \(c_{\rm v}\) = 10\({}^{-1}\) and 10\({}^{-8}\) per site. 
The dark green area between the blue and green areas is the overlap of the blue and green areas. \begin{table} \begin{tabular}{|c||c|c|c|} \hline Bath location & \(Q\) (Cr\({}_{2}\)O\({}_{3}\), V\({}_{\rm O}^{\rm ren}\)) & \(Q\) (Cr\({}_{2}\)O\({}_{3}\), V\({}_{\rm Cr}^{\rm min}\)) & \(Q\) (Cr\({}_{3}\)O\({}_{4}\), V\({}_{\rm O}^{\rm ren}\)) & \(Q\) (Cr\({}_{3}\)O\({}_{4}\), V\({}_{\rm Cr}^{\rm min}\)) \\ \hline PdCrO\({}_{2}\) & **+2.703** & \(-\)0.990 & **+2.466** & \(-\)3.192 \\ CuCrO\({}_{2}\) & **+3.308** & \(-\)0.989 & **+3.273** & \(-\)3.189 \\ Al\({}_{2}\)O\({}_{3}\) & **+5.666** & – & **+6.417** & – \\ O\({}_{2}\) in vacuum (\(T=0\)) & \(-\)1.619 & – & \(-\)3.297 & – \\ \hline \end{tabular} \end{table} Table 1: Exothermic energies, \(Q\) in eqs (13)–(16), for the formation of Cr\({}_{2}\)O\({}_{3}\) or Cr\({}_{3}\)O\({}_{4}\) and CuPd accompanied by V\({}_{\rm O}^{\rm ren}\)’s removal from or V\({}_{\rm Cr}^{\rm min}\)’s introduction to different bath locations. perimental evidence that Cr\({}_{2}\)O\({}_{3}\) or Cr\({}_{3}\)O\({}_{4}\) are the predominant chromium oxide impurity phases. Specifically, preventing the release of oxygen atoms from delafossite materials could suppress the impurity phase segregation. ## Appendix ### Derivation of eqs (13) and (14) For ease of explanation, let the particle bath be PdCrO\({}_{2}\). Consider the following thermochemical equations for the segregation of Cr\({}_{2}\)O\({}_{3}\) by removing preexisting O vacancies or creating Cr vacancies. \[E(\text{CuCrO}_{2})_{\text{bulk}}^{(n)}+E(\text{PdCrO}_{2})_{ \text{Vo}}^{(n)}\] \[=E(\text{CuCrO}_{2})_{\text{bulk}}^{(n-1)}+E(\text{PdCrO}_{2})_{ \text{bulk}}^{(n-1)}\] \[+E(\text{CuPd})+E(\text{Cr}_{2}\text{O}_{3})+Q\left(\text{Cr}_{2 }\text{O}_{3},\text{V}_{\text{O}}^{\text{rem}}\right) \tag{23}\] and \[E(\text{CuCrO}_{2})_{\text{bulk}}^{(n)}+E(\text{PdCrO}_{2})_{ \text{bulk}}^{(n)}\] \[=E(\text{CuCrO}_{2})_{\text{bulk}}^{(n-1)}+E(\text{PdCrO}_{2})_{ \text{(2/3V}_{\text{O}}}^{(n-1)}\] \[+E(\text{CuPd})+(4/3)E(\text{Cr}_{2}\text{O}_{3})+Q\left(\text{ Cr}_{2}\text{O}_{3},\text{V}_{\text{Cr}}^{\text{int}}\right). \tag{24}\] Here, for example, \(E(\text{CuCrO}_{2})_{\text{bulk}}^{(n)}\) is the energy of \(n\) f.u. bulk CuCrO\({}_{2}\), \(E(\text{PdCrO}_{2})_{\text{Vo}}^{(n)}\) is the energy of \(n\) f.u. PdCrO\({}_{2}\) with an oxygen vacancy, \(E(\text{PdCrO}_{2})_{\text{(2/3V}_{\text{O}}}^{(n-1)}\) is the energy of \((n-1)\) f.u. PdCrO\({}_{2}\) with a fraction of \(2/3\) chromium vacancies, and \(E(\text{CuPd})\) is the energy of 1 f.u. bulk CuPd. For the bulk, the following relationships hold according to the definitions. \[E(\text{CuCrO}_{2})_{\text{bulk}}^{(n)} =n\,E(\text{CuCrO}_{2}), \tag{25}\] \[E(\text{PdCrO}_{2})_{\text{bulk}}^{(n)} =n\,E(\text{PdCrO}_{2}). \tag{26}\] Define the energy gain from removing \(m\) oxygen or chromium vacancies in \(n\)PdCrO\({}_{2}\) as follows: \[\nu_{\text{O}}(n,m) \equiv E(\text{PdCrO}_{2})_{\text{bulk}}^{(n)}-E(\text{PdCrO}_{2} )_{m\text{Vo}}^{(n)}, \tag{27}\] \[\nu_{\text{Cr}}(n,m) \equiv E(\text{PdCrO}_{2})_{\text{bulk}}^{(n)}-E(\text{PdCrO}_{2} )_{m\text{Vo}}^{(n)}. \tag{28}\] In the thermodynamic limit (\(n\to\infty\)), the following relationships should hold: \[\nu_{\text{O,Cr}}(n,m) \simeq\nu_{\text{O,Cr}}(n-1,m), \tag{29}\] \[\nu_{\text{O,Cr}}(n,m) \simeq m\,\nu_{\text{O,Cr}}(n,1), \tag{30}\] Moreover, define \[\nu_{\text{O,Cr}}\equiv\nu_{\text{O,Cr}}(n,1) \tag{31}\] These are the \(\nu_{\alpha}\) defined in eq (12). 
Applying eqs (25)-(31) to eqs (23) and (24), yields eqs (13) and (14). ### Configurational entropy of removing a vacancy. When \(n\) vacancies exist in \(N\) sites, the configurational entropy is \[S(N,n)=k_{\text{B}}\ln\frac{N!}{(N-n)!n!}. \tag{32}\] The entropy change achieved by adding one vacancy is given by \[\Delta S(N,n) \equiv S(N,n+1)-S(N,n)\] \[=k_{\text{B}}\left[-\ln\left(c_{\text{v}}+1/N\right)+\ln\left(1-c _{\text{v}}\right)\right], \tag{33}\] \[c_{\text{v}} \equiv n/N. \tag{34}\] For the limit of \(N\to\infty\) with fixed \(c_{\text{v}}\), \[\Delta S(N,n)\to\Delta S(c_{\text{v}})=k_{\text{B}}\left[-\ln\left(c_{\text{v }}\right)+\ln\left(1-c_{\text{v}}\right)\right] \tag{35}\] Therefore, the free energy change achieved by removing one vacancy is given by \[\Delta F(T,c_{\text{v}}) =-T(-S(c_{\text{v}})) \tag{36}\] \[=k_{\text{B}}T\left[-\ln\left(c_{\text{v}}\right)+\ln\left(1-c_{ \text{v}}\right)\right]. \tag{37}\] ## V Acknowledgments We acknowledge E. Heinrich for valuable help with manuscript preparation. This work was supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division (theory and synthesis) and as part of the Computational Materials Sciences Program and Center for Predictive Simulation of Functional Materials (structural characterization). VESTA [46] was used to draw the crystal structures.
2304.03285
$\text{DC}^2$: Dual-Camera Defocus Control by Learning to Refocus
Smartphone cameras today are increasingly approaching the versatility and quality of professional cameras through a combination of hardware and software advancements. However, fixed aperture remains a key limitation, preventing users from controlling the depth of field (DoF) of captured images. At the same time, many smartphones now have multiple cameras with different fixed apertures -- specifically, an ultra-wide camera with wider field of view and deeper DoF and a higher resolution primary camera with shallower DoF. In this work, we propose $\text{DC}^2$, a system for defocus control for synthetically varying camera aperture, focus distance and arbitrary defocus effects by fusing information from such a dual-camera system. Our key insight is to leverage real-world smartphone camera dataset by using image refocus as a proxy task for learning to control defocus. Quantitative and qualitative evaluations on real-world data demonstrate our system's efficacy where we outperform state-of-the-art on defocus deblurring, bokeh rendering, and image refocus. Finally, we demonstrate creative post-capture defocus control enabled by our method, including tilt-shift and content-based defocus effects.
Hadi Alzayer, Abdullah Abuolaim, Leung Chun Chan, Yang Yang, Ying Chen Lou, Jia-Bin Huang, Abhishek Kar
2023-04-06T17:59:58Z
http://arxiv.org/abs/2304.03285v1
# DC\({}^{2}\): Dual-Camera Defocus Control by Learning to Refocus ###### Abstract Smartphone cameras today are increasingly approaching the versatility and quality of professional cameras through a combination of hardware and software advancements. However, fixed aperture remains a key limitation, preventing users from controlling the depth of field (DoF) of captured images. At the same time, many smartphones now have multiple cameras with different fixed apertures - specifically, an ultra-wide camera with wider field of view and deeper DoF and a higher resolution primary camera with shallower DoF. In this work, we propose DC\({}^{2}\), a system for **defocus control** for synthetically varying camera aperture, focus distance and arbitrary defocus effects by fusing information from such a dual-camera system. Our key insight is to leverage real-world smartphone camera dataset by using image refocus as a proxy task for learning to control defocus. Quantitative and qualitative evaluations on real-world data demonstrate our system's efficacy where we outperform state-of-the-art on defocus deblurring, bokeh rendering, and image refocus. Finally, we demonstrate creative post-capture defocus control enabled by our method, including tilt-shift and content-based defocus effects. ## 1 Introduction Smartphone cameras are the most common modality for capturing photographs today [13]. Recent advancements in computational photography such as burst photography [18], synthetic bokeh via portrait mode [48], super-resolution [55], and more have been highly effective at closing the gap between professional DSLR and smartphone photography. However, a key limitation for smartphone cameras today is depth-of-field (DoF) control, i.e., controlling parts of the scene that appear in (and out of) focus. This is primarily an artifact of their relatively simple optics and imaging systems (e.g., fixed aperture, smaller imaging sensors, etc.). To bridge the gap, modern smartphones tend to computationally process the images for further post-capture enhancements such as synthesizing shallow DoF (e.g., portrait mode [37, 48]). However, this strategy alone does not allow for DoF _extension_ or post-capture refocus. In this work, we propose Dual-Camera Defocus Control (DC\({}^{2}\)), a framework that can provide post-capture **defocus control** leveraging multi-camera systems prevalent in smartphones today. Figure 1 shows example outputs from our framework for various post-capture DoF variations. In particular, our method is controllable and enables image refocus, DoF extension, and reduction. Post-capture defocus control is a compound process that involves removing defocus blur (i.e., defocus deblurring) and then adding defocus blur selectively based on the scene depth. Defocus deblurring [23, 4, 5, 2, 31, 35, 39, 40, 42, 59, 60, 43], itself, is challenging due to the nature of the defocus point spread function (PSF) formation which can be spatially varying in size and shape [28, 46]. The PSF's shape and size are not only depth dependent, but also vary based on aperture size, focal length, focus distance, optical aberration, and radial distortion. Synthesizing and adding defocus blur [33, 37, 38, 57, 58, 9, 17] is also difficult and requires an accurate depth map along with an all-in-focus image. Additionally, it requires realistic blur formation and blending around the object's boundaries. Most prior work has addressed defocus deblurring and synthesizing defocus blur as two isolated tasks. 
There has been less work on post-capture defocus control (e.g., image refocusing [22, 34, 41]). The image refocusing literature [22, 34] has focused on light-field data captured with specialized hardware. While the results in [51, 52] are the state-of-the-art, light-field data is not representative of smartphone and DSLR cameras by lacking realistic defocus blur and spatial resolution [12]. Most modern smartphones are now equipped with two or more rear cameras to assist with computational imaging. The primary camera - often referred to as the wide camera or \(\mathbf{W}\) - has a higher resolution sensor, a higher focal length lens but a relatively shallower DoF. Alongside \(\mathbf{W}\) is the ultra-wide (\(\mathbf{UW}\)) camera, often with a lower resolution sensor, lower focal length (wider field of view) and wider DoF. Our critical insight is to leverage this unique camera setup and cross-camera DoF variations to design a system for realistic post-capture defocus control. Differently from prior work, we tackle the problem of defocus control (deblurring _and_ adding blur) and propose using real-world data easily captured using a smartphone device to train our learning-based system. Our primary contributions in this work are as follows: * We propose a learning-based system for **defocus control** on dual-camera smartphones. This subsumes the tasks of defocus deblurring, depth-based blur rendering, image refocusing and enables arbitrary post-capture defocus control. * In the absence of defocus control ground-truth, we enable training our system on real-world data captured from a smartphone device. To achieve that, we reformulate the problem of defocus control as learning to refocus and define a novel training strategy to serve the purpose. * We collect a dataset of diverse scenes with focus stack data at controlled lens positions the \(\mathbf{W}\) camera and accompanying \(\mathbf{UW}\) camera images for training our system. Additionally, we compute all-in-focus images using the focus stacks to quantitatively evaluate image refocus, defocus deblurring and depth-based blurring tasks and demonstrate superior performance compared to state-of-the-art (SoTA) methods across all three tasks. * Finally, we demonstrate creative defocus control effects enabled by our system, including tilt-shift and content-based defocus. ## 2 Related Work Defocus DeblurringDefocus blur leads to a loss of detail in the captured image. To recover lost details, a line of work follows a two-stage approach: (1) estimate an explicit defocus map, (2) use a non-blind deconvolution guided by the defocus map [23, 42]. With the current advances in learning-based techniques, recent work perform single image deblurring directly by training a neural network end-to-end to restore the deblurred image [2, 31, 39, 40, 43]. Due to the difficulty of the defocus deblurring task, other works try to utilize additional signals, such as the dual pixel (DP) data to improve deblurring performance [4, 5, 59, 60, 5]. DP data is useful for deblurring as it provides the model with defocus disparity that can be used to inform deblurring. While the DP data provides valuable cues for the amount of defocus blur at each pixel, the DP views are extracted from a single camera. Therefore, the performance of the DP deblurring methods drops noticeably and suffer from unappealing visual artifacts for severely blurred regions. 
In the same vein, we aim to exploit the \(\mathbf{UW}\) image as a complementary signal already available in modern smartphones yet ignored for DoF control. By using the \(\mathbf{UW}\) image with different DoF arrangements, we can deblur regions with severe defocus blur that existing methods cannot handle because of the fundamental information loss. Nevertheless, we are aware that using another camera adds other challenges like image misalignment, occlusion, and color mismatches which we address in Section 4.3. **Bokeh Rendering** Photographers can rely on shallow DoF to highlight an object of interest and add an artistic effect to the photo. The blur kernel is spatially variant based on depth as well as the camera and optics. To avoid the need of estimating depth, one work magnifies the existing defocus in the image to make the blur more apparent without explicit depth estimate [7]. Since recent work in depth estimation improved significantly [30, 44], many shallow DoF rendering methods assume having depth [37] or estimate depth in the process [57, 48]. Using an input or estimated depth map, a shallow DoF can be synthesized using classical rendering methods [38, 17, 9, 48], using a neural network to add the synthetic blur [21, 33, 50] or a combination of classical and neural rendering [37]. With that said, shallow DoF synthesis methods typically assume an all-in-focus image or an input with a deep DoF. Our proposed framework learns to blur as a byproduct of learning to refocus with the insight that the refocus task involves both deblurring and selective blurring. Unlike prior work that addressed either defocus deblurring or image bokeh rendering, we introduce a generic framework that facilitates post-capture full defocus control (e.g., image refocusing). Image Refocus and DoF ControlAt capture time, the camera focus can be adjusted automatically (i.e., autofocus [3, 6, 19]) or manually by moving the lens or adjusting the aperture. When the image is captured, it can still be post-processed to manipulate the focus. Nevertheless, post-capture image refocus is challenging as it requires both deblurring and blurring. Prior work uses specialized hardware to record a light field which allows post-capture focus control [34, 53]. However, light field cameras have low spatial resolution and are not representative of smartphone cameras. An alternative to requiring custom hardware is to capture a focus stack, and then merge the frames required to simulate the desired focus distance and DoF [10, 22, 29, 36], but the long capture time restricts using focus stacks to static scenes. Research on single-image refocus is limited due to its difficulty, but the typical approach is to deblur to obtain an all-in-focus image followed by blurring. Previous work used classical deblurring and blurring [8] to obtain single image refocus, and the most notable recent single-image-based image refocus is RefocusGAN [41], which trains a two-stages GAN to perform refocusing. The limited research on software-based image refocus is likely due to the challenging task that involves both defocus deblurring and selective blurring. In our work, we provide a practical setup for post-capture image refocus without the restrictions of inaccessible hardware or the constraint of capturing a focus stack. We do so by leveraging the dual camera that is available in modern smartphones. 
Image FusionCombining information from images with complementary information captured using different cameras [36, 47] or the same camera with different capture settings [15, 18] can enhance images in terms of sharpness [22, 36, 47], illuminant estimation [1], exposure [11, 15, 36, 18], or other aspects [49, 16, 47, 16]. With the recent prevalence of dual-camera smartphones today, researchers have pursued works that target this setup. One line of work has used dual-camera for super-resolution to take advantage of the different resolutions the cameras have in still photos [51, 56, 64] as well as in videos [26]. The dual-camera setup has also been used in multiple commercial smartphones, e.g., Google Pixel devices to deblur faces by capturing an ultra-wide image with faster shutter time and fusing with the wide photo [25]. To our knowledge, we are the first to investigate using the dual-camera setup for defocus control. ## 3 Learning to Refocus as a Proxy Task As mentioned, smartphone cameras tend to have fixed apertures limiting DoF control at capture time. In our work, we aim to unlock the ability to synthetically control the aperture - by transferring sharper details where present and synthesizing realistic blur. However, to train such a model, we run into a chicken and egg problem: we require a dataset of images captured with different apertures, which isn't possible with smartphones. An alternative solution could be to generate such a dataset synthetically, but modeling a realistic point spread function (PSF) for the blur kernel is non-trivial [5]. Professional DSLRs provide yet another alternative [20] but often require paired captures smartphone / DLSR captures to reduce the domain gap. Ideally, we would like to use the same camera system for both training and Figure 2: **Image refocus as a proxy task.** Since we cannot gather a _real_ dataset for arbitrary focus manipulation, our idea is to train a model to perform _image refocus_ using a target defocus map as an input. At the test time, our trained model can perform arbitrary focus manipulation by feeding it an arbitrary target defocus map. evaluation. To resolve this, we observe that a somewhat parallel task is image refocus. When we change the focus distance, the defocus radius is adjusted in different parts of the image, involving a combination of pixels getting deblurred and blurred. This suggests that image refocus is at least as hard as scaling the DoF. Motivated by this observation, we make the hypothesis that by training a model on image refocus as a _proxy task_, we can use the same model to control the DoF at test time as we show in Figure 2. The key idea is to provide the model with reference and target defocus maps ( Section 4.1) as input, and at test time control the model behavior by manipulating this target defocus map. ## 4 Method To train a model on our proxy task, we need to collect a dataset of focus stacks for the wide camera and a paired ultra-wide frame which can be used as a guide due to its deeper DoF. In Figure 3 we show the high-level structure of our method dubbed DC\({}^{2}\). The primary module that we train is the Detail Fusion Network (DFNet), which requires a reference wide frame, (aligned) reference ultra-wide frame, and estimated defocus maps. In Section 4.1, we describe how we collect the focus stack data and process it to obtain the inputs needed for DFNet. We then describe the architecture details of DFNet in Section 4.2, which is motivated by the dual-camera input setup. 
### Data Processing Using the Google Pixel 6 Pro as our camera platform, we captured a dataset of 100 focus stacks of diverse scenes, including indoor and outdoor scenarios. For each scene, we sweep the focus plane for the wide camera and capture a complete focus stack. We simultaneously capture a frame from the ultra-wide camera, which has a smaller aperture, deeper DoF, and fixed focus. For each frame, we use optical-flow-based warping with PWCNet [45], following prior work [25], to align the ultra-wide frame with the wide frame. Since the alignment is imperfect (e.g., in textureless regions and occluded boundaries), we estimate an occlusion mask that can be used to express potentially misaligned regions for the model. To estimate defocus maps, we require the metric depth. We use the depth map embedded in the Pixel camera's portrait mode output, which estimates metric depth using dual-camera stereo algorithms [63] with a known camera baseline. To compute the defocus map associated with each frame, we use the following formula for the radius of the circle of confusion \(c\) \[c=A\frac{|S_{2}-S_{1}|}{S_{2}}\frac{f}{S_{1}-f} \tag{1}\] where \(A\) is the camera aperture, \(S_{1}\) is the focus distance, \(S_{2}\) is the pixel depth, and \(f\) is the focal length. In Figure 1(a), we show a visualization of a focus stack, associated \(\mathbf{U}\mathbf{W}\), stereo depth, and a collection of sample scenes. Figure 3: **Data processing and high-level architecture.** (_Left_) To be able to use the reference inputs for our Detail Fusion Network, we need to align the inputs and a depth estimate to approximate the defocus map of the reference \(\mathbf{W}\) and the target defocus map we would like to synthesize. We use flow-based alignment with PWCNet [45] and use the stereo depth estimated using portrait mode [48]. (_Right_) Our Detail Fusion Network (DFNet) consists of refinement modules to refine the reference inputs combined with a fusion module that predicts blending masks to combine the two refined inputs. ### Model Architecture Our method performs detail fusion on two primary inputs: the reference wide (\(\mathbf{W}\)) and ultra-wide (\(\mathbf{U}\mathbf{W}\)) images. Since we train the model to refocus, \(\mathbf{W}\) is expected to be treated as a base image, while \(\mathbf{U}\mathbf{W}\) is a guide for missing high-frequency details. Based on this intuition, we propose the **Detail Fusion Network (DFNet)**, which has two refinement paths, a \(\mathbf{W}\) refinement path (\(\Phi^{W}_{ref}\)) and a \(\mathbf{U}\mathbf{W}\) refinement path (\(\Phi^{UW}_{ref}\)), together with a fusion module (\(\Phi_{fusion}\)) that predicts blending masks for the refined \(\mathbf{W}\) and the refined \(\mathbf{U}\mathbf{W}\). Note that the \(\mathbf{W}\) refinement path never gets to see the \(\mathbf{U}\mathbf{W}\) frame and vice versa. We use a network architecture based on the Dynamic Residual Blocks Network (DRBNet) [39] for our refinement modules with multi-scale refinements. For the fusion module, we use a sequence of atrous convolutions [14] for an increased receptive field and predict a blending mask for each scale. To preserve high-frequency details in the blending mask, we add an upsampling layer and residual connections when predicting the blending mask of the larger scale. During training, we blend the outputs of \(\Phi_{ref}^{W}\) and \(\Phi_{ref}^{UW}\) and compute the loss for all scales for improved performance. 
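A minimal single-scale PyTorch sketch of this structure is given below, with small convolutional stacks standing in for the DRBNet-based refinement paths and the atrous-convolution fusion module. It only illustrates the mask-based blending of the two refined inputs and the separation of the two paths; the actual multi-scale design and the exact set of inputs fed to each path are assumptions here.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyDFNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # W path sees only W plus the two defocus maps; UW path sees only the
        # aligned UW frame plus the occlusion mask (never the other frame).
        self.refine_w = nn.Sequential(conv_block(3 + 2, ch), nn.Conv2d(ch, 3, 3, padding=1))
        self.refine_uw = nn.Sequential(conv_block(3 + 1, ch), nn.Conv2d(ch, 3, 3, padding=1))
        self.fusion = nn.Sequential(conv_block(6 + 3, ch), nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, w, uw, ref_defocus, tgt_defocus, occlusion):
        # w, uw: (B,3,H,W); ref_defocus, tgt_defocus, occlusion: (B,1,H,W)
        rw = self.refine_w(torch.cat([w, ref_defocus, tgt_defocus], dim=1))
        ruw = self.refine_uw(torch.cat([uw, occlusion], dim=1))
        mask = torch.sigmoid(self.fusion(
            torch.cat([rw, ruw, ref_defocus, tgt_defocus, occlusion], dim=1)))
        # Blend the refined W and refined UW with the predicted mask.
        return mask * rw + (1.0 - mask) * ruw
```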
In Figure 3 we show a high-level diagram of our architecture and how each component interacts with the others. By visualizing the intermediate outputs between our different modules, we observe that the network indeed attempts to maintain the low-frequency signal from \(\mathbf{W}\) while utilizing high-frequency signals from \(\mathbf{U}\mathbf{W}\). Please refer to the supplementary material for a detailed model architecture and a deeper analysis of model behavior and visualizations. ### Training Details We train our model by randomly sampling slices from the focus stack in our training scenes. For each element in the batch, we randomly sample a training scene, and sample two frames to use as reference and target images, respectively. While we can approximate depth from all pairs, severely blurry frames can have unreliable depth. To address this, we use the stereo pair with the greatest number of matched features for the scene depth used to compute the defocus maps. We train on randomly cropped 256\(\times\)256 patches, using a batch size of 8, and a learning rate of \(10^{-4}\) for 200k iterations, and then reduce the learning rate to \(10^{-5}\) for another 200k iterations, using Adam [24]. Our reconstruction loss is a combination of \(L_{1}\) loss on pixels and gradient magnitudes, SSIM loss [54], and perceptual loss [61]. For a target wide frame \(\mathbf{W}_{tgt}\) and a model output \(y\), the loss is \[\begin{split} L_{total}=& L_{1}(\mathbf{W}_{tgt},y)+L_{1}(\nabla\mathbf{W}_{tgt},\nabla y)\\ &+L_{SSIM}(\mathbf{W}_{tgt},y)+L_{VGG}(\mathbf{W}_{tgt},y)\end{split} \tag{2}\] ## 5 Experimental Results We train our method to perform defocus control through training on the _proxy task_ of image refocus. As a result, our model can perform a variety of related defocus control tasks. Specifically, we evaluate our method on defocus deblurring, synthesizing shallow DoF, and image refocus. **Evaluation metrics.** We use the standard signal processing metrics, i.e., the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). We also report the learned perceptual image patch similarity (LPIPS) [62]. ### Defocus Deblurring **Task.** The goal of defocus deblurring is to remove the defocus blur in the image. For our method to perform defocus deblurring, we simply set the target defocus map to all zeros. To obtain an all-in-focus image as ground truth, we perform focus stacking on our focus stacks using the commercial software HeliconFocus. The evaluation task is then deblurring individual slices from the focus stack to generate an all-in-focus image. Due to the focus magnification between the focus stack slices, we align the field-of-view (FoV) with the all-in-focus image through a combination of FoV matching and a brute-force search for the best scaling and translation parameters that minimize the error. We use the same alignment method when evaluating all the methods to ensure fairness. **Methods.** We compare our method with the following single-image defocus deblurring methods: Dynamic Residual Blocks Network (DRBNet) [39], Multi-task DP (MDP) network [2], and Iterative Filter Adaptive Network (IFAN) [27]. Note that these methods do not take the ultra-wide image as input, and the main purpose of the comparison is to highlight the value of leveraging an available dual-camera setup. Our dataset does not contain DP data and thus we are not able to benchmark the DP defocus deblurring methods [4, 5, 35, 60, 59]. 
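As a concrete reading of the training objective in Eq. (2), a simplified PyTorch-style sketch is given below. The SSIM and perceptual terms are taken as user-supplied callables (e.g. from torchmetrics or an LPIPS/VGG implementation), since only their presence, not their exact form, is specified above; the gradient term uses simple finite differences.

```python
import torch
import torch.nn.functional as F

def grads(x):
    # Finite-difference image gradients for the gradient-magnitude L1 term.
    return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

def reconstruction_loss(pred, target, ssim_loss=None, vgg_loss=None):
    loss = F.l1_loss(pred, target)                           # L1 on pixels
    pdx, pdy = grads(pred)
    tdx, tdy = grads(target)
    loss = loss + F.l1_loss(pdx, tdx) + F.l1_loss(pdy, tdy)  # L1 on gradients
    if ssim_loss is not None:                                # e.g. 1 - SSIM
        loss = loss + ssim_loss(pred, target)
    if vgg_loss is not None:                                 # perceptual (VGG) term
        loss = loss + vgg_loss(pred, target)
    return loss
```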
As for the evaluation on other defocus deblurring datasets (e.g., [4]), our method requires dual-camera input, which is not available in current datasets. **Evaluation.** In Table 1, we compare the performance of our method against other defocus deblurring methods. Our method achieves the best results on all metrics with dual-camera inputs. Note that our method has never seen all-in-focus outputs or zero target defocus maps during training and learns to deblur via the proxy task. Figure 4 shows two deblurring results of our method against DRBNet [39]. As shown in the zoomed-in insets, our method is able to restore severely blurred regions better than DRBNet. In general, single-image defocus deblurring methods suffer from artifacts and tend to hallucinate when restoring severely blurred regions. Therefore, an additional signal such as the \(\mathbf{U}\mathbf{W}\) is very useful when the details are completely lost in the input image. \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **PSNR \(\uparrow\)** & **SSIM \(\uparrow\)** & **LPIPS \(\downarrow\)** \\ \hline MDP [2] & 23.50 & 0.674 & 0.394 \\ IFAN [27] & 23.48 & 0.679 & 0.371 \\ DRBNet [39] & 24.27 & 0.681 & 0.377 \\ Ours & **24.79** & **0.704** & **0.351** \\ \hline \hline \end{tabular} \end{table} Table 1: **Defocus deblurring evaluation. Performance on generating all-in-focus images from a single slice in the focus stack. The best results are in bold numbers.** While defocus deblurring is not the only task of our proposed method, it achieves the SoTA deblurring results quantitatively and qualitatively. These results also demonstrate how generic and flexible our proposed defocus control framework is. ### Shallow DoF Rendering **Task.** We also evaluate our method on rendering shallow DoF images. The input to the method is an all-in-focus image and an approximate target defocus map, and the desired output is the image with a synthetic shallow DoF guided by the defocus map. We use the all-in-focus image generated from the focus stack as input and try to reconstruct the various slices in the focus stack using each slice's defocus map as a target. **Methods.** We compare against BokehMe [37], a recent state of the art in shallow DoF synthesis that relies on blending the outputs of classic blur synthesis with neural rendering methods. We also evaluate the classical scattering-based blur and the neural renderer within BokehMe in isolation. **Evaluation.** In Table 2, we show that our method is competitive with SoTA shallow DoF rendering methods. Note that for DoF reduction, **UW** does not provide a useful signal since the task primarily involves signal removal from **W**, but the model learns to perform this task as a byproduct of training on image refocus. In Figure 5 we show visual results where our model synthesizes realistic blur. ### Image Refocus **Task.** Image refocus involves shifting the focus plane and, as a result, the near and far focus depths. To evaluate image refocus, we randomly sample two frames from a focus stack, a reference frame and a target frame, and evaluate the model performance in reproducing the target frame. **Methods.** There is limited work on single-image refocus, the most notable work being RefocusGAN [41]. The idea behind RefocusGAN is to use generative models to deblur the image followed by blurring it. This two-step approach is likely adopted because of the difficulty of realistically switching between different defocus amounts directly [7]. 
However, we are not able to compare with RefocusGAN as the code and trained models are not available. As an alternative for comparison, we adopt the SoTA in defocus deblurring (DRBNet [39]) and the SoTA in blurring (BokehMe [37]) for image refocus. We also compare against blurring the aligned **UW** directly, since it could approximate an all-in-focus image. **Evaluation.** In Table 3 we show that our method outperforms the baselines in image refocus. Note that since we train our method to switch from the reference defocus to the target defocus, the model can implicitly learn to switch between different PSF scales from the data. We show visual results in Figure 6. Note that when the target image contains blurry regions, as shown on the wall, our method deblurs the input just enough to match the target defocus. \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **PSNR \(\uparrow\)** & **SSIM \(\uparrow\)** & **LPIPS \(\downarrow\)** \\ \hline BokehMe [37] & 26.65 & 0.870 & 0.241 \\ Neural Rend. [37] & 27.87 & 0.874 & 0.246 \\ Classic Rend. [37] & 26.66 & 0.870 & 0.241 \\ Ours & **29.78** & **0.898** & **0.172** \\ \hline \hline \end{tabular} \end{table} Table 2: **Bokeh blurring evaluation.** Performance on simulating different slices of the focus stack from the all-in-focus image. Figure 4: **Defocus deblurring.** We showcase the results of our method against the SoTA single-image defocus deblurring method DRBNet [39]. Note that our method restores severely blurred regions in the background that single-image based methods often struggle with. \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **PSNR \(\uparrow\)** & **SSIM \(\uparrow\)** & **LPIPS \(\downarrow\)** \\ \hline UW + Blur [37] & 21.89 & 0.803 & 0.364 \\ Deblur [39]+Reblur [37] & 26.40 & 0.833 & 0.312 \\ Ours & **28.58** & **0.860** & **0.217** \\ \hline \hline \end{tabular} \end{table} Table 3: **Image refocus evaluation.** Performance on re-synthesizing focus planes given an input with a different focus plane from the same scene. ### Ablation Study The key idea of our work is using the ultra-wide camera as a guide for performing DoF control. To evaluate the effects of using **UW**, we train a model using only **W** (keeping only the Wide refinement module) and similarly train a **UW**-only model. We compare their performance on image refocus in Table 4. Note that while the wide input is sufficient when the target involves only blurring or minimal deblurring, it is an ill-posed setup when considerable deblurring is required. On the other hand, the lower quality of the warped **UW** severely limits the performance when relying on it completely. We visualize an example in Figure 7. Note that when using **W** only, deblurring performance is limited. We also note that when removing the occlusion mask, although signal-processing metrics see slight improvements, qualitative performance drops, as we observe ghosting artifacts around occluded boundaries. **Applications.** Our method allows for arbitrary target defocus maps as an input. In Figure 8 we demonstrate a _tilt-shift_ effect, where a large scene appears smaller because of the blur, as well as using a segmentation mask to deblur objects of interest (the person) while blurring the remaining objects. 
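These applications differ only in how the target defocus map is drawn. The sketch below gives two illustrative constructions; the linear tilt-shift profile and the constant background defocus value are arbitrary choices for illustration, not the exact maps used for Figure 8.

```python
import numpy as np

def tilt_shift_target(height, width, band_center=0.5, band_half_height=0.08, max_defocus=8.0):
    # Sharp horizontal band around band_center; defocus grows linearly away from it.
    rows = np.linspace(0.0, 1.0, height)[:, None]
    dist = np.clip(np.abs(rows - band_center) - band_half_height, 0.0, None)
    dist = dist / max(dist.max(), 1e-6)
    return np.broadcast_to(max_defocus * dist, (height, width)).copy()

def subject_focus_target(segmentation_mask, background_defocus=6.0):
    # Zero defocus (sharp) on the segmented subject, constant blur elsewhere.
    return np.where(segmentation_mask > 0, 0.0, background_defocus)
```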
\begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **PSNR**\(\uparrow\) & **SSIM**\(\uparrow\) & **LPIPS**\(\downarrow\) \\ \hline W only & 28.44 & 0.855 & 0.260 \\ UW only & 22.66 & 0.822 & 0.307 \\ No occlusion & **28.81** & **0.864** & 0.219 \\ Full input & 28.58 & 0.860 & **0.217** \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablations on Image Input**. Comparison of different input types. Although performance increases by removing the occlusion mask, qualitative performance drops (see Figure 7). Figure 5: **Blurring results.** Our method can synthesize shallow DoF from an all-in-focus image with a performance competitive with the SoTA in bokeh rendering [37]. Figure 6: **Refocus results.** We shift the focus plane and demonstrate that we can match the desired refocused image and the target blur without completely deblurring the input. Our method outperforms the baseline that refocuses by deblurring and reblurring the image. ## 6 Limitations and Conclusion We present DC\({}^{2}\), a novel framework for defocus control with dual-camera consumer smartphones. We bypass the need for synthetic data and domain-gap issues by training with real data captured with a smartphone device. We do so by re-framing the defocus control problem as refocus and designing a learning-based solution for it. The key idea behind our method is to use the **UW** input as an additional signal to control the defocus of **W**. Naturally, a limitation then is the DoF of **UW** itself, as objects outside of its DoF might not be sharper than in **W**. In general, our method benefits from asymmetry in the **W** and **UW** camera configurations and likely will not perform as well in systems with identical cameras. Another limitation is our dependence on pre-existing optical flow and stereo depth algorithms, which can suffer from severe artifacts with defocus blur (Figure 9). A promising avenue for future work is utilizing additional cameras to jointly model both scene depth and defocus control. **Acknowledgment** We would like to thank Junlan Yang, Xiaotong Wu, Lun-Cheng Chu, Mauricio Delbracio, Yichang Shih, and Seang Chau for their support and fruitful discussions.
2310.10400
Can Word Sense Distribution Detect Semantic Changes of Words?
Semantic Change Detection (SCD) of words is an important task for various NLP applications that must make time-sensitive predictions. Some words are used over time in novel ways to express new meanings, and these new meanings establish themselves as novel senses of existing words. On the other hand, Word Sense Disambiguation (WSD) methods associate ambiguous words with sense ids, depending on the context in which they occur. Given this relationship between WSD and SCD, we explore the possibility of predicting whether a target word has its meaning changed between two corpora collected at different time steps, by comparing the distributions of senses of that word in each corpora. For this purpose, we use pretrained static sense embeddings to automatically annotate each occurrence of the target word in a corpus with a sense id. Next, we compute the distribution of sense ids of a target word in a given corpus. Finally, we use different divergence or distance measures to quantify the semantic change of the target word across the two given corpora. Our experimental results on SemEval 2020 Task 1 dataset show that word sense distributions can be accurately used to predict semantic changes of words in English, German, Swedish and Latin.
Xiaohang Tang, Yi Zhou, Taichi Aida, Procheta Sen, Danushka Bollegala
2023-10-16T13:41:27Z
http://arxiv.org/abs/2310.10400v1
# Can Word Sense Distribution Detect Semantic Changes of Words? ###### Abstract Semantic Change Detection (SCD) of words is an important task for various NLP applications that must make time-sensitive predictions. Some words are used over time in novel ways to express new meanings, and these new meanings establish themselves as novel senses of existing words. On the other hand, Word Sense Disambiguation (WSD) methods associate ambiguous words with sense ids, depending on the context in which they occur. Given this relationship between WSD and SCD, we explore the possibility of predicting whether a target word has its meaning changed between two corpora collected at different time steps, by comparing the distributions of senses of that word in each corpora. For this purpose, we use pretrained static sense embeddings to automatically annotate each occurrence of the target word in a corpus with a sense id. Next, we compute the distribution of sense ids of a target word in a given corpus. Finally, we use different divergence or distance measures to quantify the semantic change of the target word across the two given corpora. Our experimental results on SemEval 2020 Task 1 dataset show that word sense distributions can be accurately used to predict semantic changes of words in English, German, Swedish and Latin. ## 1 Introduction SCD of words over time has provided important insights for diverse fields such as linguistics, lexicography, sociology, and information retrieval (IR) [14, 15, 16]. For example, in IR one must know the seasonal association of keywords used in user queries to provide relevant results pertaining to a particular time period. Moreover, it has been shown that the performance of publicly available pretrained LLMs declines over time when applied to emerging data [21, 1, 13] because they are trained using a static snapshot. Moreover, Su et al. (2022) showed that the temporal generalisation of LLMs is closely related to their ability to detect semantic variations of words. A word is often associated with multiple _senses_ as listed in dictionaries, corresponding to its different meanings. Polysemy (i.e. coexistence of several possible meanings for one word) has been shown to statistically correlate with the rate of semantic change in prior work [1, 18, 19]. For example, consider the word _cell_, which has the following three noun senses according to the WordNet1: (a) **cell%1:03:00** - _the basic structural and functional unit of all organisms_, (b) **cell%1:06:04** - _a handheld mobile radiotelephone for use in an area divided into small sections_, and (c) **cell%1:06:01** - _a room where a prisoner is kept_. Here, the WordNet sense ids for each word sense are shown in boldface font. Mobile phones were first produced in the late 1970s and came into wider circulation after 1990. Therefore, the sense (b) is considered as a more recent association compared to (a) and (c). Given two sets of documents, one sampled before 1970 and one after, we would expect to encounter (b) more frequently in the latter set. Likewise, articles on biology are likely to contain (a). As seen from this example **the sense distributions of a word in two corpora provide useful information about its possible meaning changes over time**. 
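The sense inventory behind ids such as **cell%1:03:00** can be inspected directly; assuming NLTK and its WordNet data are installed, the small snippet below lists the noun senses of _cell_ with their sense keys and glosses. This is only to illustrate the inventory, not part of the proposed method.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

# Print the WordNet sense key and definition of every noun sense of "cell".
for synset in wn.synsets('cell', pos=wn.NOUN):
    for lemma in synset.lemmas():
        if lemma.name() == 'cell':
            print(lemma.key(), '-', synset.definition())
```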
Footnote 1: [https://wordnet.princeton.edu/](https://wordnet.princeton.edu/) Given this relationship between the two tasks SCD and WSD, a natural question arises - _is the word sense distribution indicative of semantic changes of words?_ To answer this question, we design and evaluate an unsupervised SCD method that uses only the word sense distributions to predict whether the meaning associated with a target word \(w\) has changed from one text corpora \(\mathcal{C}_{1}\) to another \(\mathcal{C}_{2}\). For the ease of future references, we name this method Sense-based Semantic Change Score (SSCS). Given a target word \(w\), we first disambiguate each occurrence of \(w\) in each corpus. For this purpose, we measure the similarity between the contextualised word embedding of \(w\), obtained from a pre-trained Masked Language Model (MLM), from the contexts containing \(w\) with each of the pre-trained static sense embeddings corresponding to the different senses of \(w\). Next, we compute the sense distribution of \(w\) in each corpus separately. Finally, we use multiple distance/divergence measures to compare the two sense distributions of \(w\) to determine whether its meaning has changed from \(\mathcal{C}_{1}\) to \(\mathcal{C}_{2}\). To evaluate the possibility of using word senses for SCD, we compare the performance of SSCS against previously proposed SCD methods using the SemEval-2020 Task 1 (unsupervised lexical SCD) (Schlechtweg et al., 2020) benchmark dataset. This task has two subtasks: (1) a _binary classification_ task, where for a set of target words, we must decide which words lost or gained sense(s) from \(\mathcal{C}_{1}\) to \(\mathcal{C}_{2}\), and (2) a _ranking_ task, where we must rank target words according to their degree of lexical semantic change from \(\mathcal{C}_{1}\) to \(\mathcal{C}_{2}\). We apply SSCS on two pre-trained static sense embeddings, and six distance/divergence measures. Despite the computationally lightweight and unsupervised nature of SSCS, our experimental results show that it surprisingly outperforms most previously proposed SCD methods for English, demonstrating the effectiveness of word sense distributions for SCD. Moreover, evaluations on German, Latin and Swedish show that this effectiveness holds in other languages as well, although not to the same levels as in English. We hope our findings will motivate future methods for SCD to explicitly incorporate word sense related information. Source code implementation for reproducing our experimental results is publicly available.2 Footnote 2: [https://github.com/LiNLP/Sense-based-Semantic-Change-Prediction](https://github.com/LiNLP/Sense-based-Semantic-Change-Prediction) ## 2 Related Work Semantic Change Detection:SCD is modelled in the literature as the unsupervised task of detecting words whose meanings change between two given time-specific corpora (Kutuzov et al., 2018; Tahmasebi et al., 2021). In recent years, several shared tasks have been held (Schlechtweg et al., 2020; Basile et al., 2020; Kutuzov and Pivovarova, 2021), where participants are required to predict the degree or presence of semantic changes for a given target word between two given corpora, sampled from different time periods. Various methods have been proposed to map vector spaces from different time periods, such as initialisation (Kim et al., 2014), alignment (Kulkarni et al., 2015; Hamilton et al., 2016), and joint learning (Yao et al., 2018; Dubossarsky et al., 2019; Aida et al., 2021). 
Existing SCD methods can be broadly categorised into two groups: (a) methods that compare word/context clusters (Hu et al., 2019; Giulianelli et al., 2020; Montariol et al., 2021), and (b) methods that compare embeddings of the target words computed from different corpora sampled at different time periods (Martinc et al., 2020; Beck, 2020; Kutuzov and Giulianelli, 2020; Rosin et al., 2022). Rosin and Radinsky (2022) recently proposed a temporal attention mechanism, which achieves SoTA performance for SCD. However, their method requires additional training of the entire MLM with temporal attention, which is computationally expensive for large MLMs and corpora. The change of the _grammatical profile_(Kutuzov et al., 2021; Giulianelli et al., 2022) of a word, created using its universal dependencies obtained from UDPipe (Straka and Strakova, 2017), has shown to correlate with the semantic change of that word. However, the accuracy of the grammatical profile depends on the accuracy of the parser, which can be low for resource poor languages and noisy texts. Sabina Uban et al. (2022) used polysemy as a feature for detecting lexical semantic change discovery. The distribution of the contextualised embeddings of a word over its occurrences (aka. _sibling_ embeddings) in a corpus has shown to be an accurate representation of the meaning of that word in a corpus, which can be used to compute various semantic change detection scores (Kutuzov et al., 2022; Aida and Bollegala, 2023). XL-LEXEME (Cassotti et al., 2023) is a supervised SCD method where a bi-encoder model is trained using WiC (Pilehvar and Camacho-Collados, 2019) dataset to discriminate whether a target word appears in different senses in a pair of sentences. XL-LEXEME reports SoTA SCD results for English, German, Swedish and Russian. Sense Embeddings:Sense embedding learning methods represent different senses of an ambiguous word with different vectors. The concept of multi-prototype embeddings to represent word senses was introduced by Reisinger and Mooney (2010). This idea was further extended by Huang et al. (2012), who combined both local and global contexts in their approach. Clustering is used in both works to categorise contexts of a word that belong to the same meaning. Although the number of senses a word can take depends on that word, both approaches assign a predefined fixed number of senses to all words. To address this limitation, Neelakantan et al. (2014) introduced a non-parametric model, which is able to dynamically estimate the number of senses for each word. Although clustering-based approaches can allocate multi-prototype embeddings to a word, they still suffer from the fact that the embeddings generated this way are not linked to any sense inventories (Camacho-Collados and Pilehvar, 2018). On the other hand, knowledge-based methods obtain sense embeddings by extracting sense-specific information from external sense inventories, such as the WordNet (Fellbaum and Miller, 1998) or the BabelNet3(Navigli and Ponzetto, 2012). Chen et al. (2014) extended word2vec (Mikolov et al., 2013) to learn sense embeddings using WordNet synsets. Rothe and Schutze (2015) made use of the semantic relationships in WordNet to embed words into a shared vector space. Iacobacci et al. (2015) used the definitions of word senses in BabelNet and conducted WSD to extract contextual information that is unique to each sense. 
Footnote 3: [https://babelnet.org/](https://babelnet.org/) Recently, contextualised embeddings produced by MLMs have been used to create sense embeddings. To achieve this, Loureiro and Jorge (2019) created LMMS sense embeddings by averaging over the contextualised embeddings of the sense annotated tokens from SemCor (Miller et al., 1993). Scarlini et al. (2020) proposed SenseEmBERT (Sense Embedded BERT), which makes use of the lexical-semantic information in BabelNet to create sense embeddings without relying on sense-annotated data. ARES (context-AwaRe Embedding) (Scarlini et al., 2020) is a knowledge-based method for generating BERT-based embeddings of senses by means of the lexical-semantic information available in BabelNet and Wikipedia. ARES and LMMS embeddings are the current SoTA sense embeddings. ## 3 Sense-based Semantic Change Score SSCS consists of two steps. First, in SS3.1, we compute the distribution of word senses associated with a target word in a corpus. Second, in SS3.2, we use different distance (or divergence) measures to compare the sense distributions computed for the same target word from different corpora to determine whether its meaning has changed between the two corpora. ### Computing Sense Distributions We represent the meaning expressed by a target word \(w\) in a corpus \(\mathcal{C}\) by the distribution of \(w\)'s word senses, \(p(z_{w}|w,\mathcal{C})\). As explained in SS1, our working hypothesis is that if the meaning of \(w\) has changed from \(\mathcal{C}_{1}\) to \(\mathcal{C}_{2}\), then the corresponding sense distributions of \(w\), \(p(z_{w}|w,\mathcal{C}_{1})\) will be different from \(p(z_{w}|w,\mathcal{C}_{2})\). Therefore, we first estimate \(p(z_{w}|w,\mathcal{C})\) according to the probabilistic model illustrated in the plate diagram in Figure 1. Figure 1: Plate diagram showing the dependencies among the sentences \(s\) in the corpus \(\mathcal{C}\), the target word \(w\), and its sense \(z\) in \(s\). We consider the corpus \(\mathcal{C}\) to be a collection of \(|\mathcal{C}|\) sentences from which a sentence \(s\) is randomly sampled according to \(p(s|\mathcal{C})\). Next, for each word \(w\) in vocabulary \(\mathcal{V}\) that appears in \(s\), we randomly sample a sense \(z_{w}\) from its set of sense ids \(\mathcal{Z}_{w}\). As shown in Figure 1, we assume the sense that a word takes in a sentence to be independent of the other sentences in the corpus, which enables us to factorise \(p(z_{w}|w,\mathcal{C})\) as in (1). \[p(z_{w}|w,\mathcal{C})=\sum_{s\in\mathcal{C}(w)}p(z_{w}|w,s)p(s|\mathcal{C}) \tag{1}\] Here, \(\mathcal{C}(w)\) is the subset of sentences in \(\mathcal{C}\) where \(w\) occurs. We assume \(p(s|\mathcal{C})\) to be uniform and set it to be \(1/|\mathcal{C}|\), where \(|\mathcal{C}|\) is the number of sentences in \(\mathcal{C}\). Following prior works that use static sense embeddings for conducting WSD, the similarity between the pre-trained static sense embedding \(\mathbf{z}_{w}\) of the sense \(z_{w}\) of \(w\), and the contextualised word embedding \(\mathbf{f}(w,s)\) of \(w\) in \(s\) (obtained from a pre-trained MLM) can be used as the confidence score for predicting whether \(w\) takes the sense \(z_{w}\) in \(s\). Specifically, the LMMS and ARES sense embeddings we use in our experiments are computed using BERT (Devlin et al., 2019) as the back-end, producing aligned vector spaces where we can compute the confidence scores using the inner-product as given by (2). 
\[p(z_{w}|w,s)=\frac{\langle\mathbf{z}_{w},\mathbf{f}(w,s)\rangle}{\sum_{z^{ \prime}_{w}\in\mathcal{Z}_{w}}\langle\mathbf{z}^{\prime}_{w},\mathbf{f}(w,s)\rangle} \tag{2}\] In WSD, an ambiguous word is typically assumed to take only a single sense in a given context. Therefore, WSD methods assign the most probable sense \(z_{w}^{*}\) to \(w\) in \(s\), where \(z_{w}^{*}=\operatorname*{arg\,max}_{z_{w}\in\mathcal{Z}_{w}}p(z_{w}|w,s)\). However, not all meanings of a word might necessarily map to a single word sense due to the incompleteness of sense inventories. For example, a novel use of an existing word might be a combination of multiple existing senses of that word rather than a novel sense. Therefore, it is important to consider the sense distribution over word senses, \(p(z_{w}|w,s)\), instead of only the most probable sense. Later in SS5.1, we experimentally study the effect of using top-\(k\) senses of a word in a given sentence. ### Comparing Sense Distributions Following the procedure described in SS3.1, we independently compute the distributions \(p(z_{w}|w,\mathcal{C}_{1})\) and \(p(z_{w}|w,\mathcal{C}_{2})\) respectively from \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). Next, we compare those two distributions using different distance measures, \(d(p(z_{w}|w,\mathcal{C}_{1}),p(z_{w}|w,\mathcal{C}_{2}))\), to determine whether the meaning of \(w\) has changed between \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). For this purpose, we use five distance measures (i.e. Cosine, Chebyshev, Canberra, Bray-Curtis, Euclidean) and two divergence measures (i.e. Jensen-Shannon (JS), Kullback-Leibler (KL)) in our experiments. For computing distance measures, we consider each sense \(z_{w}\) as a dimension in a vector space where the corresponding value is set to \(p(z_{w}|w,\mathcal{C})\). The definitions of those measures are given in Appendix A. ## 4 Experiments Data and Evaluation Metrics:We use the SemEval-2020 Task 1 dataset Schlechtweg et al. (2020) to evaluate SCD of words over time for English, German, Swedish and Latin in two subtasks: binary classification and ranking. In the classification subtask, the words in the evaluation set must be classified as to whether they have semantically changed over time. Classification **Accuracy** (i.e. percentage of the correctly predicted words in the set of test target words) is used as the evaluation metric for this task. To predict whether a target word \(w\) has its meaning changed, we use Bayesian optimisation to find a threshold on the distance, \(d(p(z_{w}|w,\mathcal{C}_{1}),p(z_{w}|w,\mathcal{C}_{2}))\), between the sense distributions of \(w\) computed from \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). Specifically, we use the Adaptive Experimentation Platform4 to find the threshold that maximises the classification accuracy on a randomly selected held-out portion of words from the SemEval dataset, reserved for validation purposes. We found that using Bayesian optimisation is more efficient than conducting a linear search over the parameter space. We repeat this threshold estimation process five times and use the averaged parameter values in the remainder of the experiments. Footnote 4: [https://ax.dev/](https://ax.dev/) In the ranking subtask, the words in the evaluation set must be sorted according to the degree of semantic change. Spearman's rank correlation coefficient (\(\rho\in[-1,1]\)) between the human-rated gold scores and the induced ranking scores is used as the evaluation metric for this subtask. 
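The distance and divergence measures of SS3.2 (whose exact definitions are given in Appendix A) and the Spearman correlation used for the ranking subtask are all available in standard scientific Python packages; a brief sketch with toy values, purely for illustration (note that SciPy's `jensenshannon` returns the square root of the JS divergence):

```python
import numpy as np
from scipy.spatial import distance
from scipy.stats import spearmanr

p, q = np.array([0.7, 0.2, 0.1]), np.array([0.3, 0.5, 0.2])  # two sense distributions

scores = {
    'cosine': distance.cosine(p, q),
    'chebyshev': distance.chebyshev(p, q),
    'canberra': distance.canberra(p, q),
    'braycurtis': distance.braycurtis(p, q),
    'euclidean': distance.euclidean(p, q),
    'js': distance.jensenshannon(p, q) ** 2,   # Jensen-Shannon divergence
    'kl': float(np.sum(p * np.log(p / q))),    # KL(p || q)
}

# Ranking evaluation: correlation between gold and predicted change scores.
rho, _ = spearmanr([0.88, 0.21, 0.46], [0.75, 0.30, 0.40])
```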
Higher \(\rho\) values indicate better SCD methods. Statistics of the data used in our experiments are shown in Table 4 in Appendix B. The English dataset includes two corpora from different centuries extracted from CCOHA Alatrash et al. (2020). Let us denote the corpora collected from the early 1800s and late 1900s to early 2000s respectively by \(C_{1}\) and \(C_{2}\). For each language, its test set has 30-48 target words that are selected to indicate whether they have undergone a semantic change between the two time periods. These words are annotated by native speakers indicating whether their meanings have changed over time and if so the degree of the semantic change. Sense Embeddings:We use two pre-trained sense embeddings in our experiments: LMMS Loureiro and Jorge (2019) and ARES Scarlini et al. (2020). For English monolingual experiments we use the 2048-dimensional LMMS5 and ARES6 embeddings computed using bert-large-cased,7 which use WordNet sense ids. For the multilingual experiments, we use the 768-dimensional ARES embeddings computed using multilingual-bert,8 which uses BabelNet sense ids. As a baseline method that does not use sense embeddings, we use the English WSD implementation in NLTK9 to predict WordNet sense-ids for the target words, when computing the sense distributions, \(p(z_{w}|s)\) in (1). Footnote 5: [https://github.com/danlow/LMMS/tree/LMMS_ACL19](https://github.com/danlow/LMMS/tree/LMMS_ACL19) Footnote 6: [http://sensembert.org/](http://sensembert.org/) Footnote 7: [https://huggingface.co/bert-large-cased](https://huggingface.co/bert-large-cased) Footnote 8: [https://huggingface.co/bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) Footnote 9: [https://www.nltk.org/howto/wsd.html](https://www.nltk.org/howto/wsd.html) Hardware and Hyperparameters:We used a single NVIDIA RTX A6000 and 64 GB RAM in our experiments. It takes ca. 7 hours to compute semantic change scores for all target words in the 15.3M sentences in the SemEval datasets for all languages we consider. The only hyperparameter in SSCS is the threshold used in the binary classification task, which is tuned using Bayesian Optimisation. The obtained thresholds for the different distance measures are shown in Table 5 in Appendix B. ## 5 Results ### Effect of Top-\(k\) Senses The correct sense of a word might not necessarily be ranked as the top-1 because of two reasons: (a) the sense embeddings might not perfectly encode all sense related information, and (b) the contextualised word embeddings that we use to compute the inner-product with sense embeddings might encode information that is not relevant to the meaning of the target word in the given context. Therefore, there is some benefit of not strictly limiting the sense distribution only to the top-1 ranked sense, but to consider \(k(\geq 1)\) senses for a target word in a given context. However, when we increase \(k\), we will consider less likely senses of \(w\) in a given context \(s\), thereby introducing some noise in the computation of \(p(z_{w}|w,\mathcal{C})\). We study the effect of using more than one sense in a given context to compute the sense distribution of a target word. Specifically, we sort the senses in the descending order of \(p(z_{w}|w,s)\), and select the top-\(k\) ranked senses to represent \(w\) in \(s\). Setting \(k=1\) corresponds to considering the most probable sense, i.e. \(\arg_{z_{w}\in\mathcal{Z}_{w}}\max p(z_{w}|w,s)\). 
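Putting SS3 together, a compact NumPy sketch of the scoring pipeline is given below: per-occurrence sense scores following Eq. (2) (clipped at zero here so that they form a valid distribution), truncated to the top-\(k\) senses, averaged over the corpus as in Eq. (1), and compared with JS divergence. The sense and contextualised embeddings are placeholders; in the experiments they come from LMMS/ARES and a BERT MLM.

```python
import numpy as np

def sense_probs(context_emb, sense_embs, k=2):
    # Eq. (2): inner products with the static sense embeddings, renormalised;
    # one way to realise the top-k truncation is to zero out the other senses.
    scores = np.maximum(sense_embs @ context_emb, 0.0)
    keep = np.argsort(scores)[::-1][:k]
    probs = np.zeros_like(scores)
    probs[keep] = scores[keep]
    return probs / (probs.sum() + 1e-12)

def sense_distribution(context_embs, sense_embs, k=2):
    # Eq. (1) with uniform p(s|C): average the per-occurrence distributions.
    return np.mean([sense_probs(c, sense_embs, k) for c in context_embs], axis=0)

def js_divergence(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# score = js_divergence(sense_distribution(embs_c1, Z_w), sense_distribution(embs_c2, Z_w))
```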
In Figure 2, we plot the accuracy (for the binary classification subtask) and \(\rho\) (for the ranking subtask) obtained using LMMS sense embeddings and JS divergence measure against \(k\). For this experiment, we use a randomly held-out set of English words from the SemEval dataset. From Figure 2, we see that both accuracy and \(\rho\) increase initially with \(k\) up to a maximum at \(k=2\), and then start to drop. Following this result, we limit the sense distributions to the top 2 senses for the remainder of the experiments reported in this paper. ### English Monolingual Results Aida and Bollegala (2023) showed that the performance of an SCD method depends on the metric used for comparisons. Therefore, in Table 1 we show SCD results for English target words with different distance/divergence measures using LMMS, ARES sense embeddings against the NLTK WSD baseline. We see that LMMS sense embeddings coupled with JS divergence report the best performance for both the binary classification and ranking subtasks across all settings, whereas ARES sense embeddings with JS divergence obtain similar accuracy for the binary classification subtask. Comparing the WSD methods, we see that NLTK is not performing as well as the sense embedding-based methods (i.e. LMMS and ARES) in terms of Spearman's \(\rho\) for the ranking subtask. Although NLTK matches the performance of ARES on the classification subtask, it is still below that of LMMS. Moreover, the best performance for NLTK for the classification subtask is achieved with multiple metrics such as Cosine, Bray-Curtis, Euclidean and KL. Therefore, we conclude that the sense embedding Figure 2: Spearman’s \(\rho\) and Accuracy on SemEval English held-out data when using the top-\(k\) senses in the sense distribution with JS as the divergence. based sense distribution computation is superior to that of the NLTK baseline. Among the different distance/divergence measures used, we see that JS divergence measure performs better than the other measures. In particular, for both ARES and LMMS, JS outperforms other measures for both subtasks, and emerges as the overall best metric for SCD. On the other hand, Cosine distance, which has been used as a baseline in much prior work on semantic change detection (Rosin et al., 2022) performs poorly for the ranking subtask although it does reasonably well on the classification subtask. Rosin et al. (2022) predicted semantic changes by thresholding the cosine distance. They used peak detection methods (Palshikar, 2009) to determine this threshold, whereas we use Bayesian optimisation methods. ### English SCD Results We compare SSCS against the following prior SCD methods on the SemEval-2020 Task 1 English data. Due to space limitations, further details of those methods are given in Appendix C. **BERT + Time Tokens + Cosine** is the method proposed by Rosin et al. (2022) that fine-tunes pretrained BERT-base models using time tokens. **BERT + APD** was proposed by Kutuzov and Giulianelli (2020) that uses the averages pairwise cosine distance. Based on this insight, Aida and Bollegala (2023) evaluate the performance of **BERT + TimeTokens + Cosine** with the average pairwise cosine distance computed using pre-trained BERT-base as the MLM. **BERT+DSCD** is the sibling Distribution-based SCD (DSCD) method proposed by Aida and Bollegala (2023). A **Temporal Attention** mechanism was proposed by Rosin and Radinsky (2022) where they add a trainable temporal attention matrix to the pre-trained BERT models. 
Because their two proposed methods (fine-tuning with time tokens and temporal attention) are independent, Rosin and Radinsky (2022) proposed to use them simultaneously, which is denoted by **BERT + Time Tokens + Temporal Attention**. Yuksel et al. (2021) extended word2vec to create **Gaussian embeddings**(Vilnis and McCallum, 2015) for target words independently from each corpus Rother et al. (2020) proposed Clustering on Manifolds of Contextualised Embeddings (**CMCE**) where they use mBERT embeddings with dimensionality reduction to represent target words, and then apply clustering algorithms to find the different sense clusters. CMCE is the current SoTA for the binary classification task. Asgari et al. (2020) proposed **EmbedLexChange**, which uses fasttext (Bojanowski et al., 2017) to create word embeddings from each corpora separately, and measures the cosine similarity between a word and a fixed set of pivotal words to represent a word in a corpus using the distribution over those pivots. UwB(Prazak et al., 2020) learns separate word embeddings for a target word from each corpora and then use Canonical Correlation Analysis (CCA) to align the two vector spaces. UWB was ranked 1st for the binary classification subtask at the official SemEval 2020 Task 1 competition. In Table 2, we compare our SSCS with JS using LMMS sense embeddings (which reported the best performance according to Table 1) against prior work. For prior SCD methods, we report performance from the original publications, without re-running those methods. However, not all prior SCD methods evaluate on both binary classification and ranking subtasks as we do in this paper, which is indicated by N/A (not available) in Table 2. XL \begin{table} \begin{tabular}{l l l c} \hline \hline & Metric & Spearman’s \(\rho\) & Accuracy \\ \hline \multirow{8}{*}{**Datasets**} & Cosine & 0.007 & **0.595** \\ & Chebyshev & 0.301 & 0.541 \\ & Canberra & **0.423** & 0.568 \\ & Bray-Curtis & 0.175 & **0.595** \\ & Euclidean & 0.257 & **0.595** \\ & JS & 0.302 & 0.514 \\ & KL & 0.351 & **0.595** \\ \hline \multirow{8}{*}{**Semi**} & Cosine & 0.149 & 0.595 \\ & Chebyshev & 0.080 & 0.568 \\ & Canberra & 0.447 & 0.649 \\ & Bray-Curtis & 0.485 & 0.622 \\ & Euclidean & 0.277 & 0.568 \\ & JS & **0.529** & **0.730\({}^{\dagger}\)** \\ & KL & 0.233 & 0.622 \\ \hline \multirow{8}{*}{**Semi**} & Cosine & 0.274 & 0.622 \\ & Chebyshev & 0.124 & 0.568 \\ & Canberra & 0.502 & 0.405 \\ & Bray-Curtis & 0.541 & 0.676 \\ & Euclidean & 0.245 & 0.568 \\ & JS & **0.589\({}^{\dagger}\)** & **0.730\({}^{\dagger}\)** \\ & KL & 0.329 & 0.676 \\ \hline \hline \end{tabular} \end{table} Table 1: Semantic Change Detection performance on SemEval 2020 Task 1 English dataset. LEXEME is a supervised SCD method that is the current SoTA on this dataset. From Table 2, we see that SSCS obtains competitive results for both binary classification and ranking subtasks on the SemEval-2020 Task 1 English dataset, showing the effectiveness of word sense information for SCD. It matches the performance of CMCE for the binary classification subtask, while outperforming Temporal attention with Time Token fine-tuning (Rosin and Radinsky, 2022) on the ranking subtask. Although models such as CMCE and EmbedLexChange have good performance for the binary classification subtask, their performance on the ranking subtask is poor. Both of those methods learn static word embeddings for a target word _independently_ from each corpus. 
Therefore, those methods must first learn comparable distributions before a distance measure can be used to calculate a semantic change score for a target word. As explained above, CMCE learns CCA-based vector space alignments, while EmbedLexChange uses the cosine similarity over a set of fixed pivotal words selected from the two corpora. Both vector space alignments and pivot selection are error prone, and add additional noise to SCD. On the other hand, SSCS uses the _same_ MLM and sense embeddings on both corpora when computing the sense distributions, thus obviating the need for any costly vector space alignments. Both TimeToken and Temporal Attention methods require retraining a transformer model (i.e. BERT models are used in the original papers). TimeToken prepends each sentence with a timestamp, thereby increasing the input length, which results in longer training times with transformer-based LLMs. On the other hand, Temporal Attention increases the number of parameters in the transformer as it uses an additional time-specific weight matrix. Interestingly, from Table 2 we see that SSCS outperforms both those methods convincingly despite not requiring any fine-tuning/retraining of the sense embeddings nor MLMs, which is computationally attractive. SSCS (which is unsupervised) does not outperform XL-LEXEME (which is trained on WiC data) for the ranking subtask. In particular, we see a significant performance gap between XL-LEXEME and the rest of the unsupervised methods, indicating that future work on SCD should explore the possibility of incorporating some form a supervision to further improve performance. Although in SSCS, we used pre-trained static sense embeddings without any further fine-tuning, we could have used WiC data to select the classification threshold. During inference time, XL-LEXEME computes the average pair-wise cosine distance between the embeddings of sentences that contain the target word (which we are interested in predicting whether its meaning has changed over time), selected from each corpora. However, as already discussed in SS 5.2, JS divergence outperforms cosine distance for SCD. Therefore, it would be an interesting future research direction would be to incorporate the findings from unsupervised SCD to further improve performance in supervised SCD methods. ### Multilingual SCD Results To evaluate the effectiveness of word sense distributions for detecting semantic change of words in other languages, we use the 768-dimensional ARES multilingual sense embed \begin{table} \begin{tabular}{l c c} \hline \hline Model & Accuracy & Spearman \\ \hline BERT-base + TimeTokens + Cosine (Rosin et al., 2022) & N/A & 0.467 \\ BERT-base + APD (Kutuzov and Giulianelli, 2020) & N/A & 0.479 \\ BERT-base + Temporal Attention (Rosin and Radinsky, 2022) & N/A & 0.520 \\ BERT-base + TimeTokens + DSCD (Aida and Bollegala, 2023) & N/A & 0.529 \\ BERT-base + TimeTokens + Temporal Attention (Rosin and Radinsky, 2022) & N/A & 0.548 \\ Gaussian Embeddings (Yuksel et al., 2021) & 0.649 & 0.400 \\ CMCE (Rother et al., 2020) & **0.730** & 0.440 \\ EmbedLexChange (Asgari et al., 2020) & 0.703 & 0.300 \\ UWE (Prazák et al., 2020) & 0.622 & 0.365 \\ XL-LEXEME (Cassotti et al., 2023) (supervised) & N/A & **0.757** \\ SSCS (LMMS + JS) & **0.730** & 0.589 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison against previously proposed SCD methods on the English data in SemEval-2020 Task 1. dings,10 trained using BabelNet concept ids. 
We use bert-base-multilingual-cased11 as the multilingual MLM in this evaluation because it is compatible with the ARES multilingual sense embeddings. For evaluations, we use the ranking subtask data in the SemEval 2020 Task 1 for German, Swedish and Latin. Footnote 10: [http://sensembert.org/resources/ares_embedding.tar.gz](http://sensembert.org/resources/ares_embedding.tar.gz) Footnote 11: [https://huggingface.co/bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) In Figure 3 we compare \(\rho\) values obtained using different distance/divergence measures. We see that JS performs best for English, KL for German and Latin, whereas Canberra for Swedish. Overall, the divergence-based measures (i.e. KL and JS) report better results than distance-based measures across languages, except in Swedish. Various factors affect the performance such as the coverage and sparseness of sense distributions, size of the corpora in each time period, and the number of test target words in each evaluation dataset. Therefore, it is difficult to attribute the performance differences across languages purely to the different distance/divergence measures used to compare the sense distributions. The performance for non-English languages is much lower compared to that for English. This is due to three main reasons: (a) the limited sense coverage in BabelNet for non-English languages (especially Latin and Swedish in this case), (b) the accuracy of ARES sense embedding for German and Latin being lower compared to that for English, and (c) the multilingual contextualised embeddings obtained from mBERT has poor coverage for Latin. Although more language-specialised MLMs are available such as GermanBERT12, LatinBERT,13 and SwedishBERT14, we must have compatible sense embeddings to compute the sense distributions. Learning accurate multilingual sense embeddings is an active research area (Rezaee et al., 2021; Upadhyay et al., 2017) on its own and is beyond the scope of this paper which focuses on SCD. Footnote 12: [https://huggingface.co/bert-base-german-cased](https://huggingface.co/bert-base-german-cased) Footnote 13: [https://github.com/ghuman/latin-bert](https://github.com/ghuman/latin-bert) Footnote 14: [https://huggingface.co/KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) ### Qualitative Analysis Figure 4 shows example sense distributions and the corresponding JS divergence scores for the words _plane_ (a word that has changed meaning according to SemEval annotators, giving a rating of 0.882) and _pin_ (a word that has not changed its meaning, with a rating of 0.207) from the SemEval English binary classification subtask. We see that the two distributions for _plane_ are significantly different from each other (the second peak at sense-id 5 vs. 6, respectively in \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\)), as indicated by a high JS divergence (i.e. 0.221). On the other hand, the sense distributions for _pin_ are similar, resulting in a relatively smaller (i.e. 0.027) JS divergence. This result supports our claim that sense distributions provide useful clues for SCD of words. In Table 3, we show the top- and bottom-8 ranked words according to their semantic change scores in the SemEval English dataset. We compare the ranks assigned to words according to SSCS against the NLTK baseline (used in Table 1) and DSCD (Aida and Bollegala, 2023). From Table 3 we see that for 6 (i.e. 
_plane, tip, graft, record, stab, head_) out of the top-8 ranked words with a semantic change between the corpora, SSCS assigns equal or lower ranks than either of NLTK or DSCD. Moreover, we see that SSCS assigns lower ranks to words that have not changed meaning across corpora. As an error analysis, let us consider _risk_, which is assigned a higher rank (8) incorrectly by SSCS, despite not changing its meaning. Further investigations (see Appendix D) reveal that the sense distributions for _risk_ computed from the two corpora are indeed very similar, except that \(\mathcal{C}_{2}\) has two additional senses not present in \(\mathcal{C}_{1}\). However, those additional senses are highly similar to ones Figure 3: Multilingual semantic change detection of SSCS for ranking task in English, German, Latin and Swedish. present in \(\mathcal{C}_{1}\) and imply the semantic invariance of _risk_. Explicitly incorporating sense similarity into SSCS could further improve its performance. ## 6 Conclusion We proposed, SSCS, a sense distribution-based method for predicting the semantic change of a word from one corpus to another. SSCS obtains good performance among the unsupervised methods for both binary classification and ranking subtasks for the English unsupervised SCD on the SemEval 2020 Task 1 dataset. The experimental results highlight the effectiveness of using word sense distribution to detect semantic changes of words in different languages. ## Acknowledgements Danushka Bollegala holds concurrent appointments as a Professor at University of Liverpool and as an Amazon Scholar. This paper describes work performed at the University of Liverpool and is not associated with Amazon. ## 7 Limitations An important limitation in our proposed sense distribution-based SCD method is its reliance on sense labels. Sense labels could be obtained using WSD tools (as demonstrated by the use of NLTK baseline in our experiments) or by performing WSD using pre-trained static sense embeddings with contextualised word embeddings, obtained from pre-trained MLMs (as demonstrated by the use of LMMS and ARES in our experiments). Even if the WSD accuracy is not high for the top-1 predicted sense, SSCS can still accurately predict SCD because it uses the sense distribution for a target word, and not just the top-1 predicted sense. Moreover, SSCS uses the sense distribution of a target word over the entire corpus and not for a single sentence. Both WSD and sense embedding learning are active research topics in NLP Bevilacqua and Navigli (2020). We can expect the performance of WSD tools and sense embeddings to improve further in the future, which will further improve the SCD accuracy of SSCS. Although we evaluated the performance of SSCS in German, Latin and Swedish in addition to English, this is still a limited set of languages. How Figure 4: Sense distributions of pin and plane in the two corpora in SemEval 2020 Task 1 English dataset. In each subfigure, probability (in \(y\)-axis) is shown against the sense ids (in \(x\)-axis). The sense distributions of plane have changed across corpora, while that of pin remain similar. The human ratings for plane and pin are respectively 0.882 and 0.207, indicating that plane has changed its meaning between the two corpora, while pin has not. The JS divergence between the two sense distributions for plane is 0.221, while that for pin is 0.027. 
\begin{table} \begin{tabular}{l|c|c|c c c} \hline \hline \multirow{2}{*}{Word} & \multicolumn{2}{c|}{Gold} & NLTK & SSCS & DSCD \\ & rank & \(\Delta\) & rank & rank & rank \\ \hline plane & 1 & ✓ & 2 & 1 & 15 \\ tip & 2 & ✓ & 17 & 6 & 7 \\ prop & 3 & ✓ & 24 & 17 & 4 \\ graft & 4 & ✓ & 23 & 4 & 36 \\ record & 5 & ✓ & 7 & 2 & 14 \\ stab & 6 & ✓ & 11 & 11 & 11 \\ bit & 7 & ✓ & 8 & 15 & 9 \\ head & 8 & ✓ & 14 & 10 & 28 \\ \hline multitude & 30 & ✗ & 26 & 23 & 35 \\ savage & 31 & ✗ & 22 & 29 & 26 \\ contemplation & 32 & ✗ & 13 & 35 & 37 \\ tree & 33 & ✗ & 12 & 27 & 30 \\ relationship & 34 & ✗ & 37 & 31 & 34 \\ fiction & 35 & ✗ & 35 & 33 & 29 \\ chairman & 36 & ✗ & 27 & 34 & 33 \\ risk & 37 & ✗ & 31 & 8 & 21 \\ \hline \hline Spearman & 1.000 & 0.462 & 0.589 & 0.529 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on the top-8 semantically changed (\(\Delta=\checkmark\)) words with the highest degree of semantic change and the bottom-8 stable words (\(\Delta=\checkmark\)) with the lowest degree of semantic change. NLTK baseline performs WSD using NLTK’s WSD functionality and uses KL to compare sense distributions. DSCD is proposed by Aida and Bollegala (2023) and approximates sibling embeddings using multivariate Gaussians. SSCS is our proposed method, which uses LMMS sense embeddings and JS as the distance metric. ever, conducting a large-scale multilingual evaluation for SCD remains a formidable task due to the unavailability of human annotated semantic change scores/labels for words in many different languages. As demonstrated by the difficulties in data annotation tasks during the SemEval 2020 Task 1 on Unsupervised SCD (Schlechtweg et al., 2020), it is difficult to recruit native speakers for all languages of interest. Indeed for Latin it has been reported that each test word was annotated by only a single expert because it was not possible to recruit native speakers via crowd-sourcing for Latin, which is not a language in active usage. Therefore, we can expect similar challenges when evaluating SCD performance for rare or resource poor languages, with limited native speakers. Moreover, providing clear annotation guidelines for the semantic changes of a word in a corpus is difficult, especially when the semantic change happens gradually over a longer period of time. The semantic changes of words considered in SemEval 2020 Task 1 dataset span relatively longer time periods, such as 50-200 years. Although it is possible to evaluate the performance of SCD methods for detecting semantic changes of words that happen over a longer period, it is unclear from the evaluations on this dataset whether SSCS can detect more short term semantic changes. For example, the word _corona_ gained a novel meaning in 2019 with the wide spread of COVID-19 pandemic compared to its previous meanings (e.g. sun's corona rings and a beer brand). We believe that it is important for an SCD method to accurately detect such short term semantic changes, especially when used in applications such as information retrieval, where keywords associated with user interests vary over a relatively shorter period of time (e.g. seasonality related queries can vary over a few weeks to a few months). ## 8 Ethical Considerations We considered the problem of SCD of words across corpora, sampled at different points in time. To evaluate our proposed method, SSCS, against previously proposed methods for SCD we use the publicly available SemEval 2020 Task 1 datasets. 
We are unaware of any social biases or other ethical issues reported regarding this dataset. Moreover, we did not collect, annotate, or distribute any datasets as part of this work. Therefore, we do not foresee any ethical concerns regarding our work. Having said that, we would like to point out that we are using pre-trained MLMs and static sense embeddings in SSCS. MLMs are known to encode unfair social biases such as gender- or race-related biases (Basta et al., 2019). Moreover, Zhou et al. (2022) showed that static sense embeddings also encode unfair social biases. Therefore, it is unclear how such biases would affect the SCD performance of SSCS. On the other hand, some gender-related words such as _gay_ have changed their meaning over the years (e.g. _offering fun and gaiety_ vs. _someone who is sexually attracted to persons of the same sex_). The ability to correctly detect such changes will be important for NLP models to make fair and unbiased decisions and generate unbiased responses when interacting with human users in real-world applications.
2308.05521
Checkpoint Placement for Systematic Fault-Injection Campaigns
Shrinking hardware structures and decreasing operating voltages lead to an increasing number of transient hardware faults, which thus become a core problem to consider for safety-critical systems. Here, systematic fault injection (FI), where one program-under-test is systematically stressed with faults, provides an in-depth resilience analysis in the presence of faults. However, FI campaigns require many independent injection experiments and, combined, long run times, especially if we aim for a high coverage of the fault space. One cost factor is the forwarding phase, which is the time required to bring the system-under-test into the fault-free state at injection time. One common technique to speed up the forwarding is to take checkpoints of the fault-free system state at fixed points in time. In this paper, we show that the placement of checkpoints has a significant influence on the required forwarding cycles, especially if we place faults non-uniformly on the time axis. For this, we discuss the checkpoint-selection problem in general, formalize it as a maximum-weight reward path problem in graphs, propose an ILP formulation and a dynamic programming algorithm that find the optimal solution, and provide a heuristic checkpoint-selection method based on a genetic algorithm. Applied to the MiBench benchmark suite, our approach consistently reduces the forward-phase cycles by at least 88 percent and up to 99.934 percent when placing 16 checkpoints.
Christian Dietrich, Tim-Marek Thomas, Matthias Mnich
2023-08-10T12:03:54Z
http://arxiv.org/abs/2308.05521v1
# Checkpoint Placement ###### Abstract Shrinking hardware structures and decreasing operating voltages lead to an increasing number of transient hardware faults, which thus become a core problem to consider for safety-critical systems. Here, systematic fault injection (FI), where one program-under-test is systematically stressed with faults, provides an in-depth resilience analysis in the presence of faults. However, FI campaigns require many independent injection experiments and, combined, long run times, especially if we aim for a high coverage of the fault space. One cost factor is the _forwarding phase_, which is the time required to bring the system-under test into the fault-free state at injection time. One common technique to speed up the forwarding are checkpoints of the fault-free system state at fixed points in time. In this paper, we show that the placement of checkpoints has a significant influence on the required forwarding cycles, especially if we place faults non-uniformly on the time axis. For this, we discuss the checkpoint-selection problem in general, formalize it as a maximum-weight reward path problem in graphs, propose an ILP formulation and a dynamic programming algorithm that find the optimal solution, and provide a heuristic checkpoint-selection method based on a genetic algorithm. Applied to the MiBench benchmark suite, our approach consistently reduces the forward-phase cycles by at least 88 percent and up to 99.934 percent when placing 16 checkpoints. Fault Injection, Checkpoint Placement ## I Introduction Functional safety standards (e.g., ISO 26262 or IEC 61508 [1, 2]) demand that we assess the effects of transient hardware faults (soft errors) on our systems. As soft errors are rare in reality [3, 4], it is common to use systematic _fault injection (FI)_[5, 6] to quantify the resilience of a program. Such systematic FI campaigns are typically executed in three steps: 1. _trace_ a fault-free program execution as the _golden run_, which spans up the _fault space (FS)_ of all potential faults (one fault per every time step and for every bit of information). 2. _prune_ the FS [7, 8, 9] to plan a representative subset of faults as _pilot injections_, which will be carried out. 3. re-execute the program for every planned pilot, _inject_ it at the planned time and location, and classify the following program behavior. For each injected fault \(f\) in step (3), we need to bring the FI platform into the fault-free state at \(t_{f}\) (Fig. 1): After a reset to \(t_{0}\), the platform _forwardly_ the program by fault-free execution to \(t_{f}\). There, we inject \(f\) and continue --now faulty--execution flow until the FI platform detects a crash, a completion, or a timeout: Depending on the program-under-test, the fault model, and the employed pruning strategy in step (3), the distribution \(D(t)\) of pilot injections over the run time \(t\) is typically _not_ uniform. For instance, if DRAM is unprotected while processor caches employ ECC, potential soft errors manifest whenever the program loads new data into the cache. A pruning strategy that takes this into account would yield a FI distribution as shown in Fig. 2 (a). Please note that selecting pilot injections, is _not_ subject of this paper. For our proposed method, the distribution \(D(t)\) of planned/executed fault injections is given. The later the fault, the more forward instructions are executed before the actual injection takes place; earlier instructions are Fig. 
1: Phases of an Injection Experiment: The golden run spans \([t_{0},t_{\text{end}}]\), the injection is done at \(t_{f}\), and the checkpoint \(C\) restores the program state at \(t_{C}\) Fig. 2: Checkpoints on an example FI distribution. (a) The distribution of the 14,000 injection experiments determines the (b) population of the required per-experiment forward cycles. Its integral (red area) is the total number of forward cycles for the FI campaign, which can be reduced by checkpoints (green areas). The (c) uniform placement of checkpoints is the state of the art, which is significantly outperformed by (d) an optimal checkpoint-selection algorithm. forwarded more often than later instructions (Fig. 2 (b)). For this, we introduce the term _population_, which is the number of experiments that are in the forward phase at a given point in time if we would start them all in parallel at \(t_{0}\). With \(t_{f}\) of a fault \(f\), that fault leaves the population; making it a monotonically decreasing function. A broadly applied technique to speed up the (repetitive) forwarding is _checkpointing_[10, 11]: By resetting the system not to the initial state at \(t_{0}\), but to some later state at \(t_{C}\leq t_{f}\) (Fig. 1), we save a significant amount of forwarding cycles. In this paper, we address the question of _checkpoint placement_. The state-of-the art approach [11, 12, 13, 14, 15] is to distribute checkpoints _uniformly_ on the time axis (Fig. 2 (c)). However, this is not ideal: Significantly higher savings can be obtained by our optimal placement strategy of the checkpoints (Fig. 2 (d)). For the paper, we claim the following contributions: * We describe the Checkpoint problem for a fixed number of checkpoints and reduce it to a maximum-weighted reward path problem for _directed acyclic graphs (DAGs)_. * We point out the shortcomings of the naive time-uniform checkpoint selection and propose three distribution-dependent checkpoint-selection methods. * We quantify the benefits of our approach on real-world FI distributions and show that our methods outperform the uniform selection in the best case by \(82.85\) percentage points. The remainder of this paper is organized as follows: Sec. II describes our FI model and characterizes the checkpoint-selection problem. In Sec. III, we reduce the problem to a constant-length maximum-weight reward path in a transitive DAG and provide three selection methods. We evaluate and compare all methods in Sec. IV, discuss our findings in Sec. V, review the relevant literature in Sec. VI, and conclude the paper in Sec. VII. ## II Fault-Injection Model and Problem Description **Systematic FI Campaigns** We target _systematic_ FI campaigns, which plan and inject many different faults into deterministic re-runs of the same _program-under-test (PUT)_. From a fault-free golden run, which has the temporal extent \([t_{0},t_{\text{end}}]\), a fault-planning strategy chooses \(F\) faults for injection. For each fault \(f\), we bring the FI platform (e.g., simulator or FPGA) into the fault-free state at \(t_{f}\) (see Fig. 1). After a reset to \(t_{0}\), the platform _forward_s the program by fault-free execution to \(t_{f}\). There, we inject \(f\) and continue the, now faulty, execution flow until the FI platform detects a crash, a completion, or aborts the injection (e.g., timeout). 
With checkpoints, we cut the fault-free forwarding time: Instead of resetting to \(t_{0}\), we restore to a previously-saved checkpoint \(C\), which brings us directly into the fault-free state at \(t_{C}\). From there, we endure a shorter forwarding phase, which saves us \(t_{C}-t_{0}\) cycles for _this_ injection (see Fig. 1). A checkpoint \(C_{i}\) is usable for all faults with \(t_{C_{i}}\leq t_{f}\), but it should only be used for faults with \(t_{C_{i}}\leq t_{f}<t_{C_{i+1}}\) to maximize savings. We assume that we can perform exactly \(k\) checkpoints at different points in time. This is, for example, the case when we employ FPGAs as FI platform [16] and use duplicated flip-flops to store the checkpoint, which allows for checkpoint restoration in a single cycle but limits \(k\) to the number of FPGAs. We further assume that checkpoint restoration is equally fast or faster than a full reset, which is inherently the case if resets are also implemented by a checkpoint at \(t_{0}\) (as in GemFI [17] or MEFISTO [18]). We further assume that the time axis is discretized into equal intervals, for whose we use the term _cycle_. **Checkpoint Selection Problem** Our goal is to reduce the number of required forwarding cycles over all planned fault injections \(F\) with \(k\) checkpoints. To give an intuition for this problem, we look at the relation between fault distribution, checkpoints, and the number of forwarding cycles in Fig. 2: In Fig. 2 (a), we show an artificial FI distribution \(D(t)\) over \([t_{0},t_{\text{end}}]\), where over 400 injections happen around \(t=10\) while almost none are at \(t=40\). Fig. 2 (b) shows the FI-experiment _population_\(P(t)\) that execute a specific forward cycle if we do _not_ employ checkpoints: At \(t=0\), all \(14\,000\) experiments start and run until their respective \(t_{f}\), where they leave the forwarding population, whereby \(P(t)\) is a monotonically decreasing function from \(P(t_{0})\) to \(P(t_{\text{end}})=0\). Formally, the non-checkpointed FI population is \[P(t)=\int_{t}^{t_{\text{end}}}D(t)\] As each running experiment executes \(t_{f}-t_{0}\) forwarding cycles, the sum of all forwarding cycles for the whole FI campaign is equal to the integral \(\int_{t_{0}}^{t_{\text{end}}}P(t)\). It is our overall goal to shrink this integral. In Fig. 2 (e) and (d), we see how checkpoints achieve this goal: At \(t_{C_{i}}\), only those faults \(f\) enter the population whose \(t_{f}\in[t_{C_{i}},t_{C_{i+1}})\). Each checkpoint "cuts out" a (green) rectangle of area \(w\) and we end up with the vastly reduced red areas (see (c) and (d)), which however vary depending on the position of the checkpoints. More formally, the set of checkpoints \(\mathcal{C}\), whose size is determined by the FI platform, saves us \(S(\mathcal{C})\) forwarding cycles over all fault injections. For notational ease, we use the system reset at \(t_{0}\) as an artificial checkpoint \(C_{0}\). \[S(\mathcal{C})=\sum_{i=0}^{k-1}w_{C_{i+1}}^{C_{i}}=\sum_{i=0}^{k-1}(t_{C_{i+1} }-t_{C_{i}})\cdot P(t_{C_{i+1}})\] After these fundamental considerations, we can describe the Checkpoint problem more precisely: Given \(P\) and \(k\), where should we place our checkpoints to maximize \(S(\mathcal{C})\)? ## III Checkpoint Selection The state-of-the-art checkpoint selection strategy, termed **uniform()**, involves evenly distributing checkpoints along the time axis between \(t_{0}\) and \(t_{\text{end}}\). 
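To make these quantities concrete, the following is a minimal sketch (not taken from the paper's artifact; all names are illustrative) of how the population \(P(t)\), the savings \(S(\mathcal{C})\), and the **uniform()** baseline could be computed, assuming the planned-injection distribution is given as an integer array `D` indexed by cycle.

```python
import numpy as np

def population(D):
    """P(t): number of experiments still in the forwarding phase at cycle t.
    P(t) is the sum of D over [t, t_end], a monotonically decreasing step function."""
    return np.cumsum(D[::-1])[::-1]

def savings(D, checkpoints):
    """S(C): forwarding cycles saved by a sorted list of checkpoint cycles.
    The implicit reset at t_0 = 0 acts as the artificial checkpoint C_0."""
    P = population(D)
    C = [0] + sorted(checkpoints)
    total = 0
    for t_prev, t_next in zip(C, C[1:]):
        # every experiment injected at or after t_next skips (t_next - t_prev) cycles
        total += (t_next - t_prev) * P[t_next]
    return total

def uniform_placement(D, k):
    """State-of-the-art baseline: k checkpoints spread evenly over [t_0, t_end]."""
    t_end = len(D)
    return [round(i * t_end / (k + 1)) for i in range(1, k + 1)]

# toy example: a burst of planned injections early in the run plus a thin tail
D = np.zeros(100, dtype=int)
D[5:15] = 40
D[50:100] = 1
print(savings(D, uniform_placement(D, 4)))
```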
While this method is frequently employed in the literature [11, 12, 13, 14, 15], it neglects the FI distribution and fails to utilize a degree of freedom available on the time axis. Consequently, it is evident that **uniform()** typically does not maximize savings. However, given its negligible computation cost, \(S(\mathcal{C})\) can directly contribute to end-to-end savings without the need to offset any selection overheads. Therefore, any checkpoint-selection methodology has to strike a balance between the production of optimal results and computational overhead. Moving forward, we will delve into the underlying computational complexity of the checkpoint-selection problem and introduce three \begin{table} \begin{tabular}{l l} \hline \hline Variables & Description \\ \hline \(t_{0}\dots t\dots t_{\text{end}}\) & discrete time axis \\ \(D:t\mapsto\mathbb{N}_{0}\) & FI distribution \\ \(P:t\mapsto\mathbb{N}_{0}\) & FI population, integral of \(D\). \\ \(s_{0}\dots s_{n}\) & steps in \(P()\), there are \(n\) steps. \\ \(\{C_{1}\dots C_{k}\}\in\mathcal{C}\) & set of \(k\) (real) checkpoints \\ \(C_{0}=s_{0}=t_{0}\) & artificial checkpoint to model system reset \\ \(R_{b}^{a}\) & the rectangle below \(P\) from \((s_{a},0)\) to \((s_{b},P(s_{b}))\) and with area \(w_{a,b}\). \\ \hline \hline \end{tabular} \end{table} TABLE I: Notation Overview distinct solution strategies: ilp(), DP(), and genetic(). Tab. I gives an overview of our notation. ### _Theoretical Considerations_ First, we aim to examine the structure of the Checkpoint problem to gain insight into its computational complexity. Despite the qualitative nature of Fig. 2 (b), \(P(t)\) is, in reality, a step function with a discrete abscissa (time axis) and steps \(s_{0}\ldots s_{n}\). The maximum number of steps is \(n\leq t_{\text{end}}\), but fewer steps may be present if no experiments are scheduled for a certain \(t\) (\(D(t)=0\)). However, the number of steps is generally large. Moreover, we only need to consider these steps as potential locations for checkpoints, because moving a checkpoint \(c_{j}\) that is located between two steps (\(s_{i}<c_{j}<s_{i+1}\)) to \(s_{i+1}\) will always increase savings by \((s_{i+1}-c_{j})\cdot P(s_{i+1})\). Therefore, we can translate any \(\mathcal{C}\) to a \(\mathcal{C}^{\prime}\) with \(S(\mathcal{C}^{\prime})\geq S(\mathcal{C})\) by relocating all checkpoints to the next step. We associate Checkpoint with finding the maximum-weight reward path of constant length in a DAG. This problem was recently connected [19] to the Knapsack problem, a foundational problem in combinatorial optimization, known to be NP-complete [20]. However, in a graph with \(m\) edges, we can find the maximum-weight reward path of length \(k\) using dynamic programming in \(\mathcal{O}(km)\) time [19]. Yet, we are not aware of any implementation in an actual application of that theoretical result. The correlation with Checkpoint is as follows: consider our population function \(P\), with steps \(s_{0},\ldots,s_{n}\) at discrete time steps on the abscissa, where \(s_{0}<\ldots<s_{n}\). For notational simplicity, we define \(s_{0}:=0\). In our reduction, we create several rectangles for each time step \(s_{i}\), the height of each rectangle is chosen to be \(P(s_{i})\), fitting precisely beneath the curve \(P\) at \(s_{t}\). For each step \(s_{t}\), we create exactly \(t\) rectangles \(R_{t}^{0},\ldots,R_{t}^{t-1}\) and set the width of \(R_{t}^{i}\) to \(s_{t}-s_{i}\) for \(i=0,\ldots,n\). 
The rectangle's area \(w\) equals its height times its width. The total number of rectangles created by this reduction is \(\sum_{i=0}^{n}i=\mathcal{O}(n^{2})\). The optimal checkpoint selection is then equivalent to find \(k\) rectangles that maximize the covered area under the \(P()\). To construct a DAG, we create one node \(v_{t}\) for each step \(s_{t}\) and introduce the artificial entry and exit nodes (\(v_{0}\), \(v_{n}\)) that act as start and end points for our desired paths. Moreover, we add one \(e_{i,j}=(v_{i},v_{j})\) for each rectangle \(R_{j}^{i}\) we created and set its weight to the rectangle area \(w_{i,j}\). Our graph is directed (i.e., in a positive direction), acyclic (i.e., no backward edges), and complete (i.e., transitive). Finding \(k\) optimal checkpoints can then be stated as finding the maximum-weight path between \(v_{0}\) and \(v_{n}\) that visits \(k\) inner nodes. The intuition behind this reduction is that selecting \(k\) checkpoints is, in fact, a step-wise under-approximation of \(P()\) with \(k\) rectangles. If maximized, this approximation minimizes the integral of the error to \(P()\); this integral comprises the remaining forwarding cycles of our FI campaign. In our reduction, we created all possible rectangles under \(P()\) that span between two steps. But to calculate a \(k\)-stepped under-approximation of \(P\), we must (a) avoid selecting overlapping rectangles, (b) ensure no gaps between adjacent rectangles, and (c) select \(k\) rectangles so their combined width is \(t_{\text{end}}-t_{0}\). Our graph structure encodes these constraints: (a) if we select the arc \(e_{i,j}\) we cannot select an arc \(e_{a,b}\) with \(a\leq j\) and \(b\geq i\) because the graph contains no backward arcs. (b) If we enter an inner node \(v_{i}\) via the arc \(e_{\ell,i}\), we also must visit a leaving arc \(e_{i,r}\) to reach the non-inner node \(v_{n}\), meaning all selected rectangles "touch" each other. (c) As the arc \(e_{i,j}\) represents the rectangle \(R_{j}^{i}\) with width \(s_{j}-s_{i}\), and we select only adjacent rectangles between \(v_{0}\) and \(v_{n}\), our selection spans from \(t_{0}\) to \(t_{\text{end}}\). ### _ILP-Based Checkpoint Selection_ To operationalize our reduction of Checkpoint, we use _integer linear programming_ (ILP) to formulate the selection of checkpoints as the optimization problem ilp(). More precisely, we use the _implicit path enumeration technique_ (_IPET_), which is also widely used in the real-time domain [21, 22] to find the worst-case execution path through a program. In contrast to control-flow graphs, our graph is acyclic, arc-weighted, and we search for a constant-length path. For our IPET formulation (see Fig. 3), we introduce one binary variable \(v_{t}\) for every inner node in the previously described DAG. If the ILP solver sets \(v_{t}\) to one, this indicates that step \(s_{t}\) is selected as a checkpoint. Further, we introduce the artificial nodes \(v_{0}\) and \(v_{n}\), which act as entry and exit nodes to our DAG. In our example, \(v_{3}\) is \(v_{n}\). With the constraint \(\sum v_{i}=k+2\), we force the solver to select a constant-length path through our DAG, visiting exactly \(k\) inner nodes; placing \(k\) "real" checkpoints. With IPET flow constraints, we encode the DAG structure: For each arc, we introduce a binary variable \(e_{i,j}\) that determines whether the arc from \(v_{i}\) to \(v_{j}\) is on the chosen path. 
For each inner node, the sum of incoming (\(e_{*,*}\)) and the sum of outgoing (\(e_{t,*}\)) must be equal to the node variable \(v_{t}\). The intuition behind this is that an inner node is entered as often as it is left. Further, the entry and exit nodes are surely part of the chosen path. As our graph has no cycles, each node and each arc can be visited exactly once and all variables have a binary domain. As maximization objective, we sum up all arc variables, which we weight by the area \(w\) of the rectangles they represent. For example, \(e_{0,2}\) is weighted with \(w_{0,2}=(s_{2}-s_{0})\cdot P(s_{2})\) (blue dashed rectangle). By construction, our ILP formulation will result in the optimal checkpoint placement for a given FI distribution. In total, we require \(\mathcal{O}(n^{2})\) many binary variables for a distribution with \(n\) steps. For our example, with \(k=1\), the solver has only one degree of freedom in choosing an inner node for a checkpoint. The possible solutions are \(\{v_{0},v_{1},v_{3}\}\) and \(\{v_{0},v_{2},v_{3}\}\), which reflect the paths \(e_{0,1}\to e_{1,3}\) and \(e_{0,2}\to e_{2,3}\). ### _Checkpoint Selection with Dynamic Programming_ As solving integer linear programs is usually computationally expensive, and moreover does not come with any worst-case run times in general, we further provide a dynamic programming algorithm DP() to find the maximum-weight reward path in an arc-weighted DAG \(G\). We use \(v_{0}\) and \(v_{n}\) as DAG entry and exit Fig. 3: ILP formulation of the checkpoint selection problem. nodes (source and sink), and search for a path with length \(k+1\) and maximum weight between \(v_{0}\) and \(v_{n}\). We create a dynamic programming table \(T\), which contains entries \(T[i,j]\) which encode the _maximum_ weight of any path in \(G\) which starts at \(v_{0}\) and ends at \(v_{i}\), uses at most \(j\) internal nodes from \(v_{1}\),...,\(v_{i}\). Thus, the table has \(n\cdot(k+1)\) entries. We initialize the table by setting \(T[i,0]=w_{0,i}\) for \(i\in[0,n]\) where and \(w_{a,b}\) is the weight of the edge between node \(v_{a}\) and \(v_{b}\); further, \(w_{a,a}=0\) for all nodes \(a\). We compute all other entries \(T[i,j]\) with \(j>0\) recursively through \[T[i,j]=\max_{x=0}^{\mathrm{i}}\left\{T[i-x,j-1]+w_{i-x,i}\right\}\enspace.\] The correctness of this recursion follows from the fact that to compute the \(j\)-step maximum-weight path between \(v_{0}\) and \(v_{i}\), we consider all possibilities for the additional step being located at \(v_{i-x}\). Left of \(v_{i-x}\), we have a \(j-1\)-step path of weight \(T[i-x,j-1]\), while we append one additional step of weight \(w_{i-x,i}\) to the right of \(v_{i-x}\). We fill the table \(T\) step by step for increasing values of \(i\) and \(j\) and read off the maximum weight of a solution in the table entry \(T[n,k]\). All entries are positive, and for each step only values from the previous row \(j-1\) are required. To identify the inner nodes (i.e., the selected checkpoints) that are part of the maximum-weight path, we use a second table \(X\) of size \(n\cdot(k+1)\) that records the value \(x\) for the selected maximum as \(X[i,j]=x\). Afterwards, we set \(C_{j}=X[C_{j}+1,j]\) with \(C_{k}=X[n,k]\). As we have to consider up to \(n\) possibilities for each entry and there are \(n\cdot(k+1)\) entries, the whole procedure takes \(\mathcal{O}(k\cdot n^{2})\) time and \(\mathcal{O}(k\cdot n)\) space. 
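The recursion above maps directly onto a table-filling routine. Below is a minimal sketch under the assumptions that the steps `s[0..n]` and the corresponding population values `P[i] = P(s_i)` are given as arrays and that `P` is zero at the exit step \(s_{n}\) (so the final edge into \(v_{n}\) carries no weight); names and tie-breaking are illustrative.

```python
def dp_checkpoints(s, P, k):
    """Optimal selection of k checkpoints via the DP table T[i][j]:
    the maximum weight of a path v_0 -> v_i using at most j inner nodes."""
    n = len(s) - 1
    w = lambda a, b: 0 if a == b else (s[b] - s[a]) * P[b]   # rectangle area w_{a,b}

    T = [[0] * (k + 1) for _ in range(n + 1)]   # maximum weights
    X = [[0] * (k + 1) for _ in range(n + 1)]   # chosen predecessors for backtracking
    for i in range(n + 1):
        T[i][0] = w(0, i)

    for j in range(1, k + 1):
        for i in range(n + 1):
            # consider all possibilities for the additional step being located at v_{i-x}
            best, arg = max((T[i - x][j - 1] + w(i - x, i), i - x) for x in range(i + 1))
            T[i][j], X[i][j] = best, arg

    # backtrack from the exit node to recover the selected checkpoint positions
    selected, i, j = [], n, k
    while j > 0:
        i, j = X[i][j], j - 1
        if 0 < i < n:
            selected.append(s[i])
    return T[n][k], sorted(set(selected))
```

As stated in the text, only the previous column \(j-1\) of \(T\) is needed at any time, and the double loop over \(i\) and the up to \(n\) candidates per entry gives the \(\mathcal{O}(k\cdot n^{2})\) running time.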
In the checkpoint-selection case, it is likely that we can further reduce the computation complexity by using that our DAG is a complete graph (i.e., a tournament), and moreover, the arc weights may satisfy the triangle inequality. ### _Genetic Checkpoint Selection_ As the computation time requirement of DP() is still quadratic, we propose the heuristic checkpoint-selection strategy genetic() based on genetic algorithms [23]. We have chosen genetic algorithms as (1) checkpoint selection is a discrete optimization problem and (2) we expect that combining two good solutions (cross-over operation) will often yield an even better result. Using a heuristic also allows us to abort the selection process when we see no further progress or when a pre-defined time budget runs out. We describe genetic() by defining the used genome, the genome-derived phenotype, the fitness function, as well as the used mutation operators. For the genome, which encodes one valid checkpoint selection, we choose an \(k\)-sized vector of steps from \(P\). As phenotype, we use the rectangles spanned by the selected steps and use \(S(\mathcal{C})\) as the fitness function, which we aim to maximize. As _random_ mutation operators, we combine two genomes by a two-point crossover with p=0.5, or mutate the one checkpoint by (each with p=0.125): (1) moving it one step to the left/right, (2) moving it three steps to the left/right, (3) moving it to a random step, or moving it to the middle between its left and right neighbor. Initially, we start with a population of 100 random genomes. In each round, we enlarge this population by cross-over and mutation to 300 individuals. After sorting them according to the fitness function, we surely select the 10 best genomes and exchange place 11 to 100 with p=0.5 with another randomly-picked individual to avoid getting stuck in a local optimum. We execute this heuristic search up to a given number of seconds in parallel and return the globally-best selection of checkpoints. The described algorithm does _not_ guarantee an optimal solution. ## IV Evaluation With our evaluation, we demonstrate that (1) uniform() shows a wide range of forward-cycle reductions, (2) our selection methods produce consistently better (or equal) results than uniform(), (3) the achieved advantage correlates with the non-uniformity of the fault distribution, and (4) genetic()'s results were optimal for multiple hundreds of distributions but at lower costs than ilp() and DP(). We compare our methods by saved cycles, runtime, and sensitivity to the uniformity of the distribution. We consider uniform() to be the state-of-art checkpoint-selection method (see Sec. VI). For our evaluation, we use synthetic benchmarks and realistic FI distributions, which we derive from the MiBench benchmark suite [24]. ### _Non-Uniformity Metric_ As we already discussed, the shortcoming of uniform() is that it does not exploit the temporal variance of the FI distribution. To quantify the intuition that genetic() can perform better on less uniform distributions, we require an metric to measure the "non-uniformity" \(U^{-}\) of the distribution \(D\). 
For this, we normalize it as \(\overline{D}\) in time and height to 100 percent and propose the _linearly-weighted frequency spectrum (WFFT)_ metric with \(\mathrm{fft}_{i}\) being the \(i\)-th element of the fast Fourier transformation: \[U^{-}(\overline{D})=\sum_{i=0}^{100}i\cdot\left|\mathrm{fft}_{i}(\overline{D})\right|\] With this metric, we look at the distribution in the frequency domain and weight low-frequency signal shares with a higher value than high-frequency shares. Thereby, the perfectly-uniform distribution will result in \(U^{-}=0\) and distributions with larger gaps end up with a higher score. ### _Synthetic FI Distributions_ Our initial objective is to compare the uniform(), genetic(), and ilp() approaches on synthetic FI distributions. These synthetic distributions are designed to qualitatively emulate different real-world distributions (see also Fig. 6), while offering the flexibility to span a wide variety of distributions and degrees of non-uniformity. The use of synthetic benchmarks enables us to scale \(t_{\text{end}}\), which in turn determines the number of steps \(s_{\text{t}}\) and consequently the number of ILP variables. As different programs and/or pruning methods only differ in the fault distribution at the checkpoint-selection stage, our evaluation demonstrates the generalizability of our approach. **Generation of FI Distributions** To synthesize random distributions, we start with a uniform distribution of faults which serves as a "noise carpet". Onto this carpet, we overlay between 2 and 100 (log-normal distribution) peaks shaped by the Gumbel distribution [25]. Each peak's height ranges from 2 to 5 times (uniform distribution) that of the carpet, with a width constituting 2 to 10 percent (uniform) of the total distribution. The Gumbel peaks simulate localized FI maxima, while the uniform carpet establishes a "base height". To illustrate the end results of our distribution generation method, refer to Fig. 4 which presents 36 FI histograms with 10000 steps, sorted and color-coded by their WFFT. Our generation approach produces distributions with diverse characteristics and a broad spectrum of non-uniformity. For instance, the top row displays more uniform distributions punctuated by a few shallow peaks, indicative of a low WFFT. In contrast, the bottom row exhibits distributions with a high WFFT, resulting from a few pronounced peaks and a smaller portion of uniformly distributed faults. Each tile in Fig. 4 is additionally annotated with the percentual savings in forward cycle achieved by the genetic() algorithm over the uniform() method when _eight_ checkpoints are placed. The additional savings range from \(0.7\) to \(33.2\) percent. It is noteworthy that less uniform distributions, as denoted by a higher WFFT, tend to yield larger genetic() gains. This trend is especially pronounced in distributions characterized by a few high peaks, as the genetic() algorithm can effectively pinpoint these to position the checkpoints, unlike the uniform() method which remains oblivious to the distribution characteristics. **Scalability** With our synthetic distributions, we want to compare the scalability of ilp(), DP(), and genetic(). This demonstrates the efficiency of our heuristic checkpoint-placement strategy and we can quantify the costs associated with finding an optimal solution with ilp() and DP(). To this end, we generate 100 distributions for varying numbers of steps and place checkpoints using all three methods (refer to Tab. II). 
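As an illustration of the two ingredients just described, the sketch below generates one synthetic distribution (uniform noise carpet plus Gumbel-shaped peaks) and evaluates the WFFT non-uniformity score \(U^{-}\) on it. The peak-sampling parameters, the Gumbel scaling, and the normalization details (resampling to 100 points and scaling the height to 100) are assumptions rather than the exact values used for the paper's benchmarks.

```python
import numpy as np

def synthetic_distribution(t_end, rng=None):
    """Uniform 'noise carpet' overlaid with a random number of Gumbel-shaped peaks."""
    if rng is None:
        rng = np.random.default_rng(0)
    carpet = 1.0
    D = np.full(t_end, carpet)
    t = np.arange(t_end)
    n_peaks = int(np.clip(rng.lognormal(mean=2.0, sigma=1.0), 2, 100))  # assumed parameters
    for _ in range(n_peaks):
        height = rng.uniform(2.0, 5.0) * carpet          # 2-5x the carpet height
        width = rng.uniform(0.02, 0.10) * t_end          # 2-10 percent of the run time
        loc = rng.uniform(0, t_end)
        z = np.clip((t - loc) / (width / 4.0), -50, 50)  # approximate width scaling
        D += height * np.exp(-(z + np.exp(-z)))          # Gumbel-shaped bump
    return D

def wfft(D, bins=100):
    """Linearly index-weighted FFT magnitude of the time/height-normalized distribution."""
    Dn = np.interp(np.linspace(0, len(D) - 1, bins), np.arange(len(D)), D)
    Dn = 100.0 * Dn / Dn.max()
    return float(sum(i * abs(c) for i, c in enumerate(np.fft.fft(Dn))))

print(wfft(synthetic_distribution(10000)))
```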
These experiments were conducted on an AMD Ryzen 7 Pro 5850U CPU (16 HW threads, 48 GiB DRAM) and utilized Gurobi 10.0.1 to solve the ILP instance. For the genetic() algorithm, we set a time limit of 10 seconds, and we recorded the moment at which the last improvement occurred--that is, when the solution initially stabilized at the final result. In contrast, we restricted ilp() and DP() to run approximately 3 minutes; we employed Gurobi's standard options. For more than 3000 steps, ilp() ran into the time limit and we cannot report run-time numbers. A key observation, as evidenced by Tab. II, is that the genetic() algorithm arrived at the optimal selection, as discovered by iip() or DP(), for \(99.7\) percent of all generated distributions. Only for 3 distributions, all having 150 000 steps, genetic() did not converge on the optimal solution. For these three, however, the geometric mean of the over-approximation is only \(3.61\cdot 10^{-4}\) percent. Further, we note that solving ilp() with Gurobi, which is a general-purpose solver not specifically target at our problem, scales worst. With DP(), we can scale to 50 times larger problems within the same time limit. In contrast, genetic() scales best for these distributions and converges to a final solution well within the time limit. Hence, we deduce that the genetic() algorithm is well-suited to solve the checkpoint-placement problem, as it demonstrates both speed and efficiency in yielding favorable results. ### _Real-World Distributions_ Next, we want to explore the benefits our selecting methodology when applied to real-world FI distributions that we derive from traces of the MiBench. As these distributions have a high number of steps (up to 872 million), we only compare uniform() and genetic() as it is unrealistic to solve the resulting ILP instance or to execute DP(). For these experiments, we used an Ampere Altra machine with 80 aarch64 cores and 256 GiB of DRAM. **Fault Model** To explore different levels of "non-uniformity", we chose a fault model that allows us to scale this metric while still being connected to a real hardware implementation: With our fault model, all main-memory cells are uniformly vulnerable to single-event upsets (bit flips), while the caches are more robust against faults (due to being SRAM). Since the cache "filters" the CPU's memory accesses, only some accesses actually access the memory, from where the system incorporates and, thereby, activates faults. To exploit this bursty fault-activation pattern, a FI planner would employ def-use pruning [26] and inject faults at cache-miss time. **MiBench Traces** With the valgrind tool, we execute the MiBench benchmarks as Linux programs (aarch64, x86-64), and collect the memory-stage and the instruction-fetch accesses that happen after invoking main(). We choose x86-64 as a representative for CASC architectures and aarch64 as representative for load-store RISC architectures. With the obtained access traces, we use the pycachesim cache simulator [27] to derive the cache-miss distribution for different instruction- and data-cache setups: we simulate four-way associative caches with six sizes that range from 2 KiB to 64 KiB, which reflects the cache hierarchies of the safety-relevant Arm M7 processor family. For x86 the benchmarks ispell, sphinx and rsynth from the office branch and the tiff's and mad from the consumer branch are not compliable. 
For aarch64, the pgp_[d,e] benchmarks is buggy while valgrind crashes for ghostscript and rijndael_[d,e] due to a known \begin{table} \begin{tabular}{c c c c c} \hline \hline & ILP & DP & Genetic Algorithm \\ \cline{2-5} Steps & Run Time & Run Time & Run Time & Optimal? \\ \hline 500 & 10.6\(\pm\)2.9 s & 1.7\(\pm\)0.1 ms & 1.2\(\pm\)0.3 s & 100/100 \\ 1000 & 9.5\(\pm\)0.6 s & 6.5\(\pm\)0.2 ms & 1.3\(\pm\)0.3 s & 100/100 \\ 1500 & 22.8\(\pm\)14 s & 14.3\(\pm\)0.2 ms & 1.3\(\pm\)0.3 s & 100/100 \\ 2000 & 43.2\(\pm\)2.3 s & 25.2\(\pm\)0.8 ms & 1.3\(\pm\)0.3 s & 100/100 \\ 2500 & 75.9\(\pm\)9.9 s & 38.9\(\pm\)0.8 ms & 1.3\(\pm\)0.4 s & 100/100 \\ 3000 & 157.6\(\pm\)103.5 s & 55.9\(\pm\)1.2 ms & 1.3\(\pm\)0.3 s & 100/100 \\ \hline 10000 & – & 0.7\(\pm\)0.0 s & 1.5\(\pm\)0.4 s & 100/100 \\ 50000 & – & 16.9\(\pm\)0.7 s & 1.8\(\pm\)0.8 s & 100/100 \\ 100000 & – & 71.6\(\pm\)3.4 s & 2.2\(\pm\)1.6 s & 100/100 \\ 150000 & – & 175.2\(\pm\)10.7 s & 3.1\(\pm\)2.2 s & 97/100 \\ \hline \hline \end{tabular} \end{table} TABLE II: Scalability on Synthetic Distributions Fig. 4: 36 randomly generated distributions, colored and sorted by their WFFT and arranged in left-to-right and top-to-bottom order. Each tile is annotated with the percentual forward-cycle reduction that genetic() achieves over uniform() for placing 8 checkpoints. bug. With the two architectures (aarch64/x86-64), \(23/28\) benchmarks, two memory-access paths, and 7 cache sizes (including no cache), we end up with \(714\) FI distributions. We use these distributions as \(D(t)\) and apply uniform() and genetic(), which we execute for 10 seconds. **Fixed Number of Checkpoints** To show that genetic() produces consistent results, we evaluate the selection strategies under different parameters: the cache size, difference in architecture and the number of checkpoints. First, we quantify the influence of the cache size for different architectures (aarch64, x86-64) and memories (instruction, data). While larger caches result in less uniform distributions, they have an especially high impact in the instruction-memory accesses as loops result in a high locality. In Fig. 5, we show the reductions for selecting a fixed amount of 16 checkpoints over different cache sizes and architectures. While uniform() is able to result in large reductions for many benchmarks, we also see that its results significantly deteriorate for large cache sizes. Over the shown matrix, we see that genetic() (\(\sigma_{G}\in[93.73,98.52]\)) consistently outperforms uniform() (\(\sigma_{U}\in[72.86,93.75]\)). When looking at individual benchmarks, genetic() achieves at least a reduction by 88.35 percent (for gsm_d, D-Mem, 64K), while uniform() even resulted in _no improvement_ for one benchmark (ghostscript, I-Mem, 16K) as all injections were planned before the first uniform checkpoint. In the best case, genetic() even achieves 99.934 percent (for bitcints, I-Mem, 8K) savings. Regarding the architecture, we see no significant difference between aarch64 and x86, which brings us to the conclusion that our findings are also generalizable to other architectures. Qualitatively, we could identify three patterns: (1) when everything fits into the cache, cache misses only occur in the warm-up period and genetic() correctly sets the checkpoints in the warm-up period, while uniform() distributes them blindly over the whole program run. We could observe this for the D-Mem of all benchmarks with a small input (e.g., bitcount, ADCCMs and stringsearch). 
(2) for benchmarks with a high cache pressure, cache misses occur regularly, and the injection instructions become more uniformly distributed. For these benchmarks (e.g., PATRICIA with its >270 KiB input size), the advantage of genetic() disappears. (3) for benchmarks with an irregular cache-miss distribution, uniform() often places checkpoints in periods of low cache-miss rates, and the effect of the checkpoint is not optimally utilized. For example, in Fig. 6, uniform() disadvantageously places checkpoints in a period with nearly no misses, while genetic() uses those in initial cache warming phase, leading to a 7.7 percentage-point improvement. **Varying Number of Checkpoints** Next, we are interested how both strategies perform when we scale the number of checkpoints between 2 and 16. The results are shown in Fig. 7, where we plot Fig. 5: Forward-cycle reduction for different cache sizes and 16 checkpoints. The x-axis is sorted by the forward-cycle reductions for uniform(). \(\sigma_{G}\) and \(\sigma_{U}\) refer to the average reduction for genetic() and uniform() respectively. Fig. 6: Distribution and checkpoints for JPEG compression. After the cache has warmed (\(t\approx 30\)), the data memory is only rarely read, leading to a skewed injection distribution. the achieved reduction per benchmark as two points (one red and one blue). The lines mark the average reduction per checkpoint count and placement strategy, while higher is better. Further, we tile the results along the CPU architecture and cache-type axis to determine if those dimensions have an significant impact on the achieved savings. First, we can see that increasing the number of checkpoints has a diminishing effect for both strategies and the 16th checkpoint has a far smaller effect than the third one. However, on average, \(\mathsf{genetic()}\) has a consistent advantage, regardless of the memory kind or the architecture, which \(\mathsf{uniform()}\) cannot close, even with 16 checkpoints. At worst, and averaged over all benchmarks, \(\mathsf{genetic()}\) requires 7 checkpoints for aarch64 D-Mem to achieve the same reduction as 16 uniform checkpoints. For x86 I-Mem, we even require only 6 checkpoints to achieve an average reduction of 89.8 percent (uniform: 77.72 %). **Sensitivity with Respect to Non-Uniformity** Next, we investigate the influence of the "non-uniformity" of our real-world distributions to further substantiate our conjecture that \(\mathsf{genetic()}\) has superior performance over \(\mathsf{uniform()}\) for less uniform inputs. In Fig. 8, we plot the achieved forward cycle reductions per benchmark against \(U^{-}\). Again, each benchmark appears as two points (red for \(\mathsf{genetic()}\) and blue for \(\mathsf{uniform()}\)) per tile. To highlight the trend for more non-uniform distributions, we plot a linear regression through both result sets. We show results for four checkpoint counts (2, 4, 8, 16) and higher is better. First, we see that real-world FI distributions that stem from our fault model have an even higher WFFT score than our synthetic benchmarks. Making the checkpoint selection problem even more important for real-world FI campaigns. Further, while \(\mathsf{uniform()}\)'s performance deteriorates for less uniform distributions, \(\mathsf{genetic()}\)'s performance even exhibits an improvement, particularly when the number of available checkpoints is small. 
This can be attributed to the fact that a checkpoint placed immediately prior to a significant peak yields a greater reduction than one positioned before an extended, shallow hill. ## V Discussion We evaluated the proposed checkpoint-selection algorithm on the MiBench benchmark suite for both aarch64 and x86, with varying numbers of checkpoints and cache sizes. In addition, we demonstrated that \(\mathsf{genetic()}\) is able to the achieve (almost) the same optimal results as \(\mathsf{ilp()}\) and \(\mathtt{DP()}\) while it scales better when confronted with larger problems. Our evaluation reveals a greater efficiency in our checkpoint-placement methods for more irregular distributions. These gains are attributed to our adherence to the actual distribution rather than the blind, uniform placement of checkpoints. For a fault model that provoked non-uniform distributions, our method correctly pinpoints and leverages areas of high importance such as the cache warm-up period, characterized by a high density of injection sites. Our approach, irrespective of architecture and memory type, consistently outperforms \(\mathsf{uniform()}\) in selecting superior checkpoints and reducing forward-phase cycles. The premise of this paper rests on the assumption of a fixed number of checkpoints, possibly constrained by the FI platform. Yet in situations with an unrestricted number of checkpoints, they are not without cost. Firstly, checkpoints require memory and storage, potentially significant if the entire DRAM state is captured. Secondly, the process of creating, storing, and distributing checkpoints to the fault injector consumes time, effectively reducing the net savings; more checkpoints may paradoxically lead to fewer savings. Thirdly, our evaluation demonstrated diminishing returns from additional checkpoints. In contrast, improving the placement of existing checkpoints enhances their effectiveness with marginal cost increase. For instance, even for the worst outcome, \(\mathsf{genetic()}\) with six checkpoints achieves a greater reduction than \(\mathsf{uniform()}\) with 16 checkpoints. However, determining the Pareto-optimal number of checkpoints for distribution-aware placement remains an area for future research. Additionally, the overhead incurred by determining checkpoint distribution is minor compared to the cycles saved. In our experiments, we limited \(\mathsf{genetic()}\)'s runtime to 10 seconds, although it often converged sooner (see Tab. II). The total runtime of a complete systematic FI campaign, while indeed dependent on the specific PUT, typically spans several hours or even days. Consequently, investing an additional ten seconds to optimize checkpoint placement using \(\mathsf{genetic()}\) is always justifiable. Given these considerations, we argue that an optimized checkpoint selection ought to become the norm Fig. 8: Sensitivity with Respect to the Non-Uniformity of the Fault Distribution. (N=714) Fig. 7: Varying Number of Checkpoints. Effect of the checkpoint count on the forward-cycle reduction over all cache sizes and benchmarks. Horizontal jitter to reduce overprinting. for any FI campaign that has, either by a constraint or by a design decision, a fixed number of checkpoints. ## VI Related Work When we look into literature, FI tools and, when reported, the way checkpoints are placed in evaluations can be divided into three categories. 
The first common approach is to use them to skip the startup sequence of the simulator, which is often longer than the loading time for a checkpoint. This is done in both fault injection tools GemFI [17] and MEFISTO [18]. In addition, several works report utilizing checkpoints in this way to accelerate their evaluation [28, 29]. The next approach is to use more than a single checkpoint. For example, GangES [12] saves checkpoints periodically during recording the golden run, which results in a uniform distribution. Several other works [13, 14, 15, 11] distribute them uniformly. However, in all this works the distribution of checkpoints is not a focus, and thus they do not report on the effect of using checkpoints. Some FI tools like FAIL\({}^{*}\)[30] leave the decision, where to place checkpoints and how many to the user. Very few studies attempt to quantify the impact of checkpoints on FI-campaign run time. Ruano _et al._[31] positioned a single checkpoint at three-quarters of the total runtime and "almost at the end". Their analysis found that the later checkpoint led to more significant runtime savings and they concluded that a detailed examination of checkpoint selection is necessary. Parotta _et al._[32]_uniformly_ place checkpoints to accelerate hardware-assisted FI-campaigns. They find that beyond a certain number of checkpoints--in their case, 10 checkpoints--, savings become diminishing; a result that aligns with our own. Schirmeier _et al._[33] propose smart-hopping, an improved forwarding mechanism based on hardware breakpoints, to speed up the forwarding phase for hardware-assisted fault injection. Although they briefly explore checkpoint placement, their placement method results in the _uniform_ distribution if used without smart-hopping and hardware support. In contrast, we provide a fundamental study of checkpoint selection that is hardware and FI-mechanism independent. While not being our focus, the efficient storage and retrieval of checkpoints is another important topic [34]. ## VII Conclusion One cost factor of comprehensive FI campaigns is the forwarding phase, which is the time required to bring the _program-undertest (PUT)_ into the fault-free state at injection time. The common technique to speed up this process are checkpoints of the fault-free system state at fixed points in time. In this paper, we show that the placement of checkpoints has a significant influence on the required forwarding cost, especially if the planned faults are non-uniformly distributed in time. For this, we discuss the checkpoint-selection problem in general, reduce it to the problem of finding the maximum-weight reward path in DAGs, and propose three distinct methods; two of them provide the optimal solution, while the third is a heuristic based on genetic algorithms. We compared the proposed methods with synthetic benchmarks and applied our genetic algorithm on the MiBench benchmark suite on both aarch64 and x86, with varying amounts of checkpoints and cache size reflecting those of the M7 processor family. This evaluation parameters resulted in a total of \(714\) FI distributions. Our approach consistently performs better than a time-uniform checkpoint selection regardless of the underlying architecture and equally for data- and instruction-fetch accesses. Overall, with 16 checkpoints, we are able to consistently reduce the forward-phase cycles, with reductions of at least 88 percent and up to 99.934 percent. 
## Acknowledgements We thank the anonymous reviewers (in advance) for their valuable feedback and dedicated efforts in helping us improve this paper. This work was funded by the _Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)_ - 468988364, 501887536. The source code and fault distributions used for the evaluation are available as a data artifact [35].
2308.13343
Squeeze aggregated excitation network
Convolutional neural networks have spatial representations which read patterns in vision tasks. Squeeze and excitation links the channel-wise representations by explicitly modeling them at the channel level. Multi-layer perceptrons learn global representations, and in most models they are used at the end, after all convolutional layers, to gather all the information learned before classification. We propose a method of inducing global representations within channels to obtain better model performance. We propose SaEnet, the Squeeze aggregated excitation network, for learning global channel-wise representations between layers. The proposed module takes advantage of passing important information after the squeeze by having an aggregated excitation before regaining its shape. We also introduce a new idea of having a multibranch linear (dense) layer in the network. This learns global representations from the condensed information, which enhances the representational power of the network. The proposed module has undergone extensive experiments using the Imagenet and CIFAR100 datasets and has been compared with closely related architectures. The analysis shows that the proposed model's outputs are comparable to, and in some cases better than, existing state-of-the-art architectures.
Mahendran N
2023-08-25T12:30:48Z
http://arxiv.org/abs/2308.13343v1
# Squeeze aggregated excitation network ###### Abstract Convolutional neural networks have spatial representations which read patterns in the vision tasks. Squeeze and excitation links the channel wise representations by explicitly modeling on channel level. Multi layer perceptrons learn global representations and in most of the models it is used often at the end after all convolutional layers to gather all the information learned before classification. We propose a method of inducing the global representations within channels to have better performance of the model. We propose SaEnet, Squeeze aggregated excitation network, for learning global channelwise representation in between layers. The proposed module takes advantage of passing important information after squeeze by having aggregated excitation before regaining its shape. We also introduce a new idea of having a multibranch linear(dense) layer in the network. This learns global representations from the condensed information which enhances the representational power of the network. The proposed module have undergone extensive experiments by using Imagenet and CIFAR100 datasets and compared with closely related architectures. The analyzes results that proposed models outputs are comparable and in some cases better than existing state of the art architectures. ## 1 Introduction Convolutional neural networks (CNNs) have always been emerging in network engineering irrespective of the current trends. Multiple architectures have been proposed by identifying problems and providing solutions one-by-one [5, 13]. Various architectures emerged after the success of AlexNet [9], including VGG [10], Residual network [4]. Convolution based architectures have been exploring to increase the representational power of the network [7]. Researchers have been trying to enhance the representations of deep learning architectures in various ways as CNN is effective for vision based tasks. There are multiple representations with which layers of neural networks learn for vision tasks. Convolutional networks learn with spatial correlations within local receptive field. Spatial correlations are important for learning new features for performing models. CNN learns the spatial representations which is key for identifying patterns in images. Fully connected (FC) layers learns global representations as they have connections with all the nodes in that layer. Fully connected layers are often used in architectures at the nearing end as it helps in classification of results using Softmax layer. Squeeze and Excitation network (SENet), one of the latest developments in this field, have explicitly modeling interdependencies of channels [7]. This enhances the representational power of the network by feature recalibration. SENet [7] made changes internally within channels to have a impact on performance of the model. This module selectively sends important information to the next layer from globally learned information. Squeeze and excitation network proposed to have channel-wise representations in the model. Christian et al. [13] proposed Inception module which introduces a new architectural design of having an optimal sparse CNN within the block. The multibranch convolutions consists of different filter sizes which are concatenated at the end of the block. This new strategy is ignites the new architectural topology and thereby proposing a method of achieving better performance by having less theoretical complexity. 
The inception module usually contains convolutional layers to make it learn spatial representation. Similar approaches have been proposed like research which have multiple branches of same topology within module. These aggregated modules is also known to be measured with new dimension name called as cardinality. Aggregated networks make network learn more spatial correlations without increasing depth and utilizing the computation effectively. Further researchers have adopted this inception based approach and built network architectures [1, 2, 6, 16, 1]. Fully connected layers learns global representations from all the previous layers and use it for classification. These learned have been used towards the end in usual classification architectures. Squeeze and excitation networks [7] proposes channelwise representation learning with the help of FC layers. SENet makes channelwise representatiosn stronger by squeeze and excitation operations. Channel wise features are recalibrated with the squeezed input. The excitation part of proposed module consists of fully connected layer which learns global representation. This module make sure that only the most important features are passed to the next layer. This specific module design when repeatedly called in networks like ResNet inside the residual module can have a greater impact as it acts as a filter to the network. The comparison between the existing aggregated residual module, the Squeeze and excitation module and the proposed squeeze aggregated excitation module is shown in Figure 1. From the figure, the Squeeze excitation module passes important features in both SE and SaE module. In the proposed SaE module, we utilize this stage of module in a better way by increasing cardinality between layers. We have increased cardinality of the first FC layer after the squeeze module. From the inspiration of inception module we use aggregated FC layer of same size similar to resnext. Since this has increased impact than stacked layers as discussed [14]. This not only makes important features to be learned by global representations in module but also have better performance with increased cardinality. The proposed module is shown in Figure 2. The proposed module have better theoretical complexity than existing SEmodule. We use a reduction size of 32 and cardinality of 4. We keep the cardinality values small as the important features are being learned and not to increase the complexity. The results from the aggregated FC layers are concatenated in the excitation phase and regained its output shape to pass it to upcoming layers. The entire operation of Squeeze aggregated excitation module containing the aggregated FC layers is shown in Figure 2 In this paper, we propose a combination of architectural designs by linking spatial, channel wise and global representations. We brought a mix of SEnet with aggregated resnet to propose this module, SaEnet, Squeeze aggregated excitation module. The main contributions of this paper is summarized as follows: * We recognize the impact of the aggregated modules and representational power from Inception architectures and squeeze and excitation module respectively. Aggregated module reduce the theoretical complexity Figure 1: The comparison between the resnext, serenset and proposed saeresnet modules are shown. a) The aggregated module of multi-branch convolutional layers having less theoretical complexity. 
b) The squeeze and excitation module consists of reduction of input size to 1 along their channels and then regaining its original shape. Input to squeeze operation is sent to regain the original shape unlike aggregated network. c)The proposed SaEnet module containing a mix of a) and b). The squeeze performs reduction of input and the reduced input is passed to aggregated layers for learning more representations than the SEnet. Results are concatenated and follows the SEnet for regaining their shape. of the network. * As a fully connected layer learns from the global representation of the network, we propose an idea of having fully connected layers as aggregated module which is to be proven effective in [14]. * We propose a method of having an multi-branch fully connected layers for which the squeezed layer passes important features before excitation to normal shape. This aggregated module not only has less complexity but enhance the performance than traditional aggregated modules as only important features are being fed as input and other features are omitted. ## 2 Related Work The prior work related to the proposed model is seen in this section. From the introduction of ConvNets by YannLe Cunn, the convolutional neural network (CNN) have success in vision applications. Later multiple networks have been proposed once the researchers have access to better computational machines. This helped the researchers to contribute significant work in field network engineering. Architecture uses CNN for better performance have been seen from AlexNet [9], ZFNet [17], VGG [10], Resnet [4]. The most relevant methods to the proposed architecture are multi branch convolutions and grouped convolutions. During 2015, Inception created a wave in architectural design which achieves competitive performance with lesser computational complexity. It uses multi branch convolutions where the convolutions in branches are customized. It utilizes the image module to the maximum extent by adding multiple convolutional filters within a multi-branched structure. This won the ILSRVC in 2014 and reduced the parameters of previous best AlexNet from 60 million to 4 million. This multibranch convolutions later have been used in aggregated ResNet modules. Some of the notable architectures which emerged from the inception having traits directly or indirectly are ResNeXt, Xception, Mobilenet [2, 6]. The inception module shows the benefits of having deeper networks. Xception architectures have splitted the convolution operations within the inception module which makes the convolution operation much faster. From xception, Mobilenet also uses depthwise separable convolutions for all layers which have lesser computations and model size is comparable smaller in nature. Grouped convolutions distribute models over multiple GPUs. Alexnet uses the grouped convolutions to distribute the model over two GPUs. There's little evidence that this type of convolutions helped in increasing accuracy to the best of our knowledge.Channel wise convolution is a variant from grouped convolution with the total number of groups are equal to the number of channels. Residual networks have established the new wave by proposing a solution of vanishing gradient. When going deeper architecture, model performance gets degraded as the learned representations are not being transformed to deep networks. This can be overcome by the residual module of resnet by giving shortcut connections which pass previous learned representations repeatedly at regular intervals. 
Batch normalization helps stabilize learning by regulating the inputs to the layers [8]; it operates on batch-wise inputs. Residual networks have many variants, can be extended in depth even to 1000 layers, and serve as a testing base for researchers [5]. Upgrades to residual networks have been proposed for a long time; the main variants are discussed one by one. Big-Little ResNet [1] alters the residual network by creating two branches: one branch, called 'big', keeps the original residual structure intact, while the other, called 'little', focuses on convolution layers with smaller feature maps, allowing the model to learn other patterns as well. ResNeXt [14] proposed an aggregated residual module by introducing the new dimension of cardinality. ResNeXt has multi-branch convolution networks similar to the inception module but with the same topology in every branch. The multi-branch outputs are combined by concatenating the resulting output from all the residual branches and passing it to the next layer. Figure 2: The figure depicts the inner working of the proposed SaE module. The input from the convolution is squeezed, then passed to aggregated FC layers, which are followed by the excitation process. The split input is combined at the end to regain the original shape for subsequent layers. ResNeXt reduces theoretical complexity and also performs better than a traditional stacked architecture, as claimed by Xie et al. [14]. Wide residual networks (WRN) [16] are another form of residual network with expanded width of the convolution layers. This lets the module learn more than a traditional ResNet while still passing the learned information via shortcut connections, which collectively enhances the performance of the module. ResNeSt uses split-attention networks [18]: the module creates a multipath network combined with channel-wise representations, by splitting the layer with a 1x1 convolution followed by a 3x3 convolution and then combining everything with a split-attention block. SENet [7] proposed a ResNet-based squeeze-and-excitation module that increases accuracy through channel-wise representations in the module. Highway networks have a gated mechanism for regulating shortcut connections [12]. Using different filters within an architecture is also related, as it increases the representational power of the network by learning smaller patterns. Existing approaches such as Inception [13] and the Pyramid network [3] mix multiple topologies to provide better performance. In Xception [2] and MobileNets [6], the depthwise separable convolution implements a depthwise and a pointwise convolution. In the depthwise convolution a single filter is applied to each input channel; then the pointwise convolution applies a 1x1 kernel at each position. Channel-wise convolution is part of separable convolution, in which the channel-wise convolutions are completed before the pointwise convolutions. These architectures have fewer parameters compared with similarly performing or non-aggregated networks. This aspect is related to our squeeze operation, which deliberately reduces the structure of the network. Architectures such as structured transform networks [11], deep fried convnets [15], and ShuffleNet [19] have considered computational aspects as well, focusing on reduced computation and small networks, with some targeting mobile applications.
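To make the depthwise/pointwise factorization described above concrete, the following is a minimal PyTorch sketch of a depthwise separable convolution of the kind used by Xception and MobileNet; the layer sizes are illustrative assumptions, not values taken from those papers.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution (one filter per input channel) followed by a
    1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # groups=in_ch applies a single spatial filter to each channel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch, bias=False)
        # The 1x1 convolution combines the per-channel outputs.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: 64 -> 128 channels on a 56x56 feature map.
y = DepthwiseSeparableConv(64, 128)(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 128, 56, 56])
```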
There are also other approaches to creating smaller networks through compression methods such as quantization of pre-trained networks. The attention mechanism in CNNs focuses on the most important parts of the image and neglects the irrelevant parts, which helps in understanding complex scenes in an image effectively. Attention is typically implemented with a softmax or sigmoid function that acts as a gating mechanism. The attention mechanism is used in the squeeze-and-excitation network within the SE block, which models channel-wise relationships. A similar approach is taken in the proposed SaE network, with a lightweight gating mechanism that focuses on channel-wise relationships in the network. Another related architecture for our proposed approach is the ensemble method. Ensembles run multiple replicas of the same model in parallel on the same problem, and the result is chosen by combining all of their outputs; ensemble models perform better than single models because they predict better. ## 3 Representation comparison of SEnet and SaEnet Researchers have claimed that aggregated modules containing more than one convolution operation, obtained by branching the input, are more effective than deeper networks or wider layers [14, 16, 4]. These increased-cardinality operations, whether the filter sizes are the same or different, have an impact on accuracy. This happens because the model learns better spatial representations through the convolutional layers in the aggregated modules. The proposed module has enough layers to learn when the important information is given as input. The idea of learning a global representation in between spatial learning was proposed by squeeze-and-excitation networks: the squeeze operation makes the information learned from the global receptive field accessible to the following layers, which enhances performance. Learning this global information with increased cardinality makes the model learn better than existing approaches. The images in Figure 3 depict the learned representations of the first convolutional layer for both SEnet and the proposed SaEnet. The representations were taken after training the model from scratch for 50 epochs with an initial learning rate of 0.01, decayed by a factor of 0.1 every 15 epochs. Due to the limited computational power available to us, we were able to run it only for 50 epochs. The images show that the proposed SaEnet learns more representations than SEnet, as the consolidated information is passed on to many more layers than in SEnet. Figure 3: Learned activation kernels for a particular input dataset; we compare the learned values of the first convolution of the SE ResNet and the proposed SaE ResNet models. ## 4 Methodology As discussed in the previous sections, the proposed network learns spatial representations from convolutional layers, channel-wise representations from the squeeze-and-excitation layer, and better channel-wise global representations from the aggregated layers. The proposed module is closely related to ResNeXt, which contains the aggregated module within the residual module. In ResNeXt, the branched convolution contains the same layers with groups of size 32. To explain the module better, we apply the proposed approach to the residual network [4]. The residual module has become a universal base model for evaluation, and it brings depth into consideration.
The residual module contains shortcut connections that skip one or more layers; the layer outputs are combined with the shortcut connection. In the basic version of the residual module, for an input x, with the functions that alter the input (including batch normalization and dropout) denoted by 'F()', the residual module is given by, \[Resnet=x+F(x) \tag{1}\] For the aggregated module, the branched convolutions are summed before being combined with the input, and the resulting formula is, \[ResneXt=x+\sum F(x) \tag{2}\] The core part evolved from the squeeze-and-excitation network. The squeeze-and-excitation module is a combination of two operations, squeeze and excitation. In the squeeze operation the input is compressed with fully connected (FC) layers: the output of the convolutional layer is fed into a global average pooling layer to generate the channel-wise input, and this is then fed to an FC layer with the reduction size. The excitation part of the module consists of an FC layer without reduction that brings the representation back to its original form, followed by a scaling operation in which the output is multiplied channel-wise with the feature map. The final output is thus rescaled to its original shape. The squeeze-and-excitation operation for the residual module is formulated as, \[SEnet=x+F(x\cdot Ex(Sq(x))) \tag{3}\] Here the 'Sq' function denotes the squeeze operation involving an FC layer with reduction size 'r'. The 'Ex' operation is the excitation, which follows 'Sq' and reshapes the channel-wise modified inputs back to the original shape without reduction. This is followed by a scaling operation with the input to restore its original form, and the result is combined with the input exactly as in the residual module. Using larger groups in the squeezed form would increase the size and deviate from the core idea of squeeze-excitation. We therefore use a cardinality of 4, which is effective because only the core important features engage with the excitation layer; this cardinality is enough to learn global representations well. In the squeeze operation, the FC layer with reduced size acts on the output of global average pooling. This conversion lets the important features pass through the module and boosts the representational power of the network. We propose increasing the cardinality of that FC layer: branching the FC layer during reduction makes the model learn more global representations. The aggregated layers inside the squeeze operation are concatenated and passed to the FC layer, as shown in Figure 4. The output of this FC layer is multiplied with the input of the module to regain the dimension, and the final output is obtained by a scaling operation similar to SENet. This operation inside a residual module is written as, \[SaEnet(Proposed)=x+F(x\cdot Ex(\sum Sq(x))) \tag{4}\] ## 5 Squeeze aggregated excitation resnet As the proposed module builds on the existing SENet, it has the same characteristics as the SE module and can be integrated into other architectures: the output layer of a standard architecture can be fed directly to the proposed module. We implement the proposed module on the residual network because it is the testing base for most such models [1, 7, 14] and allows easy evaluation against existing architectures. Figure 4: The figure represents the proposed aggregated fully connected layers within SaE net, combined by concatenation at the end. The compressed conv layers after the squeeze are fed as input to this module. The layers use a reduction size of 32 with cardinality 4.
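For reference, the squeeze-and-excitation recalibration that Eq. (3) abbreviates as \(Ex(Sq(x))\) can be sketched in PyTorch as follows. This is a minimal sketch of the standard SE block with the reduction ratio as a parameter, not the authors' exact code; the activation choices are assumptions based on the usual SENet design.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation: global average pool (squeeze),
    FC with reduction r, FC back to C channels (excitation), then scale."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        self.squeeze = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True))
        self.excite = nn.Sequential(
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))            # squeeze: (b, c) channel statistics
        w = self.excite(self.squeeze(s))  # per-channel weights in [0, 1]
        return x * w.view(b, c, 1, 1)     # recalibrate the feature map
```

Inside the residual module, the recalibrated features produced by such a block are what Eq. (3) feeds into F before the shortcut addition.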
The proposed SaE module is explained in detail through its implementation in the ResNet architecture. The residual module contains a shortcut connection that is appended after certain layers to pass on the learned information of the network; this helps in building deeper networks by avoiding the vanishing gradient problem. The SaE module is incorporated into the residual module without changing the architecture. We choose the residual module because it passes information forward constantly, so the learning of the whole network improves if the residual module learns better; other researchers implement their proposed modules on the residual module for the same reason [1, 7, 14]. The squeeze operation uses summation over all the branched FC layers, followed by the excitation operation. Figure 5 compares the SE module with the proposed module inside a ResNet module, and the detailed behaviour of squeeze and excitation is explained in the figure. The squeeze operation follows the global average pooling layer, which provides channel-wise statistics; the input is then shrunk by the squeeze operation and followed by the excitation layer. The residual network consists of convolutional layers grouped into modules that repeat periodically, passing the learned gradients forward and preventing them from vanishing as the network gets deeper. The residual network is structured on the base of the VGG network [4, 10]. ## 6 Implementation We implement the proposed method and compare it with state-of-the-art architectures. We use two datasets for comparison, CIFAR-100 and ImageNet, and on both we use inputs of size 224x224. We use an H100 GPU for testing the proposed method. The proposed module can be incorporated into any existing module and has characteristics similar to the SE module. We implement the proposed module on ResNet and compare it with other architectures. Since the residual module was designed for deeper models, we carry that characteristic over to the proposed variants as well. The comparison between the residual network and the proposed squeeze aggregated excitation module on ResNet and ResNeXt is tabulated in Table 1. The table shows the proposed squeeze aggregated excitation module on the residual network and on the aggregated residual network. The plain residual network is shown in the column right after the output size, for comparison with the proposed SaE ResNet. Both variants contain the same SaE characteristics along with their own model characteristics. All the convolutional layers are followed by batch normalization. Compared to other normalization techniques, batch normalization performs better by addressing internal covariate shift. The plain SaE ResNet starts with a convolutional layer followed by batch normalization. We use the ReLU activation in all the convolutional layers except the last layer, which uses softmax since it is used for classification. We explain the proposed model in comparison with the vanilla residual network. Vanilla ResNet was proposed to address the vanishing gradient problem; later models built on top of ResNet either make their own architecture-level alterations [1] or alter the module itself [7, 14, 16]. Alterations made inside the module are always passed on to the next layers through the shortcuts.
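As a rough illustration of where the module sits, the sketch below shows a ResNet-style bottleneck block that applies a channel-recalibration module (an SE or SaE block) to the residual branch before the shortcut addition. The exact placement and layer sizes here are assumptions for illustration, not the authors' reference implementation.

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 -> 3x3 -> 1x1 bottleneck with an optional recalibration module
    (e.g. an SE or SaE block) applied to the branch output."""
    def __init__(self, in_ch, mid_ch, out_ch, recalibrate=None):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch))
        self.recalibrate = recalibrate            # SE/SaE block or None
        self.project = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                        if in_ch != out_ch else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.branch(x)
        if self.recalibrate is not None:
            y = self.recalibrate(y)               # channel-wise reweighting
        return self.relu(self.project(x) + y)     # shortcut addition
```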
The proposed squeeze aggregated excitation (SaE) module is an upgraded version of the squeeze-and-excitation (SE) module with an extra cardinality parameter. The SE module sits inside the ResNet module and squeezes and then restores the representation using FC layers; our approach alters the core part of the SE module. Within the residual module, the squeeze step uses global average pooling, which we implement directly by taking the mean of each tensor rather than through a named global average pooling layer in our PyTorch implementation. The squeeze operation is followed by the aggregation, which increases the cardinality. As in the SE module, we also use a reduction size, which allows the squeezed form of the information to be forwarded to the next layer. We use a reduction of 32 for every model, reducing from the current input layer to the SaE module. Figure 5: The figure shows the overall comparison between the squeeze-and-excitation ResNet module and the proposed squeeze aggregated excitation ResNet module. In the figure, the small rectangles represent fully connected (FC) layers and the convolutional layers are labelled 'Conv'. The squeeze operation followed by excite or aggregated excite is always combined with the input so as to regain its original shape. The squeezed output passes through the FC layer of reduced size. This layer is the key layer after the squeeze, as it is the receiving end of the important information passed on after the squeeze and has a direct impact on the representational power of the network. We innovate by adding cardinality to this layer. We first tried increasing the number of dense layers of the network, but this increases the complexity and does not have much effect compared to a multi-branch module [14]. We therefore use multi-branch FC layers to increase the representational power of the network. We use a branch value of 4, i.e., four fully connected layers that learn from the squeezed input. The value is chosen so that it has an impact on the learned representations while remaining simple enough to use even in ResNeXt; we also tested the SaE module on the aggregated ResNet. The branches are combined by concatenation, as shown in Figure 4; the concatenation passes all the branch outputs to the next layer, and the following FC layer brings the output back to the original shape used by the convolutional layers. The proposed SaE module is more effective than the SE module because it has an aggregated module that enhances the representations, while its theoretical complexity is similar to that of the existing SE module.
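Putting the pieces together, a minimal PyTorch sketch of the SaE module as described above (mean-based squeeze, four parallel reduced FC branches, concatenation, then an excitation FC back to the channel dimension with sigmoid scaling) might look as follows. This is our reading of the description rather than the authors' released code; the choice of ReLU in the branches and sigmoid gating are assumptions carried over from SENet.

```python
import torch
import torch.nn as nn

class SaEBlock(nn.Module):
    """Squeeze aggregated excitation: mean-pool squeeze, `cardinality`
    parallel FC branches of reduced width, concatenation, then an
    excitation FC back to the channel dimension and sigmoid scaling."""
    def __init__(self, channels, reduction=32, cardinality=4):
        super().__init__()
        reduced = max(channels // reduction, 1)
        # Aggregated squeeze: several small FC layers of the same topology.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(channels, reduced), nn.ReLU(inplace=True))
            for _ in range(cardinality))
        # Excitation: back to the original channel dimension.
        self.excite = nn.Sequential(
            nn.Linear(reduced * cardinality, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))                          # squeeze to (b, c)
        z = torch.cat([branch(s) for branch in self.branches], dim=1)
        w = self.excite(z)                              # channel weights
        return x * w.view(b, c, 1, 1)                   # rescale the input

# Example: recalibrate a 256-channel feature map (reduced width 256/32 = 8).
out = SaEBlock(256)(torch.randn(2, 256, 14, 14))
print(out.shape)  # torch.Size([2, 256, 14, 14])
```

A block like this can then be plugged into a residual block in the same way as an SE block, for example as the `recalibrate` argument of the bottleneck sketch shown earlier.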
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multicolumn{2}{c|}{Resnet-50} & \multicolumn{2}{c|}{SaE resnet-50} & \multicolumn{2}{c|}{SaE resneXt-50} \\ \hline \multicolumn{2}{c|}{} & \multicolumn{8}{c|}{conv, 7x7, 64, stride 2} \\ \hline \multicolumn{2}{c|}{} & \multicolumn{8}{c|}{max pool, 3x3, stride 2} \\ \hline \multirow{3}{*}{\(\begin{bmatrix}conv,1x1,64\\ conv,3x3,64\\ conv,1x1,256\end{bmatrix}\times 3\)} & \multirow{3}{*}{\(\begin{bmatrix}conv,1x1,64\\ conv,3x3,64\\ conv,1x1,256\end{bmatrix}\times 3\)} & \multirow{3}{*}{\(\begin{bmatrix}conv,1x1,128\\ conv,3x3,128\ C=32\\ conv,1x1,256\end{bmatrix}\times 3\)} & \multirow{3}{*}{\(\times 3\)} \\ & & & & & & \(conv,1x1,256\) & \multirow{3}{*}{\(\times 3\)} \\ & & & & & & \(fc,[8,256]\times 4\) & \multirow{3}{*}{\(\times 4\)} \\ \hline \multirow{3}{*}{\(\begin{bmatrix}conv,1x1,128\\ conv,3x3,128\\ conv,1x1,512\end{bmatrix}\times 4\)} & \multirow{3}{*}{\(\begin{bmatrix}conv,1x1,128\\ conv,3x3,128\\ conv,1x1,512\end{bmatrix}\times 4\)} & \multirow{3}{*}{\(\times 4\)} & \multirow{3}{*}{\(\begin{bmatrix}conv,1x1,128\\ conv,3x3,128\\ conv,1x1,512\end{bmatrix}\times 4\)} & \multirow{3}{*}{\(\times 4\)} \\ & & & & & & \(conv,1x1,512\) & \multirow{3}{*}{\(\times 4\)} \\ & & & & & & \(conv,1x1,512\) & \multirow{3}{*}{\(\times 6\)} \\ & & & & & & \(fc,[16,512]\times 4\) & \multirow{3}{*}{\(\times 6\)} \\ \hline \multirow{3}{*}{\(\begin{bmatrix}conv,1x1,256\\ conv,3x3,256\\ conv,1x1,1024\end{bmatrix}\times 6\)} & \multirow{3}{*}{\(\begin{bmatrix}conv,1x1,256\\ conv,3x3,256\\ conv,1x1,1024\end{bmatrix}\times 6\)} & \multirow{3}{*}{\(\times 6\)} & \multirow{3}{*}{\(\begin{bmatrix}conv,1x1,512\\ conv,3x3,512\ C=32\\ conv,1x1,1024\end{bmatrix}\times 4\)} & \multirow{3}{*}{\(\times 3\)} \\ & & & & & & \(conv,1x1,2048\) & \multirow{3}{*}{\(\times 3\)} \\ & & & & & & \(fc,[64,2048]\times 4\) & \multirow{3}{*}{\(\times 3\)} \\ \hline \hline \end{tabular} \end{table} Table 1: The shapes and operations along with groups (C) and aggregated FC layers depicted with the cardinality values. (**left**): Resnet-50, (**Middle**): SaE resnet-50 and (**Right**): SaE resneXt-50. The fc indicates the output dimension of the two fully connected layers in SaE network. ## 7 Experiments We experiment the proposed method on CIFAR-100, and modified Imagenet. Imagenet training images are modified and instructions are discussed in the upcoming section. For imagenet, we transform input data to batches of size 256 before training. We use Stochastic gradient descent (SGD) optimizer with the momentum of 0.9 and with the weight decay of 1e-4. This makes the network to learn slowly but effectively. We use the cross entropy loss function for all the datasets. Due to the computational constraint, we train all the models to be tested for 50 epochs on H100. Both the datasets follows same set of procedures and implemented in Pytorch. The initial learning rate for the model is 0.01 and it is decayed by the rate of 0.1 after each 15 steps. For resnext, the cardinality used is 32. ### Cifar-100 We experiment on CIFAR 100 dataset. CIFAR-100 dataset contains 100 classifications with each class containing 6000 images. Among those images, we have 50000 for training and 10000 for testing purposes. We test on this dataset and the results are tabulated. From the table 2, the squeeze and excitation and the proposed squeeze aggregated excitation is very close than vanilla resnet. In certain cases, like in SaE resnet, proposed network performed better on Top-1. 
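For reference, the optimizer and schedule described in the experimental setup above correspond roughly to the following PyTorch configuration; the model here is a trivial stand-in, and only the hyperparameters (SGD with momentum 0.9, weight decay 1e-4, cross-entropy loss, learning rate 0.01 decayed by 0.1 every 15 epochs, 50 epochs) come from the text.

```python
import torch
import torch.nn as nn

# Stand-in model and data; in the experiments this would be a SaE-ResNet
# trained on CIFAR-100 / ImageNet batches of size 256.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 100))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
# Initial learning rate 0.01, decayed by 0.1 every 15 epochs, 50 epochs total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)

for epoch in range(50):
    images = torch.randn(8, 3, 224, 224)          # placeholder batch
    labels = torch.randint(0, 100, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()
```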
And in Big little based aggregated resnet with SE module, both top-1 and top-5 is performed comparably better than other variants. We could see the basic version of SaE module outperforms the resnet and SE resnet. ### Imagenet results We have experimented on custom modified imagenet dataset. Due to the computational constrains we are unable to train the 1000 images per class. In Caltech 256 it consists a total of 30607 images wherein each class contains minimum of 80 images. The average images per class is 119. Thus, we consider a total of 250 images for imagenet which is 5 times more than the validation set of 50 per class. We used the same validation set for evaluating the trained model. This modified dataset is used for testing imagenet with the above used data transformations. The results for the experiments on models are tabulated 3. From the obtained results, the proposed module have top-5 accuracy when compared to the vanilla resnet and SE resnet. Wherein, the SaE resnet achieves top-1 and top-5 on all the remaining experiment. When compared with big little resnet, resnext and in big little resnext, the proposed SaE module beats the plain network and squeeze excitation added networks. ## 8 Conclusion In this paper, we propose the SaE module, a upgraded version of SE module, to improve the representational power with the increased cardinality. The proposed model attached to residual network is subjected to extensive experiments which achieves better performance than the existing models. In addition to the architecture, we also introduce the aggregated fully connected layers for its global representation learning capabilities. We hope the combination of spatial, channel-wise and global representations in between network architecture proves useful for having better representations. Finally, the proposed network may be helpful in learning important features better in vision based related fields. \begin{table} \begin{tabular}{c|c|c} \hline Models & Top-1 & Top-5 \\ \hline Resnet & 1.1000 & 4.9600 \\ SE Resnet & 6.0200 & **10.2800** \\ SaE Resnet & **6.0700** & 8.6500 \\ \hline BLresnet & **30.3600** & **59.8100** \\ BLresnet + SE & 29.9300 & 59.6300 \\ BLresnet + SaE & 29.5500 & 58.2400 \\ \hline Aggregated resnet & 5.5500 & 9.0900 \\ Aggregated resnet + SE & 5.7000 & 8.4600 \\ Aggregated resnet + SaE & **5.7100** & **8.5300** \\ \hline BL Aggregated resnet & 29.1200 & 58.4700 \\ BL Aggregated resnet + SE module & **29.4600** & **58.8100** \\ BL Aggregated resnet + SaE module & 28.3900 & 56.6600 \\ \hline \end{tabular} \end{table} Table 2: Results obtained from experimenting on various models using CIFAR-100 dataset. \begin{table} \begin{tabular}{c|c|c} \hline Models & Top-1 & Top-5 \\ \hline Resnet & 0.4000 & 0.6000 \\ Resnet + SE & **0.5480** & 0.7740 \\ Resnet + SaE & 0.4780 & **0.8280** \\ \hline BL resnet & 21.8740 & 43.4680 \\ BL resnet + SE & 22.1860 & 43.9180 \\ BL resnet + SaE & **22.2820** & **43.9680** \\ \hline ResneXt & 0.1100 & 0.5380 \\ ResneXt + SE & 0.3100 & 0.5860 \\ ResneXt + SaE & **0.3292** & **0.7228** \\ \hline BL resneXt & 22.8000 & 44.7600 \\ BL resneXt + SE & 24.6640 & 47.7960 \\ BL resneXt + SaE & **24.9200** & **47.9220** \\ \hline \end{tabular} \end{table} Table 3: Results for the various models tested on modified Imagenet dataset. Train dataset consists of 250 images in the imagenet and validation data remains unaltered.
2310.09151
BibRank: Automatic Keyphrase Extraction Platform Using~Metadata
Automatic Keyphrase Extraction involves identifying essential phrases in a document. These keyphrases are crucial in various tasks such as document classification, clustering, recommendation, indexing, searching, summarization, and text simplification. This paper introduces a platform that integrates keyphrase datasets and facilitates the evaluation of keyphrase extraction algorithms. The platform includes BibRank, an automatic keyphrase extraction algorithm that leverages a rich dataset obtained by parsing bibliographic data in BibTeX format. BibRank combines innovative weighting techniques with positional, statistical, and word co-occurrence information to extract keyphrases from documents. The platform proves valuable for researchers and developers seeking to enhance their keyphrase extraction algorithms and advance the field of natural language processing.
Abdelrhman Eldallal, Eduard Barbu
2023-10-13T14:44:34Z
http://arxiv.org/abs/2310.09151v1
# BibRank: Automatic Keyphrase Extraction Platform Using Metadata ###### Abstract Automatic Keyphrase Extraction involves identifying essential phrases in a document. These keyphrases are crucial in various tasks such as document classification, clustering, recommendation, indexing, searching, summarization, and text simplification. This paper introduces a platform that integrates keyphrase datasets and facilitates the evaluation of keyphrase extraction algorithms. The platform includes BibRank, an automatic keyphrase extraction algorithm that leverages a rich dataset obtained by parsing bibliographic data in BibTeX format. BibRank combines innovative weighting techniques with positional, statistical, and word co-occurrence information to extract keyphrases from documents. The platform proves valuable for researchers and developers seeking to enhance their keyphrase extraction algorithms and advance the field of natural language processing. keyphrase extraction graph algorithms software platform BibTeX datasets context ## 1 Introduction The internet hosts an extensive collection of scientific documents, numbering in the tens of millions. Google Scholar, a web-based search engine dedicated to academic research, strives to provide comprehensive access to scholarly literature across various disciplines. A study Gusenbauer (2019) reported that by the end of 2018, Google Scholar had indexed approximately 400 million articles. Keyphrases considered concise summaries of documents, aid information retrieval, indexing, and collection browsing. Automatic keyphrase extraction is the process of automatically identifying essential phrases within a document. Keyphrases find application in document clustering, classification, summarization, recommendation systems, and question answering. Automatic keyphrase extraction methods have been developed in domains such as social media, medicine, law, and agriculture, where they support specialized systems for organizing and retrieving information Merrouni et al. (2016)Merrouni et al. (2019). Automatic keyphrase extraction methods can be categorized into unsupervised, supervised, and semi-supervised. Unsupervised techniques, which are domain-dependent, do not require labeled training data. On the other hand, supervised methods rely on manually annotated data, while semi-supervised ones strike a balance by requiring less annotated data compared to supervised methods. This paper introduces a downloadable platform that integrates keyphrase datasets in BibTeX format and facilitates the evaluation of keyphrase extraction algorithms. The platform currently encompasses 19 algorithms for automatic keyphrase extraction and methods for evaluating their performance against a diverse gold standard dataset. Among the 19 algorithms is a keyphrase extraction method called BibRank. BibRank exploits an information-rich dataset created by parsing bibliographic data in BibTeX format. It combines a new weighting technique applied to the bibliographic data with positional, statistical, and word co-occurrence information. The main contributions of this paper are as follows: 1. BibRank dataset: Construction of an information-rich dataset by parsing publicly available bibliographic data, which includes manually assigned keywords. 2. BibRank algorithm: Introduction of the BibRank algorithm, a novel method for keyphrase extraction that utilizes the bibliographic information within the BibRank dataset and statistical information. 3. 
BibRank platform: Provision of a downloadable platform that integrates the BibRank dataset, BibRank algorithm, and other state-of-the-art keyphrase extraction algorithms. The platform includes evaluation metrics and allows for the integration of keyphrase extraction algorithms and datasets. 4. Manual evaluation of keyphrases: Keyphrase extraction algorithms are evaluated using gold standard datasets as a benchmark. In our evaluation process, we rely on expert human evaluators to assess the quality and effectiveness of these gold-standard algorithms. The remaining sections of the paper closely align with the contributions presented earlier. The next section briefly overviews notable keyphrase extraction algorithms and datasets. Section 3 introduces the heterogeneous BibRank dataset and presents the BibRank algorithm. Section 4 concentrates on the automatic evaluation of the BibRank algorithm and other state-of-the-art algorithms. Moreover, this section includes assessing the gold standard algorithms' quality, guided by expert human evaluators. The paper concludes by summarizing our findings. ## 2 Related work This section provides an overview of the essential stages in the automatic keyword extraction algorithms pipeline, highlighting the algorithms that influenced BibRank. The keyword extraction pipeline comprises linguistic preprocessing, candidate phrase selection, keyphrase feature selection, and keyphrase ranking and selection. The text is segmented into sentences and tokenized into words during linguistic preprocessing. Several language processing techniques are applied, including lemmatization, stemming, POS tagging, stop word removal, and Named Entity Recognition (NER) Merrouni et al. (2016). Sometimes, POS tagging is followed by syntactic parsing, and NER is particularly valuable in languages with reliable NER systems. Candidate phrases are selected from the processed text using n-gram sequencing and noun-phrase chunking (NP chunking) Mihalcea and Tarau (2004). Rules-based on acceptable sequences of POS tags, such as selecting sequences starting with adjectives and ending with a noun in English, are employed Hasan and Ng (2014) to reduce the number of candidate phrases. The subsequent step in the pipeline is feature selection for candidate phrases. Two types of features are calculated: in-document features and external features Merrouni et al. (2016). In-document features can be statistical Danesh et al. (2015), positional Merrouni et al. (2019) linguistic Papagiannopoulou and Tsoumakas (2020) or context-based Caragea et al. (2014). Statistical features like TF-IDF score are commonly used, while positional features indicate the candidate phrase's location in the title, abstract, or main text. Context features, such as sentence embeddings computed by deep neural networks, are also utilized. External features require resources like Wikipedia Li et al. (2010) to quantify the association strengths between keyphrases. An example of a supervised keyphrase extraction algorithm that utilizes external features is CeKE Caragea et al. (2014). CeKE employs citation-based features created from the references used in a publication. The assignment of weights to each candidate phrase is based on the calculated features in the keyphrase ranking and selection step. Subsequently, the candidate phrases are sorted, and the most relevant ones are selected using an experimental threshold. 
In the context of unsupervised methods, graph-based ranking algorithms like TextRank Mihalcea and Tarau (2004) deserve to be mentioned. These algorithms draw inspiration from the Google PageRank algorithm Page et al. (1999) and have demonstrated success in text summarization and keyword extraction. The text document is represented as a graph, where candidate phrases are nodes, and their relationships are edges. These relationships can be co-occurrence relations Beliga et al. (2015), syntactic dependencies Mihalcea and Tarau (2004), or semantic relations Li et al. (2010). In the keyphrase ranking step, an adapted PageRank algorithm is employed, which iterates until convergence on the graph representation of the text, ultimately selecting the top-ranked candidate phrases. Another algorithm in this family is PositionRank Florescu and Caragea (2017). Building upon the principles of TextRank, PositionRank introduces a bias towards frequently occurring candidate phrases that appear early in the document. It operates at the word level, transforming the text into a graph, applying a position-based PageRank algorithm, and extracting candidate phrases. Other initiatives that share a connection with our work encompass the creation and visualization of bibliometric networks. VosViewer stands out as a notable tool in these endeavors van Eck et al. (2010). While VosViewer is not specifically a tool for keyphrase extraction, it is relevant software for creating and visualizing bibliometric networks. These networks can encompass journals, researchers, or single publications, helping to analyze and visualize trends and patterns in scientific literature. VosViewer provides multiple avenues to build, visualize, and investigate bibliometric networks, simplifying the process for users to gain insights from bibliometric data. ## 3 BibRank ### BibRank Dataset Keyphrase datasets serve as the standard for evaluating automatic keyphrase extraction methods, encompassing texts and lists of associated keyphrases. These gold standards are widely available across scientific publications, news articles, and web posts Papagiannopoulou and Tsoumakas (2020). We utilize BibTeX entries from the web to construct a new and information-rich keyphrase extraction dataset. Unlike existing datasets that often include only the abstract, full article text, title, and keywords of a document, our dataset incorporates additional metadata such as the publication year, journal title, and author name. An example of a BibTeX record for a publication is illustrated in Figure 1, where the entry type (e.g., "Article") is indicated after the "@," followed by various attributes (e.g., author, title, journal, and paper keywords) and their respective values. Publicly available BibTeX records can be found in online archives like the TUG bibliography archive. TUG's archive contains a vast collection of over 1.6 million categorized BibTeX records from various journals. The archive supports search capabilities using SQL commands Beebe (2009). To create the BibRank dataset, we processed more than 30,000 BibTeX records extracted from the TUG bibliography archive. Currently, the dataset consists of 18,193 unique records with 22 attributes. These attributes represent the distinct values in all the bib records, including publication year, journal of publication, and bib archive. The dataset includes publications from 1974 to 2019. Table 1 provides statistics on authors, journals, topics, and bib files covered by the dataset.
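As a rough illustration of how such records can be turned into dataset rows, the sketch below parses a BibTeX entry and keeps the fields the dataset relies on (title, abstract, journal, year, and the manually assigned keywords). It uses the `bibtexparser` package as an assumed dependency, and the record and keyword-splitting rule are illustrative; the exact tooling used to build the BibRank dataset may differ.

```python
import bibtexparser

record = """@Article{example2012,
  author   = {Doe, John},
  title    = {A Sample Paper on Parsing},
  journal  = {Journal of Examples},
  year     = {2012},
  abstract = {We study parsing of bibliographic records.},
  keywords = {parsing; bibliographic data; keyphrase extraction},
}"""

db = bibtexparser.loads(record)
rows = []
for entry in db.entries:
    rows.append({
        "title": entry.get("title", ""),
        "abstract": entry.get("abstract", ""),
        "journal": entry.get("journal", ""),
        "year": entry.get("year", ""),
        # Keyword separators vary between archives; split on ';' or ','.
        "keywords": [k.strip() for k in
                     entry.get("keywords", "").replace(";", ",").split(",")
                     if k.strip()],
    })

print(rows[0]["keywords"])  # ['parsing', 'bibliographic data', 'keyphrase extraction']
```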
The bib files, referring to the archives or databases from which the papers were imported, were categorized into one of the following 12 topics: science history journals, computer science journals and topics, ACM Transactions, cryptography, fonts and typography, IEEE journals, computational/quantum chemistry/physics, numerical analysis, probability and statistics, SIAM journals, mathematics, and mathematical and computational biology. Expanding the dataset by processing additional bibliography files in BibTeX format is possible. The file for the dataset and the essential tools for altering and producing new datasets are available in the BibRank project's GitHub repository. This repository grants users access to the original data and equips them with the requisite resources for customizing the data to their particular requirements or generating entirely new datasets. ### BibRank algorithm The BibRank algorithm, comprising five steps, presents an innovative method for weighting candidate phrases, emphasizing the abstracts of scientific publications and based on the concept of a context for a group of BibTeX records. 1. Candidate Selection. The candidate phrases in the document are noun chunks. To identify the noun chunks, we apply rules based on sequences of POS tags. In our workflow, we use the Stanford CoreNLP Natural Language Processing Toolkit Manning et al. (2014), but other noun chunkers can be easily integrated into the platform. 2. PositionRank Weight Calculation. The PositionRank algorithm Florescu and Caragea (2017) assigns position weights to candidate phrases. Higher weights are given to the words appearing earlier in the document. For example, if a phrase occurs at positions 3, 6, and 8, its weight is calculated as follows: \(\frac{1}{3}+\frac{1}{6}+\frac{1}{8}=\frac{5}{8}=0.625\) \begin{table} \begin{tabular}{|l|l|} \hline Data & Count \\ \hline Records (abstracts) & 18,193 \\ \hline Authors & 16,883 \\ \hline Journals & 693 \\ \hline Bib Files & 285 \\ \hline Topics & 12 \\ \hline Avg Words & 121 \\ \hline Avg Keyphrases & 9 \\ \hline \end{tabular} \end{table} Table 1: BibRank Dataset The final weight of each candidate phrase is determined by summing and normalizing the position weights of each word in the phrase. Additionally, the scores of each word are recursively computed using the PageRank algorithm, as described by Equation 1 Florescu and Caragea (2017), Mihalcea and Tarau (2004). \[S\left(v_{i}\right)=\left(1-d\right)\cdot\hat{p_{i}}+d\cdot\sum_{v_{j}\in \mathrm{In}\left(v_{i}\right)}\frac{w_{ji}}{Out\left(v_{j}\right)}S\left(v_{j}\right) \tag{1}\] In Equation 1, \(S(v_{i})\) represents the weight of each word \(i\) in a candidate phrase \(p\), represented by the vertex \(v_{i}\). The damping factor \(d\) reflects the probability of jumping to a random vertex in the graph, and \(\hat{p_{i}}\) is the position weight of word \(i\). The set \(\mathrm{In}\left(v_{i}\right)\) contains the adjacent vertices pointing to vertex \(i\), and \(w_{ji}\) is the edge weight between \(v_{i}\) and \(v_{j}\). Finally, \(Out(v_{j})\) denotes the adjacent vertices pointed to by vertex \(j\), and the normalizing term in the denominator is computed as \(\sum_{V_{k}\in Out(V_{j})}w_{jk}\). 3. Context Formulation. The computation of the context for a publication involves selecting a set of BibTeX records according to specific criteria. For instance, if we consider a computer science article published in 2012, the context could be formed by including all computer science papers published within the same year.
With the original BibRank dataset containing 22 attributes, each attribute can potentially define a distinct context. 4. Bib Weight Calculation. The bib weights aim to capture the occurrence frequency of candidate phrases within the context. Each record includes a list of keyphrases, allowing for the calculation of weights for candidate phrases based on Equation 2. \[\lambda_{p}=\frac{1}{\alpha}\sum_{d\subseteq D}c_{pd}\] (2) \(\lambda p\) is the bib weight, \(\alpha\) is a factor used for normalization, \(D\) is the set of all records that belong to the chosen context, \(d\) is a record, and \(c\) is the occurrence of a candidate phrase in the record's keyphrases list. \(\alpha\) was calculated as the maximum bib weight across all keyphrases in the context documents. 5. Candidate Phrase Ranking and Selection. The ranking of candidate phrases is determined by combining their bib weights and position scores. The scores of individual words within each candidate phrase are added to the phrase's bib weight, resulting in a sum that determines the final ranking of the candidate phrases, as illustrated in Equation 3. The document's keyphrases are then determined by selecting the top \(N\) candidate phrases. \[S_{final}\left(p\right)=\sum_{v_{i}\in\mathrm{V}_{p}}S(v_{i})+\lambda_{p}\] (3) \(V_{p}\) is the set of words that belongs to candidate phrase \(p\) and \(\lambda_{p}\) is the calculated bib weight for the candidate phrase \(p\). In the illustrated Figure 2, The BibRank algorithm begins by processing the input text, extracting nouns and noun phrases like 'Keyword' and 'automatic identification,' which are considered as selected candidates. It then infers keyphrases, including 'Keyword extraction' and 'automatic identification,' assigning them scores of 0.38 and 0.30, respectively. These scores denote their relevance and significance to the document's main topic, calculated based on position weight and Bib weights. Figure 2: BibRank Keyphrases Extraction Example ### BibRank platform BibRank is a versatile online platform developed in Python that simplifies the integration of keyphrase extraction algorithms, encompassing three modules: Datasets, Algorithms, and Evaluation. One of the standout attributes of the platform is its comprehensive support for keyphrase extraction datasets. It seamlessly incorporates user datasets and features multiple pre-integrated datasets, such as the BibRank dataset (see 3.1) and five others extensively detailed in table 2. This table provides crucial information about the papers linked to each dataset, the number of documents contained, and the document types, distinguishing between abstracts and full papers. Moreover, BibRank facilitates users in crafting personalized datasets with ease. The platform offers user-friendly routines tailored to process BibTeX files, simplifying the generation of new datasets that align with the user's specific needs and requirements. The platform offers a comprehensive range of keyphrase extraction algorithms, including the BibRank algorithm (refer to 3.2) and ten additional ones, all clearly specified in table 3. It provides a user-friendly interface for effortlessly integrating the user's own keyphrase extraction algorithms. For smooth integration, the user's algorithm must extend a superclass that encompasses the blueprint for the crucial extraction operations, where the algorithm's name is designated as a class attribute. 
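The integration contract just described can be illustrated with a small sketch. The superclass name (`KeyphraseExtractor`), the method name, and the toy scoring below are our assumptions for illustration only; the platform's actual base class should be consulted in the repository.

```python
class KeyphraseExtractor:
    """Assumed blueprint: subclasses set a name and return (phrase, weight) pairs."""
    name = "base"

    def extract(self, text, top_n=10):
        raise NotImplementedError


class FrequencyExtractor(KeyphraseExtractor):
    """Toy example: rank lower-cased tokens by frequency (placeholder scoring)."""
    name = "frequency-baseline"

    def extract(self, text, top_n=10):
        counts = {}
        for token in text.lower().split():
            counts[token] = counts.get(token, 0) + 1
        ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
        total = sum(counts.values()) or 1
        return [(tok, cnt / total) for tok, cnt in ranked[:top_n]]


print(FrequencyExtractor().extract("keyphrase extraction ranks keyphrase candidates"))
```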
Additionally, the algorithm must incorporate a function that efficiently returns the extracted keyphrases and their corresponding weights. The platform incorporates PKE, an open-source toolkit for keyphrase. Boudin (2016). To assess the accuracy of a keyphrase extraction algorithm on a given dataset, the platform provides an evaluation module in the form of a Python script. Users can select the algorithm to be evaluated and specify the metadata for the dataset, such as the year of publication or journal. The evaluation script computes the recall (R), precision (P), and F1 scores, widely recognized as standard measures of algorithm performance. ## 4 Results ### Evaluation methodology The widely accepted assumption that the gold standard serves as the reference truth for evaluating algorithms is acknowledged. However, a comprehensive twofold evaluation process was conducted to examine this assumption critically. The first evaluation aimed to assess the algorithms against the gold standard, while the second evaluation focused on evaluating the gold standard itself. \begin{table} \begin{tabular}{|c|c|c|} \hline Dataset & Documents & Type \\ \hline ACM Schutz (2008) & 2,304 & Full papers \\ \hline NUS Nguyen and Kan (2007) & 211 & Full papers \\ \hline Inspec Hulth (2003) & 2,000 & Abstracts \\ \hline WWW Caragea et al. (2014) & 1,330 & Abstracts \\ \hline KDD Caragea et al. (2014) & 755 & Abstracts \\ \hline BibRank Dataset & 18,193 & Abstracts and Metadata \\ \hline \end{tabular} \end{table} Table 2: BibRank platform Datasets \begin{table} \begin{tabular}{|c|c|c|} \hline Method & Year & Approach Type \\ \hline TFIDF FRANK (1999) & 1999 & Statistical \\ \hline KPMiner El-Beltagy and Rafea (2010) & 2010 & Statistical \\ \hline YAKE Campos et al. (2020) & 2020 & Statistical \\ \hline TextRank Mihalcea and Tarau (2004) & 2004 & Graph based \\ \hline CollabRank Wan and Xiao (2008) & 2008 & Graph based \\ \hline TopicRank Bougouin et al. (2013) & 2013 & Graph based \\ \hline PositionRank Florescu and Caragea (2017) & 2017 & Graph based \\ \hline SGRank Danesh et al. (2015) & 2015 & Hybrid Statistical-graphical \\ \hline sCAKE Duari and Bhatnagar (2019) & 2018 & Hybrid Statistical-graphical \\ \hline KeyBERT Grootendorst (2021) & 2021 & Sentence Embeddings \\ \hline \end{tabular} \end{table} Table 3: BibRank platform Models Datasets with manually assigned keywords were used as benchmarks to assess the algorithms' performance. The evaluations were carried out using the BibRank platform, where the algorithms were tested on the BibRank dataset with parameter adjustments. The default setting for the first parameter, determining the number of keywords to extract, was \(10\) for all algorithms. The second parameter, the tokenizer, utilized the Stanford CoreNLP toolkit, as explained in the BibRank algorithm section. The damping factor \(\alpha\) was set to \(0.85\), and the window size was set to \(2\) based on experiments by Florescu and Caragea (2017). Extracted keyphrases were compared to the manually assigned keywords in the gold standard dataset to measure the algorithms' performance, considering exact matches as successful hits. Standard evaluation metrics such as recall, precision, and F1 score were computed. Evaluators with expertise were sought through a reputable freelancing platform to evaluate the gold standard. These evaluators were carefully selected based on specific criteria, including fluency in English and a proven track record in similar tasks. 
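The exact-match evaluation just described reduces to a straightforward computation; a minimal sketch (ignoring any stemming or normalization the platform may additionally apply) is:

```python
def exact_match_scores(predicted, gold):
    """Precision, recall and F1 for exact (case-insensitive) keyphrase matches."""
    pred = {p.lower().strip() for p in predicted}
    ref = {g.lower().strip() for g in gold}
    hits = len(pred & ref)
    precision = hits / len(pred) if pred else 0.0
    recall = hits / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(exact_match_scores(
    ["keyphrase extraction", "pagerank", "bibtex"],
    ["keyphrase extraction", "bibliographic data", "bibtex", "context"]))
```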
Two experts were assigned to evaluate 100 annotated documents containing keywords using seven algorithms and the gold standard. The evaluators were kept unaware of the algorithm names or the gold standard during the evaluation process to prevent potential bias. The evaluators meticulously annotated the different data sets using a five-point scale: 1. Very bad: The keywords are considered inadequate and do not meaningfully represent the text. 2. Bad: The keywords are a mix of poor and good choices, lacking consistency and not fully capturing the essence of the text. 3. Acceptable: The keywords are generally satisfactory and represent the text to a reasonable extent. 4. Good: The keywords are of good quality, although they may not fully encompass all the text's main ideas. 5. Very good: The provided keywords accurately summarize the text and effectively capture the main ideas. Overall, our twofold evaluation approach provides a comprehensive analysis of both the algorithm and the gold standard, allowing us to understand the strengths and weaknesses of each. ### Results The evaluation of the algorithms involved three experiments, each utilizing a different section of the BibRank dataset. The experiments focused on specific domains, namely "Computer science (compsci)," "ACM," and "history, philosophy, and science," consisting of 335, 127, and 410 papers, respectively. In choosing the dataset years, we aimed for diverse temporal coverage and ran tests on various combinations to ensure validity. For Computer science (compsci), bib scores were generated using publications from the years 1980 to 1987, and the test data was sourced from publications in 1988; ACM bib scores were derived from 1990 to 1996 and tested against 1997 to 2020 publications; for "history, philosophy, and science," scores were based on 2009 to 2011, testing with 2012 to 2014 publications. For a comprehensive overview of these experiments, including the categories used, please refer to Table 4. The table displays the categories the articles belong to and seven selected algorithms for evaluation. We selected these algorithms to exemplify various keyphrase extraction approaches discussed in the Related Works section, showcasing the implementation of distinct methodologies for keyword extraction. Upon closer inspection, the BibRank algorithm demonstrates consistent enhancements across different datasets, as can be seen in the tables 5, 6, and 7. When compared to TextRank and PositionRank, which use comparable techniques, the integration of Bib Weights in the BibRank algorithm leads to a noticeable enhancement in performance. 1. YAKE (Yet Another Keyword Extractor) is a statistical keyphrase extraction algorithm that utilizes a "maximal marginal relevance" approach to promote diversity in the selected keywords. This ensures that the extracted keyphrases cover a wide range of topics and concepts. 2. The SGRank and sCake methods are algorithms used to extract keyphrases from a document. They employ statistical analysis and graph-based techniques, blending both advantages to identify important keywords. Notably, sCake stands out for integrating domain-specific knowledge into its process when analyzing documents. 3. KeyBERT represents a user-friendly and lightweight algorithm for keyword extraction. It harnesses the power of BERT transformers' embeddings to identify important keywords in a given text. 
Using an unsupervised technique, KeyBERT calculates the cosine similarity between each phrase and document to determine the most relevant keyphrases. The preceding sections contain in-depth discussions about graph-based techniques, including TextRank, PositionRank, and BibRank. These algorithms use graph-based approaches to analyze word relationships and extract essential keywords from a text. Our objective in incorporating these algorithms is to comprehensively evaluate various keyphrase extraction techniques. In addition to using standard gold keyphrases, the chosen experts manually evaluated seven keyphrase extraction approaches. To gauge the performance of each method, the experts assigned scores from 1 to 5 to the generated keywords for 100 randomly selected documents. Table 8 summarizes the average performance of each evaluated approach. These evaluations offer valuable insights into the effectiveness of the diverse keyphrase extraction methods. The figure denoted by 3 provides a clear and organized visual display of the results for the keyphrase extraction algorithms. These algorithms were evaluated based on the domains depicted on the x-axis, while the F1 score is plotted on the y-axis. ### Discussion The Yake algorithm and the gold standard sets of keyphrases received the lowest scores from the experts in our evaluation. This result was expected for Yake, as it is the only statistical approach among the evaluated techniques. Prior research Hasan and Ng (2014) has also indicated that models relying on statistical features exhibit lower average performance in keyphrase extraction tasks. However, the surprising finding was the performance of the gold standard keyphrases. We conducted interviews with the experts who participated in the evaluation to gain deeper insights. One expert mentioned that the gold standard keyphrases are overly general and limited in scope. They are designed to capture the central ideas or keyphrases of the document, which may result in the omission of some important keywords. In contrast, algorithms such as BibRank, PositionRank, TextRank, and KeyBERT better understood the document's meaning, enabling them to extract more relevant and specific keyphrases. Figure 4 presents an abstract that the experts evaluated, and the corresponding scores provided by the experts are listed in table 9. 
The gold standard keywords received low scores despite including important keyphrases like "Chinese \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{3}{|c|}{Science-history-journals} \\ \hline & Bib Weights & P & R & F1 \\ \hline TextRank & 0 & 0.0621 & 0.0912 & 0.0685 \\ \hline PositionRank & 0 & 0.0740 & 0.1102 & 0.0817 \\ \hline & 54 & 0.0780 & 0.1098 & 0.0833 \\ \cline{2-5} BibRank & 81 & 0.0787 & 0.1108 & 0.0842 \\ \cline{2-5} & 173 & 0.0811 & 0.1136 & 0.0867 \\ \hline \end{tabular} \end{table} Table 6: BibRank Improvements: science-history-journals \begin{table} \begin{tabular}{|l|c|c|c||c|c|c||c|c|c|} \hline & \multicolumn{3}{|c||}{Compsci} & \multicolumn{3}{|c||}{science-history-journals} & \multicolumn{3}{|c|}{Probstat} \\ \hline & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline Yake & 0.0728 & 0.0367 & 0.0458 & 0.0606 & 0.0705 & 0.0602 & 0.0171 & 0.0366 & 0.0228 \\ \hline SGRank & 0.1282 & 0.0730 & 0.0861 & 0.0645 & 0.0903 & 0.0690 & 0.0594 & 0.1235 & 0.0783 \\ \hline sCake & 0.1213 & 0.0714 & 0.0829 & 0.0676 & 0.0949 & 0.0724 & 0.0549 & 0.1141 & 0.0725 \\ \hline KeyBert & 0.0839 & 0.0564 & 0.0617 & 0.0315 & 0.0501 & 0.0368 & 0.0380 & 0.0880 & 0.0520 \\ \hline TextRank & 0.1236 & 0.0716 & 0.0835 & 0.0621 & 0.0912 & 0.0685 & 0.0562 & 0.1175 & 0.0745 \\ \hline PositionRank & 0.1579 & 0.0953 & 0.1094 & 0.0740 & 0.1102 & 0.0817 & 0.0605 & 0.1347 & 0.0815 \\ \hline BibRank & 0.1812 & 0.109 & 0.1249 & 0.0811 & 0.1136 & 0.0867 & 0.0659 & 0.1457 & 0.0886 \\ \hline \end{tabular} \end{table} Table 4: Evaluation Results of selected keyphrase extraction algorithms, including BibRank \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline & Bib Weights Records & P & R & F1 \\ \hline TextRank & 0 & 0.1236 & 0.0716 & 0.0835 \\ \hline PositionRank & 0 & 0.1579 & 0.0953 & 0.1094 \\ \hline & 299 & 0.1764 & 0.1065 & 0.1216 \\ \cline{2-5} BibRank & 976 & 0.1764 & 0.1063 & 0.1218 \\ \cline{2-5} & 1155 & 0.1809 & 0.1083 & 0.1242 \\ \cline{2-5} & 1746 & 0.1812 & 0.109 & 0.1249 \\ \hline \end{tabular} \end{table} Table 5: BibRank Improvements: Compsci dependency parsing" and "unlabeled data." However, there were cases where essential keyphrases were missing, while some keywords not explicitly mentioned in the abstract were included in the gold standard set. For instance, the term "semi-supervised learning" was incorporated in the gold standard keyword list but did not appear in the original abstract. Yake achieved a low score, indicating that the algorithm lacks the contextual understanding exhibited by the other keyword extraction methods. SGRank outperformed the gold standard, effectively highlighting essential keywords such as "long-distance word," "unlabeled attachment score," and "supervised learning method." SCake also demonstrated strong performance, successfully extracting detailed keywords related to different types of dependency parsers and incorporating "short dependency information." KeyBERT showcased robust performance, extracting comprehensive keywords such as "improves parsing performance" and "parsing approach incorporating," which enhanced the understanding of the paper's content. TextRank consistently performed well, generating similar keywords to SCake and SGRank, indicating its consistency in identifying key concepts. PositionRank, with a score of 5, provided additional context by introducing terms such as "short dependencies." 
BibRank consistently scored 5 in both evaluations, effectively extracting keywords related to various parser types, "short dependency information," and specific performance metrics like "high performance." It also included additional contextual keywords, such as "machine translation," providing a comprehensive overview of the abstract's content. Overall, these evaluations shed light on the strengths and weaknesses of different keyphrase extraction methods and help us understand their performance characteristics in the context of academic literature. The detailed results of our evaluations, substantiating the findings discussed in this paper, are recorded and made available for public scrutiny and exploration. These results can be found in our GitHub repository's "evaluation_results" folder. ## 5 Conclusions This paper introduces the BibRank platform, a versatile online platform developed in Python, which simplifies the integration of keyphrase extraction algorithms. A new keyphrase extraction dataset, the BibRank dataset, is presented to benchmark keyphrase extraction algorithms. The paper also introduces a state-of-the-art keyphrase extraction algorithm, BibRank, which utilizes the notion of context to compute keyphrases. The main keyphrase extraction algorithms are comprehensively evaluated in the study using a two-fold approach: evaluating the algorithms against the gold standard and evaluating the gold standard itself. The evaluations are conducted on the BibRank dataset using standard evaluation metrics. Expert evaluators assess the gold standard using a five-point \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline & Bib Weights Records & P & R & F1 \\ \hline TextRank & 0 & 0.0562 & 0.1175 & 0.0745 \\ \hline PositionRank & 0 & 0.0605 & 0.1347 & 0.0815 \\ \hline \multirow{3}{*}{BibRank} & 139 & 0.0646 & 0.1422 & 0.0868 \\ \cline{2-5} & 237 & 0.0649 & 0.1423 & 0.0870 \\ \cline{2-5} & 367 & 0.0659 & 0.1457 & 0.0886 \\ \hline \end{tabular} \end{table} Table 7: BibRank Improvements: probstat \begin{table} \begin{tabular}{|l|l|l|} \hline Model & Expert 1 & Expert 2 \\ \hline Gold Standard & 2.85 & 2.08 \\ \hline Yake & 2.95 & 1.65 \\ \hline SGRank & 4.2 & 3.47 \\ \hline sCake & 3.71 & 3.39 \\ \hline KeyBert & 4.61 & 4.39 \\ \hline TextRank & 4.77 & 3.99 \\ \hline PositionRank & 4.41 & 3.37 \\ \hline BibRank & 4.4 & 3.77 \\ \hline \end{tabular} \end{table} Table 8: Manual Evaluation scale. The results demonstrate that some algorithms, such as BibRank and PositionRank, outperform the gold standard in extracting relevant and specific keyphrases, while others, like Yake, achieve lower scores due to their statistical nature. This evaluation provides valuable insights into the strengths and weaknesses of different keyphrase extraction methods in the context of academic literature. The BibRank algorithm demonstrates state-of-the-art performance when evaluated against the gold standard. The authors encourage researchers to use the BibRank platform for evaluating their own keyphrase extraction algorithms. To ensure reproducibility, the BibRank platform, BibRank algorithm, and the BibRank dataset are publicly available (see the Data Availability Statement) for use by the research community. Platforms such as BibRank and other keyphrase extraction tools have the potential to operate alongside VosViewer. If the research community starts using BibRank, we'll think about adding a plugin for integration with VosViewer. 
## 6 Data Availability

The BibRank keyphrase extraction framework is readily available on GitHub to facilitate reproducibility. The repository includes: * The implementation of BibRank and 18 other keyphrase extraction methods. * A detailed installation guide. * Examples of evaluations. * The Bib dataset used for evaluation. * Comprehensive instructions for running experiments with the BibRank model. * Reviewers' full evaluation results. GitHub repository available at: [https://github.com/dallal9/Bibrank](https://github.com/dallal9/Bibrank) (Accessed: 3 October 2023)

Figure 3: Evaluation Results of Keyphrase Extraction Algorithms.

\begin{table} \begin{tabular}{|l|l|l|} \hline Model & Expert 1 & Expert 2 \\ \hline Gold Standard & 2 & 1 \\ \hline Yake & 3 & 1 \\ \hline SGRank & 4 & 3 \\ \hline sCake & 4 & 3 \\ \hline KeyBert & 4 & 4 \\ \hline TextRank & 5 & 4 \\ \hline PositionRank & 5 & 5 \\ \hline BibRank & 5 & 5 \\ \hline \end{tabular} \end{table} Table 9: The expert evaluation for the abstract presented in figure 4

## 7 Funding

Eduard Barbu has been supported by the EKTB55 project "Teksti lihtsustamine eesti keeles" (text simplification in Estonian).
2308.03150
"We care": Improving Code Mixed Speech Emotion Recognition in Customer-Care Conversations
Speech Emotion Recognition (SER) is the task of identifying the emotion expressed in a spoken utterance. Emotion recognition is essential in building robust conversational agents in domains such as law, healthcare, education, and customer support. Most of the studies published on SER use datasets created by employing professional actors in a noise-free environment. In natural settings such as a customer care conversation, the audio is often noisy with speakers regularly switching between different languages as they see fit. We have worked in collaboration with a leading unicorn in the Conversational AI sector to develop Natural Speech Emotion Dataset (NSED). NSED is a natural code-mixed speech emotion dataset where each utterance in a conversation is annotated with emotion, sentiment, valence, arousal, and dominance (VAD) values. In this paper, we show that by incorporating word-level VAD value we improve on the task of SER by 2%, for negative emotions, over the baseline value for NSED. High accuracy for negative emotion recognition is essential because customers expressing negative opinions/views need to be pacified with urgency, lest complaints and dissatisfaction snowball and get out of hand. Escalation of negative opinions speedily is crucial for business interests. Our study then can be utilized to develop conversational agents which are more polite and empathetic in such situations.
N V S Abhishek, Pushpak Bhattacharyya
2023-08-06T15:56:12Z
http://arxiv.org/abs/2308.03150v1
# "We care": Improving Code Mixed Speech Emotion Recognition in Customer-Care Conversations ###### Abstract Speech Emotion Recognition (SER) is the task of identifying the emotion expressed in a spoken utterance. Emotion recognition is essential in building robust conversational agents in domains such as law, healthcare, education, and customer support. Most of the studies published on SER use datasets created by employing professional actors in a noise-free environment. In natural settings such as a customer care conversation, the audio is often noisy with speakers regularly switching between different languages as they see fit. We have worked in collaboration with a leading unicorn in the Conversational AI sector to develop Natural Speech Emotion Dataset (NSED). NSED is a natural code-mixed speech emotion dataset where each utterance in a conversation is annotated with emotion, sentiment, valence, arousal, and dominance (VAD) values. In this paper, we show that by incorporating word-level VAD value we improve on the task of SER by 2%, for negative emotions, over the baseline value for NSED. High accuracy for negative emotion recognition is essential because customers expressing negative opinions/views need to be specified with urgency, test complaints and dissatisfaction snowball and get out of hand. Escalation of negative opinions speedily is crucial for business interests. Our study then can be utilized to develop conversational agents which are more polite and empathetic in such situations. ## 1 Introduction Conversational agents which can participate in a dialogue effectively have massive applications across multiple domains. Mensio et al. (2018) discussed three steps of evolution for conversational agents: textual interaction, vocal interaction and embodied interaction. Recently, OpenAI released ChatGPT, a multi-lingual textual conversational model based on the large language model (LLM) GPT 3.5. ChatGPT can "answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests" effectively while retaining knowledge from the conversational context as well as the pre-training phase Bang et al. (2023). ChatGPT has outperformed state-of-the-art LLMs for various tasks in the zero-shot setting. It was found that, through interactivity, one can improve the performance of ChatGPT by 8% ROUGE-1 on summarization tasks and 2% ChrF++ on the machine translation tasks Bang et al. (2023). With the integration of interactability, ChatGPT has leaped over traditional LLMs with applications across several domains such as law, healthcare, finance and education. In many situations, conversation through the speech modality is favorable and convenient as compared to the textual modality. ChatGPT, while a great conversational agent, can only work with the textual modality. A conversational agent which can take speech input and give speech responses that are polite and empathetic, in an end-to-end fashion, is the next phase of evolution for interactive chatbots. Conversational agents such as ChatGPT need to recognize the emotion of the human interlocutor correctly in order to give responses which are polite and empathetic in nature. Emotion recognition, when done efficiently by chatbots, make the conversations more human-like. Speech emotion recognition is an important sub-task while developing speech-to-speech chatbots. 
Our specific problem statement is to solve Speech Emotion Recognition (SER) where the input is the raw audio of a spoken utterance in a dyadic conversation and the output is its corresponding emotion label, valence, arousal and dominance for a natural code-mixed speech dataset. **Speech Emotion Recognition (SER)** is the task of identifying the emotion of a spoken utterance. Dimensional models plot emotions across the three dimensions of _arousal, dominance_ and _valence_. Arousal, valence and dominance signify the intensity, polarity and control exerted by an emotion, respectively. For example, _anger_ has high arousal, negative valence and high dominance whereas _fear_ has low arousal, negative valence and low dominance. Categorical models define discrete emotion classes such as _anger, happy_ and _sad_ for various downstream tasks. Our contributions are: 1. A model trained on a natural code-mixed speech emotion dataset, Natural Speech Emotion Dataset (NSED), for the task of Speech Emotion Recognition (SER). NSED has over \(5000\) conversational utterances annotated for emotion, sentiment, valence, arousal, and dominance. 2. The technique of incorporating word-level VAD values to improve the performance for SER by 2% for negative emotions, in an industry setting. High accuracy for negative emotion recognition is essential, because customers expressing negative opinions/views need to be pacified with urgency, lest complaints and dissatisfaction snowball and get out of hand. Escalation of negative opinions speedily is crucial for business interests. ### Motivation SER has been an important yet challenging task for researchers. Whenever there is a human-machine interaction in environments where only speech can be propagated, SER becomes a key step for the machine to generate an appropriate response. The task of Emotion Recognition in Conversation (ERC) has many controlling variables such as the context, topic, argumentation logic and speaker/listener personalities, describe the emotional states of the interlocutors. A recent study [1] explored the benefits of using an emotion-aware chatbot to help people with alexithymia, a condition which makes it difficult to understand and express emotions. Alexithymia is common in people with neurodevelopmental disorders (NDD). The chatbot provided different utterances to the users and asked them to imitate those utterances by inducing some kind of emotion such as joy or anger. It was found that the interaction with the chatbot became more straightforward as users acquired familiarity: 17 of the 19 participants could perform all emotional activities with progressively decreasing help from the facilitator. Figure 1: Flow-diagram of an ideal conversational agent that generates polite and empathetic responses. Emotion recognition is an essential step in this pipeline. Most of the SER datasets available today are created by employing professional actors in a clean noise-free environment. In a natural setting, conversations are impromptu, often involving frequent code-mixing and code-switching between multiple languages such as Hindi, English, Marathi, etc. In a customer care setting, it is essential for conversational agents to be polite and empathetic in response to the emotion expressed by the customer. This leads to better overall customer satisfaction and customer retention rates. 
Our **industry-partner** is a unicorn company in the Conversational AI sector which empowers over 45000 businesses across the world through their conversational messaging platform. This platform helps businesses engage with customers effectively across commerce, marketing and support with over 9 Billion messages per month. Their mission is to "build the most advanced and innovative platform for conversational engagement with a focus on delivering customer delight". We are collaborating with them to work on speech emotion recognition. Through our discussions with them, we explored various ways to approach this problem. They gave us a clear picture of the real-world challenges that are existent in the conversational AI sector. Some of the major challenges are: frequent code-mixing, low-quality recordings and a lack of annotated natural conversational datasets. As we will discuss further, the dataset annotated for our experiments, NSED, contains customer care conversations from the escalation department of a customer care service. High accuracy for negative emotion recognition is essential, because customers expressing negative opinions/views need to be pacified with urgency, lest complaints and dissatisfaction snowball and get out of hand. Escalation of negative opinions speedily is crucial for business interests. This tells us that a speech emotion recognition model operating for the escalation department should be very good in detecting negative emotions in conversations. An SER model which is capable of capturing contextual information well and is robust to the variations introduced by a natural code-mixed conversation dataset needs to be developed. This model then can be utilised in making speech-to-speech conversational agents more polite and empathetic in an escalation department setting. Figure 1 depicts the importance of emotion recognition while developing emotion aware conversational agents. ## 2 Related work Traditionally acoustic speech features have been used along with a statistical machine-learning model for the task of SER (Schuller et al., 2003). However, selecting the appropriate combination of these low-level features for any given task demands a lot of domain knowledge. Pre-trained deep learning based models trained for other speech processing tasks such as ASR were fine-tuned for SER to get better results (Lu et al., 2020). Recently, self-supervised techniques such as Wav2Vec 2.0 (Baevski et al., 2020) have emerged which learn appropriate speech representations automatically for speech recognition. In Pepino et al. (2021) learned speech representations from Wav2Vec 2.0 are utilized in a downstream model for speech emotion recognition. The proposed model outperformed the state-of-the-art for IEMOCAP (Busso et al., 2008) and RAVDESS (Livingstone and Russo, 2018) datasets. The study also showed that combining low-level acoustic features with the Wav2Vec 2.0 speech representations resulted in performance gains. In Poria et al. (2019) it was shown that detecting an emotional shift in conversations is still a bottleneck for SER. In Tian et al. (2015) non-verbal features were combined with low-level descriptors to improve the performance of emotion recognition in dialogue conversations of the IEMOCAP dataset. In Vaudable and Devillers (2012) the impact of negative emotions on the quality of a call center dialogue was investigated. 
A study has shown that including dialogue features such as turn number, the topic of discussion ad customer/agent response time can significantly improve the performance of text-based emotion recognition systems (Herzig et al., 2016). In Han et al. (2020) it was shown that by converting a categorical SER task to an ordinal SER task performance for SER can be improved for customer care calls. Deschamps-Berger et al. (2022) showed that using transformer-based architectures like Wav2vec2 xlsr-53 (for speech) and FlauBERT (for text) increase the performance accuracy by over \(20\%\) over baselines. Late fusion of speech and text features also showed performance gains for the task of SER. Kulkarni and Bhattacharyya (2021) showed that by retrofitting VAD values into word-embeddings one can generate embeddings which are more emotion-aware. A recent study showed that utilising VAD-values and a muti-task framework with emotion recognition as the main task and intensity prediction as the auxiliary task improved performance of emotion recognition on suicide notes Ghosh et al. (2023). Today, using transformer-based architectures like Wav2vec2 and BERT and fusing features of different nature give the best results for SER. ## 3 Modeling Our model can be mathematically represented using the below argmax equation. \[E^{*}=\operatorname*{argmax}_{E}P(E|<F>,<VAD>) \tag{1}\] Here, \(E^{*}\) is the emotion class that maximizes the probability function given a feature set, \(<F>\), word-level VAD values of an utterance, \(<VAD>\). Our work aims to show that including the feature set \(<VAD>\) improves the performance of SER for a natural code-mixed dataset. ## 4 Block Diagram and Architecture In Figure 2, the overall architecture of the proposed technique is presented. Speech-based features are extracted using the Wav2Vec2 model. Textual features are extracted from the ASR transcripts using the multilingual-BERT model. Word-level valence, arousal, and dominance (VAD) values are extracted from the ASR transcripts using the NRG-VAD lexicon. All these features, once extracted, are fused together and fed into a BiLSTM model. Then a fully-connected layer along with the softmax layer is used to finally generate the predicted emotion. ## 5 Datasets Customer care conversations were recorded and annotated for emotion recognition. The annotation methodology followed is described below. ### Natural Call Center Speech Emotion Dataset Natural Speech Emotion Dataset (NSED) is a code-mixed dyadic customer care conversation dataset created in collaboration with our industry partner. Below are the steps followed to create this dataset. * **Data Recording:** Our industry partner provided us with over 18000 dyadic customer care audio recordings with duration ranging between a few seconds to about an hour and their corresponding machine-generated text transcripts. All the audio recordings were single-channel (mono) with a sampling rate of 8000Hz. The conversations are interactions between a customer and a customer care executive from the complaint escalation team of a car servicing company. Both the speakers, in most of the audio recordings, switch between Hindi and English freely with some occasional use of regional words in languages such as Marathi. * **Data Processing:** Thirty audio recordings were chosen, each of which was 8-10 minutes long making a total of 4.5 hours long audio recordings. The audacity tool was used to process audio files. 
Each of these audio recordings was clipped into smaller audio clips corresponding to each **speaking turn**. A speaking turn is defined as the utterance corresponding to a particular speaker before and after any other speaker speaks. Each of these audio clips were then aligned with their corresponding machine-generated transcripts and were tagged with either "customer" or "executive" depending on who was speaking. The machine-generated transcripts contained many crucial mistakes such as wrongly transcribing the word "escalation" as "cancellation". So, the transcripts were corrected, manually, in order to achieve a better quality of textual data. In some instances, the audio quality drops drastically, making it very difficult to understand the words that are being spoken. In this case, a tag, **<inaudible>** is used in place of its transcript and further annotations are not performed. * **Emotion Annotation:** The emotion annotations were performed by a group of annotators with a graduate degree, proficient in both English and Hindi. The annotators worked in pairs to listen and annotate these clips with emotion (neutral, happy, sad, excited, anger, fear, surprised, frustrated, disgust), sentiment (neutral, positive, negative), valence, arousal and dominance (VAD). VAD values were annotated in a scale from 1 to 10 where (5, 5, 5) corresponds to the VAD values of a completely neutral emotion. For VAD, 1 represents the minimum value and 10 represents the maximum value any of the dimensions can have e.g. for valence, 1 represents the most negative and 10 represents the most positive any emotion can get. As we can represent 1000 emotions using the VAD dimensional model and only 9 using the categorical emotion model, not all utterances tagged as "neutral" will have VAD values of (5, 5, 5). For a subset of dataset, consisiting of 1989 utterances, annotated by pair of annotators, the inter-annotator agreement was found to be 0.33 and 0.37 for emotion and sentiment labels respectively by using the cohen-kappa metric of agreement. * **Dataset Examples:** Two of the examples from the annotated NSED dataset are given below: **Example 1:** The utterance is **"Do you want someone to get arrested? Haan?"** and its corresponding emotion, sentiment, valence, arousal and dominance are respectively-_anger, negative, 2, 8, 9_. **Example 2:** The utterance is **"Mai samajhta hun aapko jo bhi problem hui hai. Aage se aapko ye naih hoga nischint rahiye."** and its corresponding emotion, sentiment, valence, arousal and dominance are respectively- _neutral, positive, 6, 5, 5_. ** ## 6 Methodology Text features, Wav2vec2 features, and word-level VAD values are extracted and fused together. Indic-Wav2Vec2 is used to extract speech features that constitute a 768-dimensional vector. Whisper-large (Radford et al., 2022) is used to generate transcripts for each utterance in a conversation. The multi-lingual BERT model is used to generate textual embeddings for each utterance resulting in a 768-dimensional vector. The fused features are then passed through a BiLSTM layer and a fully-connected layer. Finally, a softmax layer is used to predict the corresponding emotion for an utterance. 
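To make the fusion step just described concrete, the following is a minimal PyTorch sketch of a classifier of this shape: precomputed Wav2Vec2 and multilingual BERT features are concatenated with VAD features, passed through a BiLSTM, and mapped to the nine emotion labels. The 768/768/3 feature dimensions and the nine classes follow the text; the hidden size, the mean pooling, whether the BiLSTM runs over tokens within an utterance or over utterances in a conversation, and all names are assumptions rather than details of the actual implementation. Feature extraction (Indic-Wav2Vec2, multilingual BERT, NRC-VAD lookup) is assumed to happen upstream and is represented here by random tensors.

```python
# Sketch of a feature-fusion BiLSTM emotion classifier (assumed shapes/names).

import torch
import torch.nn as nn

class FusionSER(nn.Module):
    def __init__(self, speech_dim=768, text_dim=768, vad_dim=3,
                 hidden=128, num_emotions=9):
        super().__init__()
        self.bilstm = nn.LSTM(input_size=speech_dim + text_dim + vad_dim,
                              hidden_size=hidden, batch_first=True,
                              bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_emotions)

    def forward(self, speech_feats, text_feats, vad_feats):
        # Each input: (batch, seq_len, dim); fusion is simple concatenation.
        fused = torch.cat([speech_feats, text_feats, vad_feats], dim=-1)
        out, _ = self.bilstm(fused)
        pooled = out.mean(dim=1)          # average over the sequence
        return self.classifier(pooled)    # logits; softmax is applied at loss time

if __name__ == "__main__":
    model = FusionSER()
    b, t = 4, 20
    logits = model(torch.randn(b, t, 768), torch.randn(b, t, 768),
                   torch.randn(b, t, 3))
    print(logits.shape)   # torch.Size([4, 9])
```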
Before extracting the speech features, the wav2vec2 architecture is continually pre-trained as described below.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Emotion** & **Utterance Count** & **\%age** \\ \hline Neutral & 3510 & 61\% \\ \hline Anger & 863 & 15\% \\ \hline Frustration & 748 & 13\% \\ \hline Disgust & 116 & 2\% \\ \hline Sad & 403 & 7\% \\ \hline Fear & 19 & \textless{} 1\% \\ \hline Happy & 57 & \textless{} 1\% \\ \hline Surprised & 13 & \textless{} 1\% \\ \hline Excited & 25 & \textless{} 1\% \\ \hline **Total** & **5754** & 100\% \\ \hline \end{tabular} \end{table} Table 1: Per-emotion distribution of the Natural Speech Emotion Dataset (NSED). The dataset contains _Neutral_ utterances in the majority (\(61\%\)). Negative emotions like _Anger_, _Frustration_, _Disgust_, _Sad_, and _Fear_ constitute \(37\%\) of the dataset. Positive emotions like Happy, Surprised and Excited constitute \(2\%\) of the dataset.

Figure 2: Overall architecture of the proposed model. Speech features are extracted from the Wav2Vec2 model. Automatic Speech Recognition (ASR) is used to generate transcripts from the speech input. Word-level valence, arousal, dominance (VAD) values, and textual features are extracted from the ASR transcripts. Fused features are then passed through a BiLSTM layer and a fully-connected layer to finally produce an emotion prediction.

### Pre-training Wav2Vec2

Data annotation is a cost-intensive task that is not feasible to do for the whole unlabeled speech dataset (~\(18000\) customer care audio files) provided by our industry partner. Wav2vec2 is a self-supervised speech model which learns speech representations from raw audio signals directly. These speech representations have been shown to be very useful for several speech-processing tasks. Wav2vec2 is pre-trained on 52,000 hours of the Librispeech dataset, because of which it has already learned various characteristics of speech present in that dataset. To get even better representations for our dataset, we apply a technique called continual pre-training, where we continue the pre-training phase with our own unlabeled speech dataset. Kessler et al. (2022) show that using an adapter-based continual pre-training approach for the wav2vec2 architecture reduces computational cost significantly. We use a similar approach to pre-train the Wav2vec2 architecture using the unlabelled NSED dataset. After pre-training, the Wav2Vec2 architecture is fine-tuned for NSED to evaluate the performance for SER with and without continual pre-training. Table 2 shows the precision for the neutral class and the weighted average precision for negative and positive emotions for Wav2Vec2-XLSR and Indic-Wav2Vec2 (Javed et al., 2022). Indic-Wav2Vec2 gives the best performance with continual pre-training. We use this continually pre-trained Indic-Wav2Vec2 model for our experiments. Figure 3 shows the pipeline of continual pre-training utilised in our experiments.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline **Model Name** & **Neu** & **Pos** & **Neg** \\ \hline Wav2Vec2-XLSR (w/o pre-training) & 0.81 & 0.05 & 0.40 \\ \hline Wav2Vec2-XLSR (with continual pre-training) & 0.89 & 0.05 & 0.53 \\ \hline Indic-Wav2Vec2 (w/o pre-training) & 0.92 & 0.10 & 0.57 \\ \hline **Indic-Wav2Vec2 (with continual pre-training)** & **0.92** & **0.14** & **0.61** \\ \hline \end{tabular} \end{table} Table 2: Results for **Wav2Vec2-XLSR** and **Indic-Wav2Vec2** with and without continual pre-training with unlabelled audio files from NSED. For the **Neutral (Neu)** class, precision values are given while for the **Positive (Pos)** and the **Negative (Neg)** classes, weighted average precision is given.

Figure 3: Continual pre-training and fine-tuning of the Wav2Vec2 architecture with unlabeled and labeled NSED data, respectively.

## 7 Experimental Setup

The NSED dataset was split into _train_, _dev_, and _test_ sets in the proportions of \(80\%\), \(10\%\), and \(10\%\) respectively. In each experimental run, the dataset was shuffled with a different seed value before feeding it to the model. The NVIDIA RTX A6000 GPU was used for all the experiments. A single experimental run took approximately 1 hour to complete.
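The split-and-shuffle procedure just described can be sketched as follows; the 80/10/10 proportions and the per-run reshuffling follow the text, while the function name, the placeholder utterance list, and the seed values are illustrative.

```python
# Illustrative 80/10/10 train/dev/test split with a per-run shuffle seed.

import random

def split_dataset(utterances, seed):
    rng = random.Random(seed)
    items = list(utterances)
    rng.shuffle(items)
    n = len(items)
    n_train, n_dev = int(0.8 * n), int(0.1 * n)
    return (items[:n_train],
            items[n_train:n_train + n_dev],
            items[n_train + n_dev:])

if __name__ == "__main__":
    data = [f"utt_{i}" for i in range(5754)]   # NSED has 5754 utterances
    for run_seed in (0, 1, 2):                 # a different seed per experimental run
        train, dev, test = split_dataset(data, run_seed)
        print(run_seed, len(train), len(dev), len(test))
```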
Hyper-parameter tuning was performed using the random search technique. The hyper-parameters which gave the best overall performance for the negative emotions were used in the end. The results shown in this paper give the performance of the best experimental run for the negative emotions in terms of weighted-average precision.

## 8 Results and Analysis

Table 3 gives the performance of the BiLSTM model using different types of features.

### Analysis

Using only the Wav2Vec2 (W) features, our model achieves an average precision of \(0.61\) over all the negative emotions. This forms the baseline for our experiments. When both the Wav2Vec2 (W) and the textual BERT (T) features are concatenated together, our model achieves a weighted-average precision of \(0.64\) over all negative emotions. This shows that textual features have additional emotional information which is absent from only the speech features. When word-level VAD values (VAD), extracted from the NRC-VAD lexicon, are also concatenated along with Wav2Vec2 (W) and textual BERT (T) features, we see an improvement of \(2\%\) with a weighted-average precision of \(0.66\) over all the negative emotions. This shows that by utilizing word-level VAD values we can improve the performance of our SER model for negative emotions. For the neutral emotion class, all the models achieve a precision over \(90\%\). Results for positive emotions are unsatisfactory, with our proposed model giving a weighted-average precision of \(0.16\) for all the positive emotions. This can be attributed to the low number of utterances with positive emotion in NSED. Even though the performance of our model is poor for positive emotions, it performs well for negative emotions, which is ideal, as we are dealing with customer call conversations where the customer is usually unsatisfied with a product or a service.

### Challenges

We faced a number of challenges that one might expect while dealing with a natural code-mixed speech dataset. Some of the challenges are described below: * **Audio Quality**: Poor quality of the audio recordings made our task even more challenging. Due to network irregularities, many recordings' audio quality dropped drastically, making it hard for the annotators to annotate properly. The call recordings were created in a single-channel format (mono) which made it difficult to segregate audio clips if two people spoke simultaneously. * **Transcription Errors**: Our ASR model struggled with the constant code-switching and noisy environments to produce coherent transcriptions for the spoken utterances. These errors were then reflected in poor textual embeddings and missing word-level VAD values.
* **Neutral Utterances**: As shown in the table 1, \(61\%\) of the utterances in our dataset are neutral in nature. Because of this, our model was more biased in predicting the neutral class than any other emotion classes. * **Frequent code-mixing and code-switching**: Code-mixing and code-switching make it difficult to extract good features. For the speech input, there doesn't exist a Wav2vec2 model fine-tuned on Hindi+English code-mixed data. The multi-lingual model, Wav2vec2-xlsr-53, fine-tuned for Hindi was used to generate speech representations. The text generated after ASR was transliterated to Hindi. The multilingual BERT-large model was used to generate textual embeddings for the transliterated Hindi text. This transliterated Hindi text was also used to find word-level VAD values. With transliteration as the bottleneck, VAD values for many words weren't found from the NRC-VAD lexicon. ## 9 Conclusion and Future Work In this paper, we discussed the effect of incorporating word-level VAD values on SER for the Natural Speech Emotion Dataset (NSED). We also described the steps involved in creating NSED. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Model Name** & **Neu** & **Ang** & **Sad** & **Fru** & **Neg** & **Pos** \\ \hline W (Baseline) & 0.92 & 0.74 & 0.63 & 0.69 & 0.61 & 0.14 \\ \hline T+W & 0.93 & 0.76 & 0.64 & 0.71 & 0.64 & 0.15 \\ \hline W+VAD & 0.95 & 0.75 & 0.64 & 0.71 & 0.65 & 0.15 \\ \hline T+VAD & 0.95 & 0.78 & 0.65 & 0.72 & 0.65 & 0.15 \\ \hline **T+W+VAD** & **0.96** & **0.79** & **0.67** & **0.74** & **0.66** & **0.16** \\ \hline \end{tabular} \end{table} Table 3: Results for the proposed model trained using **Speech Features (S)**, **Wav2vec2 Features (W)**, **Textual Features (T)**, and **VAD values (VAD)**. The last two columns give the **weighted-average precision** over all the negative emotions and positive emotions. Frequent code-mixing and noisy environments are some the biggest challenges for performing SER on natural datasets. By incorporating word-level VAD values we were able to achieve an improvement of \(2\%\) over the baseline in SER for negative emotions. In future, we look forward to expanding this dataset so that all the emotions have substantial examples. Our SER system can be used to develop conversational agents which generate polite and empathetic statements to pacify a frustrated/angry customer. Different unsupervised techniques for SER can be explored. With this, we can possibly reduce the cost of annotations. Speech-based data augmentation techniques can also be used to increase the amount of data available to us. ## Limitations Our work has certain limitations as described in this section. The NSED dataset used in our experiments is small in comparison to some of the publicly available emotion recognition datasets. The number of utterances which do not belong to the _neutral_ class is low. Positive classes like _happy_ and _excited_ constitute less than \(2\%\) of our dataset. Negative classes like _anger_, _frustration_, _sad_, _disgust_, and _fear_ constitute \(37\%\) of our dataset. We also acknowledge that the emotions annotated for each utterance might not be the exact emotion intended by the speaker. The emotion annotations are in accordance with the interpretations of the annotators. The Automatic Speech Recognition (ASR) step is a bottleneck to our pipeline. 
As all the conversations are code-mixed, code-switched, natural, and often with a lot of noise, the ASR model couldn't generate an accurate transcript sometimes which lead to poor text features and omission of important words for emotion recognition. We acknowledge that the model might have possibly learnt some sensitive customer information. In future, we will include experiments in our study to remove such sensitive information. We also understand that there are state-of-the-art transformer models which can be experimented on. But due to the limited size of our dataset, we couldn't perform those experiments now. In future, as we expand the dataset, we will also include experiments utilising these transformer based models in our study. ## Ethics Statement The Natural Speech Emotion Dataset (NSED) dataset used in our experiments was annotated by a team of 4 annotators. Each annotator had to listen to an audio conversation between a customer and a customer-care executive and annotate each speaking turn with emotion, sentiment, valence, arousal, and dominance values. The conversational audio files were provided to us by our industry partner because of which NSED remains a proprietary dataset. Consent was taken from both customers and customer-care executives before recording their conversations. The annotators were paid for the time and effort they spent on the annotation task.
2306.04393
The weak Lefschetz property of whiskered graphs
We consider Artinian level algebras arising from the whiskering of a graph. Employing a result by Dao-Nair we show that multiplication by a general linear form has maximal rank in degrees 1 and $n-1$ when the characteristic is not two, where $n$ is the number of vertices in the graph. Moreover, the multiplication is injective in degrees $<n/2$ when the characteristic is zero, following a proof by Hausel. Our result in the characteristic zero case is optimal in the sense that there are whiskered graphs for which the multiplication maps in all intermediate degrees $n/2,\ldots,n-2$ of the associated Artinian algebras fail to have maximal rank, and consequently, the weak Lefschetz property.
Susan M. Cooper, Sara Faridi, Thiago Holleben, Lisa Nicklasson, Adam Van Tuyl
2023-06-07T12:52:10Z
http://arxiv.org/abs/2306.04393v2
# The weak Lefschetz property of whiskered graphs ###### Abstract. By employing a result by Dao-Nair, we show that multiplication by a general linear form in the Artinian algebra created from a whiskered graph with \(n\) vertices always has maximal rank in degrees \(1\) and \(n-1\). This result is optimal in the sense that there are whiskered graphs whose associated Artinian algebras fail to have this property, and hence the weak Lefschetz property, in all intermediate degrees. Key words and phrases: Weak Lefschetz property, graded Artinian rings, whiskered graphs, pseudo-manifolds 2020 Mathematics Subject Classification: 13E10, 13F20, 13F55, 05E45

## 1. Introduction

A graded Artinian algebra \(A=A_{0}\oplus A_{1}\oplus\cdots\oplus A_{t}\) has the _weak Lefschetz property (WLP)_ if there is an \(\ell\in A_{1}\) such that the multiplication maps \(\times\ell:A_{i}\to A_{i+1}\) all have maximal rank, i.e., are injective or surjective. When studying the WLP it is natural to consider standard graded algebras, presented as \(A=R/I\) where \(R=\mathbb{K}[x_{1},\ldots,x_{n}]\) is a polynomial ring over a field \(\mathbb{K}\) and \(I\) is a homogeneous ideal. As was first pointed out in [13], when \(I\) is a monomial ideal, \(A\) has the WLP if and only if the multiplication maps induced by \(\ell=x_{1}+\cdots+x_{n}\) have maximal rank. A recent contribution to the investigation of WLP for monomial ideals is work of Dao and Nair [6] where the Stanley-Reisner ideals \(I_{\Delta}\) together with the squares of the variables are considered over a field of characteristic zero. Their results, which are described in detail in Section 2, complement previous work by Michalek and Miro-Roig [12], and Migliore, Nagel and Schenck [15], both studying the WLP of quadratic and cubic monomial ideals using Togliatti systems. Dao and Nair's work also complements work by Cook, Migliore, Nagel and Zanello [4] that relates the WLP of monomial Artinian algebras to problems in incidence geometry. A quadratic monomial ideal defining an Artinian algebra can be interpreted as an edge ideal of a graph together with the squares of the variables. More precisely, let \(G=(V,E)\) be a finite simple graph on the vertex set \(V=\{x_{1},\ldots,x_{n}\}\) and edge set \(E\). The _edge ideal_ \(I(G)=\langle x_{i}x_{j}\mid\{x_{i},x_{j}\}\in E\rangle\) defines an Artinian algebra \(A(G)=R/(\langle x_{1}^{2},\ldots,x_{n}^{2}\rangle+I(G))\), called the Artinian algebra of \(G\). We are interested in the following general question: **Question 1.1**.: _For which graphs \(G\) does \(A(G)\) have the WLP? If \(A(G)\) does not have the WLP, in which degrees do the multiplication maps fail to have maximal rank?_ The WLP of such algebras \(A(G)\) has been studied in [18] and [19], where they classify the WLP for some special classes of graphs including paths, cycles, wheel graphs, and star graphs. However, our understanding of Question 1.1 is far from complete; we contribute to this question by studying the class of whiskered graphs.
**Lemma 2.2** ([13, 17]).: _An Artinian algebra \(\mathbb{K}[x_{1},\ldots,x_{n}]/I\), where \(I\) is a monomial ideal, has the WLP if and only if \(x_{1}+\cdots+x_{n}\) is a weak Lefschetz element._ Lemma 2.2 was first stated in [13, Proposition 2.2] over an infinite field, and later in [17, Theorem 2.2] over an arbitrary field. Recall that a _simplicial complex_ \(\Delta\) on a vertex set \(V=\{x_{1},\ldots,x_{n}\}\) is a set of subsets of \(V\) that satisfies the conditions: (1) if \(F\in\Delta\) and \(G\subseteq F\), then \(G\in\Delta\), and (2) \(\{x_{i}\}\in\Delta\) for all \(i\). An element \(F\in\Delta\) is a _face_. The maximal elements of \(\Delta\) (with respect to inclusion) are called the _facets_ of \(\Delta\). Given a face \(F\in\Delta\), the _dimension_ of \(F\) is \(\dim(F)=|F|-1\). By convention, \(\dim(\emptyset)=-1\). We sometimes call a face \(F\) a _\(t\)-face_ if \(\dim(F)=t\). The _dimension of_ \(\Delta\) is defined to be \(\dim(\Delta)=\max\{\dim(F)\mid F\in\Delta\}\). A simplicial complex is _pure_ if all its facets have the same dimension. The \(\mathbf{f}\)-vector of a \(d\)-dimensional simplicial complex \(\Delta\) is the vector of integers \[\mathbf{f}(\Delta)=(f_{0},\ldots,f_{d})\quad\text{where}\quad f_{i}=\text{ number of $i$-dimensional faces of $\Delta$}.\] We are interested in the following class of simplicial complexes. **Definition 2.3** (Pseudo-manifold).: A \(d\)-dimensional simplicial complex \(\Delta\) is a _pseudo-manifold_ if the following conditions hold: 1. \(\Delta\) is pure; 2. every face of dimension \((d-1)\) of \(\Delta\) is contained in at most two facets of \(\Delta\); and 3. for every two facets \(F,F^{\prime}\in\Delta\), there exists a sequence of facets \(F=G_{0},G_{1},\ldots,G_{t}=F^{\prime}\) such that \(\dim(G_{i}\cap G_{i+1})=d-1\) for all \(i=0,\ldots,t-1\). Additionally, a pseudo-manifold has a _boundary_ if there exists at least one face of dimension \((d-1)\) of \(\Delta\) that belongs to exactly one facet of \(\Delta\). Given a simplicial complex \(\Delta\), the Stanley-Reisner ideal of \(\Delta\) is the square-free monomial ideal \[I_{\Delta}=\langle x_{i_{1}}\cdots x_{i_{r}}\mid\{x_{i_{1}},\ldots,x_{i_{r}}\}\not\in\Delta\rangle.\] We construct a graded Artinian algebra from \(\Delta\) as follows: \[A(\Delta)=\mathbb{K}[x_{1},\ldots,x_{n}]/(\langle x_{1}^{2},\ldots,x_{n}^{2}\rangle+I_{\Delta}). \tag{2.1}\] From the definitions of the Stanley-Reisner ideal and the \(\mathbf{f}\)-vector of \(\Delta\), it follows that when \(A=A(\Delta)\), then \[\dim_{\mathbb{K}}A_{i}=f_{i-1}\quad\text{for}\quad i=1,\ldots,\dim(\Delta)+1.\] The following theorem now links together a number of concepts defined above. In the statement below, the \(1\)_-skeleton_ of \(\Delta\) is the simplicial complex consisting of all the faces of \(\Delta\) of dimension at most one, which can be considered a graph. A graph is _bipartite_ if it contains no cycles of odd length. Also in the statement below is the dual graph of \(\Delta\).
We will not need to make use of the dual graph in this paper and thus do not define the term here. **Theorem 2.4** (Dao-Nair [6, Theorems 1.1, 1.2]).: _Let \(\Delta\) be a simplicial complex with \(1\)-skeleton \(G\), let \(A=A(\Delta)\) be the Artinian \(\mathbb{K}\)-algebra defined in (2.1), where \(\operatorname{char}(\mathbb{K})=0\), and let \(\ell=x_{1}+\cdots+x_{n}\)._ * _The map_ \(\times\ell:A_{1}\to A_{2}\) _is injective if and only if_ \(f_{0}\leq f_{1}\) _(or equivalently_ \(\dim_{\mathbb{K}}A_{1}\leq\dim_{\mathbb{K}}A_{2}\)_) and_ \(G\) _has no bipartite connected components._ * _If_ \(\Delta\) _is a_ \(d\)_-dimensional pseudo-manifold, then_ \(\times\ell:A_{d}\to A_{d+1}\) _has maximal rank if and only if_ * \(\Delta\) _has boundary, or_ * \(\Delta\) _has no boundary, and the dual graph of_ \(\Delta\) _is not bipartite._

### Graph theory background

Let \(G=(V,E)\) denote a finite simple graph, where \(V=\{x_{1},\ldots,x_{n}\}\) denotes the set of _vertices_ of \(G\), and \(E\) denotes the set of _edges_ of \(G\). We will write \(V(G)\), respectively \(E(G)\), if we wish to highlight that the vertices, respectively edges, belong to the graph \(G\). Note that a graph is a 1-dimensional simplicial complex. **Definition 2.5** (Independent set, complex).: For a graph \(G=(V,E)\), a subset \(W\subseteq V\) is called an _independent set_ if for all \(e\in E\), at least one vertex of \(e\) is not in \(W\). The _independence complex_ of \(G\) is the simplicial complex \[\operatorname{Ind}(G)=\{W\mid W\subseteq V\text{ is an independent set of }G\}.\] It is straightforward to verify that \(\operatorname{Ind}(G)\) satisfies the definition of a simplicial complex. The facets of \(\operatorname{Ind}(G)\) correspond to the _maximal independent sets_ of the graph \(G\). By the Stanley-Reisner correspondence, it can be shown that \(I_{\operatorname{Ind}(G)}=I(G)\), that is, the square-free monomial ideal associated with the simplicial complex \(\operatorname{Ind}(G)\) is precisely the edge ideal of \(G\). Instead of writing \(A(\operatorname{Ind}(G))\) for the graded Artinian algebra of Equation (2.1), we will abuse notation and simply write \(A(G)\) (thus agreeing with our notation in the introduction), and call \(A(G)\) the _Artinian algebra of \(G\)_. We now turn our attention to the main class of graphs we wish to study. **Definition 2.6** (Whiskering).: Given a graph \(G\) with vertex set \(V=\{x_{1},\ldots,x_{n}\}\), the _whiskered graph of \(G\)_, denoted \(w(G)\), is the graph on the vertex set \(V(w(G))=\{x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\}\) and edge set \[E(w(G))=E(G)\cup\{\{x_{i},y_{i}\}\mid i=1,\ldots,n\}.\] We say a graph is _whiskered_ if it is the whiskering of some graph \(G\). Informally, we "whisker" a graph \(G\) by adding a new vertex \(y_{i}\) for each vertex \(x_{i}\) in \(G\), and join these two vertices together with an edge. An example of a graph \(G\) and its whiskered graph \(w(G)\) is given in Figure 1.

Figure 1. A graph \(G\) and its whiskered graph \(w(G)\)

Whiskered graphs have a number of nice properties, some of which can be found in [7, 9, 20, 22]. In particular, the operation of whiskering always produces a Cohen-Macaulay edge ideal; in other words, no matter what \(G\) we choose, \(I(w(G))\) is a Cohen-Macaulay ideal. This implies also that \(I(w(G))\) is an unmixed ideal, or in the language of our current paper, it has a pure independence complex (see, for example, [7, Theorem 4.4] for a proof).
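As a quick computational illustration of Definition 2.6 and of this purity statement (stated formally in Lemma 2.7 below), the following sketch, which is not part of the paper, whiskers a small graph with networkx and lists the facets of \(\operatorname{Ind}(w(H))\) as the maximal independent sets of \(w(H)\), i.e., the maximal cliques of the complement graph. The choice of \(H\) and the vertex labels are arbitrary.

```python
# Whisker a graph H and list the facets of Ind(w(H)).
# Maximal independent sets of a graph are maximal cliques of its complement.

import networkx as nx

def whisker(H):
    G = nx.Graph(H)
    for v in list(H.nodes):
        G.add_edge(v, f"y_{v}")        # attach a pendant "whisker" vertex to v
    return G

H = nx.path_graph(3)                    # H: 0 - 1 - 2, so n = 3
G = whisker(H)

complement = nx.complement(G)
facets = [sorted(map(str, c)) for c in nx.find_cliques(complement)]

print("facets of Ind(w(H)):", facets)
print("facet sizes:", {len(f) for f in facets})   # expected {3}: Ind(w(H)) is pure
```

Every facet printed for this example has size \(n=3\), in line with the purity of \(\operatorname{Ind}(w(H))\).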
**Lemma 2.7**.: _If \(G\) is a whiskered graph on \(2n\) vertices, then the independence complex \(\operatorname{Ind}(G)\) is a pure simplicial complex of dimension \(n-1\)._ ## 3. Main Result In this section we prove our main result. As we shall show, Theorem 1.2 is in fact a corollary about the structure of the independence complex of a whiskered graph. In particular, we prove that these independence complexes are all pseudo-manifolds. **Theorem 3.1**.: _If \(G\) is a whiskered graph on \(2n\) vertices, then \(\operatorname{Ind}(G)\) is a pseudo-manifold. Moreover, if \(|E(G)|\geq n+1\), then \(\operatorname{Ind}(G)\) is a pseudo-manifold with boundary._ Proof.: Let \(G=w(H)\), where \[V(H)=\{x_{1},\ldots,x_{n}\}\quad\text{and}\quad V(G)=V(H)\cup\{y_{1},\ldots,y _{n}\}\] are the vertex sets of \(H\) and \(G\), respectively. We need to verify Definition 2.3 (1)-(3). By Lemma 2.7, \(\operatorname{Ind}(G)\) is a pure \((n-1)\)-dimensional simplicial complex, which verifies Definition 2.3 (1). We next prove the remaining two conditions. We first show that every \((n-2)\)-face of \(\operatorname{Ind}(G)\) is contained in at most two facets. Observe that if \(U\) is an independent set of \(G\) and \(x_{k}\notin U\) for some \(k\in\{1,\ldots,n\}\), then \(U\cup\{y_{k}\}\) is an independent set of \(U\). This is because the vertex \(y_{k}\) connects only to \(x_{k}\). We therefore conclude that every \((n-2)\)-face \(F\) of \(\operatorname{Ind}(G)\) can be written as a disjoint union \[F=\{x_{i}\mid i\in S\}\cup\{y_{j}\mid j\in T\}\] where \(S,T\subset\{1,\ldots,n\}\), \(S\cap T=\emptyset\), and \(|S\cup T|=n-1\). In particular, \[F\cup\{x_{j}\},\ F\cup\{y_{i}\}\not\in\operatorname{Ind}(G)\quad\text{for} \quad j\in T,\ i\in S.\] Suppose \(\{1,\ldots,n\}\smallsetminus(S\cup T)=\{q\}\). Then there are at most two facets of \(\operatorname{Ind}(G)\) that contain \(F\), namely * \(F\cup\{x_{q}\}\), if \(\{x_{i}\mid i\in S\}\cup\{x_{q}\}\) is an independent set of \(H\), and * \(F\cup\{y_{q}\}\). We have now verified Definition 2.3 (2). Finally, to show Definition 2.3 (3) holds, consider any two facets \(F,F^{\prime}\in\operatorname{Ind}(G)\). We claim there exists a sequence of facets \((F_{0},\ldots,F_{s})\) such that \(F_{0}=F\), \(F_{s}=F^{\prime}\) and \(|F_{i}\cap F_{i+1}|=n-1\). To prove this claim, we let \(Y=\{y_{1},\ldots,y_{n}\}\) be the facet of \(\operatorname{Ind}(G)\) that contains all the vertices of the whiskers (which is clearly an independent set of \(G\)) and let \(F\) be an arbitrary facet of \(\operatorname{Ind}(G)\). After reordering the vertices of \(\operatorname{Ind}(G)\), \(F\) can be written as \[F=\{x_{1},\ldots,x_{i},y_{i+1},\ldots,y_{n}\}.\] In particular, since \(\{x_{1},\ldots,x_{i}\}\) is an independent set of vertices in \(G\), every subset of it is also independent. Consider the sequence of facets \((Y,F_{1},\ldots,F_{i-1},F_{i}=F)\), where \[F_{j}=\{x_{1},\ldots,x_{j},y_{j+1},\ldots,y_{n}\}.\] Note that the sequence satisfies \[|F_{j}\cap F_{j+1}|=|\{x_{1},\ldots,x_{j},y_{j+2},\ldots,y_{n}\}|=n-1.\] Definition 2.3 (3) now holds for any two facets \(F,F^{\prime}\) of \(\operatorname{Ind}(G)\) taking the sequences from \(F\) to \(Y\) and \(F^{\prime}\) to \(Y\) constructed as above, by gluing the two sequences together, with one of them in reverse order. Thus, \(\operatorname{Ind}(G)\) is a pseudo-manifold. Note that the condition \(|E(G)|\geq n+1\) implies that \(H\) has at least one edge. Consequently, there exists a maximal independent set \(D\subsetneq V(H)\). 
Up to reordering the indices, we may assume \(D=\{x_{1},\ldots,x_{i}\}\) for some \(i<n\). The \((n-2)\)-face \(F=\{x_{1},\ldots,x_{i},y_{i+1},\ldots,y_{n-1}\}\) of \(\operatorname{Ind}(G)\) is contained in the facet \(F\cup\{y_{n}\}\). Moreover, since \(D\) is a maximal independent set, the vertex \(x_{n}\) is adjacent to some \(x_{j}\in D\). In particular, \(F\) is only contained in one facet of \(\operatorname{Ind}(G)\). Consequently, \(\operatorname{Ind}(G)\) is a pseudo-manifold with boundary. To prove Theorem 1.2, we need to take into consideration the characteristic of the field \(\mathbb{K}\). **Corollary 3.2** (Characteristic \(0\) Case).: _Suppose \(G\) is a whiskered graph on \(2n\) vertices and at least \(n+1\) edges, and \(A=A(G)\) is the Artinian algebra of \(G\) over a field \(\mathbb{K}\) of characteristic \(0\). If \(\ell\) is the sum of the variables of \(A\), then \(\times\ell:A_{i}\to A_{i+1}\) has maximal rank when \(i=1\) or \(n-1\)._ Proof.: Let \(G=w(H)\), where \[V(H)=\{x_{1},\ldots,x_{n}\}\quad\text{and}\quad V(G)=V(H)\cup\{y_{1},\ldots,y _{n}\}\] are the vertex sets of \(H\) and \(G\), respectively. Note that \(H\) must have at least one edge for \(G\) to have \(n+1\) edges, so the case \(n=1\) is excluded. If \(H\) has two vertices and one edge, then the whiskering \(w(H)\) is the path \(P_{4}\) in four vertices. The Artinian algebra defined by this graph has the WLP by [18, Theorem 4.4]. Assume \(H\) has at least three vertices. In order to apply Theorem 2.4 we verify that \(f_{0}\leq f_{1}\) in this case. The number of edges of \(\operatorname{Ind}(G)\) is exactly the number of edges missing from \(G\), and furthermore, \(G\) has \(|E(H)|+n\) edges. On the other hand, \(H\) has at most \(\frac{n(n-1)}{2}\) edges, so \[f_{1} =\frac{2n(2n-1)}{2}-|E(H)|-n\] \[=2n(n-1)-|E(H)|\geq 2n(n-1)-\frac{n(n-1)}{2}\] \[=\frac{3n(n-1)}{2}\geq 2n=f_{0},\] where the last inequality holds because \(n\geq 3\). Let \(G^{\prime}\) denote the 1-skeleton of \(\operatorname{Ind}(G)\). As the 1-faces of \(\operatorname{Ind}(G)\) are the non-edges of \(G\), the graph \(G^{\prime}\) is identical to the complement graph of \(G\). Any pairs \(\{y_{i},y_{j}\}\) and \(\{x_{i},y_{j}\}\) with \(i\neq j\) are non-edges of \(G\), making \(G^{\prime}\) connected. Moreover, the triple \(y_{1},y_{2},y_{3}\) makes a triangle of \(G^{\prime}\), so \(G^{\prime}\) is not bipartite. It follows by Theorem 2.4 that the multiplication map \(\times\ell:A_{1}\to A_{2}\) is injective. Since \(|E(G)|\geq n+1\), Theorem 3.1 gives that the simplicial complex \(\operatorname{Ind}(G)\) is an \((n-1)\)-dimensional pseudo-manifold with boundary. By applying Theorem 2.4, the multiplication map \(\times\ell:A_{n-1}\to A_{n}\) has maximal rank. **Corollary 3.3** (Prime Characteristic \(\neq 2\) Case).: _Suppose \(G\) is a whiskered graph with \(2n\) vertices and at least \(n+1\) edges. Let \(A=A(G)\) be the Artinian algebra of \(G\), and assume the characteristic of the base field \(\mathbb{K}\) is a prime not equal to \(2\). If \(\ell\) is the sum of the variables of \(A\), then \(\times\ell:A_{i}\to A_{i+1}\) has maximal rank when \(i=1\) or \(n-1\)._ Proof.: We first consider the multiplication map \(\times\ell:A_{n-1}\to A_{n}\) for all \(n\geq 2\). Since for every \(m\in A_{n-1}\) we have \[\ell m=\sum_{x_{i}m\neq 0\text{ in }A}x_{i}m\] we see that the multiplication map \(\times\ell:A_{n-1}\to A_{n}\) is represented by a matrix \(M\) that only has \(0\) and \(1\) as entries. 
Since \(\operatorname{Ind}(G)\) is a pseudo-manifold, and since the nonzero monomials of degree \(i\) correspond to faces of dimension \(i-1\) of \(\operatorname{Ind}(G)\), we know the sum of the entries in each column of \(M\) is \(1\) or \(2\). Let \(N\) be the matrix obtained from \(M\) by multiplying a column by \(2\) if the sum of its entries is \(1\). Because the characteristic of \(\mathbb{K}\) is not \(2\), this operation does not affect the rank. By [11, Corollary 45] the matrix \(N^{\top}\), and hence also \(N\), has full rank if and only if it has full rank in characteristic zero and \(N^{\top}\) has at least as many rows as columns. By Corollary 3.2\(M\), and hence \(N\), has full rank when the characteristic of \(\mathbb{K}\) is zero, so it remains to verify that \(f_{n-2}\geq f_{n-1}\), or equivalently, \(\dim_{\mathbb{K}}A_{n-1}\geq\dim_{\mathbb{K}}A_{n}\). This follows from the fact that \(\operatorname{Ind}(G)\) is connected, and every \((n-1)\)-face contains \(n\) faces of dimension \((n-2)\). We now consider the multiplication map \(\times\ell:A_{1}\to A_{2}\) for all \(n\geq 2\). Because the case \(n=2\) is covered by the previous case, we assume \(n\geq 3\). As shown in the proof of Corollary 3.2, we have \(f_{0}\leq f_{1}\) for \(\operatorname{Ind}(G)\). By [11, Theorem 57] the map \(\times\ell:A_{1}\to A_{2}\) has full rank if it does so when the characteristic of \(\mathbb{K}\) is zero. This is indeed the case by Corollary 3.2. The example below shows that the hypothesis \(|E(G)|\geq n+1\) cannot be dropped from the previous statements. **Example 3.4**.: Let \(H\) be the graph consisting of two isolated vertices \(V=\{x_{1},x_{2}\}\). The whiskered graph \(G=w(H)\) has two disjoint edges, namely \(E(G)=\{\{x_{1},y_{1}\},\{x_{2},y_{2}\}\}\), but does not satisfy \(|E(G)|\geq 2+1\). The facets of \(\operatorname{Ind}(G)\) are \[\{\{x_{1},x_{2}\},\{x_{1},y_{2}\},\{y_{1},x_{2}\},\{y_{1},y_{2}\}\}.\] While \(\operatorname{Ind}(G)\) is a pseudo-manifold, it does not have a boundary since every face of dimension \(\dim(\operatorname{Ind}(G))-1=0\) occurs in two facets. So the hypothesis \(|E(G)|\geq n+1\) in Theorem 3.1 cannot be dropped. Furthermore, using _Macaulay2_[10], it can be shown that \(\times\ell:A(G)_{1}\to A(G)_{2}\) does not have maximal rank, so we also need \(|E(G)|\geq n+1\) in Corollary 3.2. As we will see in the next section, we cannot improve Corollary 3.2 to degrees \(2\) and \(n-2\) for all whiskered graphs. ## 4. Illustrative examples We conclude this note with some illustrative examples. These examples show that some of the conclusions of Corollary 3.2 cannot be improved for whiskered graphs. Moreover, they also show that for some natural families of graphs one may wish to consider, the WLP fails. We first define a broom graph. Let \(m\geq 1\) be an integer. The _broom graph_\(B_{m}\) is a graph on \(m+3\) vertices \(V(G)=\{x_{1},x_{2},x_{3},x_{4},\ldots,x_{m+3}\}\) with edge set \[E(G)=\{\{x_{1},x_{2}\},\{x_{2},x_{3}\}\}\cup\{\{x_{3},x_{i}\}\mid i\in 4,\ldots,m+3\}.\] Note that our definition of a broom graph is similar to that \([2,3]\) which was defined for directed graphs. Our main example is based upon whiskering a broom graph. **Example 4.1**.: We consider the broom graph \(B_{5}\) and its whiskered graph \(G=w(B_{5})\); these graphs are shown in Figure 2. 
Using _Macaulay2_[10] to compute the Hilbert function of the graded Artinian ring \(A=A(G)\) we obtain: \[\begin{array}{c|ccccccccc}i&0&1&2&3&4&5&6&7&8&9\\ \hline H_{A}(i)&1&16&105&380&840&1167&996&477&98&0\end{array}\] Since \(B_{5}\) is a graph on \(|V(B_{5})|=8\) vertices, Corollary 3.2 implies that the map \(\times\ell:A_{7}\to A_{8}\) has maximal rank (where \(\ell\) is the sum of the variables of \(A\)). This can be verified by computer calculation. On the other hand, we can use this example to show that we cannot improve Corollary 3.2 to show that the map \(\times\ell:A(G)_{n-2}\to A(G)_{n-1}\) has maximal rank for all whiskered graphs \(G\) with \(2n\) vertices. Indeed, computer calculation shows that the map \(\times\ell:A_{6}\to A_{7}\) does not have maximal rank, that is, the map fails to be surjective. Then the map \(\times\ell:A_{5}\to A_{6}\) also fails surjectivity, as surjectivity in one degree implies surjectivity in all higher degrees (see e. g. [13, Proposition 2.1 (a)]). Interestingly, this graded Artinian ring also fails to have an injective map from degree \(4\) to \(5\). In this case, the rank of the map \(\times\ell:A_{4}\to A_{5}\) is \(826\neq\min\{840,1167\}\). **Example 4.2**.: The broom graph \(B_{1}\) is the same as the path on four vertices. The Hilbert function of the graded Artinian ring \(A=A(w(B_{1}))\) is given by \[\begin{array}{c|ccccccccc}i&0&1&2&3&4&5\\ \hline H_{A}(i)&1&8&21&22&8&0\end{array}\] Figure 2. The broom graph \(B_{5}\) and its whiskered graph \(w(B_{5})\) The map \(\times\ell:A_{2}\to A_{3}\) does not have maximal rank (found by computer computation). This example not only shows that Corollary 3.2 cannot be improved to hold for degree \(n-2\), it also cannot be improved to hold for degree \(2\). Recall that our motivating problem, Question 1.1, asks if there are families of graphs for which \(A(G)\) always have the weak Lefschetz property. Our previous example shows that such a family of graphs cannot be all bipartite graphs or all chordal graphs (a graph where the only induced cycles are three cycles), since \(w(B_{5})\) is both a bipartite and chordal graph. Even the family of unmixed trees does not always satisfy the WLP. In addition, combinatorial commutative algebra has introduced families of graphs whose independence complexes have especially nice combinatorial properties, e.g., Cohen-Macaulay, shellable, vertex decomposable (see [16] for details and definitions). Since all whiskered graphs are vertex decomposable by [7, Theorem 4.4] (and thus shellable and Cohen-Macaulay), our previous example also shows that these families of graphs do not always satisfy the weak Lefschetz property. **Remark 4.3**.: From computations in Macaulay2 for \(m=1,\ldots,8\), the map \[\times\ell:A(w(B_{m}))_{m+1}\to A(w(B_{m}))_{m+2}\] always fails maximal rank due to the element \(y_{1}y_{3}y_{4}\ldots y_{m+3}\in A(w(B_{m}))_{m+2}\) not being in the image of the multiplication map. We conjecture that this is the case for all \(m\geq 1\), and we are currently working on verifying this observation. **Remark 4.4**.: The whiskering construction of graphs has been generalized in several different ways by Biermann-Van Tuyl [1], Cook-Nagel [5], and Faridi [8]. We are currently exploring if these constructions also allow us to construct simplicial complexes that are pseudo-manifolds (perhaps with boundary), thus providing a deeper generalization Theorem 3.1. 
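The computer checks quoted in Examples 4.1 and 4.2 were carried out in _Macaulay2_ [10]; a rough cross-check of Example 4.2 can also be sketched in Python, under the correspondence between degree-\(i\) monomials of \(A(G)\) and independent sets of size \(i\): the Hilbert function counts independent sets, and multiplication by \(\ell=x_{1}+\cdots+x_{n}\) sends an independent set \(S\) to the sum of all sets \(S\cup\{v\}\) that are again independent, so its matrix has \(0/1\) entries and its rank can be computed exactly over \(\mathbb{Q}\). The helper functions and vertex labels below are choices made here, not taken from the paper.

```python
# Hilbert function of A(w(B_1)) and the rank of x ell : A_2 -> A_3 (Example 4.2).

from itertools import combinations
from sympy import Matrix

def independent_sets(vertices, edge_set, size):
    return [s for s in combinations(vertices, size)
            if all(frozenset(p) not in edge_set for p in combinations(s, 2))]

def ell_matrix(vertices, edge_set, i):
    # Matrix of multiplication by ell = sum of the variables, A_i -> A_{i+1}.
    rows = independent_sets(vertices, edge_set, i + 1)   # basis of A_{i+1}
    cols = independent_sets(vertices, edge_set, i)       # basis of A_i
    index = {frozenset(s): k for k, s in enumerate(rows)}
    M = [[0] * len(cols) for _ in rows]
    for j, s in enumerate(cols):
        for v in vertices:
            key = frozenset(s) | {v}
            if v not in s and key in index:
                M[index[key]][j] = 1
    return Matrix(M)

# w(B_1): the whiskered path x1-x2-x3-x4 with whiskers xi-yi.
vertices = [f"x{i}" for i in range(1, 5)] + [f"y{i}" for i in range(1, 5)]
edges = ([("x1", "x2"), ("x2", "x3"), ("x3", "x4")]
         + [(f"x{i}", f"y{i}") for i in range(1, 5)])
edge_set = {frozenset(e) for e in edges}

# Hilbert function: number of independent sets of each size.
print([len(independent_sets(vertices, edge_set, i)) for i in range(6)])

M = ell_matrix(vertices, edge_set, 2)    # the map A_2 -> A_3
print(M.shape, "rank =", M.rank(), "maximal rank would be", min(M.shape))
```

For \(w(B_{1})\) the printed Hilbert function should be \(1,8,21,22,8,0\), and the rank of the \(22\times 21\) matrix for \(\times\ell:A_{2}\to A_{3}\) should come out strictly below \(21\), matching the failure of maximal rank reported in Example 4.2.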
### Acknowledgments Work on this project began at the "Workshop on Lefschetz Properties in Algebra, Geometry, Topology and Combinatorics", held at The Fields Institute for Research in Mathematical Sciences, Toronto, Canada in May 2023. We thank The Fields Institute for its hospitality and Eran Nevo for insightful conversations. Cooper's research is supported by NSERC Discovery Grant 2018-05004. Faridi's research is supported by NSERC Discovery Grant 2023-05929. Nicklasson's research is supported by the grant KAW-2019.0512. Van Tuyl's research is supported by NSERC Discovery Grant 2019-05412.
2304.14921
A technological framework for scalable ground-up formation of Circular Societies
The Circular Economy (CE) is regarded as a solution to the environmental crisis. However, mainstream CE measures skirt around challenging the ethos of ever-increasing economic growth, overlooking social impacts and under-representing solutions such as reducing overall consumption. Circular Societies (CS) address these concerns by challenging this ethos. They emphasize ground-up social reorganization, address over-consumption through sufficiency strategies, and highlight the need for considering the complex inter-dependencies between nature, society, and technology on local, regional and global levels. However, no blueprint exists for forming CSs. An initial objective of my thesis is exploring existing social-network ontologies and developing a broadly applicable model for CSs. Since ground-up social reorganization on local, regional, and global levels has compounding effects on network complexities, a technological framework digitizing these inter-dependencies is necessary. Finally, adhering to CS principles of transparency and democratization, a system of trust is necessary to achieve collaborative consensus of the network state.
Anant Sujatanagarjuna
2023-04-28T15:35:27Z
http://arxiv.org/abs/2304.14921v2
# A technological framework for scalable ground-up formation of Circular Societies ###### Abstract The Circular Economy (CE) is regarded as a solution to the environmental crisis. However, mainstream CE measures skirt around challenging the ethos of ever-increasing economic growth, overlooking social impacts and under-representing solutions such as reducing overall consumption. Circular Societies (CS) address these concerns by challenging this ethos. They emphasize ground-up social reorganization, address over-consumption through sufficiency strategies, and highlight the need for considering the complex inter-dependencies between nature, society, and technology on local, regional and global levels. However, no blueprint exists for forming CSs. An initial objective of my thesis is exploring existing social-network ontologies and developing a broadly applicable model for CSs. Since ground-up social reorganization on local, regional, and global levels has compounding effects on network complexities, a technological framework digitizing these inter-dependencies is necessary. Finally, adhering to CS principles of transparency and democratization, a system of trust is necessary to achieve collaborative consensus of the network state. circular societies, ground-up social reorganization, social-network ontologies, digitization, democratization, transparency, network consensus. In: B. Combemale, G. Mussbacher, S. Betz, A. Friday, I. Hadar, J. Sallou, I. Groher, H. Muccini, O. Le Meur, C. Herglotz, E. Eriksson, B. Penzenstadler, AK. Peters, C. C. Venters. Joint Proceedings of ICT4S 2023 Doctoral Symposium, Demonstrations & Posters Track and Workshops. Co-located with ICT4S 2023. Rennes, France, June 05-09, 2023. ## 1 Foreword I would like to thank the organizers of the ICT4S Doctoral Symposium for creating this platform, and allowing the participation of prospective doctoral researchers like myself. The following text details the doctoral thesis topic that I have built for myself while doing research in the Emerging Technologies for the Circular Economy research group, under the primary supervision of Dr. Benjamin Leiding. By participating in the symposium, I hope to gain valuable feedback from interactions with other researchers from diverse backgrounds, which would help me to solidify this research direction that I have built as my doctoral dissertation topic. ## 2 Background and Motivation The abundance of cheaply exploitable natural resources and human labor has fueled a global system of production and consumption, which follows a uni-directional pattern of "take, make and dispose": beginning with material and resource extraction, followed by applying (often non-renewably sourced) energy and human labor, in order to manufacture products that are subsequently sold to consumers, who discard them when they are of no further use [1]. This economic model, popularly known as the Linear Economy (LE), has been the driving force of anthropogenic climate change and its associated widespread socio-ecological damage. ### Circular Economy While in the LE, materials are extracted from nature and are used to manufacture products, only to be eventually turned into waste, the model of a Circular Economy (CE) employs strategies such as sharing, leasing, reusing, re-manufacturing and recycling to keep existing materials and manufactured products in use for as long as possible.
The CE aims to design a system of production and consumption that eliminates the concept of "waste", by designing products that are optimized for cycles of disassembly and reuse [1]. The goal of this redesign, is to close the loop in the system of "take, make and dispose"; thus reducing material and resource extraction from nature, the creation of waste, and environmental pollution. In pursuit of similar goals, several governing bodies have adopted this model in recent years. For instance, the EU Commission has drafted a circular economy action plan as one of the main building blocks of the European Green Deal [2]. While the CE is a necessary and important step towards transitioning away from the destructive LE, CE in practice has been criticised in recent research as being insufficient to achieve the transformational change necessary to address the current socio-ecological crisis. ### Criticisms of the Circular Economy The central cause of the current socio-ecological crisis lies in the unidirectional logic that characterizes global systems of consumption and production today. This logic is fueled by industrialization and the narrative of free-market capitalism, whose primary function is to maximize economic value of natural and human resources that are converted into market commodities [3]. CE measures in practice however, tend to focus on recovery rates, resource efficiency, and waste reduction while overlooking aspects that can challenge this fundamental logic. For instance, while the necessity for radical transformation of the systems of production and consumption has been acknowledged by proponents and stakeholders of CE measures alike, the positive environmental potential of the concept of sufficiency is paradoxically disregarded for being too radical, and is excluded from current CE debates [4]. The concept of sufficiency is tightly linked with sustainability. If human society aims to be sustainable, i.e., satisfying the needs of the present without compromising the needs of future generations, it must also aim to consume enough to ensure universal human social well-being and quality of life, while restricting this consumption within the confines of the Earth's biocapacity [5]. In other words, we must also practice sufficiency. Another criticism of current CE measures, is that they often also display a lack of consideration towards social sustainability alongside economic and ecological sustainability [6]. While the CE is a necessary step towards addressing the current climate crisis, for a successful transition to a sustainable circular economy that is truly within planetary boundaries, there is also the need for an expansion of unidimensional value definitions to multidimensional and holistic constructs that highlight the importance of sufficiency and reducing resource consumption. The concept of a Circular Society (CS) is such a holistic construct. ### Circular Society The CS is a holistic version of the CE, in which transition "requires a fundamental reorientation and reorganization of practices and processes in all areas of life -- from nutrition mobility and energy use, to work models and housing concepts. This holistic vision, hearkens back to the roots of the CE; "1. to adopt a system perspective that considers the complex ways in which nature, society and technology are interdependently interacting on a local, regional and global level; 2. 
to aim for closed loops and organize production and consumption practices in circular flows that imitates the "eco-logic" of ecological systems; 3. to create a resilient production and consumption metabolism taking the need for regeneration of natural capital into account" [3]. Unlike modern CE measures, which implemented these concepts only through new business models and technologies, the proponents of CS also call for a "re-valuation of human labor and an enhanced role and conditions for productive work, service provision and do-it-yourself (DIY) activities" [7]. Aiming to challenge and transform the aforementioned unidimensional value definitions, CS focuses on the creation of multidimensional concepts of value creation "that define qualitative and quantitative indicators for social and ecological value creation and which take into account the many forms of work (care work, informal work, community work, DIY) that contribute to societal well-being" [7]. The CS also "aims to establish a participatory, communitarian, solidary and circular consumption and production system" [3], by championing the idea of people as 'embedded' in these complex systems, rather than as passive recipients [7]. Jaeger-Erben et. al [7] formulated some central topics for a "roadmap towards a CS", in which they highlight that unlike modern CE measures, CS concepts must also emphasize the concept of sufficiency through strategies such as "refuse, rethink and reduce". This focus on sufficiency can shift the focus to alternative value definitions that can aim to reduce, if not eliminate over-consumption. The authors [7] also identify some core prerequisites that need to be fulfilled, in order to foster community, collaboration and solidary practices of a CS. Firstly, _Accessibility and Transparency_ are recognized as central prerequisites for participation in the social and economic practices of a CS, meaning access to natural resources as well as education, health services and knowledge of the consumption and production processes is shared rather than monopolized, and "political and economic action is subject to the duty of transparency". Secondly, "_Democratization and Empowerment_ should create unconditional opportunities for participation and engagement in political, economic and cultural processes. Participation opportunities are linked with strategies for activation, capability boosting, and empowerment." A focus on encouraging sufficiency practices is also a key ingredient in achieving a circular society. ## 3 Research Gap There is a limit to what an individual alone can accomplish to incorporate sufficiency practices in their life. For instance, as individual computer scientists that recognize the socio-ecological damage associated with the lifecycle of smartphones, we can refuse to buy new smartphones, and rather utilize our expertise by repurposing electronics that we already possess, and build our own software alternatives to fit our daily needs. However, at least for me personally, the expertise ends here; since I would be unable to repair or build custom hardware components to sustain such a device forever. At a certain point, the lack of expertise of a single individual would be a barrier preventing them from pushing this practice further. Thankfully, for this specific scenario, the concept of a Hackerspace [8] is already quite popular. A Hackerspace is a collaborative workspace for people to work on projects while sharing tools and knowledge to realize projects. 
Generalizing, in order to break such barriers, individuals would need to collaborate and work together in order to practice sufficiency strategies that they could not have achieved on their own. Put together, several individuals can form a collaborative environment that collectively finds creative solutions to sustainability challenges; a circular society. A similar analogy can be made on a slightly more macro scale: individual local CS might also have similar limitations that they can only overcome by networking with other CSs. Working inductively, there needs to be perpetual scaling-up/networking of CSs, to actualize the goal of a socially and environmentally sustainable global circular society. Jaeger-Erben et. al. [3] recognize that "fertile ground" needs to exist for these innovative practices to be ubiquitous. Such a fertile ground needs to be capable of networking individuals together to facilitate these practices. There is however, no "blueprint" for the creation and organization of CSs. As mentioned in Section 2.3, general guiding principles for CSs exist. However, the idea of CSs is still relatively new, and there are already several examples of the principles of the CS being put into practice, e.g., the Free and Open Source Software (FOSS) movement [9], solidarity economics [10], micro-energy cooperatives [11], eco-villages and co-housing projects [12]. Exploring the commonalities and differences between such societies, is a necessary step in understanding how to replicate the fertile ground that facilitates their operation. Also, the perpetual scaling-up/networking of new and existing societies towards the goal of a global CS would require a generic framework that works on an individual, societal and macro-societal level, and that can streamline the formation and continuous operation of these CSs: the so-called fertile ground. Since this networking will exponentially increase in complexity, tracking the various interdependencies between individuals, society, nature and technology, in an analog manner would be infeasible. Hence, this would require this network of interdependencies to be digitized, calling for a technological framework (at least) consisting of: 1. An ontology (consisting of individuals, society, nature, technology, and their various relations), that is broadly applicable to, and serves as a means of understanding of the dynamics of collaborative practices within and between CSs, inspired and compatible with existing concepts such as solidarity economics, eco-villages, collaborative development, etc. 2. A common communication network that allows transparent and democratized inter- and intra- networking of CSs by forming a _decentralized knowledge graph_[13] that adheres to, and is described by the ontology. 3. A system of trust that is able to achieve (decentralized) consensus of the state of this decentralized knowledge graph. These three building-blocks can satisfy the previously mentioned prerequisites identified by Jaeger-Erben et. al [3], for a fertile ground that can foster the creation of CSs. The democratization in the formation of the _decentralized knowledge graph_ would allow for universal engagement in the sufficiency practices followed by the various actors in the network. The _decentralized knowledge graph_ being transparent, is also universally accessible, and allows knowledge to be extracted and understood based on the defined ontology, empowering others with blueprints of successful formations of CSs for independent formation. 
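To make building block 1 slightly more tangible, the ontology could, for instance, be expressed in a standard vocabulary such as RDF/OWL and populated as a knowledge graph. The sketch below is purely illustrative: every class and property name is a hypothetical placeholder, not part of the ontology the thesis will actually develop, and the decentralization and consensus aspects (building blocks 2 and 3) are not addressed here.

```python
# Illustrative sketch only: a toy RDF/OWL vocabulary for a Circular Society
# knowledge graph. All class and property names (cs:Individual, cs:sharesResource,
# ...) are hypothetical placeholders, not part of any published CS ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

CS = Namespace("http://example.org/circular-society#")
g = Graph()
g.bind("cs", CS)

# Classes: actors, societies, resources, and sufficiency practices.
for cls in (CS.Individual, CS.CircularSociety, CS.Resource, CS.Practice):
    g.add((cls, RDF.type, OWL.Class))

# Properties linking individuals, societies, nature and technology.
for prop, dom, rng in [
    (CS.memberOf,       CS.Individual,      CS.CircularSociety),
    (CS.sharesResource, CS.CircularSociety, CS.Resource),
    (CS.follows,        CS.Individual,      CS.Practice),
    (CS.networksWith,   CS.CircularSociety, CS.CircularSociety),
]:
    g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((prop, RDFS.domain, dom))
    g.add((prop, RDFS.range, rng))

# A tiny instance: a hackerspace-like society with one member and one practice.
g.add((CS.hackerspace_42, RDF.type, CS.CircularSociety))
g.add((CS.alice, RDF.type, CS.Individual))
g.add((CS.alice, CS.memberOf, CS.hackerspace_42))
g.add((CS.repair_phones, RDF.type, CS.Practice))
g.add((CS.repair_phones, RDFS.label, Literal("repair smartphones instead of buying new")))
g.add((CS.alice, CS.follows, CS.repair_phones))

print(g.serialize(format="turtle"))
```

Serializing the toy graph to Turtle gives a human-readable view of the kind of interdependencies that a decentralized knowledge graph would then have to keep consistent across participating nodes.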
Additionally, the documentation of sufficiency practices, could also allow for their evaluation in terms of positive environmental impact. ## 4 Research Questions The main goal of my thesis is to fill the gap in the existing research identified in Section 3. To this end, I define the main research question for my thesis as follows: **RQ: How to create a technological framework to facilitate the formation and networking of circular societies?** For a more focused exploration of the main research question, I have further sub-divided the main research question into the following: * **RQ1: How to create an ontology that is broadly applicable to circular societies?** * **RQ2: How to network circular societies in a transparent, democratized, and decentralized knowledge graph?** * **RQ3: How to create a system of trust to achieve consensus in the decentralized knowledge graph?** These sub-questions are suitably phrased in order to collectively answer the main research question. In order to answer each of these questions, a preliminary research direction of my thesis would be to do systematic literature review into various subjects, including but not limited to: existing ontologies of social networks and frameworks for their development such as OWL (the Web Ontology Language) [14], decentralized knowledge graphs, technologies supporting decentralized autonomous organizations [15] (including non-blockchain [16] related solutions), and decentralized consensus mechanisms [17]. Secondary research directions of my thesis will also include exploring methods of evaluating sufficiency practices for their positive environmental impact using this technological framework. ## 5 Acknowledgments I would like to thank my members of my family, friends and colleagues; whose feedback was invaluable in writing this paper.
2310.19424
Variational Curriculum Reinforcement Learning for Unsupervised Discovery of Skills
Mutual information-based reinforcement learning (RL) has been proposed as a promising framework for retrieving complex skills autonomously without a task-oriented reward function through mutual information (MI) maximization or variational empowerment. However, learning complex skills is still challenging, due to the fact that the order of training skills can largely affect sample efficiency. Inspired by this, we recast variational empowerment as curriculum learning in goal-conditioned RL with an intrinsic reward function, which we name Variational Curriculum RL (VCRL). From this perspective, we propose a novel approach to unsupervised skill discovery based on information theory, called Value Uncertainty Variational Curriculum (VUVC). We prove that, under regularity conditions, VUVC accelerates the increase of entropy in the visited states compared to the uniform curriculum. We validate the effectiveness of our approach on complex navigation and robotic manipulation tasks in terms of sample efficiency and state coverage speed. We also demonstrate that the skills discovered by our method successfully complete a real-world robot navigation task in a zero-shot setup and that incorporating these skills with a global planner further increases the performance.
Seongun Kim, Kyowoon Lee, Jaesik Choi
2023-10-30T10:34:25Z
http://arxiv.org/abs/2310.19424v1
# Variational Curriculum Reinforcement Learning ###### Abstract Mutual information-based reinforcement learning (RL) has been proposed as a promising framework for retrieving complex skills autonomously without a task-oriented reward function through mutual information (MI) maximization or variational empowerment. However, learning complex skills is still challenging, due to the fact that the order of training skills can largely affect sample efficiency. Inspired by this, we recast variational empowerment as curriculum learning in goal-conditioned RL with an intrinsic reward function, which we name Variational Curriculum RL (VCRL). From this perspective, we propose a novel approach to unsupervised skill discovery based on information theory, called Value Uncertainty Variational Curriculum (VUVC). We prove that, under regularity conditions, VUVC accelerates the increase of entropy in the visited states compared to the uniform curriculum. We validate the effectiveness of our approach on complex navigation and robotic manipulation tasks in terms of sample efficiency and state coverage speed. We also demonstrate that the skills discovered by our method successfully complete a real-world robot navigation task in a zero-shot setup and that incorporating these skills with a global planner further increases the performance. Machine Learning, ICML, ICML ## 1 Introduction Intelligent creatures are able to efficiently explore the environments and learn useful skills in the absence of external supervision. By utilizing these skills, they can quickly accomplish tasks when they are later faced with specific tasks. To scale a learning agent to the real-world, it is crucial to achieve such ability of learning skills without supervision. Recent studies on unsupervised RL suggest ways to alleviate the need for human effort. Most of these approaches focus on reducing the burden of designing objective functions by incorporating intrinsic motivation objectives or leveraging concepts from information theory. In this work, we further reconcile with the need not only to manually engineer objective functions but to craft the order of training skills. Empowerment or MI-based RL (Klyubin et al., 2005; Salge et al., 2014) has gained traction in recent years as a means of unsupervised skill discovery due to its intuitive interpretation and empirical successes (Eysenbach et al., 2019; Sharma et al., 2019; Jabri et al., 2019). However, the common empowerment approach has been to either fix or parameterize the distribution of skills (Nair et al., 2018; Pong et al., 2020; Campos et al., 2020). The efficiency of learning skills with respect to the number of required training samples is rather limited when the agent learns complex skills from a fixed skill distribution without an organized order. The notion of _curriculum_ studies the effectiveness of the order of training skills. By selecting the order of appropriate skills, a learning agent may achieve a variety of complex skills (Florensa et al., 2018; Fang et al., 2019). However, it is both necessary to define a set of tasks that can be used to generate curriculum (Klink et al., 2020; Zhang et al., 2020) and specify a form of reward functions (Racaniere et al., 2019; Ren et al., 2019; Narvekar and Stone, 2019). To rectify this issue, we interpret empowerment as a unifying framework for curriculum learning in goal-conditioned RL (GCRL). 
Recasting variational empowerment as curriculum learning in GCRL with an intrinsic reward function, our Variational Curriculum RL (VCRL) framework interestingly encapsulates most of the prior MI-based approaches (Nair et al., 2018; Pong et al., 2020; Campos et al., 2020). In this regard, we derive a new approach to information-theoretic skill discovery, Value Uncertainty Variational Curriculum (VUVC), that allows us to automatically generate curriculum goals which maximize the expected information, approximated as the uncertainty in predictions of an ensemble of value functions. We analyze the asymptotic behavior of the entropy of visited states and provide the reasons why our method results in much faster coverage of the state space compared to existing methods. The main contributions of this paper can be summarized as follows: (1) We provide the unifying framework VCRL encapsulating most of the prior MI-based approaches. (2) We propose VUVC, a value uncertainty based approach to information-theoretic skill discovery, aimed at automatically generating curricula for training skills, and which is supported by theoretical justification. (3) We show the effectiveness of our approach on complex navigation, robotic manipulation in both configuration and image state space, and real-world robotic navigation tasks, and illustrate that the skills discovered by our method can be further improved by incorporating them with a global planner. ## 2 Background ### Goal-Conditioned Reinforcement Learning Goal-conditioned RL (Kaelbling, 1993) extends the standard RL framework to enable agents to accomplish a variety of tasks. It solves the problem formulated as a goal-conditioned Markov decision process (MDP) which is defined as a tuple \(\langle\mathcal{S},\mathcal{G},\mathcal{A},P,R_{g},\gamma\rangle\), where \(\mathcal{S}\) is the set of states, \(\mathcal{G}\) is the set of goals, \(\mathcal{A}\) is the set of actions, \(P:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,+\infty)\) is the transition probability, \(R_{g}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the goal-conditioned reward function, and \(\gamma\in[0,1]\) is the discount factor. The objective of GCRL is to find the policy \(\pi_{\theta}(a|s,g)\) parameterized with \(\theta\), where \(s\in\mathcal{S},a\in\mathcal{A},g\in\mathcal{G}\) and \(\pi:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,+\infty)\), that maximizes the universal value function (Schaul et al., 2015): \[\theta\leftarrow\operatorname*{arg\,max}_{\theta}V^{\pi_{\theta}}(s,g)\triangleq\] \[\mathbb{E}_{\begin{subarray}{c}a_{t}\sim\pi_{\theta}(a_{t}|s_{t},g),\\ s_{t+1}\sim P(s_{t+1}|s_{t},a_{t})\end{subarray}}\Bigg{[}\sum_{t=0}^{\infty}\gamma^{t}R_{g}(s_{t},a_{t})\Big{|}s_{0}=s\Bigg{]}. \tag{1}\] ### Mutual Information and Empowerment In the context of RL, MI maximization such as empowerment generally means maximizing the mutual information between a function of states and a function of actions to learn latent-conditioned policies \(\pi(a|s,z)\), where the latent code \(z\) can be interpreted as a macro-action, skill or goal (Eysenbach et al., 2019; Sharma et al., 2019).
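Before moving to the MI objective, a minimal sketch of the goal-conditioned setup of Equation 1 may help fix the notation: it estimates the universal value function \(V^{\pi}(s,g)\) by Monte-Carlo rollouts of a goal-conditioned policy. The toy 2-D environment, the proportional-control policy, and the reward threshold are placeholder assumptions, not the tasks or architectures used in the paper.

```python
# Sketch: Monte-Carlo estimate of the universal value function V^pi(s, g) of Eq. (1)
# for a goal-conditioned policy. Environment, policy and threshold are toy placeholders.
import numpy as np

def goal_reward(s, g, delta_g=0.1):
    # R_g(s, a): 0 if the goal is reached (within delta_g), -1 otherwise.
    return 0.0 if np.linalg.norm(s - g) < delta_g else -1.0

def mc_value_estimate(env_step, policy, s0, g, gamma=0.99, horizon=200, n_rollouts=32):
    """Average discounted return of pi_theta(a | s, g) started from s0."""
    returns = []
    for _ in range(n_rollouts):
        s, ret, disc = np.array(s0, dtype=float), 0.0, 1.0
        for _ in range(horizon):
            a = policy(s, g)                 # a_t ~ pi_theta(a_t | s_t, g)
            s = env_step(s, a)               # s_{t+1} ~ P(. | s_t, a_t)
            ret += disc * goal_reward(s, g)
            disc *= gamma
        returns.append(ret)
    return float(np.mean(returns))

rng = np.random.default_rng(0)
env_step = lambda s, a: s + 0.05 * a + 0.01 * rng.standard_normal(2)   # toy dynamics
policy = lambda s, g: np.clip(g - s, -1.0, 1.0)                        # toy goal-seeking policy
print(mc_value_estimate(env_step, policy, s0=[0.0, 0.0], g=np.array([1.0, 1.0])))
```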
Empowerment maximizes the following MI objective: \[\mathcal{I}(s;z) =\mathcal{H}(s)-\mathcal{H}(s|z)\] \[=\mathcal{H}(z)-\mathcal{H}(z|s)\] \[=\mathbb{E}_{z\sim p(z),s\sim p(s|z)}\left[\log p(z|s)-\log p(z)\right]\] \[\geq\mathbb{E}_{z\sim p(z),s\sim p(s|z)}\left[\log q_{\lambda}(z| s)-\log p(z)\right], \tag{2}\] where \(\mathcal{H}(\cdot)\) is the Shannon entropy, \(p(z)\) is the prior distribution, and \(q_{\lambda}(z|s)\) represents the variational approximation for intractable posterior \(p(z|s)\) parameterized with \(\lambda\), often called a discriminator (Eysenbach et al., 2019; Sharma et al., 2019; Campos et al., 2020). This objective provides a way to train a policy that guides agents to explore diverse states by maximizing \(\mathcal{H}(s)\) and makes the state \(s\) distinguishable from the latent code \(z\) by minimizing \(\mathcal{H}(s|z)\). Figure 1: An overview of our proposed method, VUVC, under the unifying framework for curriculum learning in goal-conditioned RL. The value uncertainty proposes informative goals which would generate a stronger learning signal. The density estimate of potential curriculum goals indicates the novelty of the goal to the learning agent. The density estimate model is derived from the discriminative model, which is trained alongside the agent. This discriminative model provides intrinsic rewards to the agent. VUVC combines these two measures to construct a goal generative model, promoting unsupervised exploration of the entire state space by the agent. ## 3 Variational Curriculum Reinforcement Learning To recast the aforementioned MI-based RL as VCRL, we first present that general GCRL methods optimize empowerment objective by formulating a discriminator to represent commonly used goal-conditioned reward functions. We then expand this setting to a curriculum learning framework with a goal generative model, which we name VCRL where Table 1 summarizes variants of the VCRL framework. Henceforth, we consider the latent code \(z\) in Equation 2 as a goal \(g\) and assume the goal space matches the state space, while VCRL framework is not limited to this assumption and trivially extended by introducing a state abstraction function (Ren et al., 2019). The objective now becomes equivalent to that of a GCRL where the resulting policy aims to reach \(g\)(Pong et al., 2020; Choi et al., 2021). Given a policy \(\pi_{\theta}(a|s,g)\) and a discriminator \(q_{\lambda}(g|s)\), an objective of MI-based RL is to maximize a variational lower bound: \[\mathcal{F}(\theta,\lambda)=\mathbb{E}_{\begin{subarray}{c}g\sim p(g),\\ s\sim\rho^{\pi}(s|g)\end{subarray}}[\log q_{\lambda}(g|s)-\log p(g)], \tag{3}\] where \(\rho^{\pi}(s|g)\) is a stationary state distribution induced by the goal-conditioned policy \(\pi(a|s,g)\)(Gregor et al., 2016; Campos et al., 2020). To solve this joint optimization problem, we iteratively fix one parameter and optimize the other one at each training epoch \(i\): \[\lambda^{(i)}\leftarrow\operatorname*{arg\,max}_{\lambda}\mathbb{E}_{ \begin{subarray}{c}g\sim p(g),\\ s\sim\rho^{\pi(i-1)}(s|g)\end{subarray}}[\log q_{\lambda}(g|s)-\log p(g)] \tag{4}\] \[\theta^{(i)}\leftarrow\operatorname*{arg\,max}_{\theta}\mathbb{E}_{ \begin{subarray}{c}g\sim p(g),\\ s\sim\rho^{\pi}(s|g)\end{subarray}}[\log q_{\lambda^{(i)}}(g|s)]. 
\tag{5}\] As described in the prior work (Warde-Farley et al., 2019; Choi et al., 2021), it has been shown that Equation 5 which is also called an intrinsic reward (Gregor et al., 2016), recovers the objective of GCRL in Equation 1 with dense rewards. By choosing a Gaussian distribution with mean \(s\) and fixed variance \(\sigma^{2}I\) for \(q_{\lambda}(g|s)\) where \(I\) is the identity matrix, this objective becomes a negative \(l_{2}\) distance between \(s\) and \(g\). Similarly, one can show that the intrinsic reward represented in Equation 5 becomes a sparse reward where an agent gets \(0\) reward if \(l_{2}\) distance between \(s\) and \(g\) is within some threshold \(\delta_{g}\) and gets \(-1\) otherwise. Other MI-based methods can also be considered a GCRL by modeling \(q_{\lambda}(g|s)\) to follow \(\mathcal{N}(\mu(s),\sigma^{2}I)\) where \(\mu(s)\) is a function approximator usually following an encoder structure. We further expand the interpretation of MI-based methods as a framework of GCRL to a framework of curriculum learning, which we term VCRL. Curriculum learning in RL studies the order of training skills or tasks. In the context of GCRL, the order of tasks, _curriculum_, is determined by characterizing a goal distribution \(p(g)\)(Fournier et al., 2018; Florensa et al., 2018; Racaniere et al., 2019; Ren et al., 2019; Zhang et al., 2020; Klink et al., 2020). Without an explicit design of \(p(g)\), VCRL is reduced to a simple GCRL where a target goal is given from the environment, \(p^{\mathrm{target}}(g)\). Otherwise, one can design a goal generative model to satisfy various purposes of the training. For instance, EDL (Campos et al., 2020), a variant of MI-based RL, aims to train a state space covering skill. EDL first learns \(p^{\mathrm{explored}}(g)\) along with an exploration policy (Lee et al., 2019) which tries to cover the entire state space. Then, it optimizes the MI objective (Equation 3) with the stationary goal distribution \(p^{\mathrm{explored}}(g)\). Skew-Fit (Pong et al., 2020) also seeks to learn a state space covering skill in an unsupervised manner. However, unlike EDL, it assumes a non-stationary goal distribution to ensure that the state density \(p(s)\) converges to uniform distribution. This is achieved by formulating the goal distribution, \(p(g)\), to be proportional to the approximate state density, \(p^{\mathrm{visited}}(s)\), raised to a skewing parameter \(\alpha\) within the range of \([-1,0)\). Similarly, RIG samples goals directly from \(p^{\mathrm{visited}}(s)\). 
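The correspondence between the discriminator \(q_{\lambda}(g|s)\) and the familiar goal-conditioned reward shapes can be written out in a few lines. In the sketch below, \(\sigma\) and \(\delta_{g}\) are illustrative free parameters, and the goal-independent constants dropped in the comments do not affect the induced policy.

```python
# Sketch: intrinsic reward r(s, g) = log q_lambda(g | s) for the two discriminator
# choices discussed above, up to goal-independent constants.
import numpy as np

def dense_reward(s, g, sigma=1.0):
    # q_lambda(g | s) = N(g; s, sigma^2 I)  =>  log q = -||g - s||^2 / (2 sigma^2) + const,
    # i.e. a scaled negative squared l2 distance between state and goal.
    d = np.asarray(g, dtype=float) - np.asarray(s, dtype=float)
    return -float(d @ d) / (2.0 * sigma ** 2)

def sparse_reward(s, g, delta_g=0.05):
    # A discriminator that is (nearly) uniform on the ball ||g - s|| < delta_g and
    # vanishing outside recovers the usual 0 / -1 goal-reaching reward after a shift.
    d = np.asarray(g, dtype=float) - np.asarray(s, dtype=float)
    return 0.0 if np.linalg.norm(d) < delta_g else -1.0

s, g = np.array([0.20, 0.10]), np.array([0.25, 0.10])
print(dense_reward(s, g), sparse_reward(s, g))
```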
## 4 Value Uncertainty Variational Curriculum Despite the many empirical successes of empowerment methods, learning complex skills is still challenging since there has been little consideration of \(p(g)\) in the MI objective (Achiam et al., 2018; Eysenbach et al., 2019; Warde-Farley et al., 2019; Campos et al., 2020). \begin{table} \begin{tabular}{c|c c c} \hline \hline Methods & \(q_{\lambda}(g|s)\) & \(p(g)\) & \begin{tabular}{c} Non-stationary \\ goal distribution \\ \end{tabular} \\ \hline GCRL (w/ sparse reward) & \(\frac{1}{Z}\exp(1-2\delta_{g}\mathcal{U}_{(s\pm\delta_{g})})\) & \(p^{\mathrm{target}}(g)\) & ✗ \\ GCRL (w/ dense reward) & \(\mathcal{N}(s,\sigma^{2}I)\) & \(p^{\mathrm{target}}(g)\) & ✗ \\ EDL (Campos et al., 2020) & \(\mathcal{N}(\mu(s),\sigma^{2}I)\) & \(p^{\mathrm{explored}}(g)\) & ✗ \\ RIG (Nair et al., 2018) & \(\mathcal{N}(\mu(s),\sigma^{2}I)\) & \(p_{t}^{\mathrm{visited}}(g)\) & ✓ \\ Skew-Fit (Pong et al., 2020) & \(\mathcal{N}(\mu(s),\sigma^{2}I)\) & \(\propto p_{t}^{\mathrm{visited}}(g)^{\alpha}\) & ✓ \\ VUVC (**ours**) & \(\mathcal{N}(\mu(s),\sigma^{2}I)\) & \(\propto U(g)p_{t}^{\mathrm{visited}}(g)^{\alpha}\) & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Variants of the VCRL framework which encapsulate most of the prior MI-based methods, depending on the choice of a discriminator \(q_{\lambda}(g|s)\), a goal generative model \(p(g)\), and whether \(p(g)\) is stationary or not, where both \(q_{\lambda}(g|s)\) and \(p(g)\) are components of the MI objective. The discriminator determines the shape of goal-conditioned reward functions including sparse and dense shapes. To efficiently learn complex skills, it is important to effectively optimize the variational empowerment in Equation 3. To this end, the agent should seek out goals from which it can learn the most. This can be formalized in the uncertainty of value functions which track the performance of the policy. To estimate the uncertainty, we use an ensemble of multiple value functions that has been widely adopted in the literature with empirical success (Osband et al., 2016; Lakshminarayanan et al., 2017; Osband et al., 2018; Zhang et al., 2020). Formally, we maintain an ensemble of parameters for value functions: \(\psi=\{\psi_{1},...,\psi_{K}\}\), which is randomly initialized independently, \[\text{Value functions }v_{\psi}:s,g\to V_{\psi}(s,g). \tag{6}\] We quantify the uncertainty of value functions in predictions of the ensemble members from the initial state by computing the variance over the ensemble of the value functions: \[\text{Uncertainty }U(g):\text{Var}[\{V_{\psi}(s_{0},g)\,|\,\psi\in\{\psi_{1},...,\psi_{K}\}\}]. \tag{7}\] **Proposition 1**.: _If \(V_{\psi}(s_{0},g)\) follows a log-concave distribution, then we have_ \[\mathcal{I}(V_{\psi}(s_{0},g);\psi|s_{0},g)\geq\log(2\sqrt{\text{Var}[V_{\psi}(s_{0},g)]}). \tag{8}\] Proof Sketch.: We rewrite the mutual information as the difference between conditional entropy and marginal entropy. We then use the result in (Marsiglietti and Kostina, 2018) on a lower bound on the entropy of a log-concave random variable, expressed in terms of the \(p\)-th absolute moment, to obtain the conclusion. The complete proof appears in Appendix C. It follows from Proposition 1 that finding a goal which maximizes the mutual information can be relaxed into the surrogate problem, which is to select a goal that maximizes the uncertainty in predictions of an ensemble of value functions when we take \(K\rightarrow\infty\).
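A minimal sketch of Equations 6 and 7: the uncertainty of a candidate goal is the variance of \(V_{\psi}(s_{0},g)\) across \(K\) independently initialized value functions. The tiny random networks below merely stand in for whatever critic architecture is actually trained; shapes and sizes are illustrative only.

```python
# Sketch of Eqs. (6)-(7): U(g) is the variance of V_psi(s0, g) over an ensemble of
# K independently initialized value functions (random MLPs standing in for critics).
import numpy as np

rng = np.random.default_rng(0)

def make_value_fn(s_dim, g_dim, hidden=64):
    W1 = rng.normal(scale=0.5, size=(s_dim + g_dim, hidden))
    w2 = rng.normal(scale=0.5, size=hidden)
    return lambda s, g: float(np.tanh(np.concatenate([s, g]) @ W1) @ w2)

def value_uncertainty(ensemble, s0, g):
    preds = np.array([V(s0, g) for V in ensemble])
    return float(preds.var())          # Eq. (7): variance across ensemble members

K, s_dim, g_dim = 10, 2, 2
ensemble = [make_value_fn(s_dim, g_dim) for _ in range(K)]
s0 = np.zeros(s_dim)
candidate_goals = rng.uniform(-1.0, 1.0, size=(5, g_dim))
for g in candidate_goals:
    print(g, value_uncertainty(ensemble, s0, g))   # higher U(g): more informative goal
```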
With this intuition, one natural option to sample goals is to compute a goal probability proportional to the uncertainty \(p(g)\propto U(g)\), where \(g\in\mathrm{support}(p_{t}^{\mathrm{visited}})\). To prevent goals with lower density from being frequently proposed, we adopt the Skew strategy (Pong et al., 2020) which assigns more weight to rare samples by skewing the goal sampling probability. We therefore sample goals from the following distribution: \[p_{t}^{\mathrm{VUVC}}(g)=\frac{1}{Z_{t,\alpha}}U(g)p_{t}^{\mathrm{visited}}(g) ^{\alpha},\quad\alpha\in[-1,0), \tag{9}\] where \(Z_{t,\alpha}\) is the normalizing coefficient. We approximate \(p_{t}^{\mathrm{visited}}\) by training a generative model on samples in the replay buffer, where we use a \(\beta\)-VAE (Higgins et al., 2017) in our experiments. We term a VCRL method with a goal generative model following Equation 9 as VUVC. **Definition 1**.: (Expected Entropy Increment over Uniform Curriculum). Given the empirical distribution of the visited state \[p_{t}^{\mathrm{visited}}(s)=\sum_{i=1}^{t}\frac{\mathbb{I}(s_{i}=s)}{t}, \tag{10}\] where \(\mathbb{I}(\cdot)\) is an indicator function, uniform curriculum goal distribution \(p_{t}^{\mathcal{U}}\) and value uncertainty-based curriculum goal distribution \(p_{t}^{\mathrm{VUV}}\) are defined as follows: \[p_{t}^{\mathcal{U}}(g) =\mathcal{U}(\mathrm{support}(p_{t}^{\mathrm{visited}}))(g), \tag{11}\] \[p_{t}^{\mathrm{VU}}(g) =\frac{1}{Z_{t}}U(g)p_{t}^{\mathcal{U}}(g), \tag{12}\] where \(Z_{t}\) is the normalizing coefficient, \(p_{t}^{\mathcal{U}}\) is uniform over the support of the \(p_{t}^{\mathrm{visited}}\) and \(U(g)\) is the value uncertainty. Figure 2: Illustrations of simulated environments. (Left) Point maze navigation tasks which we name _PointMazeA_, \(B\), \(C\), and _SquareLarge_ in sequential order. The initial state and goal distribution of each task are depicted by a blue circle and red box, respectively. (Top right) Configuration-based robot manipulation tasks: _FetchPush_, _FetchPickAndPlace_ and _FetchSlide_. The goal distribution which represents the target position for the puck, is illustrated by a red cylinder. (Bottom right) Vision-based robot manipulation tasks: _SawyerDoorHook_, _SawyerPickup_ and _SawyerPush_. Then the expected entropy increment over uniform curriculum \(I_{t}\) is defined as \[I_{t}=\mathbb{E}_{g\sim p_{t}^{\mathrm{VUC}}}[\mathcal{H}(p_{t+1}^{\mathrm{ visited}})]-\mathbb{E}_{g\sim p_{t}^{\mu}}[\mathcal{H}(p_{t+1}^{\mathrm{ visited}})]. \tag{13}\] To study the asymptotic behavior of the expected next step entropy induced by VUVC, we define the expected entropy increment over uniform curriculum in Equation 13 for the case of discrete state space. However, computing the empirical distribution of the next visited state \(p_{t+1}^{\mathrm{visited}}\) requires marginalizing out the MDP dynamics which is intractable to compute. Therefore, we consider two special cases when (1) an agent always reaches the goal in Proposition 2 and (2) an agent sometimes fails to reach goals but potentially increases the amount of entropy in Proposition 3. 
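Given \(U(g)\) and a density model of visited states, sampling curriculum goals according to Equation 9 reduces to reweighting candidates drawn from the replay buffer. In the sketch below a Gaussian kernel density estimate stands in for the \(\beta\)-VAE density model used in the paper, and a dummy uncertainty function stands in for the ensemble variance of the previous sketch.

```python
# Sketch of Eq. (9): sample curriculum goals with probability
#   p(g) proportional to U(g) * p_visited(g)^alpha,  alpha in [-1, 0).
# A Gaussian KDE approximates p_t^visited; the dummy uncertainty is a placeholder.
import numpy as np
from scipy.stats import gaussian_kde

def sample_vuvc_goals(visited_states, uncertainty_fn, n_goals=8, alpha=-0.5, rng=None):
    rng = rng or np.random.default_rng()
    kde = gaussian_kde(visited_states.T)                    # approximate p_t^visited
    candidates = visited_states                             # candidate goals from the buffer
    density = kde(candidates.T)                             # p_visited(g)
    uncertainty = np.array([uncertainty_fn(g) for g in candidates])
    weights = uncertainty * density ** alpha                # U(g) * p_visited(g)^alpha
    weights = weights / weights.sum()                       # the 1 / Z_{t, alpha} factor
    idx = rng.choice(len(candidates), size=n_goals, p=weights)
    return candidates[idx]

visited = np.random.default_rng(1).uniform(-1.0, 1.0, size=(500, 2))
dummy_uncertainty = lambda g: 1e-3 + float(np.linalg.norm(g))   # placeholder for U(g)
print(sample_vuvc_goals(visited, dummy_uncertainty))
```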
**Proposition 2**.: _Given \(\epsilon=\frac{1}{t}\) and \(\rho^{\pi_{\theta}}(s|g)=\mathbb{I}(s=g)\), if_ \[\mathrm{Cov}[U(g),\log p_{t}^{\mathrm{visited}}(g)]\leq 0, \tag{14}\] _and take \(\epsilon\to 0\), then we have,_ \[\lim_{\epsilon\to 0}\frac{\partial}{\partial\epsilon}I_{t}=\] \[\lim_{\epsilon\to 0}\frac{\partial}{\partial\epsilon}\left( \mathbb{E}_{g\sim p_{t}^{\mathrm{VUC}}}[\mathcal{H}(p_{t+1}^{\mathrm{visited}})] -\mathbb{E}_{g\sim p_{t}^{\mu}}[\mathcal{H}(p_{t+1}^{\mathrm{visted}})]\right) >0. \tag{15}\] Proof Sketch.: We begin by deriving a next step empirical distribution of the visited state given a curriculum goal \(g\) and a stationary state distribution induced by the policy \(\rho^{\pi_{\theta}}(s|g)\), which can be written as \(p_{t+1}^{\mathrm{visited}}(s)=\frac{p_{t}^{\mathrm{visited}}(s)+\rho^{\pi_{ \theta}}(s|g)}{1+\epsilon}\). Plugging this back into Definition 1, we analyze asymptotic behavior of the expected entropy increment and obtain the conclusion with the assumption \(\rho^{\pi_{\theta}}(s|g)=\mathbb{I}(s=g)\). The complete proof is provided in Appendix C. With an accurate goal-conditioned policy and the model of dynamics, Proposition 2 gives us intuition that our VUVC is at least better than the uniform curriculum which Skew-Fit aims to converge to, if the uncertainty of the learned value functions \(U(g)\) and the log density of \(p_{t}^{\mathrm{visited}}\) are negatively correlated. We expect this negative correlation to happen frequently, since the uncertainty is positive for novel states, but it eventually reduces to zero with a sufficiently large number of samples. **Proposition 3**.: _Define the set \(\mathcal{G}=\mathcal{G}_{\mathrm{exploit}}\cup\mathcal{G}_{\mathrm{uninfo}} \cup\mathcal{G}_{\mathrm{info}}\) and positive constant \(\Delta_{1},\Delta_{2}\) where_ \[\rho^{\pi_{\theta}}(s|g)=\begin{cases}\mathbb{I}(s=g)&\text{ for }g\in \mathcal{G}_{\mathrm{exploit}}\\ \rho^{\pi_{\theta}}_{\mathrm{uninfo}}(s|g)&\text{ for }g\in\mathcal{G}_{ \mathrm{uninfo}}\\ \rho^{\pi_{\theta}}_{\mathrm{info}}(s|g)&\text{ for }g\in\mathcal{G}_{ \mathrm{info}},\end{cases} \tag{16}\] _for all \(g\in\mathcal{G}_{\mathrm{uninfo}}\),_ \[\mathbb{E}_{s\sim\rho^{\pi_{\theta}}_{\mathrm{uninfo}}(s|g)}[\log p_{t}^{ \mathrm{visited}}(s)]=\log p_{t}^{\mathrm{visited}}(g)+\Delta_{1},\] _and for all \(g\in\mathcal{G}_{\mathrm{info}}\),_ \[\mathbb{E}_{s\sim\rho^{\pi_{\theta}}_{\mathrm{info}}(s|g)}[\log p_{t}^{ \mathrm{visited}}(s)]=\log p_{t}^{\mathrm{visited}}(g)-\Delta_{2}.\] Figure 3: Learning curves for configuration-based point maze navigation tasks (top), continuous robot control tasks (middle), and vision-based continuous robot manipulation tasks (bottom). _Mean (SD)_ of each performance measure over 5 random seeds are reported where results are smoothed across 10 training epochs for each seed. VUVC consistently outperforms other VCRL variants for all tasks. 
_Given \(\epsilon=\frac{1}{t}\), if_ \[\mathrm{Cov}[U(g),\log p_{t}^{\mathrm{visited}}(g)] \leq 0,\] \[\mathbb{E}_{g\in\mathcal{G}_{\mathrm{unique}}}[p_{t}^{\mathrm{U}}(g )] \leq\mathbb{E}_{g\in\mathcal{G}_{\mathrm{unique}}}[p_{t}^{\mathcal{U}}(g)],\] \[\mathbb{E}_{g\in\mathcal{G}_{\mathrm{info}}}[p_{t}^{\mathrm{V}}(g )] \geq\mathbb{E}_{g\in\mathcal{G}_{\mathrm{info}}}[p_{t}^{\mathcal{U}}(g)],\] _and take \(\epsilon\to 0\), then we have,_ \[\lim_{\epsilon\to 0}\frac{\partial}{\partial\epsilon}I_{t}=\] \[\lim_{\epsilon\to 0}\frac{\partial}{\partial\epsilon}\left( \mathbb{E}_{g\sim p_{t}^{\mathrm{V}}}[\mathcal{H}(p_{t+1}^{\mathrm{visited}})] -\mathbb{E}_{g\sim p_{t}^{\mathcal{U}}}[\mathcal{H}(p_{t+1}^{\mathrm{visited}}) ]\right)>0.\] Proof Sketch.: The proof proceeds in a similar manner as Proposition 2 except for an assumption \(\mathcal{G}=\mathcal{G}_{\mathrm{exploit}}\cup\mathcal{G}_{\mathrm{uninfo}} \cup\mathcal{G}_{\mathrm{info}}\). The complete proof is in Appendix C. Proposition 3 extends Proposition 2 to the case where the goal-conditioned policy is sub-optimal and fails to achieve some of the goals. It implies that we need a curriculum method which can filter out uninformative states when the policy can not consistently achieve certain states, in order to achieve a rapid increment of entropy. Empirical observations indicate that VUVC achieves this effect (further details provided in Section 5). ## 5 Experiments ### Experimental Setup and Baselines We validate the effectiveness of VUVC on 10 different environments. They consist of point maze navigation tasks (Zhang et al., 2020; Trott et al., 2019), configuration-based robot control tasks (Plappert et al., 2018), and vision-based robot manipulation tasks (Nair et al., 2018) which are shown in Figure 2. Especially, for configuration-based robot tasks, we modify the initial state and goal distribution following from the prior work (Ren et al., 2019) to consider more complicated tasks which require extensive exploration. Further details of experimental setups are presented in Appendix D. By comparing VUVC to HER (Andrychowicz et al., 2017), we study how effectively explicit curriculum improves sample efficiency over implicit curriculum. We examine how well value uncertainty curriculum goals encourages exploration over goals from GoalGAN (Florensa et al., 2018) which generates goals by measuring task difficulty through success rate, over goals from DIAYN (Eysenbach et al., 2019) which divides the visited state space into separate sections for each skill, or over goals from RIG (Nair et al., 2018) and Skew-Fit (Pong et al., 2020) which sample goals from the density estimate. We also investigate the importance of gradually increasing state coverage for the goal distribution by comparing it to EDL (Campos et al., 2020), and investigate how efficiently VUVC increases the visited state entropy. ### Comparison of Sample Efficiency We compare the number of required samples for task completion in various environments which are based on either configuration observation or image observation. Our experimental results illustrated in Figure 3 show that VUVC outperforms a variety of VCRL variants. Note that although EDL and EDL-Oracle take advantage of an additional training phase, VUVC outperforms them. Figure 4: An illustration of the relation between value uncertainty and log density of visited states (left) and the landscape of value uncertainty (middle) and success rate (right). 
Figure 5: Curriculum goal distribution and accumulated visited states. The red contour line illustrates the curriculum goal distributions and cyan dots represent visited states by the agent. VUVC covers the state space significantly faster than the baselines. Point Maze Navigation TasksVUVC successfully accomplishes all tasks, while some baseline methods fail. Especially in the complicated _PointMazeSquareLarge_ environment, VUVC requires much less interaction for task completion. This result suggests the importance of an elaborate curriculum goal distribution in comparison to GoalGAN or Skew-Fit and emphasizes the importance of a gradually increasing state covering goal distribution when compared to EDL and EDL-Oracle. Configuration-based Robotic Manipulation TasksIn all three tasks, VUVC significantly outperforms all baselines. It is also noteworthy that VUVC performs better than EDL-Oracle, even though our method does not make an excessive assumption (i.e., the need for an oracle uniform goal sampler). In comparison to Skew-Fit which also generates goals from a non-stationary distribution, the success rate of VUVC increases much faster. This result indicates to us that our method increases the entropy of the visited state distribution more efficiently than Skew-Fit. Vision-based Robotic Manipulation TasksVUVC presents the best performance compared to other VCRL variants in image observation environments. We train a policy in a latent space instead of directly training in an image space, as it has been shown that this solves RL problems in an image space efficiently (Nair et al., 2018), where an encoder of state density estimate model for a goal generator is used for a mapping function from an image observation to a latent observation. Even in a poorly-structured observation space, Figure 3 shows that VUVC consistently outperforms a variety of baseline methods. Note that DIAYN struggles in the _SawyerDoorHook_ and _SawyerPickup_ tasks as its policy remain close to the initial state during the training phase. ### Impact of the Value Uncertainty To see the effects of the value uncertainty in the curriculum, in Figure 4, we investigate (1) how the value uncertainty \(U(g)\) and log density of visited states \(p_{t}^{\mathrm{visited}}\) are correlated, and (2) how well the value uncertainty filters out uninformative states. In general, we observe that \(U(g)\) and \(p_{t}^{\mathrm{visited}}\) show negative correlation, indicating that VUVC covers the state space faster than the uniform curriculum for the optimal goal-conditioned policy as the regularity condition of Proposition 2 holds empirically. In addition to this, we observe a case which satisfies the regularity condition of Proposition 3 from the landscape visualization, implying that our method is more effective than the uniform curriculum. Uncertainty is low for easily reachable goals (yellow in the success rate landscape) as well as barely reachable goals (purple). On the other hand, uncertainty of goals that are moderately reachable (green) is high, which indicates that the value uncertainty focuses more on informative goals and results in better performance as we see in Figure 3. ### Extensive Exploration for State Coverage We next evaluate the effectiveness of our method by qualitatively comparing the speed of state coverage of each method in the _PointMazeSquareLarge_ environment. Figure 5 demonstrates that VUVC efficiently increases the visited state entropy by considering the value uncertainty. 
Furthermore, after a sufficient number of exploration steps, the curriculum goal distribution induced by VUVC approaches a uniform distribution as the value uncertainty for every state converges to a consistent value. The results for other tasks are presented in Appendix F.3. ### Deploying Skills on the Real-world Robot We evaluate our method in a building-scale navigation task on the Husky A200 mobile robot which detects obstacles using a LiDAR sensor. We first apply our algorithm in a 2D navigation environment, and deploy learned navigation skills directly on the real robot in a zero-shot setup. Figures 6 and 7 show that the learned navigation skill can be directly used on our real-world robot without a manual design of complex reward functions and curriculum. We further demonstrate that combining learned navigation skills with Figure 6: An illustrative example of how we utilize a global planner to generate a subgoal for our real robot platform. Figure 7: Building-scale navigation task with a real-world robot without (top left) and with global planner (top right). (Bottom) Evaluation on reaching the target goal. the help of a global planner improves navigation performance. The learned skill aims to reach the local goal \(g_{t}\) that is \(d_{\mathrm{local}}\) away from the robot on the trajectory generated by the global planner. Figure 7 demonstrates that the learned skill combined with the global planner reaches the goal faster (solid line) than the learned skill itself (dashed line). Detailed description of the real-world experiment setup can be found in Appendix F.2. ## 6 Related Work ### Curriculum RL In GCRL, a goal relabeling scheme which samples goals from failed trajectories is proposed as an implicit curriculum method (Andrychowicz et al., 2017; Fang et al., 2018; Liu et al., 2018; Ding et al., 2019; Fang et al., 2019; Nair et al., 2018). Another line of work investigates curriculum generation methods that consider task difficulty. These methods explicitly model a curriculum generative model, generating goals based on task difficulty (Florensa et al., 2018; Racaniere et al., 2019), competence progress (Fournier et al., 2018), utilization of an additional agent (Narvekar and Stone, 2019), maximization of achieved goal distribution entropy with heuristic (Pitis et al., 2020), or progressive updating towards a predefined target distribution (Klink et al., 2020). However, prior works do not provide theoretical justification (Florensa et al., 2018; Racaniere et al., 2019), are limited to a given target distribution (Fournier et al., 2018; Narvekar and Stone, 2019; Klink et al., 2020), or depend on manually engineered heuristics (Pitis et al., 2020). The notion of uncertainty has been also considered in VDS (Zhang et al., 2020) which measures the uncertainty of the Q-functions to sample curriculum goals. However, this work lacks theoretical justification and assumes an oracle goal sampler accessing a uniform distribution over all valid states in a state space, which artificially ignores exploration problems by resetting the agent to any state in the environment, whereas our work does not require such an assumption. 
### Empowerment and Unsupervised Skill Learning Recent studies on empowerment have studied the forms of mutual information-based objectives to learn state-covering skills (Campos et al., 2020; Pong et al., 2020), promote skill diversity (Achiam et al., 2018; Eysenbach et al., 2019; Liu et al., 2022), learn non-parametric reward functions (Warde-Farley et al., 2019), establish meta-training task distributions (Jabri et al., 2019), incorporate skill-transition dynamics models along with skill-conditioned policies for a model-based planning (Sharma et al., 2019), and enhance generalization through the successor feature framework (Hansen et al., 2020; Liu and Abbeel, 2021). In addition, a number of works have studied how to extend empowerment to high-dimensional image space by using a non-parametric nearest neighbor to estimate entropy (Liu and Abbeel, 2021; Yarats et al., 2021; Seo et al., 2021). However, most of this research assumes a fixed stationary distribution over skills (or goals) and there has been little exploration regarding the form of skill (or goal) distribution \(p(z)\) (or \(p(g)\)). Compared to prior empowerment approaches, we investigate the effectiveness of curriculum skill distribution. ### Uncertainty Quantification in RL Measures of uncertainty have played a key role in RL. Bootstrapped DQN (Osband et al., 2016) uses a bootstrapping method to estimate the uncertainty of the Q-value, and utilizes it for efficient exploration. Plan2Explore (Sekar et al., 2020) leverages an ensemble of one-step predictive models to guide the exploration. Both bootstrapping and dropout methods are used to measure the uncertainty of the collision prediction model for safe navigation (Kahn et al., 2017). PBP-RNN (Benatan and Pyzer-Knapp, 2019) uses probabilistic backpropagation as an alternative to quantify uncertainty within a safe RL scenario. PETS (Chua et al., 2018) employs trajectory sampling with probabilistic dynamics models to bridge gap model-based RL and model-free RL. ### Intrinsic Reward and Exploration In a tabular setting, visit counts can be used as exploration bonus to encourage exploration (Strehl and Littman, 2008). Count-based exploration methods are further extended to non-tabular setting by introducing the pseudo-count (Bellemare et al., 2016; Ostrovski et al., 2017) or successor representation (Machado et al., 2020). Another common approach guides the agent based on prediction errors. For instance, squared prediction error in learned dynamics models is used as exploration bonus (Stadie et al., 2015). RND (Burda et al., 2019) uses errors in a randomly generated prediction problem that predicts the output of a fixed randomly initialized neural network given the observations. Our work enables agents to reach any previously visited states by learning goal-conditioned policies that cover the entire goal space. In contrast, exploration bonuses help agents visit novel states, but they cannot reuse learned policies to solve user-specified goals as those states are quickly forgotten. ## 7 Conclusion We provide the unifying framework VCRL which recasts MI-based RL as curriculum learning in goal-conditioned RL. Under VCRL framework, we propose a novel approach VUVC for unsupervised discovery of skills which utilizes a value uncertainty for an increment in the entropy of the visited state distribution. Under regularity conditions, we prove that VUVC improves the expected entropy more than the uniform curriculum method. 
Our experimental results demonstrate that VUVC consistently outperforms a variety of prior methods both on configuration-based and vision-based continuous robot manipulation tasks. We also demonstrate that VUVC enables a real-world robot to learn to navigate in a long-range environment without any explicit rewards, and that incorporating skills with a global planner further improves the performance. ## Acknowledgements This work was supported by the Industry Core Technology Development Project, 20005062, Development of Artificial Intelligence Robot Autonomous Navigation Technology for Agile Movement in Crowded Space, funded by the Ministry of Trade, Industry & Energy (MOTIE, Republic of Korea) and by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00984, Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation, No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)).
2304.05192
Identifying epileptogenic abnormalities through spatial clustering of MEG interictal band power
Successful epilepsy surgery depends on localising and resecting cerebral abnormalities and networks that generate seizures. Abnormalities, however, may be widely distributed across multiple discontiguous areas. We propose spatially constrained clusters as candidate areas for further investigation, and potential resection. We quantified the spatial overlap between the abnormality cluster and subsequent resection, hypothesising a greater overlap in seizure-free patients. Thirty-four individuals with refractory focal epilepsy underwent pre-surgical resting-state interictal MEG recording. Fourteen individuals were totally seizure free (ILAE 1) after surgery and 20 continued to have some seizures post-operatively (ILAE 2+). Band power abnormality maps were derived using controls as a baseline. Patient abnormalities were spatially clustered using the k-means algorithm. The tissue within the cluster containing the most abnormal region was compared with the resection volume using the dice score. The proposed abnormality cluster overlapped with the resection in 71% of ILAE 1 patients. Conversely, an overlap only occurred in 15% of ILAE 2+ patients. This effect discriminated outcome groups well (AUC=0.82). Our novel approach identifies clusters of spatially similar tissue with high abnormality. This is clinically valuable, providing (i) a data-driven framework to validate current hypotheses of the epileptogenic zone localisation or (ii) to guide further investigation.
Thomas W. Owen, Vytene Janiukstyte, Gerard R. Hall, Jonathan J. Horsley, Andrew McEvoy, Anna Miserocchi, Jane de Tisi, John S. Duncan, Fergus Rugg-Gunn, Yujiang Wang, Peter N. Taylor
2023-04-11T12:50:33Z
http://arxiv.org/abs/2304.05192v1
# Identifying epileptogenic abnormalities through spatial clustering of MEG interictal band power ###### Abstract Successful epilepsy surgery depends on localising and resecting cerebral abnormalities and networks that generate seizures. Abnormalities, however, may be widely distributed across multiple discontiguous areas. We propose spatially constrained clusters as candidate areas for further investigation, and potential resection. We quantified the spatial overlap between the abnormality cluster and subsequent resection, hypothesising a greater overlap in seizure-free patients. Thirty-four individuals with refractory focal epilepsy underwent pre-surgical resting-state interictal MEG recording. Fourteen individuals were totally seizure free (ILAE 1) after surgery and 20 continued to have some seizures post-operatively (ILAE 2+). Band power abnormality maps were derived using controls as a baseline. Patient abnormalities were spatially clustered using the k-means algorithm. The tissue within the cluster containing the most abnormal region was compared with the resection volume using the dice score. The proposed abnormality cluster overlapped with the resection in 71% of ILAE 1 patients. Conversely, an overlap only occurred in 15% of ILAE 2+ patients. This effect discriminated outcome groups well (AUC=0.82). Our novel approach identifies clusters of spatially similar tissue with high abnormality. This is clinically valuable, providing (i) a data-driven framework to validate current hypotheses of the epileptogenic zone localisation or (ii) to guide further investigation.
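The processing chain summarized above can be sketched compactly: z-score each region's band power against the control cohort, cluster regions by their spatial coordinates, keep the cluster containing the most abnormal region, and measure its Dice overlap with the resection. The snippet below is a minimal sketch along these lines; the number of clusters, the z-scoring of abnormality, and all variable names are illustrative assumptions rather than the exact pipeline.

```python
# Sketch: spatially constrained abnormality cluster vs. resection overlap (illustrative).
import numpy as np
from sklearn.cluster import KMeans

def abnormality_zscores(patient_power, control_power):
    """Band-power abnormality per region: z-score relative to the control distribution.

    patient_power : (n_regions,) patient band power
    control_power : (n_controls, n_regions) control band power
    """
    mu = control_power.mean(axis=0)
    sd = control_power.std(axis=0, ddof=1)
    return np.abs(patient_power - mu) / sd

def abnormality_cluster(region_xyz, z, n_clusters=5, seed=0):
    """K-means on region coordinates; return a mask of the cluster holding the most abnormal region."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(region_xyz)
    target = labels[int(np.argmax(z))]
    return labels == target

def dice(mask_a, mask_b):
    """Dice overlap between two boolean region masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    denom = mask_a.sum() + mask_b.sum()
    return 2.0 * inter / denom if denom else 0.0

# Toy usage with random placeholders:
rng = np.random.default_rng(0)
xyz = rng.normal(size=(100, 3))            # region centroids
controls = rng.normal(size=(20, 100))      # control band power
patient = rng.normal(size=100)
patient[:10] += 3.0                        # inject an "abnormal" patch
z = abnormality_zscores(patient, controls)
cluster_mask = abnormality_cluster(xyz, z)
resection_mask = np.zeros(100, dtype=bool); resection_mask[:15] = True
print("Dice(cluster, resection) =", dice(cluster_mask, resection_mask))
```

Patient-level Dice scores computed this way can then be compared between seizure-free and not-seizure-free groups, for example with a receiver operating characteristic analysis.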
2305.03812
Exploring the environment, magnetic fields, and feedback effects of massive high-redshift galaxies with [CII]
Massive galaxies are expected to grow through different transformative evolutionary phases where high-redshift starburst galaxies and quasars are examples of such phases. The physical mechanisms driving these phases include companion galaxy interactions, active galactic nuclei feedback, and magnetic fields. Our aim is to characterize the physical properties and the environment of the submillimeter galaxy AzTEC-3 at z = 5.3 and the lensed quasar BRI 0952-0115 at z = 4.4, to set a limit on the polarization properties, as well as placing both in the broader context of galaxy evolution. We used full polarization, sub-arcsecond-resolution, ALMA band-7 observations of both BRI 0952-0115 and AzTEC-3 and detect [CII] line emission towards both galaxies, along with companions in each field. We present an updated gravitational lensing model for BRI 0952-0115. We present infrared luminosities, star-formation rates, and [CII] line to infrared luminosity ratios for each source. The [CII] emission line profile for both BRI 0952-0115 and AzTEC-3 exhibit a broad, complex morphology, indicating the possible presence of outflows. We present evidence of a 'gas bridge' between AzTEC-3 and a companion source. Using a simple dynamical mass estimate for the sources, we suggest that both systems are undergoing minor or major mergers. No polarization is detected for the [CII], placing an upper limit below that of theoretical predictions. Our results show that high-velocity wings are detected, indicating possible signs of massive outflows; however, the presence of companion galaxies can affect the final interpretation. Furthermore, the results provide additional evidence in support of the hypothesis that massive galaxies form in overdense regions, growing through interactions. Finally, strong, ordered magnetic fields are unlikely to exist at the kiloparsec scale in the two studied sources.
K. Kade, K. K. Knudsen, W. Vlemmings, F. Stanley, B. Gullberg, S. Konig
2023-05-05T19:33:44Z
http://arxiv.org/abs/2305.03812v1
Exploring the environment, magnetic fields, and feedback effects of massive high-redshift galaxies with [C ii] ###### Abstract Context:Massive galaxies are expected to grow through different transformative evolutionary phases. High-redshift starburst galaxies and quasars are thought to be such phases and thus provide insight into galaxy evolution. Several physical mechanisms are predicted to play an important role in driving these phases; for example, interaction with companion galaxies, active galactic nuclei feedback, and possibly magnetic fields. Aims:Our aim is to characterize the physical properties and the environment of the submillimeter galaxy AzTEC-3 at \(z=5.3\) and the lensed quasar BRI 0952\(-\)0115 at \(z=4.4\), and to set a limit on the polarization properties of the two sources. We intend to place these two sources in the broader context of galaxy evolution, specifically star formation and mass growth through cosmic time. Methods:We used full polarization, sub-arcsecond-resolution, ALMA band-7 observations of both BRI 0952\(-\)0115 and AzTEC-3. We detect [C ii] (\({}^{\rm 2}\)P\({}_{3/2}\)\(-\)P\({}_{1/2}\)) line emission towards both BRI 0952\(-\)0115 and AzTEC-3, along with companions in each field. We present an updated gravitational lensing model for BRI 0952\(-\)0115 for correction of gravitational magnification. Results:We present infrared luminosities, star-formation rates, and [C ii] line to infrared luminosity ratios for each source. The [C ii] emission line profile for both BRI 0952\(-\)0115 and AzTEC-3 exhibit a broad, complex morphology, indicating the possible presence of outflows. We present evidence of a "gas bridge" between A/TEC-3 and a companion source. Modified blackbody spectral energy distribution fitting is used to analyze the properties of [C ii] detected companion sources in the field of both the submillimeter galaxy and the quasar. We investigated the possible role of the detected companions in outflow signatures. Using a simple dynamical mass estimate for the sources, we suggest that both systems are undergoing minor or major mergers. No polarization is detected for the [C ii], placing an upper limit below that of theoretical predictions. Conclusions:Our results show that high-velocity wings are detected, indicating possible signs of massive outflows; however, the presence of companion galaxies can affect the final interpretation. Furthermore, the results provide additional evidence in support of the hypothesis that massive galaxies form in overdense regions, growing through minor or major mergers with companion sources. Finally, strong, ordered magnetic fields are unlikely to exist at the kiloparsec scale in the two studied sources. ## 1 Introduction The discovery of intense starbursts (\(100-1000\)M\({}_{\odot}\)) at high redshift demonstrates the prevalence of short, transformative phases in the evolution of massive galaxies (e.g., Smail et al. 1997; Hughes et al. 1998; Blain et al. 1999; Casey et al. 2014). Furthermore, the discovery of the relation between the mass of supermassive black holes (SMBHs) in the center of local massive galaxies and the velocity dispersion of those galaxies suggests coeval evolution between the SMBH and the host galaxy (e.g., Magorrian et al. 1998; Ferrarese & Merritt 2000; Gebhardt et al. 2000; Haring & Rix 2004; Gultekin et al. 2009; Beifiori et al. 2012; Kormendy & Ho 2013; Bennert et al. 2015; Reines & Volonteri 2015). 
The cosmic black hole accretion rate density has been found to follow a similar trend with redshift to that of the cosmic star-formation rate (SFR) density, indicating that a significant part of the evolution takes place during the first few billion years after the big bang (e.g., Kormendy & Ho 2013 and Madau & Dickinson 2014). Thus, characterizing the physical properties of high-redshift starburst galaxies and quasars is essential for establishing a complete and coherent description of the evolution of massive galaxies. Many physical mechanisms play a role in massive galaxy evolution. Galaxies are thought to grow through two main mechanisms, namely galaxy mergers and gas accretion from the intergalactic medium (e.g., Keres et al. 2005; Hopkins et al. 2008; Dekel et al. 2009; Genzel et al. 2010; Krumholz & Dekel 2010; Di Matteo et al. 2012; Dubois et al. 2012). Additionally, the presence of a growing SMBH, as seen in active galactic nuclei (AGNs) and quasars has been found in theoretical modeling to provide a mechanism for regulating star formation (e.g., Di Matteo et al. 2005; Narayanan et al. 2015; Harrison 2017). This implies that characterizing the environment of high-redshift quasars and starbursts, as well as understanding the role of AGN feedback, is paramount to understanding the manner in which these galaxies evolve. Simulations and theoretical predictions from the early universe have demonstrated the likelihood of ubiquitous major and minor mergers occurring at high redshift (e.g., Kaviraj et al. 2013, 2015; Fogasy et al. 2017). Cosmological hydrodynamical simulations such as Illustris (Sijacki et al., 2015) and Horizon-AGN (Dubois et al., 2014) can shed light on the effects of these interactions. Studies have shown that minor mergers (classified as those with mass-ratios of \(>\)1:4) and the effect of companion galaxy interactions systematically affect the SFRs and evolution of massive starbursting galaxies (e.g., Kaviraj et al., 2015; Sparre and Springel, 2016; Pearson et al., 2019; Patton et al., 2020). Previous studies using optical, near-infrared imaging, or Lyman-\(\alpha\) emission in the environment of high-redshift starburst galaxies and quasars have yielded conflicting results. Some of these systems show little or no evidence of companion sources (e.g., Willott et al., 2005; Banados et al., 2013; Mazzucchelli et al., 2017; Yue et al., 2019), while others find overdensities (e.g., Carilli et al., 2013; Husband et al., 2015; Fan et al., 2016). However, in the last decade, an increasing number of companion sources around high-redshift galaxies have been discovered with the Atacama Large Millimeter/Submillimeter Array (ALMA), primarily using [C ii] observations (e.g., Oteo et al., 2016; Trakhtenbrot et al., 2017; Decarli et al., 2017; Diaz-Santos et al., 2018; Wardlow et al., 2018; Casey et al., 2019; Jones et al., 2019; Litke et al., 2019; Neleleman et al., 2019; Fogasey et al., 2020; Venemans et al., 2020). Diaz-Santos et al. (2018) studied the hot dust-obscured galaxy 1(Hot DOG) W2246\(-\)0526 and found evidence of a "gas-bridge" structure between a companion source and the central source, suggesting interaction-induced gas flow in the system. The system BRI 1202\(-\)0725 has also been found to host multiple companions to the quasar and exhibits a similar bridge structure (Carilli et al., 2013). Footnote 1: Hot dust-obscured galaxies are a class of high-redshift, dust-obscured, AGN-dominated galaxies. 
Studies of the effect of AGNs have demonstrated that galactic-scale outflows and feedback can control star formation activity and black hole growth (e.g., Fabian, 2012; King and Pounds, 2015; Ishibashi and Fabian, 2016). The extent of this phenomena is unknown, but recent studies of both low- and high-redshift galaxies show many of these sources exhibit outflow characteristics (e.g., Heckman et al., 1990; Rupke et al., 2005; Weiner et al., 2009; Banerji et al., 2011; Cicone et al., 2014; Chisholm et al., 2015; Spilker et al., 2020) through, for example, blueshifted absorption lines, P-Cygni line profiles, or broad emission line components typically seen at higher absolute velocities away from a main emission line. However, these findings become increasingly rare at higher redshifts due to the difficulty in detecting and determining outflow signatures in high-redshift galaxies. Although direct observations of individual sources with outflows are still rare at \(z>4\)(e.g., Maiolino et al., 2012; Cicone et al., 2015; Spilker et al., 2020; Butler et al., 2021), studies using stacking of the [C ii] line have provided an alternative method of searching for outflows in a wider variety and number of sources, though also providing conflicting results (e.g., Gallerani et al., 2018; Decarli et al., 2018; Bischett et al., 2019; Stanley et al., 2019; Ginolfi et al., 2020). The strength of the [C ii] line facilitates its ubiquitous use at high redshift. At high redshifts, this line is shifted to the millimeter (mm) or submillimeter (sub-mm) regime where it is observable by ground-based facilities such as ALMA. The [CII] (\({}^{2}\)P\({}_{3/2}\)\(\rightarrow\)\({}^{2}\)P\({}_{1/2}\)) emission is produced primarily in photodissociation regions (PDRs) by gas exposed to ultraviolet (UV) radiation and acts as one of the major coolants in star-forming regions of the interstellar medium (ISM) (e.g., Stacey et al., 1991, 2010; Carilli and Walter, 2013). For this reason, it has long been utilized to study the ISM of high-redshift galaxies. Although magnetic fields are common, their presence and role in galaxy formation and evolution in the early Universe is still unclear. In the local Universe, ordered magnetic fields with a strength of several \(\mu\)G are revealed from synchrotron and Faraday rotation observations (e.g., Beck, 2015) in normal spiral galaxies. In local starburst galaxies, fields as high as \(\sim 20\) mG have been measured (Robishaw et al., 2008), and ordered magnetic fields have been found using infrared (IR) dust polarization observations (e.g., Lopez-Rodriguez et al., 2021). Exactly when these fields were generated is unclear, but models show that a strong regular field can quickly be generated from an initial weak Figure 1: _Hubble Space Telescope (HST)_/ACS F814W (left) image of BRI0952 showing both lensed images of the quasar; Img-N and Img-S correspond to the two images of the quasar BRI0952 and Comp-N and Comp-SW to the companion sources. ALMA 870 \(\mu\)m continuum map (right) was created using line-free channels. The contours are shown at \(5,10,20,30,40,50,60,70\), and 80 \(\sigma\) levels. The synthesized beam is shown in the bottom left of the ALMA image. seed field as a result of turbulence driven by, for instance supernova explosions (Rieder & Teyssier, 2017). 
Since atoms and ions can easily be aligned by radiation and are subsequently very sensitive to realignment under the influence of only a weak magnetic field, the strength of the [C ii] line has led to the suggestion that [C ii] could be an excellent tracer of magnetic fields (Yan & Lazarian, 2006; Zhang et al., 2015). In order to better constrain massive galaxy evolution at high redshift, high-sensitivity data is required to find faint emission from companion sources and to resolve the detailed physics ongoing within these systems. In this study, we explored the environment, searched for magnetic-field signatures, and examined possible outflow properties of two galaxies in the early Universe: the quasar BRI 0952-0115 (hereafter BRI0952) at \(z=4.433\) and the massive submillimeter galaxy (SMG) AzTEC-3 at \(z=5.3\), utilizing high-resolution ALMA band-7 [C ii] observations. These data were obtained through a proposal designed to look for magnetic fields in the early Universe and were selected based on their previously known extremely bright [C ii] emission. Due to the nature of the original project, the sources were observed in full polarization mode in an attempt to detect polarized emission from these sources. The AzTEC-3 protocluster has been extensively studied previously and encompasses the SMG itself (AzTEC-3) along with Figure 2: Moment-0 and moment-1 maps for the BRI0952 system. The top row shows moment-0 maps of the original [C ii] with velocity ranges for the individual extraction shown on the image. The middle row shows the moment-0 map of the region from which the spectra was extracted for the companions. The bottom row shows the moment-1 maps for the sources taken from the same region the spectra was extracted from. The contours are shown at \(-3\), \(-2\), \(10\), \(20\), \(30\), \(40\), \(50\), and \(60\)\(\sigma\) levels for BRI0952, and contours are at \(-3\), \(-2\), \(3\), \(4\), \(5\), \(6\), and \(7\)\(\sigma\) levels for the companions where the respective \(1\sigma\) noise level has been taken from individual moment-0 maps due to the restrictive velocity ranges (and thus differing noise levels). The black ellipses in the top row correspond to the region from which the spectra was extracted. The synthesized beam is shown in the bottom left corner of the images. a quasar 13 Mpc away in projected distance and a number of previously reported Lyman-break galaxies (LBGs) surrounding the SMG (Capak et al., 2011; Riechers et al., 2014). BRI0952 is a gravitationally lensed quasar and has previously been studied due to its strong [C ii] emission and lensing features (Maiolino et al., 2009; Gallerani et al., 2012). Here, we present findings of resolved [C ii] emission in both sources, along with an analysis of their surroundings. These findings help shed light on the role both sources have to play in the evolution of massive galaxies in the early Universe. In Section 2, we describe the ALMA observations and analysis utilized in this paper. In Section 3, we describe the results from the [C ii] line and continuum analysis and the polarization of both AzTEC-3 and BRI0952. Section 4 provides a discussion of the properties of both sources as well as the environment these sources reside in. Finally, we present our conclusions in Section 5. Throughout this paper, we adopt the term "outflows" to describe the high-velocity flow of gas away from central regions typically associated with energy-driven winds by central AGNs or through starburst activity. 
Similarly, the term "gas-bridge" is used to describe structures of gas connecting components within galaxy systems, thought to be caused by physical interactions between these components. Furthermore, we utilized a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm M}=0.3\), and \(\Omega_{\Lambda}=0.7\). ## 2 Observations and analysis We obtained ALMA band-7 data for BRI0952 and AzTEC-3 as part of a program originally designed to search for magnetic fields in the early Universe (2018.1.01536.S, P.I. Vlemmings). The setup for both galaxies was tuned to the redshifted [C ii] line (rest frequency 1900.5369 GHz), and the "time domain mode" was used. Observation details are listed in Table 1. The data calibration steps were performed following the ALMA polarization calibration scripts using CASA 5.4.0 (CASA2). This includes calibration of the phase, bandpass, flux, and gain. The phase calibrators used for BRI0952 were J0948+0022, J0854+2006, and J0725-0054, and for AzTEC-3 these were J0854+2006 and J0948+0022. The quasar J0854+2006 was used for polarization calibration. The uncertainty on the absolute flux calibration is conservatively estimated to be 10%. Footnote 2: http://casa.nrao.edu/ (CASA; McMullin et al., 2007) To analyze the Stokes-\(I\) data, imaging was done using the task clean. All sources in the field were masked during cleaning. An initial search for line emission was done using "dirty" images to identify the line-free channels available for use in continuum subtraction, which was subsequently performed using the uvcontsub task with a polynomial fit of the order of one for both AzTEC-3 and BRI0952 (see Figures 1 and 2). The continuum subtraction for BRI0952 was complicated by the appearance of unexpected lines in spectral windows adjacent to that of the [C ii] line.3 Footnote 3: We note that the results of this paper do not change when using other tools such as imcontsub or STATCONT (Sánchez-Monge et al., 2018). In the case of AzTEC-3, we used the frequency range of the observations up to 300 GHz due to the possible additional [C ii] wing feature seen in the 300.5-301 GHz range and the width of the [C ii] line (see Figures 7 and 1). In the case of BRI0952, we used the frequency ranges 335.3-336.9, 347.1-347.7, 348.97-348.98, and 350.45-350.67 GHz. Line emission images were created from the continuum-subtracted data, and continuum images were made from line-free channels. We used Briggs weighting with a robust factor of 0.5 initially for both AzTEC-3 and BRI0952. However, for further analysis of the companion sources surrounding both primary targets, a robust factor of 0.0 was utilized to increase the angular resolution of the [C ii] emission cubes; these images were subsequently used for the analyses described in this paper. The continuum images were created using a robust factor of 0.5. For BRI0952, the spectral window centered on the [C ii] emission had a bandwidth of 1.8 GHz with a spectral resolution of 31 MHz. The adjacent spectral windows had bandwidths of 2.0 GHz with spectral resolutions of 15.625 MHz. For AzTEC-3, the spectral window centered on the [C ii] emission and the adjacent spectral windows had bandwidths of 2.0 GHz with a spectral resolution of 31.25 MHz. Angular resolution and sensitivity are listed in Table 1. Polarization image cubes were created at native spectral resolutions of 15.625 MHz and 31.25 MHz channels for BRI0952 and AzTEC-3, respectively, using Briggs weighting with a robust factor of 0.5.
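A schematic of the reduction steps just described, written as CASA task calls, is given below. The measurement-set name, channel selections, and imaging parameters are placeholders rather than the values used for these observations, and tclean is used here simply as the standard imaging task.

```python
# Schematic CASA (5.x) calls for the continuum subtraction and imaging described above.
# Intended to be run inside the CASA shell; file names, spw/channel selections, and
# imaging parameters below are placeholders, not the values used for BRI0952 or AzTEC-3.

line_free = '0:5~60;200~230'          # hypothetical line-free channel selection

# Fit and subtract the continuum in the uv plane with a first-order polynomial.
uvcontsub(vis='target.ms', fitspw=line_free, fitorder=1)   # writes target.ms.contsub

# Image the continuum-subtracted data as a spectral cube (robust = 0.0 was used
# for the companion analysis; 0.5 for the initial imaging).
tclean(vis='target.ms.contsub', imagename='cii_cube',
       specmode='cube', restfreq='1900.5369GHz', outframe='LSRK',
       deconvolver='hogbom', weighting='briggs', robust=0.0,
       cell='0.06arcsec', imsize=512, niter=1000, threshold='0.2mJy')

# Continuum image from the line-free channels only.
tclean(vis='target.ms', imagename='cont_870um', spw=line_free,
       specmode='mfs', weighting='briggs', robust=0.5,
       cell='0.06arcsec', imsize=512, niter=500, threshold='0.05mJy')
```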
For BRI0952, the resulting beam size was \(0.51^{\prime\prime}\times 0.40^{\prime\prime}\) with a position angle of \(81^{\circ}\). For AzTEC-3, the beam was \(0.91^{\prime\prime}\times 0.64^{\prime\prime}\) (at \(82^{\circ}\)). The different observing conditions and channel width resulted in a channel rms noise in the Stokes Q and U maps of 0.46 mJy beam\({}^{-1}\) and 0.14 mJy beam\({}^{-1}\) for BRI0952 and AzTEC-3, respectively. For further analysis, including spectral energy distribution (SED) fitting, additional photometry was extracted from archival resources for both AzTEC-3 and BRI0952. _Hubble Space Telescope (HST)_ data were extracted from the _HST_ archive for both sources. For AzTEC-3, we used the ACS F606W, ACS F814W, WFC3 F105W, WFC3 F125W, and WFC3 F160W bands from the following project codes: 13641 (PI Capak), 9822 (PI COSMOS24-21), and 13384 (PI Riechers). For BRI0952, we used the WFPC2 F814W band with project code 8268 (PI Impey), cross-checked the astrometry with the Gaia DR2 catalog (Gaia Collaboration et al., 2018), and used photometry from the WISE All-Sky Data Release (Cutri et al., 2012). We performed an archive search for the MIPS 24\(\mu\)m data. We used archival data of ALMA bands 3, 4, and 6 (2017.1.01081.S, P.I. Leung, and 2015.1.00388.S, P.I. Lu). ## 3 Results ### 3.1 Polarization Our full polarization observations did not reveal any polarized emission, either from the emission lines or from the source continuum. In Table 2, we indicate the 3\(\sigma\) upper linear polarization (\(P_{I}\)) limits that we derive for both sources at the peak of the [C ii] line, both for the total integrated line and for the continuum. The linear polarization spectra and continuum values were extracted in a single beam (see Section 3.1 for the size) towards the peak of the lines and continuum, respectively. The linear polarization spectrum was produced from the Q and U spectra using \(I_{p}=\sqrt{Q^{2}+U^{2}-\sigma_{p}^{2}}\), where \(\sigma_{p}=\sqrt{\sigma_{Q}^{2}+\sigma_{U}^{2}}\) corresponds to the rms error on the linear polarization and \(\sigma_{Q,U}\) are the rms errors in Q and U, respectively. Since we detect no polarization, the derived limits include any potential remaining calibration uncertainty. According to ALMA specifications, this 3\(\sigma\) limit corresponds to 0.1%. Thus, the derived upper limits on the [C ii] lines are significantly below values that could be expected if the level of polarization in our sources is similar to that predicted around Galactic star-forming regions (Zhang & Yan, 2018). ### 3.2 BRI 0952-0115 #### 3.2.1 Lensing magnification The quasar BRI0952 is lensed by a single galaxy, with the lensing galaxy at \(z=0.632\). Although previous lensing models have been constructed for BRI0952 (Lehar et al. 2000; Eigenbrod et al. 2007; Gallerani et al. 2012; Momcheva et al. 2015), the wing-like structures we observe in the [C ii] emission suggested differential lensing across the apparent surface of BRI0952 (seen in Figure 1), prompting us to create an updated lensing model using our high-resolution data. Additionally, the presence of two companion sources in close proximity to the quasar was also a factor, as it was necessary to determine whether these represented one doubly imaged object or two or more separate sources. We determined the lensing parameters of both the source and the lens by utilizing Visilens (Spilker et al. 2016). This code is designed specifically to model observations of gravitationally lensed sources at radio and millimeter wavelengths.
Vishlens calculates the magnification factor by directly modeling the \(uv\) data rather than introducing bias by using images produced from algorithms such as CASA's CLEAN. We modeled the lens as a single isothermal ellipsoid parameterized by its location relative to the ALMA image phase center (\(x_{L},y_{L}\)), the ellipticity (\(\epsilon_{L}\)), and position angle of the major axis (\(\theta_{L}\)) in degrees east of north. We modeled the source as a Sersic source parameterized by position relative to the lens (\(x_{S},y_{S}\)), flux density, Sersic index, half-light radius, axis-ratio, and position angle. These parameters are then run through a Markov chain Monte Carlo (MCMC) fitting procedure, and the best models were output using a deviance information criterion as described in Spilker et al. (2016). Initially, we ran Vislens on band-7 continuum data in order to determine the best-fit parameters for the lens. Although lens parameters have been previously reported (Lehar et al. 2000; Eigenbrod et al. 2007; Gallerani et al. 2012; Momcheva et al. 2015), lens fitting was improved when we allowed the parameters to vary outside of previous values. During the lens optimization, we also allowed the source to vary in position and flux density. Once the lens was optimized, we reran the continuum fit with fixed lens parameters but still allowed the Sersic source profile to vary. Similarly, for the band-7 [C ii] emission data, we ran Vislens with the previously optimized lens parameters on the \(uv\) line emission data while allowing the source to vary in an attempt to increase the quality of our lensing model. The model produced by Vislens is a good fit for our data with minimal to no residual emission, and this is shown in Figure 1. The best-fit parameters are provided in Table 3. Furthermore, the magnification factor produced from this model is similar to those previously reported (\(\mu\sim 3.92\pm 1.3\)). The presence of wing-like structures across the [C ii] frequency range of our BRI0952 observations, as seen in the [C ii] line emission map (see Figure 1, panels 1 and 3 for observed [C ii] data and model output of Vishlens), prompted consideration of the effect of different lensing factors across the line (i.e., differential magnification). To investigate this, we split the [C ii] line into five bins constituting five different frequency ranges across the line (these can be seen in Figure 1) and ran Vislens, with the source and lens parameters as specified above, on these bins. The results of this are shown in Figure 1. We determine that it is likely that some amount of differential lensing across the surface of BRI0952 is occurring, and thus we caution that conclusions drawn that correlate strength of emission with location will be affected by this. However, we conclude that based on the low lensing-magnification factor, this effect will not drastically change the extent or strength of the [C ii] or continuum emission studied in this paper. We also note that images of both the data and residuals, as seen in Figures 1 and 1 are produced by Vislens rather than by CASA. Although the imaging process in itself is similar to the procedure that CASA performs, it lacks the ability to change weighting schemes and does not clean images. Thus, images produced in Vislens are dirty images with natural weighting and therefore will look slightly different than those created by CASA. 
As a consequence of this, faint sources in close proximity to BRI0952 are not clearly resolved and distinguishable in images produced by Vislens. This is in contrast with images produced in CASA, which are cleaned with a Briggs weighting scheme using a robust factor of 0.0. The latter are those used throughout the spectral analysis in this paper. manifest and demonstrate in Figure 1 that "residual" emission remains in the regions of the companions, lending further weight to the conclusion that the emission has a separate origin to that of the quasar. Given the relatively lower S/N of Comp-N and Comp-SW in relation to BRI0952, combined with the fact that the lensing model reproduces the flux of BRI0952 using a single Sersic brightness distribution, we treated Comp-SW and Comp-N as not sufficiently lensed to affect conclusions drawn in this analysis (in effect \(\mu_{\rm Comp-N}=\mu_{\rm Comp-SW}=1\)). We note that including additional faint sources in the lensing analysis with Vishieas did not enable a good fit for the two companion sources; however, given their location, they are likely to be magnified by a factor close to \(\mu\sim 1\). Furthermore, based on the close spatial proximity of Comp-SW and Comp-N to BRI0952, we assume the emission is [CII] and find spectroscopic redshifts of \(z_{\rm Comp-SW}=4.4323\) and \(z_{\rm Comp-N}=4.432\). To extract the spectra of BRI0952, Comp-N, and Comp-SW we defined a region around each from which to extract the [C ii] spectra individually for each component 4. For BRI0952, we used an elliptical region of \(2.1^{\prime\prime}\times 1.45^{\prime\prime}\) encompassing \(\sim 18\) beams5, containing both Img-S and Img-N of the quasar. For Comp-SW, we used an elliptical region of \(1.13^{\prime\prime}\times 0.57^{\prime\prime}\) encompassing \(\sim 4\) beams, and for Comp-N we used an elliptical region of \(1.3^{\prime\prime}\times 0.62^{\prime\prime}\) encompassing \(\sim 5\) beams. These regions are shown in Figure 2 along with moment-0 and moment-1 maps, and extracted spectra are shown in Figure 3. Both companions are faint compared to the emission from BRI0952; both are \(\leq 10\%\) of the flux from BRI0952 (see Figure 9). The gray plotted rms in Figure 3 is calculated in each channel by sampling the spectra in 25 different emission-free regions of the cube extracted from regions the same size as those described above for each individual source. Footnote 4: This extraction could be done through a number of different methods; however, we find that the outcomes of these methods remain consistent with the method employed in this paper. We also note that methods such as \(m\)-plane fitting introduce bias in a similar way to image-plane fitting through base-assumptions of, e.g., a model to describe the source, boundary conditions, and so on. Footnote 5: We define \(N_{\rm beaers}=A_{extraction}/A_{\rm beam}\). Due to the presence of high-velocity wings at an \(\sim 14\sigma\) level in the spectra of BRI0952, we find that a double-Gaussian fit was more appropriate (see Figure 3, top panel), yielding an improvement to the reduced \(\chi^{2}\) of more than a factor of four. To investigate the impact of differential magnification on the broad wings and attempt to determine if this feature is simply an artifact from lensing, we calculated the ratio of the flux in the blue and red wings of the line profile. 
If the ratio of the blue to red wing is more than the error on the lensing factor, we suggest that the broad wings could be due to differential magnification predominantly affecting one side of the line. We define these wings to be between \(\pm 500\) km/s and \(1/4\times{\rm F_{peak}}\), where \({\rm F_{peak}}\) is the peak of the flux. We find the ratio of the red-to-blue wing to be \(\sim 1.09\), which is less than the error on the magnification factor (1.3). This suggests that while the observed broad velocity wings may be slightly affected by lensing magnification, it is not solely responsible for them. As an additional test, we extracted the spectra from a region only corresponding to Img-N of BRI0952 to determine if the double Gaussian remained a better fit to the data (shown in Figure 1). Indeed, the broad wings remain, and a double Gaussian provides an improved fit to the data. We further discuss the implications of this in Section 4.5. We compare the line intensities of each component to those found by Gallerani et al. (2012). We find that our values are lower, both for the intensity of BRI0952 and for Comp-SW (their Comp-C). However, we note a significant discrepancy between their decomposition and ours. Gallerani et al. (2012) find that Figure 3: Spectra of [C ii] emission toward BRI0952 and companions. The two companions’ line profiles are fit with a single Gaussian, which is shown in red. The quasar’s line emission profile is fit with a single Gaussian (red) and a double Gaussian (blue); the double Gaussian corresponding to outflow signatures is clearly seen. The gray region represents the rms of the data in each channel using the procedure described in Section 3.2.2. Comp-SW is significantly brighter than either of the individual images of the quasar BRI0952 (here Img-N and Img-S), while we find the opposite to be true. We attribute this to the increased spectral resolution and baseline coverage possible with ALMA over the IRAM Plateau de Bure Interferometer (PdBI). The observations from Gallerani et al. (2012) were carried out with the IRAM Plateau de Bure Interferometer with six antennas in the extended B configuration during three observing runs and the compact C configuration during two observing runs. They obtained a sensitivity level of 0.5 Jy km s\({}^{-1}\) beam\({}^{-1}\) in a 300 km s\({}^{-1}\) channel, which corresponds to a 1\(\sigma\) rms of Figure 4: Velocity maps of BRI0952 showing [C ii] emission across the velocity range of the line. The velocity range is provided at the top of the images. The gray contours are at \(10,20,30,40,50\), and \(60\,\sigma\) levels from the moment-0 map of BRI0952, which is shown in Figure 2. The black ellipses indicate the positions and sizes of the two companions from CASA’s orbit routine. The southern companion shows a clear velocity gradient across the -157 to -14 km/s range. The size of the synthesized beam is shown in the bottom left corner of the images. Figure 5: _HST_/WFC3 F105W (left) image of AzTEC-3 overlaid with source positions, ALMA 850 \(\mu m\) continuum map (center), and SMG subtracted map (right). The continuum maps were created using line-free channels. The residual map was produced following the steps described in Section 3.3.2. The original continuum image contours are shown at \(-3,-2,5,10,20,30,40,50,60,70\), and \(80\,\sigma\) levels. The subtracted image contours are shown at \(-3,-2,3,4,5,6\), and \(7\,\sigma\) levels. Synthesized beams are shown in the bottom left of the ALMA images. 
1.7 mJy beam\({}^{-1}\). For comparison, our ALMA data for a similar channel width would be \(\sim 0.1\) mJy beam\({}^{-1}\), which is about 15-17 times better in terms of sensitivity. We note that the BRI0952 field is near equatorial, meaning that many antennas possibly in combination with several antenna configurations are needed to achieve a good \(uv\)-coverage in the \(v\) direction. Given the higher sensitivity combined with the improved \(uv\) coverage thanks to the larger number of ALMA antennas, it is likely that the calibration and image reconstruction of the ALMA data is more robust. As a further comparison between the previous PdBI observations from Gallerani et al. (2012) and the new ALMA observations, the spatial resolution is \(1.08^{\prime\prime}\times 0.66^{\prime\prime}\) and \(0.54^{\prime\prime}\times 0.40^{\prime\prime}\), respectively, and the absolute flux calibration is \(\sim 20\%\) and \(\sim 10\%\), respectively. We used CASA's imprint task to fit the source size for both images of the quasar and companions. We provide [C ii] line parameters and deconvolved source sizes in Table 5. We note that, due to the lensed nature of BRI0952, this source has a more complex morphology than those typically handled in this way. The source size provided from imprint is an indication of the extent of the emission in the image plane and is not necessarily well fit by a Gaussian profile. We added an additional 30% on the error of the spatial extent of the emission reported for Img-S and Img-N corresponding to the uncertainty on the lensing magnification factor. We note the presence of a complex velocity structure between the images of the lensed quasar and the companions. The northern companion exhibits a faint velocity gradient, and the same is present for the south-western companion (see Figure 4). The implications of this are further discussed in Section 4.6. Figure 6: Moment-0 and moment-1 maps for AzTEC-3 system. The top row shows moment-0 maps of the original [C ii] and the central row shows the moment-0 map of the region from which the spectra were extracted and are centered at the respective redshifts of the sources. The bottom row shows the moment-1 maps from each using the same region that the spectra were extracted from. The contours in the first panel for AzTEC-3 are shown at \(-3\), \(-2\), \(10,20,30,40,50\), and \(60\,\nu\) levels, and the contours shown in subsequent panels are at \(-3\), \(-2\), \(3\), \(4\), \(5\), \(6\), and \(7\,\sigma\) levels where the respective \(1\sigma\) noise level has been taken from individual moment-0 maps due to the restrictive velocity ranges (and thus differing noise levels). The black circle and ellipses in the top row show the regions from which the spectra were extracted for each source. The synthesized beam is shown in the bottom left of the images. #### 3.2.3 Continuum We detect strong continuum emission toward both north and south images of BRI0952. Using CASA's imit, we fit both the north and south images of the quasar and report strong continuum emission totaling \(1.96\pm 0.43\) mJy, which is corrected for lensing. Continuum fluxes from individual images of the quasar are provided in Table 4 and images are shown in Figure 1. We do not detect the companion sources in continuum and report a \(3\sigma\) upper limit of 0.13 mJy for each. Due to the non-detection of the companions in the continuum, we defined a region corresponding to the quasar from which to extract the continuum flux. 
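As a brief aside on the line-profile decomposition used above for BRI0952 (and later for AzTEC-3), the following sketch fits a [C ii] spectrum with one and with two Gaussian components and compares the reduced \(\chi^{2}\); the synthetic spectrum, noise level, and initial guesses are placeholders rather than the measured data.

```python
# Sketch: single- vs double-Gaussian fit to a [CII] line profile (illustrative values).
import numpy as np
from scipy.optimize import curve_fit

def gauss1(v, a, v0, s):
    return a * np.exp(-0.5 * ((v - v0) / s) ** 2)

def gauss2(v, a1, v1, s1, a2, v2, s2):
    return gauss1(v, a1, v1, s1) + gauss1(v, a2, v2, s2)

def reduced_chi2(model, popt, v, flux, sigma):
    resid = (flux - model(v, *popt)) / sigma
    return np.sum(resid ** 2) / (len(v) - len(popt))

# Fake spectrum: a narrow core plus broad, faint wings, with noise.
rng = np.random.default_rng(1)
v = np.linspace(-1500, 1500, 300)                     # km/s
sigma = 0.3                                           # per-channel rms (mJy)
flux = gauss2(v, 20.0, 0.0, 80.0, 2.0, 0.0, 350.0) + rng.normal(0, sigma, v.size)

p1, _ = curve_fit(gauss1, v, flux, p0=[15, 0, 100], sigma=np.full(v.size, sigma))
p2, _ = curve_fit(gauss2, v, flux, p0=[15, 0, 100, 1, 0, 300], sigma=np.full(v.size, sigma))

print("reduced chi2, single :", reduced_chi2(gauss1, p1, v, flux, sigma))
print("reduced chi2, double :", reduced_chi2(gauss2, p2, v, flux, sigma))
print("FWHM (double) [km/s] :", 2.355 * p2[2], 2.355 * p2[5])
```

A substantially lower reduced \(\chi^{2}\) for the two-component fit, as found here for both sources, is what motivates attributing the broad component to high-velocity gas rather than to noise.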
As for the line emission, the source sizes of BRI0952 reported in Table 4 are reported with an additional 30% error added to account for the complex morphology and lensing uncertainties. Since we do not detect the companions in continuum emission, we estimated their flux using relations from their \(L_{\rm[C\,II]}\) to determine if they should be detectable in our ALMA observations. We estimated their SFR's using the relation \(\log({\rm SFR})\,[{\rm M}_{\odot}/{\rm yr}]=1.0-7.06\times\log({\rm L_{[CII]}})\,[{ \rm L}_{\odot}]\) from De Looze et al. (2014) for starburst galaxies, computed their \(L_{\rm IR}\) using the relation in Section 4.3 (Carilli & Walter 2013), and used a modified blackbody approximation (e.g., Knudsen et al. 2003, Equation 2) to recover their \(S_{\rm 350GHz}\). This results in \(S_{\rm 350GHz}=0.21\) mJy for the northern companion and \(S_{\rm 350GHz}=0.10\) mJy for the south-western companion. This would correspond to an \(\sim 4\sigma\) \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline RA & DEC & \(x_{L}\) & \(y_{L}\) & M & \(e_{\rm L}\) & \(\phi_{L}\) & \(\gamma\) & \(\phi_{Y}\) & \(\mu\) \\ \([\)J2000\(]\) & \([\)J2000\(]\) & \([\)\({}^{\prime\prime}]\) & \([\)\({}^{\prime\prime}]\) & \([\)\({}^{\prime\prime}]\) & \([\)\({}_{\odot}\)\(]\) & & (deg. E of N) & & (deg. E of N) & \\ \hline 09:55:00.1 & -01:30:07.1 & 0.446 & 0.003 & \(0.619\times 10^{11}\) & 0.055 & 191 & 0.011 & 63 & 3.92 \(\pm\) 1.3 \\ \hline \end{tabular} \end{table} Table 3: Model lensing parameters for the foreground galaxy of BRI0952. From left to right: (\(x_{L}\), \(y_{L}\)) is the position of the source relative to the ALMA phase center given in the first two columns, with positive values corresponding to west; M is the mass of the lens galaxy; \(e_{\rm L}\) is the ellipticity; \(\phi_{\rm L}\) is the position of the major axis in degrees east of north; \(\gamma\) is the external tidal shear; \(\phi_{\rm J}\) is the position angle of the shear, and \(\mu\) is the derived lensing factor. Since the values were fixed during fitting of the source position, we do not report their respective errors. The magnification factor is taken to be the average across the line. Figure 7: Spectra for the AzTEC-3 system. The red line shows the single Gaussian fit to the data, and the blue line shows the double Gaussian fit to the data. For AzTEC-3, the blue wing \(<-500\) km s\({}^{-1}\) is significantly more prominent than the red wing. We fit LBG-3 and Gal-SW with double Gaussians to account for the additional “bumps” in the spectra (most obvious in LBG-3), which are likely blended flux from the SMG. The atmospheric transmission is shown as a gray line in the top subplot for every source. detection for the northern companion and an \(\sim 2\sigma\) detection for the southern companion. Thus, in the case of the southern companion with the current data a continuum emission detection is unfeasible. This may also be the case for the northern companion. However, it is also possible that without the velocity information encoded in emission line spectra (and thus without the ability to severely isolate the frequency or velocity range from which to attempt to extract information about the emission from the companions), our observations are simply insufficient to resolve any continuum emission from the companions. 
#### 3.2.4 SED fitting We performed IR SED fitting and decomposition of the AGN and star formation contributions, using DecompIR (Mullaney et al., 2011) following the methods of (Stanley et al., 2018). We used photometry from an archival search for WISE bands and MIPS 24 \(\mu m\). We also used ALMA archival data from bands 3, 4, and 6. We performed two sets of fits, one with only the star formation templates and one with both AGN and star formation templates. We find that the photometry of BRI0952 is best fit by a combination of AGN and star formation emission in the IR. The decomposition of the IR SED allows for a calculation of the SFR without contamination from the AGN emission, using the \(L_{\rm R_{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{ { }}}}}}}}}}}}}}}\) = 5.24 \(\mu\)m. IR luminosities are reported in Table 7 and discussed further in Section 4.2. ### Aztec-3 #### 3.3.1 [C ii] emission We detect [C ii] (\({}^{2}\)P\({}_{3/2}\rightarrow{}^{2}\)P\({}_{1/2}\)) emission toward the SMG AzTEC-3. We report three sources (LBG-3, Gal-S, and Gal-SW) of the same emission in the region shown in Figure 5. LBG-3 was initially detected as an Lyman-break galaxy with the COSMOS survey (Ilbert et al., 2009, optical ID 1447526) but was undetected in [C ii] in Riechers et al. (2014) and the authors presented a \(3\sigma\) upper limit on its [C ii] emission. An additional companion, LBG-2 - not shown in Figure 5, was also originally detected by COSMOS as an LBG but was also undetected in [C ii] by Riechers et al. (2014). Similarly, we do not detect the source in [C ii] emission, and thus do not investigate this source further. Gal-S and Gal-SW are first detections. The data of Riechers et al. (2014) were obtained during Cycle-0 with 16-24 antennas, with a total observing near 7000 seconds spread evenly over two pointings. As a result, the sensitivity of the data in the region around AzTEC-3 is lower by about a factor of two compared to the new data presented here; however, it is important to note that the beam area of our data is 1.7\(\times\) larger. We note that the \(3\sigma\) limit from Riechers et al. (2014) appears to be for a single beam, while we obtain a detection over a more extended region corresponding to 2.8\(\times\) the size of beam used in this analysis. Furthermore, we note that if converting their \(3\sigma\) upper limit to a \(5\sigma\) upper limit, the resulting \(I_{\rm[CII]}\) is consistent with our detection, though it is important to keep in mind that in this comparison we do not take into account the integration over an extended area. We assume that the emission from the companions is [C ii] based on their close proximity to AzTEC-3, and find a spectroscopic redshift of \(z_{\rm LBG-3}=5.284\), which is in relative agreement with the previous spectroscopic redshift of \(z=5.3\)(Riechers et al., 2014). We report spectroscopic redshifts for the additional companions, \(z_{\rm Gal-S}=5.2919\) and \(z_{\rm Gal-SW}=5.2942\), respectively. Figure 9 shows the combined spectra of the companions and AzTEC-3. We followed the same procedure as described for BRI0952 and individually extracted the spectra for the sources in this system from a region with a very limited velocity range 6. We used a circular region with a radius of \(2.4\arcsec\) to extract the spectra of AzTEC-3 encompassing \(\sim 13\) beams. 
We used elliptical regions for the following companions: \(1.2\arcsec\times 1.5\arcsec\) for LBG-3 encompassing \(\sim 4\) beams, \(2.6\arcsec\times 1.7\arcsec\) for Gal-S encompassing \(\sim 10\) beams, and \(1.3\arcsec\times 0.90\arcsec\) for Gal-SW encompassing \(\sim 3\) beams. These regions are shown in Figure 6. We were able to minimize the source blending of the companions with AzTEC-3 since the spectra of the companions are extracted over a very limited velocity range, as shown in Figure 6. We calculated the rms in the same way as described in Section 3.2.2. Additionally, we include the atmospheric transmission in the top subplot for every source in Figure 7 to show where the atmospheric absorption line is (shown in Figure A.1), specifically in relation to the companion spectra. Footnote 6: Again, similar to BRI0952; if other methods are used, the results remain consistent with the regional spectral extraction used here. The [C ii] spectral line profile of AzTEC-3 shows the presence of high-velocity wings at a \(\sim 24\sigma\) level -and the profile is better fit with a double Gaussian to account for this (Figure 7)- where the reduced \(\chi^{2}\) is improved by more than a factor of four using a double-Gaussian fit. We note that the blue wing is wider compared to that of the red wing, as reported in Riechers et al. (2014). We used CASA's hfit to determine deconvolved source sizes. We report source sizes and Gaussian line parameters for AzTEC-3 and companions in Table 5. The [C ii] emission from the central source is emitted over a compact region with an extent of \(4.2\pm 0.4\) kpc along the major axis, which is consistent with the emitting region of 3.9 kpc reported by Riechers et al. (2014). We do not detect a strong velocity gradient, which is also in good agreement with Riechers et al. (2014). We note an additional feature: a possible velocity gradient between AzTEC-3 and LBG-3. We show this in Figure 8 and Figure E.1. This is a tentative feature as it only appears in a few spectral elements and could simply be emission associated with AzTEC-3. Guaita et al. (2022) detected a bridge of Lyman-\(\alpha\) emission extending between these two components, indicating that tidal forces could be at work between LBG-3 and AzTEC-3. The detection of Lyman-\(\alpha\) in both AzTEC-3 and LBG-3 further \begin{table} \begin{tabular}{l c c} \hline \hline Galaxy & Region & \(S_{\nu}\) \\ & (\(\arcsec\times\arcsec\)) & [mJy] \\ \hline BRI0952\({}_{\rm N}\) & (\([0.35\pm 0.15]\times[0.20\pm 0.13]\)) & 1.44 \(\pm\) 0.4 \\ BRI0952\({}_{\rm S}\) & (\([0.31\pm 0.15]\times[0.26\pm 0.13]\)) & 0.52 \(\pm\) 0.14 \\ Comp-N & - & \(<0.13\) \\ Comp-SW & - & \(<0.13\) \\ \hline AzTEC-3 & (\([0.38\pm 0.03]\times[0.27\pm 0.07]\)) & 6.06 \(\pm\) 0.13 \\ LBG-3 & (\([0.59\pm 0.07]\times[0.35\pm 0.07]\)) & 0.19 \(\pm\) 0.02 \\ Gal-SW & (\(<0.91\times<0.66\)) & 0.07 \(\pm\) 0.02 \\ Gal-S & - & \(<0.056\) \\ ES & (\([1.09\pm 0.38]\times[0.37\pm 0.36]\)) & 0.21 \(\pm\) 0.05 \\ \hline \end{tabular} \end{table} Table 4: Continuum fluxes for both systems. The continuum fluxes are provided for both images of BRI0952 and are corrected for lensing. However, we do not correct the sizes for lensing. We note that the error on the companions of AzTEC-3 includes an additional 10% due to the additional uncertainty introduced from the deblending process and the errors on the size of both images of BRI0952 include an additional 30% to account for lensing uncertainty. supports the likelihood of an interacting system. 
This is further discussed in Section 4.6. #### 3.3.2 Continuum We obtain continuum emission from imaging line-free channels in the band-7 ALMA data. Due to the lower angular resolution combined with the lack of velocity information provided in the spectral emission, we were unable to isolate the emission from the companions by restricting the velocity range from which they were extracted as we did for the [C ii] emission. Hence, we extracted the continuum flux using three different methods. The first was to use CASA's mfit routine on the marginally resolved continuum image. From this we obtain a continuum flux of \(6.06\pm 0.13\) mJy for AzTEC-3, which is in good agreement with Riechers et al. (2014), \(0.071\pm 0.018\) mJy for Gal-SW, and \(0.18\pm 0.023\) mJy for LBG-3. Secondly, we used the mfit residual image with the continuum emission from AzTEC-3 removed to obtain the continuum flux of the companions. Using this method, we obtain a continuum flux of \(0.082\pm 0.02\) mJy for Gal-SW and \(0.19\pm 0.024\) mJy for LBG-3. Finally, we attempted to isolate the origin of the continuum flux for the companions by creating a continuum image corresponding only to the flux from AzTEC-3, which we were then able to subtract from the original continuum image. To model the flux from AzTEC-3, we selected emission that was at levels higher than \(9\sigma\). This emission was then subtracted from the continuum image. The original continuum image and the subtracted image are shown in Figure 5. This resulted in a continuum flux of \(0.06\pm 0.017\) mJy for Gal-SW and \(0.22\pm 0.025\) mJy for LBG-3. Due to the similarity of the results from various methods, we conclude that the intrinsic value of the continuum for each is in the range provided for the continuum for LBG-3 and Gal-SW. We do not detect Gal-S in continuum, and we report a \(3\sigma\) upper limit of 0.05 mJy. The companion sources are faint compared to AzTEC-3 (\(\sim 3\%\) for LBG-3 and \(\sim 1\%\) for Gal-SW), and their flux contribution to the central source is negligible. Flux measurements and deconvolved source sizes for the field are provided in Table 4, where average values are provided for LBG-3 and Gal-SW for both continuum flux and size estimates. We note an offset between the ALMA continuum, _HST_, and [C ii] emission for LBG-3. This is shown in Figure F.1. We suggest that this offset between the three types of emission is likely linked to spatial offset between different types of emission in this source. The offset of this emission also affects the subtraction we performed above due to the possibility that we have in fact subtracted some flux pertaining to LBG-3 that could be included in our model of the emission from AzTEC-3. Thus, we suggest that the continuum flux reported in this paper for LBG-3 be treated as Figure 8: Velocity maps of AzTEC-3 showing [C ii] emission across the velocity range of the line. The velocity range is provided at the top of the images. The gray contours are at \(10,20,30,40,50\), and \(60\,\sigma\) levels from the moment-0 map of AzTEC-3, which is shown in Figure 6. The black ellipses indicate the positions and sizes of the three companions from CASA’s mfit routine where the dashed ellipse around Gal-SW denotes an upper-limit point source. The ”gas-bridge” structure between AzTEC-3 and Gal-S is seen in the velocity range of -584 to - 358 km s\({}^{-1}\). The size of the synthesized beam is shown in the bottom left corner of the images. a lower limit. 
Without additional higher resolution data, a robust conclusion about the nature of this offset is not possible. We detect an additional continuum source at \(\sim 10.5\) significance, referred to extra source (ES), in the AzTEC-3 field at the position 10:00:21.066, +02:35:16.975, which is shown in Figure 5. This source is bright across the _HST_ WFC3 filters, but it is undetected in [C ii] emission, indicating that it is at a different redshift. Continuum flux measurements and source size are provided in Table 4. #### 3.3.3 SED Fitting We fit the SED of AzTEC-3 using photometry extracted from _HST_ imaging utilizing the source extraction algorithm SExtractor (Bertin & Arnouts, 1996) and our ALMA photometry (see Table 6). We matched the resolution of the _HST_ images to the lowest resolution filter; we note that the companion sources are not blended with the emission from AzTEC-3 in the _HST_ imaging. We fit the SED using Bagpipes (Carnall et al., 2018), assuming a constant star formation history, a Calzetti reddening law to describe the dust attenuation, and a Chabrier initial mass function (IMF, Chabrier, 2003). The mass of the central source was allowed to vary from \(10^{8}\) - \(10^{14}\)M\({}_{\odot}\), and the metallicity was allowed to vary from 0.01-1.0 Z\({}_{\odot}\). We used the SED fit from Bagpipes to calculate the \(L_{\rm IR}\) (integrated from 8-1000 \(\mu\)m) of AzTEC-3, which is subsequently utilized below in determination of the SMG's properties. ## 4 Discussion ### Magnetic fields The mechanisms for the predicted polarization of the [C ii] fine structure line has been named ground state alignment (GSA) (Yan and Lazarian, 2006, 2012; Zhang et al., 2015). GSA occurs due to the interaction of an anisotropic radiation field with atoms or ions with fine- or hyperfine-structures. A magnetic field induces precession, which causes the atom or ion to align with the magnetic field. As a result, the emission or absorption of the spectral lines becomes polarized and the polarization direction reflects the direction of the magnetic field. The atoms align predominantly at their ground state level. Due to the low emission and absorption rates involved in these transitions, and hence their long life spans, already weak fields (\(B>10^{-15}\) G) can cause alignment. Predictions for the level of [C ii] polarization based on the GSA effect have been made for [C ii] emission near galactic star-forming regions (Zhang and Yan, 2018). Near the strong anisotropic radiation field produced from the regions, [C ii] polarization up to \(\sim 30\%\) could be expected. Since the radiation field in starburst galaxies and AGN hosts are similarly energetic, comparable levels of polarization could have been expected in early star-forming galaxies and AGNs (Zhang and Yan, 2018). Our observations do not reveal any linear polarization of [C ii] with \(3\sigma\) limits to the polarization percentages at the peak of emission \(<\)5% and for the integrated emission \(<\)1%. There are two possible explanations for such low or non-existent levels of polarization. Firstly, the GSA prediction might not be relevant in conditions where the [C ii] emission originates in the observed SMG and quasar, because the anisotropic radiation field is not sufficient to cause significant polarization. Alternatively, if the magnetic field is sufficiently irregular or has a large turbulent component within our resolution elements, beam depolarization will reduce the polarization fraction. 
Our angular resolution corresponds to \(\sim 3.1\) kpc and \(\sim 4.7\) kpc for BRI0952 and AzTEC-3, respectively. These scales are not significantly larger than those where structured fields are observed in nearby galaxies (e.g., Beck, 2015; Lopez-Rodriguez et al., 2021), but if the magnetic field follows spiral arms or warped disks, like in the case of Centaurus A (Lopez-Rodriguez, 2021), depolarization could still be large. Without a better estimate of the expected linear polarization fraction in SMGs and around quasars, we cannot use our limits to provide meaningful constraints on the level of \begin{table} \begin{tabular}{l l} \hline \hline Filter & Magnitude \\ & [mag\({}_{\rm AB}\)] \\ \hline F606W & \(28.89\pm 0.30\) \\ F814W & \(27.17\pm 0.10\) \\ F105W & \(24.92\pm 0.03\) \\ F125W & \(24.68\pm 0.02\) \\ F160W & \(24.52\pm 0.02\) \\ \hline \end{tabular} \end{table} Table 6: Photometry for AzTEC-3 extracted from _HST_ images using SExtractor. \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline Name & R.A. & Dec. & Distance & \(z_{\rm[CII]}\) & \(I_{\rm[CII]}\) & FWHM\({}_{\rm[CII]}\) & \(A_{\rm[CII]}\) \\ & [J2000] & [J2000] & [′′] & & [Jy km s\({}^{-1}\)] & [km s\({}^{-1}\)] & [′′\(\times\)′′] \\ \hline BRI0952 & N: 09:55:00.10 & \(-\)01:30:06.61 & & 4.433 & \(15.45\pm 1.5\) & \(182\pm 5\) & \((\{0.51\pm 0.19\})\)\(\times(\{0.285\pm 0.12\})\) \\ & S: 09:55:00.06 & \(-\)01:30:07.29 & & 1.0 & & & \(410\pm 21\) & \([(0.36\pm 0.13)\)\(\times\)\(\{0.29\pm 0.21\}]\) \\ Comp-N & 09:55:00.09 & \(-\)01:30:05.89 & 0.75 & 4.432 & \(0.62\pm 0.06\) & \(130\pm 10\) & \((\{0.79\pm 0.32\}\times\)\(\{0.23\pm 0.11\})\) \\ Comp-SW & 09:55:00.00 & \(-\)01:30:08.05 & 2.2 & 4.432 & \(0.30\pm 0.04\) & \(122\pm 13\) & \((\{0.76\pm 0.30\}\times\)\(\{0.23\pm 0.13\})\) \\ \hline AzTEC-3 & 10:00:20.696 & \(+\)02:35:20.35 & & 5.2988 & \(11.34\pm 1.0\) & \(320\pm 12\) & \((\{0.69\pm 0.06\}\times\)\(\{0.42\pm 0.10\})\) \\ & & & & & & \(660\pm 20\) & \\ LBG-3 & 10:00:20.766 & \(+\)02:35:21.39 & 1.5 & 5.2841 & \(0.78\pm 0.07\) & \(630\pm 36\) & \((\{0.8\pm 0.02\}\times\)\(\{0.46\pm 0.02\})\) \\ & & & & & & \(165\pm 24\) & \\ Gal-S & 10:00:20.68 & \(+\)02:35:18.07 & 2.3 & 5.2919 & \(0.52\pm 0.05\) & \(190\pm 10\) & \((\{0.53\pm 0.10\}\times\)\(\{0.41\pm 0.15\})\) \\ & & & & & & \(92\pm 23\) & \\ Gal-SW & 10:00:20.60 & \(+\)02:35:19.28 & 1.8 & 5.2942 & \(0.15\pm 0.01\) & \(325\pm 24\) & \((\{0.36\pm 0.04\}\times\)\(\{0.08\pm 0.10\})\) \\ \hline \end{tabular} \end{table} Table 5: Line parameters for the [C ii] emission lines. The distance is measured between the central source (AzTEC-3 and the northern image of BRI0952, respectively) and the surrounding sources. \(I_{\rm[CII]}\) is the line intensity, and FWHM is the full width at half maximum of the lines (one entry for single-Gaussian fit and two entries for double-Gaussian fit). \(A_{\rm[CII]}\) is the length of the major axis of the deconvolved source size from CASA’s http, we note that BRI0952 has a more complex morphology, and thus the sizes include an additional 30% error on the reported value corresponding to the uncertainty on the lensing factor. depolarization, and thus the level of structure in the magnetic field. However, unless the GSA mechanism is much less effective than expected, it appears unlikely that a strong ordered magnetic field exists at the kiloparsec scale in the two sources in this paper. 
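The limits discussed in this subsection follow from the debiased combination of the Stokes Q and U spectra given in Section 3.1, \(I_{p}=\sqrt{Q^{2}+U^{2}-\sigma_{p}^{2}}\). A minimal numerical version of that estimate, and of the corresponding \(3\sigma\) fractional upper limit when no polarized signal is detected, is sketched below with made-up numbers.

```python
# Sketch: debiased linear polarization and a 3-sigma fractional upper limit (illustrative).
import numpy as np

def debiased_linear_pol(Q, U, sigma_Q, sigma_U):
    """I_p = sqrt(Q^2 + U^2 - sigma_p^2), with sigma_p = sqrt(sigma_Q^2 + sigma_U^2).

    Channels where the biased estimate falls below sigma_p are returned as 0 (no detection).
    """
    sigma_p = np.sqrt(sigma_Q**2 + sigma_U**2)
    p2 = Q**2 + U**2 - sigma_p**2
    return np.sqrt(np.clip(p2, 0.0, None)), sigma_p

def pol_fraction_limit(I_peak, sigma_p, nsigma=3.0):
    """Upper limit on the polarization fraction at the line or continuum peak."""
    return nsigma * sigma_p / I_peak

# Made-up channel rms values and Stokes I peak:
sigma_Q, sigma_U = 0.46e-3, 0.46e-3       # Jy/beam per channel
Q, U = np.zeros(100), np.zeros(100)       # no polarized signal detected
Ip, sigma_p = debiased_linear_pol(Q, U, sigma_Q, sigma_U)
print("3-sigma fractional limit at a 60 mJy/beam peak: "
      f"{100 * pol_fraction_limit(60e-3, sigma_p, 3.0):.1f}%")
```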
### Infrared luminosity By integrating our Bagpipes SED from 8-1000 \(\mu\)m, we obtain an \(L_{\rm IR}\) estimate for AzTEC-3 of \(L_{\rm IR}=(7.3\pm 0.2)\times 10^{13}\) L\({}_{\odot}\). Although the central source is blended with surrounding companions, we anticipate that the uncertainty caused by the companions' IR luminosity will be small and we do not add additional errors to the \(L_{\rm IR}\) of AzTEC-3. This value is consistent with that reported by Capak et al. (2011), as they provide a wide range of possible \(L_{\rm IR}\) values ranging from \((2.2-11)\times 10^{13}\) L\({}_{\odot}\) for the 8-1000 \(\mu\)m wavelength range. Riechers et al. (2014) reported an \(L_{\rm IR}\) that is a factor of \(\sim 6.5\) times lower than our reported \(L_{\rm IR}\), which is outside the range of error typically ascribed to differences between \(L_{\rm IR}\) and \(L_{\rm FIR}\)\({}^{7}\) (Carilli & Walter, 2013). We suggest that discrepancies between these two reported values, apart from the utilized wavelength range, derive from the number of data points used for each fit. Riechers et al. (2014) used a higher number of data points than in our SED fitting; however, both analyses lack IR coverage, making it difficult to obtain high levels of accuracy, and thus \(L_{\rm FIR}\) or \(L_{\rm IR}\), from the SED fit. Footnote 7: In this analysis, we assume \(L_{\rm FIR}\sim 0.75\times L_{\rm IR}\), following Decarli et al. (2017). To determine a lower limit on the IR luminosity of the companions of AzTEC-3, we used a modified blackbody approximation (e.g., Knudsen et al. (2003), Equation 2), assuming a temperature of 45 K and \(\beta=1.7\) (typical values for high-redshift sources; Beelen et al. 2006; Dunne et al. 2011; Carniani et al. 2019). We utilized this approach rather than an SED fit as the SED of the companions is poorly sampled, especially in the far-IR, and the accuracy of sub-mm photometry is affected by the blending of the continuum emission from the central SMG. Calculated lower limits are provided in Table 7. A similar fit for the IR luminosity was performed for BRI0952, integrating the SED between 8 and 1000 \(\mu\)m. We assume the same approximation as for AzTEC-3: the effect of the companion sources on the SED and \(L_{\rm IR}\) will be negligible due to their faintness in comparison to the quasar. Specifically, we find that the IR luminosity due to star formation is \(L_{\rm IR_{\rm SF}}=(7.66\pm 2.49)\times 10^{12}\) L\({}_{\odot}\) and the IR luminosity due to AGN is \(L_{\rm IR_{\rm AGN}}=(2.23\pm 0.72)\times 10^{14}\) L\({}_{\odot}\). We note that the error on the IR luminosity comes from the uncertainty on the lensing factor. We find that \(L_{\rm IR_{\rm SF}}\) is a factor of \(\sim 5\) higher than that reported by Gallerani et al. (2012). We suggest that this is for two main reasons: (i) the methodology used for fitting - Gallerani et al. (2012) scaled a template SED to the 870 \(\mu\)m continuum flux, while our model is fit to a number of photometric points, increasing the accuracy of the fit - and (ii) the wavelength range used in fitting (the range 42-122 \(\mu\)m is used in Gallerani et al. 2012). ### Star-formation rate We calculated the SFR of BRI0952 and AzTEC-3, along with their surrounding sources, using different methods. By assuming a Chabrier IMF we can infer the SFR of each source via the following relation: SFR \(\sim 10^{-10}L_{\rm IR}\), where \(L_{\rm IR}\) is given in \(L_{\odot}\) (Carilli & Walter, 2013). 
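As an illustration of the two estimates just described, the sketch below normalizes a modified blackbody (T = 45 K, \(\beta\) = 1.7) to a single observed continuum flux density, integrates it over rest-frame 8-1000 \(\mu\)m, and converts the result to an SFR with SFR \(\approx 10^{-10}L_{\rm IR}\). The input flux density and observed frequency are placeholder values, not measurements from this paper, and a flat Planck18 cosmology is assumed.

```python
import numpy as np
from scipy.integrate import quad
from astropy import units as u, constants as const
from astropy.cosmology import Planck18

def planck_nu(nu, T):
    """Planck function B_nu(T) in SI units; nu in Hz, T in K."""
    x = const.h.value * nu / (const.k_B.value * T)
    return 2.0 * const.h.value * nu**3 / const.c.value**2 / np.expm1(x)

def lir_modified_bb(S_obs_mJy, nu_obs_GHz, z, T=45.0, beta=1.7):
    """L_IR (rest-frame 8-1000 um, in Lsun) of a modified blackbody
    L_nu ~ nu^beta * B_nu(T), normalized to one observed flux density."""
    nu_rest = nu_obs_GHz * 1e9 * (1.0 + z)                        # Hz, rest frame
    d_l = Planck18.luminosity_distance(z).to(u.m).value
    S_si = (S_obs_mJy * u.mJy).to(u.W / u.m**2 / u.Hz).value
    L_nu_rest = 4.0 * np.pi * d_l**2 * S_si / (1.0 + z)           # W/Hz at nu_rest
    A = L_nu_rest / (nu_rest**beta * planck_nu(nu_rest, T))       # SED normalization
    nu_lo, nu_hi = const.c.value / 1000e-6, const.c.value / 8e-6  # 1000 um ... 8 um
    L_ir_W, _ = quad(lambda nu: A * nu**beta * planck_nu(nu, T), nu_lo, nu_hi, limit=200)
    return (L_ir_W * u.W).to(u.L_sun).value

# Placeholder inputs, for illustration only:
L_ir = lir_modified_bb(S_obs_mJy=0.1, nu_obs_GHz=290.0, z=5.3)
print(f"L_IR ~ {L_ir:.2e} Lsun  ->  SFR_IR ~ {1e-10 * L_ir:.0f} Msun/yr")
```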
This yields an SFR of 770 M\({}_{\odot}\) yr\({}^{-1}\) for the quasar BRI0952 using \(L_{\rm IR_{\rm SF}}\) and \(\sim 7340\) M\({}_{\odot}\) yr\({}^{-1}\) for AzTEC-3 (significantly higher than that reported by Riechers et al. (2014) of 1100 M\({}_{\odot}\) yr\({}^{-1}\)). These extremely high values are similar to those reported for other peculiar sources in the high-redshift universe. Daddi et al. (2009) reported an SFR of \(>1000\)M\({}_{\odot}\)/yr in the SMGs GN20 and GN20.2 and note that there is no evidence of AGN activity. Similarly, SFRs of \(\geq 1000\) M\({}_{\odot}\) yr\({}^{-1}\) have been reported in HDF850.1 (Walter et al., 2012), AzTEC-1 (Yun et al., 2015; Sharda et al., 2019), HFLS3 (Robson et al., 2014; Cooray et al., 2014), and BRI1202 (Carilli et al., 2013). Following the same procedure we infer the SFRs of the companions of AzTEC-3 from the modified blackbody \(L_{\rm IR}\) fit, reported in Table 7. The companions exhibit significantly lower SFRs than AzTEC-3, although we assume that these values provide a lower limit for the companions. We find comparable SFRs to those reported for the quasar companions in Neeleman et al. (2019). We derive SFRs through an additional method using the [C ii]-SFR relation for starburst galaxies from De Looze et al. (2014) (provided in Section 3.2.2), which are given in Table 7. This results in a similar distribution to the above method, demonstrating the significant discrepancy between the SFR of the companions and that of the central sources in both fields. We caution that using [C ii] as a means of inferring SFR may be inaccurate as the [C ii] emission is likely tracing other processes in addition to star formation within the galaxies. The [C ii] emission could be tracing more extreme processes or neutral gas (Pavesi et al., 2018) within the galaxy. If this is the case in AzTEC-3 or BRI0952, this could also contaminate the measurement of the SFR from the [C ii] emission. We further note the degeneracy inherently present in determining an SFR from a galaxy's [C ii] luminosity due to the [C ii] versus FIR deficit, which is explored below. Figure 9: [C ii] spectra for the companions of BRI0952 and AzTEC-3. The spectra of both the quasar and the SMG have been truncated so that the companion spectra are easily seen. The single- (dashed) and double- (dotted) Gaussian fits for both AzTEC-3 and BRI0952 are overplotted, along with the Gaussian fits to the companion sources. We highlight that the companions to BRI0952 are located at a very similar systemic velocity, whereas AzTEC-3's companions are located in the blue part of the spectrum. ### [CII] Deficit The [C ii] line is well known to exhibit a deficit with increasing IR luminosity (e.g., Diaz-Santos et al., 2013; Gullberg et al., 2015; Diaz-Santos et al., 2017; Gullberg et al., 2018; Lagache et al., 2018). This has been heavily investigated at high redshift (e.g., Stacey et al., 2010; Wang et al., 2013; Gullberg et al., 2015; Decarli et al., 2017; Gullberg et al., 2018; Lagache et al., 2018; Neeleman et al., 2019), with many proposed explanations including the physical scale of star formation, [C ii] saturation, optical depth effects, increased dust grain charge in PDRs and the ISM, and AGN activity (Casey et al., 2014). Some studies at high redshift have suggested that the lowest ratios occur preferentially in AGN host galaxies (Stacey et al., 2010). 
This may not be exclusively limited to [C ii] emission, as other fine structure lines such as [O i], [O iii], [N ii], and [N ii] have been found to exhibit this deficit as well (e.g., Gracia-Carpio et al., 2011; Decarli et al., 2012; Farrah et al., 2013). We plot the \(L_{\rm[CII]}/L_{\rm IR}\) ratios as a function of the IR luminosity for the sources in our sample, as well as other low- and high-redshift galaxies in Figure 10. The ratios we find for BRI0952 and AzTEC-3 are similar to those found in HFSL3 (Riechers et al., 2013) and the ratios reported for two of the four quasars studied in Decarli et al. (2017). For the companion galaxies, we also investigated the \(L_{\rm[CII]}/L_{\rm IR}\) ratio, though we note that the errors are very large due to the additional uncertainty caused by the deblending of emission from the central bright source. As the IR luminosity is seen as a lower limit (see Section 4.2), the ratio can be treated as an upper limit. The resulting values are consistent with those of local star-forming galaxies (Diaz-Santos et al., 2013) and are higher than those reported for high-redshift companion sources detected in [C ii] (e.g., Carilli et al., 2013; Decarli et al., 2017; Neeleman et al., 2019). A possible explanation for the ratios observed in AzTEC-3 and BRI0952 is that of [C ii] saturation (e.g., Munoz & Oh, 2016; Gullberg et al., 2018). These sources are both hosts to extreme star formation and, in the case of BRI0952, AGN activity. If the temperatures in the majority of the environments in which [C ii] is produced exceed the ground state temperature (92 K), we can expect this line to saturate and other fine structure lines to become the primary coolants of the ISM. If this is the case for these two sources, it could explain the deficits observed in both. Observations of other fine structure lines in these sources could provide further clues to the origin of the deficit. ### Outflows and turbulence As shown in Sections 3.3.1 and 3.2.2, the [C ii] emission line profiles of the bright target sources, namely BRI0952 and AzTEC-3, are better fit when including an additional broad Gaussian function. The presence of broad, higher velocity wings in the line profiles are often interpreted as an indication of high-velocity outflows. We note that this need not be a unique interpretation; however, as this is a common analysis that has been done in many previous works (e.g., Feruglio et al., 2010; Aalto et al., 2012; Maiolino et al., 2012; Cicone et al., 2014, 2015; Gallerani et al., 2018; Bischetti et al., 2019; Stanley et al., 2019; Ginolfi et al., 2020). We pursue this below. We used the luminosity of the broad component of the [C ii] line in both AzTEC-3 and BRI0952 to infer a mass outflow rate using the equation from Hailey-Dunsheath et al. (2010) to calculate the mass of the outflow. Similarly to the method employed by Maiolino et al. (2012), Cicone et al. (2015), and Stanley et al. (2019), we assumed \(\mathrm{X_{C^{*}}}=1.4\times 10^{-4}\), \(T=200\) K, and \(n\gg n_{\rm crit}\) - typical values for PDRs. We estimated the velocity of the outflow by assuming a constant outflow rate of \(v_{\rm outf}=0.5\times\mathrm{FWHM_{broad}}\) over a region with a radius of \(\mathrm{R_{out}}\), allowing us to calculate the outflow rate using \(\dot{M}_{\rm out}=M_{\rm out}\times\mathrm{V_{out}}/R_{\rm out}\). 
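The kinematic step just described amounts to the short calculation sketched below. The outflowing gas mass is treated here as an input (in the text it comes from applying the Hailey-Dunsheath et al. (2010) relation to the broad [C ii] component), and the numerical values are placeholders for illustration rather than the paper's measurements.

```python
import astropy.units as u

def mass_outflow_rate(M_out, fwhm_broad, R_out):
    """Mdot_out = M_out * v_out / R_out, with v_out = 0.5 * FWHM of the broad component."""
    v_out = 0.5 * fwhm_broad
    return (M_out * v_out / R_out).to(u.M_sun / u.yr)

# Placeholder inputs (not the paper's values):
M_out = 5e8 * u.M_sun          # outflowing gas mass inferred from the broad [CII] luminosity
fwhm_broad = 600 * u.km / u.s  # FWHM of the broad Gaussian component
R_out = 5.0 * u.kpc            # adopted outflow radius

print(mass_outflow_rate(M_out, fwhm_broad, R_out))
```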
We determined the outflow radius as the major axis of the extent of the [C ii] emission for AzTEC-3 (\(\sim 7.5\) kpc) and BRI0952 (\(\sim 2.3\) kpc); for BRI0952, this measurement was taken using the major axis through both the image plane top and bottom images combined, and corrected for lensing magnification 8. We investigated alternative methods for determining the mass outflow rate below. We find outflow rates of \(\dot{M}_{\rm out}=238\pm 30\mathrm{M_{\odot}/yr}\) for AzTEC-3 and \(\dot{M}_{\rm out}=98\pm 19\mathrm{M_{\odot}/yr}\) for BRI0952. Although these values are lower than those found for other individual detections of quasar-driven outflows (assumed to be the case for BRI0952; e.g., Feruglio et al., 2010; Sturm et al., 2011; Cano-Diaz et al., 2012; Maiolino et al., 2012; Cicone et al., 2014; Feruglio et al., 2015), they are in good agreement with outflow rates estimates through stacking analyses (e.g., Gallerani et al., 2018; Stanley et al., 2019; Ginolfi et al., 2020). The latter samples are comprised of more 'normal' high-redshift sources, so the valid \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Name & \(L_{\rm[CII]}\) & \(L_{\rm IR}\) & SFR\({}_{\rm IR}\) & SFR\({}_{\rm[CII]}\) & \(L_{\rm[CII]}/L_{\rm IR}\) & \(\dot{M}_{\rm out}\) & \(M_{\rm dyn,[CII]}\) \\ & [\(10^{12}\,\mathrm{L_{\odot}}\)] & [\(10^{12}\,\mathrm{L_{\odot}}\)] & [\(\mathrm{M_{\odot}\,yr^{-1}}\)] & [\(\mathrm{M_{\odot}\,yr^{-1}}\)] & [\(10^{-3}\)] & [\(\mathrm{M_{\odot}\,yr^{-1}}\)] & [\(10^{10}\,\mathrm{M_{\odot}}\)] \\ \hline BRI0952 & \(2.46\pm 0.23\) & \(7.66\pm 2.49\) & 770 & 247 & 0.37 & \(98\pm 19\) & 0.90 \\ Comp-N & \(0.39\pm 0.04\) & \(<\)0.2 & \(<\)21 & 34 & \(<\)2.0 & - & 0.5 \\ Comp-SW & \(0.19\pm 0.03\) & \(<\)0.2 & \(<\)21 & 16 & \(<\)1.0 & - & 0.4 \\ \hline AzTEC-3 & \(9.3\pm 0.80\) & \(73.4\pm 2\) & 7340 & 809 & \(0.12\pm 0.01\) & 238 \(\pm\) 30 & 6.5 \\ LBG-3 & \(0.63\pm 0.05\) & \(0.40\pm 0.5\) & \(40\pm 46\) & 55 & \(1.6\pm 0.14\) & - & 0.95 \\ Gal-S & \(0.43\pm 0.04\) & \(<0.1\) & \(<12.0\) & 37 & \(<3\) & - & 0.83 \\ Gal-SW & \(0.12\pm 0.01\) & \(0.15\pm 0.2\) & \(15\pm 23\) & 11 & \(0.85\pm 0.13\) & - & 1.6 \\ \hline \end{tabular} \end{table} Table 7: Derived properties for sources. \(L_{\rm IR}\) refers to the integrated IR luminosity from 8-1000 \(\mu\)m for BRI0952 (using \(L_{\rm IR_{\rm out}}\)) and AzTEC-3 and to a modified blackbody approximation for the companion sources. SFR\({}_{\rm IR}\) is the SFR derived from the \(L_{\rm IR}\), SFR\({}_{\rm[CII]}\) refers to the SFR derived from the relation given in De Looze et al. (2014) for starburst galaxies. \(\dot{M}_{\rm out}\) gives the mass outflow rate, and \(M_{\rm dyn}\) is the dynamical mass. Upper limits are provided for inferred properties for Gal-S, Comp-N, and Comp-SW as we do not detect these sources in continuum emission. The luminosities reported for BRI0952 are corrected for magnification. ity of the comparison is tenuous and the methodology used is discussed at the end of this section. As noted above, there are different methods for disentangling a potential contribution of a high-velocity outflow to the line profile, and we note that the velocity profile of the outflow need not be described by a single Gaussian. As an alternative, we determined the strength of the outflow by utilizing only the flux not accounted for in a single Gaussian fit to the [C ii] emission; effectively subtracting the single component from the broad one. 
This yields a significantly lower mass outflow rate, which we treated as a lower limit for BRI0952 and AzTEC-3. Using this method, we find mass outflow rates of \(28\pm 8\)M\({}_{\odot}\)/yr for AzTEC-3 and \(12\pm 4\)M\({}_{\odot}\)/yr for BRI0952. We attribute the discrepancy between the two methods, at least in part, to be the effect of the companion sources on the [C ii] line emission profile. If the emission from the central source is blended with that of the companions, it is possible that the double-Gaussian fit (and thus the broad complex emission line profiles) are simply an artifact caused by this blending. We caution that the differential lensing of the quasar may be contributing to the high-velocity wings. To this end, for an additional investigation into the mass outflow rate of BRI0952 we used the spectra extracted from Img-N (as mentioned in Section 3.2.2 and shown in Figure 3). We use a radius of \(0.35\arcsec\) (\(\sim 2.38\)kpc), corrected using the same lensing factor as described above), FWHM\({}_{\rm broad}=543\) km s\({}^{-1}\), and \(L_{\rm[C\,{\sc ii}]_{\rm broad}}=0.22\times 10^{9}\) L\({}_{\odot}\), resulting in an outflow of \(\dot{M}_{\rm out}=74\pm 19\) M\({}_{\odot}\)/yr. The exact impact of the gravitational lensing, and in particular differential lensing, is challenging to estimate, and further modeling based on higher resolution and higher sensitivity data across multiple wavelengths would be needed. We note that systematic errors on the mass outflow rate estimates, along with the systematic errors on other quantities used for comparison (e.g., SFR), are likely to dominate over the effect of the differential lensing. An additional important consideration is the origin of this broad component, which we interpret as an outflow above; for example, we ask ourselves whether the broad component is tracing high-velocity gas outflowing from AzTEC-3 or an interaction between the SMG and the companion Gal-S. This is further discussed in section 4.6. We show the mass outflow rate estimates as a function of SFR in Figure 11, together with similar estimates for low- and high-redshift sources. With star-formation-rate estimates that are \(\sim\) or \(>1000\) M\({}_{\odot}\) yr\({}^{-1}\) for both BRI0952 and AzTEC-3, both sources are seen in a similar region to other results for high-redshift galaxies. For both sources, the mass outflow rate estimates are on the lower side of the average; however, we note that the large uncertainties in the \(\dot{M}_{\rm out}\) and SFR do not allow for additional interpretation. Outflows are generally studied through different probes, including both emission from high-velocity outflows as well as absorption line studies. In terms of high-velocity outflows, detections have been published using different lines, including CO, [C ii], and [O iii]\(\lambda 5007\) at low and high redshift (e.g., Cicone et al. 2014, 2015; Carniani et al. 2016; Brusa et al. 2018). So far, only a few robust detections from single \(z>4\) quasars using [C ii] have been published (Maiolino et al. 2012; Carilli et al. 2013). Studies using stacking analyses for larger samples of \(z\sim 6\) quasars provide conflicting results (e.g., Decarli et al. 2018; Stanley et al. 2019; Bischetti et al. 2019), ranging from non-detection to claimed detections. 
In terms of star-forming galaxies, stacking of [C ii] for \(z=4-6\) galaxies in the ALPINE survey revealed a high-velocity outflow component, with mass outflow rates that are consistent with our results (e.g., Ginolfi et al. 2020). The use of broad [C ii] emission as a means of detecting outflows was called into question by Spilker et al. (2020a) following their non-detection of broad wings associated with a sample of dusty star-forming galaxies with clear OH outflow absorption features. It is thus possible that the broad wings we detect are dominated by emission from the companion sources around AzTEC-3 and BRI0952, where the latter could also be affected by differential lensing magnification. In addition, recent studies have reexamined previous results that indicated the need for a broad component in the fitting of [C ii] spectra and found these components superfluous (e.g., Meyer et al. 2022); therefore, we caution that the true nature of the broad component cannot be confirmed as an outflow in BRI0952 and AzTEC-3 without additional outflow tracers detected in these galaxies. ### Environment The impact of the environment in which massive galaxies evolve at high redshift remains an open question. These two systems present extreme situations in which to study the effects of faint companion sources in the early universe. The AzTEC-3 system, with the presence of multiple close companions, detected either in [C ii] or in optical or continuum observations, provides an exceptional laboratory to study the effect of close companions near intense starbursts in the early Universe. There are three companion galaxies within a projected distance of \(\sim 18\) kpc from the central SMG, and an additional system of possibly merging galaxies located at a projected distance of \(\sim 95\) kpc from the central source (Riechers et al. 2014), though the latter is not covered by our observations. We detect a bridge-like structure between the companion galaxy Gal-S and AzTEC-3, suggesting the possible occurrence of gas exchange between the two galaxies extending over \(\sim 12\) kpc. This "gas bridge" between Gal-S and AzTEC-3 is very similar to that observed by Diaz-Santos et al. (2018) between the Hot DOG W2246 and a companion, although only about half as large in spatial extent. Figure 10: \(L_{\rm[CII]}/L_{\rm IR}\) ratio as a function of \(L_{\rm IR}\) for local and high-redshift sources. Local (\(z<1\)) sources are taken from Diaz-Santos et al. (2013). High-redshift sources are from De Looze et al. (2014) and Gullberg et al. (2015). Literature results for high-redshift quasars and companion sources are taken from Decarli et al. (2017) and Neeleman et al. (2019) (for these we take an average of the reported \(L_{\rm IR}\)). We note that we only consider the \(L_{\rm[CII]}/L_{\rm IR}\) as an upper limit for all companion sources. The error bar we used for BRI0952 comes from the uncertainty on the lensing factor on \(L_{\rm IR}\). The error bar on the \(L_{\rm IR}\) and hence \(L_{\rm[CII]}/L_{\rm IR}\) ratio of AzTEC-3 is taken to be the range of \(L_{\rm IR}\) values provided by Capak et al. (2011). For data points taken from other papers, we assumed \(L_{\rm FIR}\sim 0.75\times L_{\rm IR}\) (following Decarli et al. 2017), but added an additional indicative error bar to the bottom left of the plot indicating the conservative estimate \(\sim 30\)% due to this assumption (Carilli & Walter 2013). 
Further investigation of this requires higher resolution ALMA data and an improved method for isolating and subtracting the emission from the central source. As mentioned in Section 3.3, we detect a velocity gradient between LBG-3 and AzTEC-3, indicating an additional possible interaction. The detections of Lyman-\(\alpha\) between these two galaxies in Guaita et al. (2022) also indicate an interacting system. In the field of BRI0952, Comp-N and Comp-SW are detected in [C ii] with no further sources detected out to a projected distance of 62 kpc (corresponding to the radius of the primary beam). Also, no additional companions are seen at other wavelengths, most notably the archival _HST_ data. Due to the non-detection of the companions in current _HST_ and _Herschel_ imaging, an analysis of comparable level to that of AzTEC-3 is currently not feasible. We note the possibility that one or both of Comp-N and Comp-SW could be in the process of merging with BRI0952. If this is the case, it may suggest that we are observing the quasar in a post-starburst state in which recent galaxy interactions and ongoing mergers have triggered extreme star formation and AGN activity. In order to investigate this possibility, higher-resolution data of multiple emission lines combined with a robust source plane reconstruction would be needed; this is beyond the scope of this paper. For the BRI0952 companion sources, we see no clear signs of gas-bridge-like structures (as were seen for the AzTEC-3 companion galaxies). We also note that an alternative interpretation of either of the companions Comp-N and Comp-SW could be that they represent an extended substructure in the gas distribution. If that is the case, it would likely indicate the presence of merger activity, as the gravitational forces from minor or major merger interactions could cause a more complex gas distribution (e.g., Konig et al., 2014; Harada et al., 2018; Konig et al., 2018; Young et al., 2021). These two systems seem to be examples of long-sought-after, theorized, typical, over-dense environments of massive sources with numerous faint companions in the high-redshift Universe. The companion sources of both systems contribute less than 10% to the total [C ii] emission, and even less to the total IR luminosity between 8-1000 \(\mu m\). Other systems observed in recent years have also been found to have companion sources; however, most of these companions have luminosities comparable to that of the central SMG or quasar host galaxy (e.g., Clements et al., 2009; Carilli et al., 2013; Robson et al., 2014; Fogasy et al., 2017; Trakhtenbrot et al., 2017; Wardlow et al., 2018; Diaz-Santos et al., 2018; Neeleman et al., 2019; Fogasy et al., 2021; Bischetti et al., 2021). The detection of faint companion sources in these two fields, together with the results of W2246\(-\)0526 and BRI1202\(-\)0725 (Carilli et al., 2013; Diaz-Santos et al., 2018) and other such systems, are increasing the sample enabling investigation of the role of less massive companion sources on massive galaxy evolution. Theoretical predictions from semi-analytical model simulations suggest that 22% of quasars should have at least one companion galaxy with stellar masses \(>10^{8}\) M\({}_{\odot}\)(Fogasy et al., 2017). Additionally, studies show that minor mergers, especially in the high-redshift Universe, are common. Kaviraj et al. 
(2015) utilized the Horizon-AGN hydrodynamical cosmological simulation to show that by \(z\sim 1\) all massive galaxies (\(>10^{10}\) M\({}_{\odot}\)) have undergone a major or minor merger, and that minor mergers (those with a mass ratio \(>4\):1) are around 2.5\(\times\) more frequent than major mergers between \(1<z<4\). Their work also suggests that major mergers are not the dominant source of star-formation enhancement at high redshift (see Figure 5 in Kaviraj et al., 2015). This is indicative of the need for minor mergers as fuel providers for high-redshift galaxies and is especially important for extreme SMGs hosting maximum starbursts such as AzTEC-3. The lack of current detections of smaller companion sources is likely due, in part, to the long integration time required to observe them. In order to categorize these systems as possible minor mergers (be they progenitors or ongoing processes), we calculated the mass of the central source using a virial mass estimator following the procedure used by Riechers et al. (2014). We find the dynamical mass inferred from the [C ii] emission to be \(\sim 6.5\times 10^{10}\) M\({}_{\odot}\) for AzTEC-3 and \(\sim 0.90\times 10^{10}\) M\({}_{\odot}\) for BRI0952 (lensing corrected). We used the same method to compute the masses of the companions, reported in Table 7. In the AzTEC-3 system, we find that the companions have dynamical masses at least \(\sim 4\times\) lower than that of AzTEC-3. This would classify these companions as minor mergers should they interact with the central source; we already observe signs of this for AzTEC-3 and its companions in the form of the gas bridge between these objects. These companion sources are significantly less massive than those reported by Neeleman et al. (2019). For BRI0952, we find that the companions have masses \(\sim 1.5-2\times\) smaller than that of the quasar. These values for the companions are closer to the values found for the quasar companions in Neeleman et al. (2019). We note that the companion masses are likely overestimated as we do not correct for lensing along the major axis of the companions used for the virial mass estimates due to uncertainties in our lensing model, and therefore the above estimate can be seen as an upper limit. Figure 11: Mass outflow rate versus SFR for our objects and low- and high-redshift galaxies, including stacking approaches at high redshift. The green and orange rectangles represent the range of possible values for AzTEC-3 and BRI0952, respectively. The low-\(z\) comparison sample is taken from Fluetsch et al. (2019), high-redshift direct observations are taken from Maiolino et al. (2012); George et al. (2014); Feruglio et al. (2017); Brusa et al. (2018); Herrera-Camus et al. (2019); Jones et al. (2019); Spilker et al. (2020); Butler et al. (2021), and high-redshift stacking averages are taken from Gallerani et al. (2018); Bischetti et al. (2019); Ginolfi et al. (2020). For Ginolfi et al. (2020), we plot the high-SFR sample and the median-SFR sample in their stacking methodology as separate points. We utilized an average of the mass outflow rates if a range is provided for an object. The effect of the companions on massive sources remains to be seen. If these faint companions are dynamically interacting in some manner, such as providing gas to the central sources, this could supply the needed materials for the extreme star formation occurring in these systems. Additionally, as suggested by McGreer et al. 
(2014), the possibility of mergers occurring on a relatively fast timescale as a short transitional phase could drastically limit our ability to obtain high number density observations of similar systems. We further suggest that the increased resolution now possible with ALMA will allow for increased detections of SMG and quasar systems with numerous and faint surrounding objects. ## 5 Conclusions In this paper, we present observational results of [C ii] emission from BRI0952 and AzTEC-3, along with respective companion sources. Our results lend credibility to the paradigm of major and minor mergers in the early Universe as progenitors for the massive galaxies we see in studies of the local Universe. We summarize our conclusions below. 1. We detect [C ii] emission in the lensed quasar BRI0952 at \(z\sim 4.433\) and the SMG AzTEC-3 at \(z\sim 5.3\). We report serendipitous detections of [C ii] emission from two previously unreported companion sources around BRI0952 and new detections of [C ii] emission from three companions surrounding AzTEC-3. These companions are each located within 3\({}^{\prime\prime}\) (18 kpc) of the central source. 2. We present a full-polarization analysis of the [C ii] emission lines for both main targets. No polarization was detected, and upper limits are provided. The results suggest that strong ordered magnetic fields are unlikely to exists at the kiloparsec scale in the two studied sources, unless ground state alignment is a less effective mechanism than expected. 3. We constructed a new lensing model for BRI0952 using Vishlens (Spilker et al. 2016), yielding a lensing magnification factor of \(\mu\sim 4\) for the quasar and insignificant lensing magnification of the two companion sources. Our model suggests that differential lensing is occurring across the surface of BRI0952 in both [C ii] and continuum emission. This difference is likely insubstantial (or within errors) for our purposes, but it is important to keep it in mind when considering the physical properties of the lensed images. 4. The inferred SFR from the IR luminosity of both the central SMG AzTEC-3 and the quasar BRI0952 suggest that both sources harbor starbursts of \(\sim\) or \(>1000\) solar masses per year. 5. The central SMG AzTEC-3 and a companion galaxy (Gal-S) in the field show evidence of an interlinking gas bridge. Although we do not find a strong velocity gradient across the central source, we suggest that this bridge may be indicative of an ongoing gas-exchange process or merger. 6. The [C ii] line profiles for both central sources exhibit complex broad features indicating the possible presence of outflows. The mass outflow rates of both BRI0952 and AzTEC-3 are similar to results for high-redshift galaxies; any discrepancies we find are likely symptomatic of large uncertainties on both the mass outflow rate and the SFR. 7. The outflow features, combined with the observed gas-bridge structure between AzTEC-3 and its southern companion (and possibly others) and velocity-gradients between BRI0952 and companions, suggest both are interacting systems. The extent of this interaction is unknown, but if both systems are either entering or exiting a merger phase, this could explain the extreme star formation events occurring in both. Growing evidence in recent years suggests that overdense regions leading to major and minor mergers are the progenitors of the massive galaxies we see in the local Universe. 
The improved resolution possible with ALMA will allow for the increased detection of companion galaxies in high-redshift environments, allowing us to explore the credibility of mergers as a means of creating the extreme SFRs observed in these objects. ###### Acknowledgements. We thank the anonymous referee for their helpful comments and insights. Kiana Kade acknowledges support from the Nordic ALMA Regional Centre (ARC) node based at Onsala Space Observatory. The Nordic ARC node is funded through Swedish Research Council grant No 2017-00648. Kirsten Knudsen acknowledges support from the Swedish Research Council and the Knut and Alice Wallenberg Foundation. BG acknowledges support from the Carlsberg Foundation Research Grant C20-6644 "Physical Properties of the Interstellar Medium in Luminous Infrared Galaxies at High Redshift: PRISM-LIGHT". Sabine Konig gratefully acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 789410). This paper makes use of the following ALMA data: 2018.1.01536.S. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. This paper makes use of the following _HST_ projects: 13641 (PI Capak), 9822 (PI COSMOS24-21), 13384 (PI Riechers), and 8268 (PI Impey). Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). This research made use of APLpy, an open-source plotting package for Python (Robitaille & Bressert 2012).
2310.09637
Asymptotic symmetries of gravity in the gauge PDE approach
We propose a framework to study local gauge theories on manifolds with boundaries and asymptotic symmetries, which is based on representing them as so-called gauge PDEs. These objects extend the conventional BV-AKSZ sigma-models to the case of not necessarily topological and diffeomorphism invariant systems and are known to behave well with respect to restrictions to submanifolds and boundaries. We introduce the notion of gauge PDE with boundaries, which takes into account generic boundary conditions, and apply the framework to asymptotically flat gravity. In so doing we start with a suitable representation of gravity as a gauge PDE with boundaries which implements the Penrose's description of asymptotically simple spacetimes. We then derive the minimal model of the gauge PDE induced on the boundary and observe that it provides the Cartan (frame-like) description of a (curved) conformal Carollian structure on the boundary. Furthermore, imposing a suitable version of the familiar boundary conditions in the induced boundary gauge PDE immediately leads to the conventional BMS algebra of asymptotic symmetries. Finally, we briefly sketch the construction in the case of asymptotically (A)dS gravity.
Maxim Grigoriev, Mikhail Markov
2023-10-14T18:19:05Z
http://arxiv.org/abs/2310.09637v2
# Asymptotic symmetries of gravity in the gauge PDE approach ###### Abstract We propose a framework to study local gauge theories on manifolds with boundaries and asymptotic symmetries, which is based on representing them as so-called gauge PDEs. These objects extend the conventional BV-AKSZ sigma-models to the case of not necessarily topological and diffeomorphism invariant systems and are known to behave well with respect to restrictions to submanifolds and boundaries. We introduce the notion of gauge PDE with boundaries, which takes into account generic boundary conditions, and apply the framework to asymptotically flat gravity. In so doing we start with a suitable representation of gravity as a gauge PDE with boundaries which implements the Penrose's description of asymptotically simple spacetimes. We then derive the minimal model of the gauge PDE induced on the boundary and observe that it provides the Cartan (frame-like) description of a (curved) conformal Carollian structure on the boundary. Furthermore, imposing a suitable version of the familiar boundary conditions in the induced boundary gauge PDE immediately leads to the conventional BMS algebra of asymptotic symmetries. Finally, we briefly sketch the construction in the case of asymptotically (A)dS gravity. ###### Contents * 1 Introduction * 2 Gauge PDEs with boundaries and their symmetries * 2.1 Gauge PDEs * 2.2 Gauge PDEs with boundaries * 2.3 (Asymptotic) symmetries in gPDE terms * 2.4 Asymptotic symmetries in the presymplectic gPDE framework * 3 GR as a gauge PDE * 3.1 Off-shell GR as a gauge PDE * 3.2 Conformal-like on-shell GR * 3.3 Pre-minimal model for conformal-like on-shell GR * 4 Boundary systems and asymptotic symmetries * 4.1 Asymptotically simple GR as a gPDE with boundaries * 4.2 Minimal model for the boundary gPDE of asymptotically simple GR * 5 * 4.3 Boundary conditions and BMS symmetries * 4.4 Field-theoretical interpretation of the minimal model * 4.5 Asymptotically (A)dS spaces ## 1 Introduction Asymptotic symmetries play a prominent role in modern QFT and gravity [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. They originate from gauge transformations that preserve the boundary conditions imposed on fields but, at the same time, are not to be considered as genuine gauge transformations and hence define new physical symmetries. In particular, asymptotic symmetries critically depend on the choice of the boundary conditions for fields and gauge parameters. Historically, the first and very influential example of asymptotic symmetries is the Bondi-Metzner-Sachs (BMS) symmetries of asymptotically flat gravity [1, 2]. In contrast to naive expectations, the simplest natural choice of boundary conditions employed in [1, 2] results in the enhancement of the Poincare group by the so-called supertranslations, giving the infinite-dimensional symmetry group which is now known as the BMS group. The latter group, or its generalizations associated with different choices of boundary conditions, is now considered to be a proper symmetry group of the gravitational S-matrix and has been related [10, 12] to the celebrated soft graviton theorem and the gravitational memory effect. The proper geometrical setup for studying asymptotic symmetries of gravity was proposed by Roger Penrose, who introduced the notion of asymptotically simple spacetime [13]. 
More specifically, this is a spacetime \((\widetilde{M},\widetilde{g})\) that is diffeomorphic to the interior of the spacetime \((M,g)\) with boundary such that in the interior \(g=\Omega^{2}\widetilde{g}\) for some smooth function \(\Omega\) satisfying \(\Omega>0\) and \(\Omega|_{\mathcal{J}}=0,d\Omega|_{\mathcal{J}}\neq 0\), where \(\mathcal{J}=\partial M\) is the boundary. In other words, the idea is to realize the boundary at infinity as the usual boundary of the auxiliary spacetime. More details can be found in e.g. [14, 15, 16]. Recent decades have shown an increasing interest in asymptotic symmetries, not only in the context of gravity but also in general gauge theories, including Yang-Mills, topological systems, and higher-spin gauge theories [17, 18, 19, 20, 21, 22, 23, 24]. From this perspective, asymptotic symmetries are to be considered as a general feature of gauge theories on manifolds with (asymptotic) boundaries. This calls for a proper gauge-theoretical understanding of asymptotic symmetries. Various approaches are available in the literature. In particular, the first principle understanding of asymptotic symmetries is provided within the Hamiltonian approach [25, 4], see also [17], at the price of manifest covariance. A covariant generalization can be achieved with one or another version of the covariant phase space approach [26, 27, 28], see also [29] and references therein. A powerful and systematic framework for (quantum) gauge theories is provided by the Batalin-Vilkovisky (BV) formalism [30, 31] or, more precisely, its modern enhancements, such as the jet-bundle BV approach to local gauge theories, see e.g. [32, 26]. Of special attention in the present work is the so called BV-AKSZ framework [33], initially proposed in the context of topological models. An interesting feature observed in [34, 35, 36, 37, 38] (see also [39, 40, 40]) is that a BV-AKSZ system naturally induces a shifted AKSZ system on any space-time submanifold. For instance, an AKSZ version of the Hamiltonian BFV formulation is induced on a space-like submanifold of spacetime. By combining this observation with the construction of [39], see also [40, 38], which allows one to reformulate a general local gauge system as an AKSZ-like model, one arrives at the framework to analyze boundary values and symmetries of generic local gauge systems. This approach has been successfully employed in [41, 42, 43] in the study of boundary values of generic gauge fields on AdS space, see also [44] for a related approach, and in the reconstruction of bulk theories from the boundary data [45, 46]. Note that it does not employ the symplectic structure of BV formalism and is applicable to non-Lagrangian systems or systems whose Lagrangian is not specified. Let us also mention a somewhat related approach of [37] to Lagrangian gauge systems on manifolds with boundaries, see also [47, 48, 49]. In this work we develop an approach to gauge theories on manifolds with boundaries and their asymptotic symmetries, which is based on representing a given local gauge theory as a so-called _gauge PDE_. Gauge PDE (gPDE) is a generalization of the non-Lagrangian BV-AKSZ formulation to the case of general gauge theories. Although the term gauge PDE and its geometrical definition was introduced only in [50], the framework was originally put forward already in [39], see also [51], under the name _parent formulation_. 
Just like AKSZ systems, gPDEs behave well with respect to the restriction to space-time submanifolds and hence provide a natural framework to study gauge theories on manifolds with boundaries. More precisely, a gPDE is a bundle over spacetime and its pullback to a submanifold is again a gPDE, see e.g. [52] and references therein. Gauge PDEs can be also considered as a BV-BRST extension and generalization of the so-called unfolded formalism developed in the context of higher spin theories [53, 54]. We propose the notion of gPDEs with boundaries, which takes into account boundary conditions on fields and gauge parameters. More precisely, the boundary conditions are described by a sub-gPDE of the induced boundary gPDE which is, by definition, the initial gPDE pulled back to the boundary. In these terms asymptotic symmetries can be defined in a rather general and purely geometrical way, giving a systematic description of such systems and their symmetries in terms of differential graded geometry. The approach is applied to asymptotically flat gravity and is shown to reproduce celebrated BMS symmetries once a gauge PDE version of the well-known boundary conditions is taken. A crucial point of the construction is the gauge theoretical implementation of the Penrose asymptotically simple spacetime. This is achieved by introducing a Weyl compensator field \(\Omega\). In so doing the metric sector of the system can be considered as that describing conformal geometry, which allows to employ the equivalent reduction [55] (see also [56]) known in the context of conformal gravity. This later step leads to a remarkably simple boundary system which also resembles the conformal-geometry approach [57, 58, 59] to BMS symmetries and more general boundary calculus [60, 61]. The paper is organized as follows: in Section **2**, we briefly recall the gauge PDE approach to local gauge theories and propose its extension to theories on manifolds with boundaries and generally nontrivial boundary conditions. We then define asymptotic symmetries in this setup. In Section **3**, we present a reformulation of gPDE for general relativity in a form convenient for studying the asymptotic structure of this theory. The form is inspired by the Penrose's notion of an asymptotically simple spacetime. In Section **4** we derive the induced boundary system and analyze boundary conditions and asymptotic symmetries in the asymptotically flat case. This involves derivation of a concise minimal model of the induced boundary system, which makes manifest the Carrollian geometry structure. Finally, we sketch the construction in the the case of nonzero cosmological constant and present the respective minimal model. ## 2 Gauge PDEs with boundaries and their symmetries ### Gauge PDEs In the approach we employ in this work, local gauge theories, considered at the level of equations of motion, are encoded in the geometrical objects called gauge PDEs (or gPDE for short). A gPDE can be seen as a generalization of the jet-bundle non-Lagrangian BV formalism, i.e. a jet-bundle of a graded fiber bundle, equipped with the BV-BRST differential, which underlies a conventional BV formalism for local gauge theories, see e.g. [32]. The non-Lagrangian version was suggested in [51], see also [62, 63, 64]. We first need to briefly recall the necessary prerequisites. More detailed exposition can be found in [50]. 
**Definition 2.1**.: _A \(Q\)-manifold (also called dg-manifold) is a \(\mathbb{Z}\)-graded supermanifold equipped with a homological vector field \(Q\), i.e. a vector field of degree 1 satisfying \(Q^{2}=\frac{1}{2}[Q,Q]=0\), \(\mathrm{gh}(Q)=1\), \(|Q|=1\), where \(\mathrm{gh}(\cdot)\) denotes \(\mathbb{Z}\)-degree (often called ghost degree), and \(|\cdot|\) denotes Grassmann parity._ In this work we only deal with bosonic systems and hence one can simply assume \(|f|=\mathrm{gh}(f)=\,\mathrm{mod}\,2\) for any homogeneous functions, form, vector field, etc. Of course, the framework extends to systems with fermions in a standard way. The standard simplest example of a \(Q\) manifold is a shifted tangent bundle \(T[1]X\) over a smooth manifold \(X\). Its algebra of functions is just the algebra of differential forms on \(X\). Under this identification the de Rham differential corresponds to a homological vector field \(\mathrm{d}_{\mathrm{X}}\) on \(T[1]X\). If \(x^{\mu}\) are local coordinates on \(X\) and \(\theta^{\mu}\) the associated coordinates on the fibers, \(\mathrm{d}_{\mathrm{X}}\equiv\theta^{\mu}\frac{\partial}{\partial x^{\mu}}\). Let us also recall the definition of a \(Q\)-bundle, i.e. a fiber bundle in the category of \(Q\)-manifolds: **Definition 2.2**.: _[_65_]_ _1. \(Q\)-bundle \(\pi:(M,Q)\to(N,q)\) is a locally trivial bundle of graded manifolds \(M\) and \(N\) such that \(\pi^{*}\circ q=Q\circ\pi^{*}\)._ _2. A section \(\sigma:N\to M\) is called a \(Q\)-section if \(q\circ\sigma^{*}=\sigma^{*}\circ Q\)._ _3. A \(Q\)-bundle \(\pi:(M,Q)\to(N,q)\) is called locally trivial (as a \(Q\)-bundle) if it's locally isomorphic to a direct product of the base \((N,q)\) and the fiber \((F,q^{\prime})\) in such a way that \(Q=q+q^{\prime}\), i.e. \(Q\) is locally the direct product \(Q\)-structure._ There is a natural notion of equivalence for \(Q\) manifolds, which, roughly speaking, corresponds to elimination of contractible pairs. From gauge-theoretical viewpoint, such contractible coordinates correspond to auxiliary fields, pure gauge variables and their associated ghosts/antifields. More precisely: **Definition 2.3**.: _1. A contractible \(Q\)-manifold is a \(Q\)-manifold of the form \((T[1]W,d_{W})\), where \(W\) is a graded vector space considered as a graded manifold, and \(d_{W}\) is the de Rham differential on \(T[1]W\)._ _2. A \(Q\)-manifold \((N,q)\) is called an equivalent reduction of \((M,Q)\) if \((M,Q)\) is a locally trivial \(Q\)-bundle over \((N,q)\) admitting a global \(Q\)-section, and the fibers of this bundle are contractible \(Q\)-manifolds._ Equivalent reduction generates the notion of equivalence. In particular, cohomology of natural complexes, e.g. \(Q\)-cohomology in differential forms on \(E\), multivector fields, etc. on equivalent \(Q\)-manifolds are isomorphic. Locally, the statement that \((N,q)\) is an equivalent reduction of \((M,Q)\) implies that, seen as a \(Q\)-manifold, \((M,Q)\) is a direct product of \((N,q)\) and a contractible \(Q\)-manifold. A useful way [51] to identify an equivalent reduction in practice is to find independent functions \(w^{a}\) such that \(Qw^{a}\) are independent functions as well. It then follows that at least locally a submanifold defined by \(Qw^{a}=0\) and \(w^{a}=0\) is an equivalent reduction of the initial \(Q\)-manifold. It follows one can find functions \(\phi^{i}\) such that \(w^{a},v^{a}=Qw^{a},\phi^{i}\) form a local coordinate system and \(Q\phi^{i}=Q^{i}(\phi)\). 
In this case, \(w^{a}\) and \(v^{a}\) are standard contractible pairs known in the context of local BRST cohomology, see e.g. [66] and references therein. The above notions of equivalent reduction and of equivalence extend to \(Q\)-bundles over the same base: **Definition 2.4**.: _Let \((M^{\prime},Q^{\prime})\) and \((M,Q)\) be \(Q\)-bundles over the same base \((N,q)\). \((M,Q)\) is called an equivalent reduction of \((M^{\prime},Q^{\prime})\) if \((M^{\prime},Q^{\prime})\) is a locally trivial \(Q\)-bundle over \((M,Q)\) such that the projection and the local trivialization maps are compatible with projections to \((N,q)\) (i.e. it is a bundle in the category of bundles over \((N,q)\)), \((M^{\prime},Q^{\prime})\) admits a global \(Q\)-section, and, moreover, the fiber is a contractible \(Q\)-manifold._ This generates an equivalence relation for \(Q\)-bundles. Again, a practical way to identify an equivalent reduction is to find functions \(w^{a}\) such that \(w^{a},Qw^{a}\) are independent functions that remain independent when restricted to a fiber (i.e. they can be taken as a part of a fiber coordinate system). It follows that at least locally the subbundle of \((M^{\prime},Q^{\prime})\) singled out by \(w^{a}=0\) and \(Qw^{a}=0\) is an equivalent reduction. 1 Footnote 1: Strictly speaking, in the infinite-dimensional case one should also require the existence of complementary fiber coordinates \(\phi^{i}\) such that \(Q\phi^{i}=Q^{i}(\phi)\). See [39] for more details. Finally, we are ready to formulate the definition of Gauge PDEs, which we abbreviate gPDE in what follows. **Definition 2.5**.: _1. Gauge PDE \((E,Q,T[1]X)\) is a \(Q\)-bundle \(\pi:(E,Q)\to(T[1]X,\mathrm{d}_{\mathrm{X}})\), where \(X\) is a real manifold (independent variables). In addition it is assumed that \((E,Q,T[1]X)\) is locally equivalent to a nonnegatively graded \(Q\)-bundle. Moreover, it should be equivalent to a jet-bundle BV-formulation seen as a \(Q\)-bundle over \(T[1]X\) with \(Q=\mathrm{d}_{\mathrm{h}}+s\), where \(s\) is the BV-BRST differential and \(\mathrm{d}_{\mathrm{h}}\) the horizontal differential._ _2. Two gauge PDEs over \(T[1]X\) are considered equivalent if they are equivalent as \(Q\)-bundles._ Gauge PDEs encode local gauge theories. In particular, field configurations are identified with their sections while equations of motion arise as differential conditions on sections. More precisely, a section \(\sigma:T[1]X\to E\) is a solution to \((E,Q,T[1]X)\) if \[\mathrm{d}_{\mathrm{X}}\circ\sigma^{*}=\sigma^{*}\circ Q \tag{2.1}\] Infinitesimal gauge transformations of the section \(\sigma\) are defined as \[\delta\sigma^{*}=\mathrm{d}_{\mathrm{X}}\circ\chi_{\sigma}^{*}+\chi_{\sigma}^ {*}\circ Q, \tag{2.2}\] where \(\chi_{\sigma}^{*}:\mathcal{C}^{\infty}(E)\to\mathcal{C}^{\infty}(T[1]X)\) is of degree \(-1\), satisfies \[\chi_{\sigma}^{*}(fg)=\chi_{\sigma}^{*}(f)\sigma^{*}(g)+(-1)^{|f|}\sigma^{*}(f )\chi_{\sigma}^{*}(g)\,,\qquad\forall f,g\in\mathcal{C}^{\infty}(E)\,, \tag{2.3}\] and \(\chi_{\sigma}^{*}(\pi^{*}(h))=0\) for all \(h\in\mathcal{C}^{\infty}(T[1]X)\). The map \(\chi_{\sigma}^{*}\) is interpreted as a gauge parameter. It is easy to check that the above gauge transformation is an infinitesimal symmetry of the equations of motion (2.1). In a similar way one defines gauge-for-gauge symmetries. It is often convenient to parameterize \(\chi_{\sigma}^{*}\) in terms of a vertical vector field \(Y\) on \(E\) of degree \(-1\): \(\chi_{\sigma}^{*}=\sigma^{*}\circ Y\). 
It is easy to check that for this choice \(\chi_{\sigma}^{*}\) (2.3) is indeed satisfied. Using this representation the gauge transformation of \(\sigma\) can be written as \(\delta\sigma^{*}=\sigma^{*}\circ[Q,Y]\). Note that a vector field \(V\equiv[Q,Y]\) is an infinitesimal symmetry of \((M,Q,T[1]X)\) because it preserves \(Q\), the degree, and the bundle structure. In the case of diffeomorphism-invariant systems, for instance gravity, their gPDE description usually requires the additional condition on the allowed class of sections. More precisely, the fiber coordinates typically involve a subset of "diffeomorphism ghosts" \(\xi^{a}\), \(a=0,\ldots\dim X-1\), \(\mathrm{gh}(\xi^{a})=1\) and sections are restricted by the condition that \(e_{\mu}^{a}(x)\) defined via \(\sigma^{*}(\xi^{a})=e_{\mu}^{a}(x)\theta^{\mu}\), are invertible. This is of course a gPDE counterpart of the familiar condition in the frame-like formulation of gravity. All the systems considered in this work are of this type and the nondegeneracy condition on sections is assumed in what follows. To complete the discussion of gPDEs let us note that gPDE automatically determine a nonlagrangian jet-bundle formulation of the underlying gauge system. This is induced on the bundle of super-jets of \(E\) and its BV-BRST differential is the vertical part of the prolongation of \(Q\) to the super-jet bundle. More details can be found in [50, 52], see also [39] for the original construction and local proof. ### Gauge PDEs with boundaries Let \(X\) be a space-time manifold but now we assume that \(X\) has a nontrivial boundary \(\Sigma=\partial X\) and let \(i:\Sigma\to X\) denotes the embedding of the boundary. Suppose we are given with a gPDE \((E,Q,T[1]X)\) on \(X\). This induces a new gPDE \(i^{*}E\) on \(\Sigma\) given by a pullback of \(E\) to \(T[1]\Sigma\subset T[1]X\) (here by a slight abuse of notation \(i\) also denotes an induced pushforward \(T[1]\Sigma\to T[1]X\)). It is easy to check that this is again a gPDE (e.g. by regarding it as a \(Q\)-submanifold of \(E\)), which we call induced boundary gPDE. \(i^{*}E\) can be considered as a gPDE describing a gauge theory of unconstrained boundary values of the fields encoded in \((E,Q,T[1]X)\), see [52] for more details and [41, 42] for the earlier and less general construction and applications in the context of higher spin holography. Now we are interested in the gPDE description of systems with possibly nontrivial boundary conditions. We have the following: **Definition 2.6**.: _By a gauge PDE with boundares we mean the following data: \((E,Q,T[1]X,E_{\Sigma},T[1]\Sigma)\), where gPDE \((E,Q,T[1]X)\) is a gPDE on \(X\) and \((E_{\Sigma},Q_{\Sigma},T[1]\Sigma)\) is a gPDE on the boundary \(\Sigma=\partial X\), which is a sub-gPDE of \(i^{*}E\). In particular, \(Q_{\Sigma}\) is a restriction of \(Q\) to \(E_{\Sigma}\subset i^{*}E\subset E\)._ Gauge PDE \((E_{\Sigma},Q_{\Sigma},T[1]\Sigma)\), which is a part of the above definition, can be regarded as a _gPDE of boundary conditions_. For instance, if \(E_{\Sigma}=i^{*}E\) this means that no boundary conditions are imposed. General situations are described by nontrivial \(Q\)-subbundles of \(i^{*}E\). It is important to stress that in general, \(E_{\Sigma}\) restricts not only the boundary values of fields but also the boundary values of gauge parameters and parameters of gauge-for-gauge symmetries. 
**Remark 2.1**.: _Even if \(E_{\Sigma}\) doesn't coincide with \(i^{*}E\) it doesn't necessarily mean that we are dealing with nontrivial boundary conditions. This happens if \(E_{\Sigma}\) is an equivalent reduction of \(i^{*}E\) in which case \(E_{\Sigma}\) implements elimination of auxiliary fields and pure gauge variables._ Although the above definition is quite general, in the context of asymptotic symmetries we need to allow \((E,Q,T[1]X)\) to be slightly locally nontrivial. Namely, \(E\) restricted to the interior of \(X\) and \(i^{*}E\) are still required to be locally trivial while the typical fiber of \(i^{*}E\) can differ from the fiber over the interior by a subset of measure zero. For our present purposes it is enough to allow the fiber of \(i^{*}E\) to be a manifold with boundary whose interior coincides with the typical fiber over the interior. In this case the total space of \(E\) restricted to the interior of \(X\) is itself the interior of a manifold with corners. Restricting it to \(\partial X\) gives a total space of \(i^{*}E\) whose fiber is a manifold with boundary. As we are going to see, at more practical level we actually work with \(i^{*}E\) and its sub-gPDE \(E_{\Sigma}\) which are locally trivial. Note that a gPDE with boundary could be defined in terms of a single locally-nontrivial bundle \(E^{\prime}\to T[1]X\) such that its fibers over the boundary shrinks to those of \(E_{\Sigma}\) but we prefer to keep boundary conditions explicit. The field theoretical interpretation of the above definition becomes clear with the help of: **Definition 2.7**.: 1. _A solution of_ \((E,Q,T[1]X,E_{\Sigma},T[1]\Sigma)\) _is a section_ \(\sigma:T[1]X\to E\) _satisfying_ \(\sigma^{*}\circ Q=\mathrm{d}_{\mathrm{X}}\circ\sigma^{*}\) _and such that its restriction to_ \(T[1]\Sigma\) _belongs to_ \(E_{\Sigma}\)_, i.e. is a solution to_ \((E_{\Sigma},Q_{\Sigma},T[1]\Sigma)\)_._ 2. _A gauge parameter is a vertical vector field_ \(Y\) _on_ \(E\) _such that_ \(\mathrm{gh}(Y)=-1\) _and its restriction to_ \(i^{*}E\) _is tangent to_ \(E_{\Sigma}\subset i^{*}E\)_. In other words, gauge parameters should satisfy the boundary conditions encoded in_ \(E_{\Sigma}\subset i^{*}E\)_._ 3. _A gauge transformation of section_ \(\sigma\) _is defined as_ \[\delta_{Y}\sigma^{*}=\mathrm{d}_{\mathrm{X}}\circ\sigma^{*}\circ Y+\sigma^{*} \circ Y\circ Q\] (2.4) The following comments are in order: it is easy to check that if \(\sigma\) is a solution then \(\sigma+\delta_{Y}\sigma\) with \(\delta_{Y}\sigma\) determined by (2.4), is again a solution (to first order in \(Y\)). Moreover, in this case the gauge transformation (2.4) can be rewritten as: \[\delta_{Y}\sigma^{*}=\sigma^{*}\circ[Q,Y]\,. \tag{2.5}\] Restricting (2.4) to \(T[1]\Sigma\) one finds a standard gauge transformation for \((E_{\Sigma},Q_{\Sigma},T[1]\Sigma)\) whose parameter is \(Y\) restricted to \(E_{\Sigma}\subset i^{*}E\subset E\) (recall that \(Y\) is vertical and hence is tangent to \(i^{*}E\) and the above definition requires \(Y\) to be tangent to \(E_{\Sigma}\)). In a similar way one can define gauge-for-gauge transformations. In particular, parameters of the gauge-for-gauge transformations of stage 1 are vertical vector fields of degree \(-2\) and tangent to \(E_{\Sigma}\). All the above definitions can be generalised to the case where \(\Sigma\) is a generic submanifold. This version can be relevant in describing theories with defects. 
Generalization to the case of manifolds with corners or higher codimension strata is also possible. ### (Asymptotic) symmetries in gPDE terms Let us now turn to the discussion of symmetries. Given a gauge PDE, an _infinitesimal symmetry_ is by definition a vector field \(W\) which preserves all the structures, i.e. \([Q,W]=0,W\) is vertical, and \(\mathrm{gh}(W)=0\) (though symmetries of nonvanishing ghost number are also of interest). The infinitesimal transformation of a solution \(\sigma\) under a symmetry transformation determined by \(W\) is defined to be: \[\delta_{W}\sigma^{*}=\sigma^{*}\circ W\,. \tag{2.6}\] It is easy to check that it defines an infinitesimal symmetry transformation that takes solutions to solutions. Gauge symmetries are the ones where \(W=[Q,Y]\), where \(Y\) is a gauge parameter. In particular, in this case the above transformation coincides with (2.5) so that it is natural to regard symmetries of the form \(W=[Q,Y]\) as trivial. As we only consider infinitesimal symmetries, in what follows we systematically omit "infinitesimal". _Inequivalent symmetries_ (also known as _global_ or _physical_) can be defined as the respective quotient of all symmetries modulo the ideal of the gauge ones and hence are given by \(Q\)-cohomology in vertical vector fields. One can check that in the case of usual jet-bundle BV formulation of a local gauge theory, this definition reproduces the standard one, at least locally. Details can be found in Appendix A. As far as gauge PDEs with boundaries are concerned a natural definition of symmetry is that it is a vertical vector field \(W\) such that \([Q,W]=0\) and \(W\) restricted to \(i^{*}E\) is tangent to \(E_{\Sigma}\subset i^{*}E\). At the same time, genuine gauge symmetries in this case are those symmetries of the form \(W=[Q,Y]\) whose parameters \(Y\) are tangent to \(E_{\Sigma}\). All the above discussion applies to symmetries of arbitrary definite ghost degree. Let now \((E,Q,T[1]X,E_{\Sigma},T[1]\Sigma)\) be a gPDE with boundaries. A common lore is that asymptotic symmetries are gauge symmetries of the system (defined as if there were no boundary conditions) that preserve boundary conditions while those whose parameters satisfy boundary conditions for gauge parameters are genuine gauge symmetries and should be considered trivial. In the case of Lagrangian systems the extra requirements are to be imposed. In the gPDE approach this can be formalised as follows: **Definition 2.8**.: _Asymptotic symmetries of \((E,Q,T[1]X,E_{\Sigma},T[1]\Sigma)\) are symmetries of \((E_{\Sigma},Q_{\Sigma},T[1]\Sigma)\), which are restrictions to \(E_{\Sigma}\) of those gauge symmetries of \(i^{*}E\) that preserve \(E_{\Sigma}\subset i^{*}E\). Gauge symmetries of \((E_{\Sigma},Q_{\Sigma},T[1]\Sigma)\), i.e. vector fields on \(E_{\Sigma}\) of the form \([Q,Y]|_{E_{\Sigma}}\) with \(Y\) tangent to \(E_{\Sigma}\), are considered trivial asymptotic symmetries._ More explicitly, asymptotic symmetries are vertical vector fields that have the form \([Q,Y]\) with \(Y\) vertical and are tangent to \(E_{\Sigma}\). These are considered modulo vector fields vanishing on \(E_{\Sigma}\). Vector fields \([Q,Y]\) with \(Y\) tangent to \(E_{\Sigma}\) are trivial (genuine gauge symmetries). Asymptotic symmetries form a subalgebra of all symmetries of \((E_{\Sigma},Q_{\Sigma},T[1]\Sigma)\). An alternative would be to define asymptotic symmetries to be all symmetries of \((E_{\Sigma},Q_{\Sigma},T[1]\Sigma)\) modulo its own gauge symmetries. 
Another alternative is to require asymptotic symmetries to arise as restrictions to \(E_{\Sigma}\) of symmetries of \(E\), but this does not seem to make any difference if one restricts to the local analysis, as we do in this work. Later on we also need a slightly more general framework applicable to the case where \(E_{\Sigma}\) is not a regular submanifold of \(i^{*}E\) while its prolongation to the bundle of jets of its sections is. This occurs in applications because in the minimal formulations the frame fields arise as components of \(\sigma^{*}(\xi^{a})\), where \(\sigma:T[1]\Sigma\to i^{*}E\), and the condition that the frame field is invertible cannot be implemented in terms of the fiber geometry and is imposed as a condition on sections. As we are going to see, such formulations arise in practice if one is after a concise formulation of the boundary gPDE. To cover this situation one allows \(Y\) to be a generalized vector field, i.e. its coefficients are allowed to depend on jets of sections. If \(\mathcal{I}_{\Sigma}\) is the ideal of \(E_{\Sigma}\) then the condition that \([Q,Y]\) is tangent to \(E_{\Sigma}\) is replaced by \[\mathrm{d}_{\mathrm{X}}\sigma^{*}(Yf)+\sigma^{*}(YQf)=0\quad\forall f\in \mathcal{I}_{\Sigma}\,. \tag{2.7}\] This should hold for all sections of the gPDE of boundary conditions, i.e. sections of \(i^{*}E\) such that \(\sigma^{*}(\mathcal{I}_{\Sigma})=0\). This is precisely the condition that the associated symmetry transformation sends solutions satisfying the boundary conditions encoded in \(E_{\Sigma}\) to solutions of the same type. Note that if one doesn't want to employ such a generalization it is always possible to work in terms of the associated parent gPDE whose underlying bundle is the super-jet bundle of \(i^{*}E\) and hence the prolongation of \(E_{\Sigma}\) is a smooth submanifold. ### Asymptotic symmetries in the presymplectic gPDE framework In the case of Lagrangian systems, asymptotic symmetries can be defined as gauge transformations whose associated charges become nontrivial due to boundary conditions. Although in this work we restrict ourselves to the analysis at the level of equations of motion, let us briefly comment on how such an approach can be implemented in the gPDE framework. A Lagrangian system can be described by a gPDE \((E,Q,T[1]X)\) equipped with a compatible presymplectic structure \(\omega\) of degree \(n-1\), \(n=\dim X\), such that: \[d\omega=0\,,\qquad L_{Q}\omega\in\mathcal{I}\,,\qquad i_{Q}L_{Q}\omega\in \mathcal{I}\,, \tag{2.8}\] where \(\mathcal{I}\) denotes the ideal of forms on \(E\) generated by the forms of positive degree pulled back from the base, i.e. by \(dx^{\mu},d\theta^{\mu}\) in standard coordinates. We also fix a presymplectic potential \(\chi\) such that \(\omega=d\chi\). Note that for \(n>1\) it exists globally as the respective de Rham cohomology is empty. More details on presymplectic gPDEs can be found in [52, 56, 67], see also [68, 69, 70] for earlier relevant works. We say that the boundary gPDE \(E_{\Sigma}\) and the symplectic potential \(\chi\) are compatible if the pullback of \(\chi\) to \(E_{\Sigma}\) vanishes. It turns out that this is a sufficient condition for the respective action to be differentiable. Indeed, the above data defines a presymplectic AKSZ-like action (also known as intrinsic action): \[S[\sigma]=\int_{T[1]X}\sigma^{*}(\chi)(\mathrm{d}_{\mathrm{X}})+\sigma^{*}(H)\,, \tag{2.9}\] where the "covariant Hamiltonian/BRST charge" \(H\) is defined through \(i_{Q}\omega+dH\in\mathcal{I}\).
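To make the structure of (2.9) more transparent, assume for a moment that one can choose fiber coordinates \(\psi^{A}\) (a notation used only in this paragraph) and write \(\chi=\chi_{A}(\psi)d\psi^{A}\); then (2.9) takes the form
\[S[\sigma]=\int_{T[1]X}\Big(\chi_{A}(\sigma)\,\mathrm{d}_{\mathrm{X}}\sigma^{A}+H(\sigma)\Big)\,,\qquad\sigma^{A}(x,\theta)\equiv\sigma^{*}(\psi^{A})\,,\]
where the integral over \(T[1]X\) includes the Berezin integration over \(\theta^{\mu}\), which singles out the top-degree component of the integrand. In this form one recognizes the familiar structure of presymplectic AKSZ-type actions, with \(\chi_{A}\) playing the role of the presymplectic potential and \(H\) that of the covariant Hamiltonian.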
Note that picking an equivalent \(\chi\to\chi+d\alpha\) doesn't affect the equations of motion but adds a boundary term \(\int\mathrm{d}_{\mathrm{X}}\sigma^{*}(\alpha)\) to the above action. Consider a variation of \(S[\sigma]\) under \(\sigma\to\sigma+\delta\sigma\). Representing \(\delta\sigma^{*}\) as \(\sigma^{*}\circ V\), where \(V\) is a vertical vector field on \(E\), one finds that the boundary term has the form: \[\delta S=\int\text{``EOM''}\,\,\delta\sigma+\int\sigma^{*}(i_{V}\chi) \tag{2.10}\] Because \(\delta\sigma\) should preserve the boundary conditions, \(V\) is tangent to \(E_{\Sigma}\), so that the boundary contribution vanishes provided \(\chi\) and \(E_{\Sigma}\) are compatible, as we assume in what follows. Let us now turn to conserved currents (conservation laws) associated to symmetries. A symmetry \(W\) is called compatible with the presymplectic structure if \[L_{W}\omega+L_{Q}d\alpha\in\mathcal{I}\,, \tag{2.11}\] for some \(1\)-form \(\alpha\) of ghost degree \(\mathrm{gh}(\alpha)=\mathrm{gh}(W)+n-2\). Given a compatible symmetry one can define an associated generalised conserved current, which is a degree \(\mathrm{gh}(W)+n-1\) function defined through: 2 Footnote 2: The discussion of global symmetries and conserved currents in the presymplectic BV-BRST approach can be found in [64]. \[i_{W}\omega-(-1)^{|W|}L_{Q}\alpha-dH_{W}\in\mathcal{I}\,. \tag{2.12}\] The consistency condition \(d(i_{W}\omega-(-1)^{|W|}L_{Q}\alpha)\in\mathcal{I}\) holds thanks to (2.11). If \(\alpha\) is fixed, \(H_{W}\) is defined modulo functions of the form \(\pi^{*}(f),f\in\mathcal{C}^{\infty}(T[1]X)\). Moreover, it follows that \(d(QH_{W})\in\mathcal{I}\) and hence \(QH_{W}=\pi^{*}(h)\) for some function \(h\) on \(T[1]X\), so that by adding to \(H_{W}\) a function of the form \(\pi^{*}(f)\) one can achieve \(QH_{W}=0\). This defines a map from compatible symmetries to conserved currents. Given a conserved current, i.e. a \(Q\)-closed function \(H_{W}\) on \(E\), one can define the respective charge as \[\mathbf{H_{W}}[\sigma]=\int_{T[1]C}\sigma^{*}(H_{W})\,, \tag{2.13}\] where \(\sigma\) is a solution (\(Q\)-section) of \(E\) restricted to the shifted tangent bundle of a submanifold \(C\subset X\) of codimension \(1-\mathrm{gh}(W)\). Note that adding a \(Q\)-exact piece to \(H_{W}\) results in the addition of a \(\mathrm{d}_{\mathrm{X}}\)-exact term in the integrand and hence only contributes to a boundary term (if \(C\) has a nontrivial boundary). The above charge doesn't change under deformations of \(C\) provided \(\partial C\) is kept undeformed, because \(\mathrm{d}_{\mathrm{X}}\sigma^{*}(H_{W})=\sigma^{*}(QH_{W})=0\) if \(\sigma\) is a solution. If \(W\) is a gauge symmetry, i.e. \(W=[Q,Z]\) with \(Z\) vertical, it is automatically compatible with \(\omega\) because \(L_{[Q,\,Z]}\omega=-(-1)^{|Z|}L_{Q}dL_{Z}\chi\) modulo \(\mathcal{I}\). The associated conserved current determined by the above map is \(Q\)-exact and can be taken in the form \(H_{W}=Q(i_{Z}\chi)\). In particular, in the case of a gPDE with boundary the currents associated to gauge symmetries (i.e. with \(Z\) tangent to \(E_{\Sigma}\)) necessarily vanish on the boundary provided \(\chi\) and \(E_{\Sigma}\) are compatible. In other words, charges associated to genuine gauge symmetries vanish while those associated to asymptotic symmetries are generally nontrivial. ## 3 GR as a gauge PDE ### Off-shell GR as a gauge PDE We start by reformulating Riemannian geometry as a local gauge theory or, more precisely, a gauge PDE. This system can also be seen as an off-shell gravity, i.e.
a gauge theory whose fields are components of the metric and gauge transformations act as diffeomorphisms. In the gPDE language the underlying bundle is given by: \[\mathcal{E}\to X\,,\qquad\mathcal{E}=(T^{*}X\lor T^{*}X)_{\mathrm{nd}}\oplus T [1]X\,. \tag{3.1}\] Sections of the first summand are metrics \(\widetilde{g}_{ab}(x)\) while coordinates on the fibers of the second summand are diffeomorphism ghosts \(\xi^{a}\). The desired gPDE \(\widetilde{E}\to T[1]X\) can be taken to be \(J^{\infty}(\mathcal{E})\to X\), pulled back to \(T[1]X\) by the canonical projection \(T[1]X\to X\). In plain words, the fiber coordinates are \(D_{(a)}\widetilde{g}_{bc},D_{(a)}\xi^{b}\), where \(D_{a}\) denote the canonical total derivatives in \(J^{\infty}(\mathcal{E})\), \((a)\) denotes a symmetric multi-index, and the degree is assigned in the standard way: \(\mathrm{gh}(D_{(a)}\widetilde{g}_{bc})=0,\mathrm{gh}(D_{(a)}\xi^{b})=1\). In terms of local coordinates the \(Q\) structure is determined by \[Qx^{\mu}=\theta^{\mu},\qquad Q\widetilde{g}_{bc}=\xi^{a}D_{a}\widetilde{g}_{ bc}+\widetilde{g}_{ac}D_{b}\xi^{a}+\widetilde{g}_{ba}D_{c}\xi^{a},\qquad Q \xi^{b}=\xi^{a}D_{a}\xi^{b}, \tag{3.2}\] and \([Q,D_{a}]=0\). Note that \(\widetilde{g}_{bc}=\widetilde{g}_{cb}\) and \(\widetilde{g}_{bc}\) is invertible. Note also that here we use generic coordinates \(x^{\mu}\) on the base \(X\). To be more specific, once the jet bundle and the action of \(Q\) on the fiber are defined in terms of a fixed coordinate system \(x^{a}\), we have the freedom of using any coordinate system \(x^{\mu}\) on the base. This happens because \(Q\) is locally a product \(Q\)-structure of \(\mathrm{d}_{\mathrm{X}}\) and the \(Q\)-structure of the typical fiber or, in other words, the underlying bundle is locally trivial as a \(Q\)-bundle. In what follows we often refer to \(\widetilde{g}_{ab}\) as the metric. Similarly, we refer to the Christoffel symbols, Riemann curvature, etc. seen as the respective functions in \(D_{(a)}\widetilde{g}_{bc}\) as just Christoffel symbols, Riemann curvature, etc. This is natural because such local functions coincide with the respective objects if one evaluates them on the prolongation of a section \(\sigma_{0}\,:\,X\,\to\mathcal{E}\). However, from the gPDE point of view this identification only happens in a particular gauge (3.3). **Remark 3.1**.: _The above gPDE is not exactly the standard BV-BRST jet-bundle equipped with the horizontal differential \(\mathrm{d}_{\mathrm{h}}\) and the BRST differential \(\gamma\), see e.g. [71, 72]. Although the action of \(Q\) on fiber variables coincides with that of the standard BV-BRST differential it actually corresponds to the total BRST differential \(\mathrm{d}_{\mathrm{h}}+\gamma\). More precisely, thanks to the diffeomorphism invariance one can bring \(\mathrm{d}_{\mathrm{h}}+\gamma\) to the form (3.2) by a change of fiber coordinates, see e.g. [39] for more details._ A local proof of the equivalence of the above gPDE and the standard jet-bundle BV-BRST formulation of off-shell GR can be found in [39]. In any case, it is not difficult to explicitly consider solutions and gauge transformations and check that we are indeed dealing with off-shell GR. Probably the simplest way to see the equivalence is to observe that the gauge condition \[\sigma^{*}(\xi^{a})=\theta^{a},\qquad\sigma^{*}(D_{(b\ldots)}\xi^{a})=0 \tag{3.3}\] is reachable locally.
In this gauge the remaining equations of motion simply tell us that \(\widetilde{g}_{ab}\) is unconstrained and that \(\sigma^{*}(D_{(a)}\widetilde{g}_{bc})=\partial_{(a)}\sigma^{*}(\widetilde{g} _{bc})\). Moreover, in this gauge the residual gauge parameters \(\chi^{*}(D_{(a)}\xi^{b})\) are all determined by \(\epsilon^{a}(x)=\chi^{*}(\xi^{a})\) and the residual gauge transformation of \(\sigma^{*}(\widetilde{g}_{ab})\) is given by \(L_{\epsilon}\sigma^{*}(\widetilde{g}_{ab})\) so that indeed we are dealing with off-shell gravity. More detailed discussion of the analogous gauges in a more general context can be found in [73]. The above system can be equivalently extended to provide a gauge-theoretical implementation of the Penrose description of asymptotically simple spaces [74, 3]. More specifically, we extend the fiber of \(\mathcal{E}\) with extra coordinates \(\Omega\), \(\Omega>0\) and \(\lambda\), with \(\mathrm{gh}(\Omega)=0,\mathrm{gh}(\lambda)=1\) and extend \(Q\) as follows: \[Q\Omega=\xi^{a}D_{a}\Omega+\lambda\Omega,\qquad Q\lambda=\xi^{a}D_{a}\lambda, \qquad[D_{a},Q]=0\,. \tag{3.4}\] Condition \(\Omega>0\) is crucial in ensuring the equivalence of the initial and the extended system in the sense of **2.5**. Thanks to this condition we can introduce new coordinates \(g_{bc}\equiv\Omega^{2}\widetilde{g}_{bc}\). In these coordinates the action of \(Q\) is given by: \[Qx^{\mu}=\theta^{\mu},\quad Qg_{bc}=\xi^{a}D_{a}g_{bc}+g_{ac}D_{b}\xi^{a}+g_{ba} D_{c}\xi^{a}+2\lambda g_{bc},\quad Q\xi^{b}=\xi^{a}D_{a}\xi^{b}\,. \tag{3.5}\] **Definition 3.1**.: _The extended system with \(\Omega>0\) and the \(Q\) structure determined by (3.5),(3.4) is called conformal-like off-shell gravity._ Note that the gauge transformations of \(\sigma^{*}(g_{ab})\) can be identified (for instance by employing partial gauge condition (3.3) along with \(\sigma^{*}(\lambda_{(a)})=0\)) with the action of diffeomorphisms and Weyl transformations whose parameters are associated to ghosts \(\xi^{a}\) and \(\lambda\). In particular, the sub-gPDE determined by \(\Omega=1,\,D_{(a...)}\Omega=0,D_{(a)}\lambda=0\) is the gPDE reformulation of the conformal geometry, known in the literature in one or another version [55, 75, 56]. ### Conformal-like on-shell GR From the field theory perspective the systems presented above are off-shell gauge theories, i.e. theories equivalent to a set of unconstrained fields subject to gauge transformations. We are mostly interested in gravity-like theories, where fields are subject to nontrivial differential equations. The respective gPDE description can be obtained by considering a \(Q\)-subbundle of the initial off-shell system. In the case of off-shell GR (3.2) the \(Q\)-subbundle is defined as an infinite prolongation of the Einstein equations \[D_{(a)}(\widetilde{R}_{bc}-\frac{\widetilde{g}_{bc}}{d}\widetilde{R})=0,\quad \widetilde{R}=\frac{2d}{d-2}\Lambda, \tag{3.6}\] where \(\widetilde{R}_{bc}\), \(\widetilde{R}\equiv\widetilde{g}^{bc}\widetilde{R}_{bc}\) are local functions in \(D_{(a)}\widetilde{g}_{bc}\) corresponding to Ricci tensor and the scalar curvature respectively. It is easy to see that \(Q\) restricts to the submanifold and hence this indeed defines a gPDE, to which we refer in what follows as _on-shell GR_. In what follows we often encounter gPDEs defined as subbundles of other gPDEs i.e. jet-bundles. A convenient way to describe (coordinate) functions on such a subbundle is to regard them as the equivalence classes of functions modulo those vanishing on the subbundle. 
Alternatively, the restrictions of the ambient coordinates to the subbundle can be regarded as an overcomplete coordinate system therein. Our aim now is to equivalently reformulate on-shell GR as a sub-gPDE of the conformal-like off-shell GR defined in **3.1**. To this end consider a subbundle singled out by the following constraints: \[D_{(a)}F_{bc}=0,\qquad\Omega\rho+\frac{g^{ab}}{2}D_{a}\Omega D_{b}\Omega=- \frac{\Lambda}{(d-1)(d-2)}\,, \tag{3.7}\] where \[F_{bc}\equiv D_{b}D_{c}\Omega-\Gamma_{bc}^{d}D_{d}\Omega+\Omega P _{bc}+\rho g_{bc}, \tag{3.8}\] \[\rho\equiv-\frac{1}{d}g^{bc}(D_{b}D_{c}\Omega-\Gamma_{bc}^{d}D_{d }\Omega+P_{bc}\Omega) \tag{3.9}\] and \(\Gamma_{bc}^{d}\), \(P_{bc}\) are respectively the Christoffel symbols and the Schouten tensor seen as functions in the jets of the metric. Equations (3.7) are known as the _almost Einstein equations_. However, usually they are interpreted as equations on \(\Omega\) while the metric \(g_{ab}\) is considered fixed, see [76, 77] for more details. Now we treat (3.7) as equations restricting both \(g_{ab}\) and \(\Omega\). **Definition 3.2**.: _The sub-gPDE of the conformal-like off-shell GR **3.1**, which is determined by constraints (3.7), is called conformal-like on-shell GR._ The name is justified by the following: **Proposition 3.3**.: _For \(d\,{\geqslant}\,3\) conformal-like on-shell GR is equivalent to on-shell GR (3.6)._ Proof.: First of all recall that \(\Omega>0\). In terms of \(\widetilde{g}_{ab}=\Omega^{-2}g_{ab}\) the Einstein equations have the standard form (3.6), while (the derivatives of) \(\Omega\) and \(\lambda\) form contractible pairs and can be eliminated. It remains to show that the Einstein equations rewritten in terms of \(g_{ab}\) are equivalent to (3.7). This can be checked using the well-known (see e.g. [76]) transformation rules of the Schouten tensor under Weyl transformations. It is important to stress that the above equivalence crucially relies on the condition \(\Omega>0\). At the same time, the conformal-like on-shell GR is perfectly well-defined without this condition and is going to be very instrumental in studying the boundary behaviour. Note also that the above system with no restrictions on \(\Omega\) provides a gPDE description of tractor geometry though, to keep the exposition concise, we refrain from giving details here. ### Pre-minimal model for conformal-like on-shell GR In what follows we need a certain equivalent reduction (in the sense of definition 2.5) of conformal-like on-shell GR. This can be done in two steps. The first step is to concentrate on the sector of \(g_{ab},\xi^{a},\lambda\) and their jets. This sector is precisely the one that gives a gPDE description of conformal geometry (which can also be regarded as the off-shell conformal gravity), so that one can eliminate contractible pairs as explained in [55] (see also [56] for the discussion in similar language). Namely, \[Q\Gamma^{c}_{ab}=\cdots+D_{a}D_{b}\xi^{c},\qquad QP_{ab}=\cdots+D_{a}D_{b}\lambda \tag{3.10}\] allows one to eliminate \(\Gamma^{c}_{ab}\), \(D_{a}D_{b}\xi^{c}\), \(P_{ab}\), \(D_{a}D_{b}\lambda\) as well as all their symmetrized total derivatives. This reduction is quite different for the cases \(d=3\) and \(d\,{\geqslant}\,4\) (recall that in \(d=3\) the Weyl tensor vanishes identically), so in what follows we assume \(d\,{\geqslant}\,4\). However, generalization to \(d=3\) is possible.
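Let us also note, as an elementary consistency check of the definitions (3.8) and (3.9), that \(\rho\) is defined precisely so that \(F_{bc}\) is traceless,
\[g^{bc}F_{bc}=g^{bc}\big(D_{b}D_{c}\Omega-\Gamma^{d}_{bc}D_{d}\Omega+\Omega P_{bc}\big)+\rho\,g^{bc}g_{bc}=-d\rho+d\rho=0\,.\]
Accordingly, the trace part of the first constraint in (3.7) holds identically, and the information about \(\Lambda\) enters only through the second constraint.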
The remaining jets of the metric have the meaning of the Weyl tensor \({\rm W}^{b}{}_{cde}\) and its covariant derivatives. The action of \(Q\) on the reduced set of coordinates is given by (3.11), while rewriting the prolonged constraints (3.7) in these terms gives equations (3.12), whose \(n\)-th equation originates from the respective component in \(D_{(a_{1}\ldots)}F_{a_{n-1}a_{n}}=0\). These equations fix all the jets of \(\Omega\) except for \(\Omega\), \(\Omega_{a}\) and \(\rho\). For \(n=3\) the third equation in (3.12) takes the form \[\mathrm{W}^{d}{}_{a_{n}a_{n-2}a_{n-1}}\nabla_{d}\Omega+\mathrm{C}_{a_{n}a_{n-2} a_{n-1}}\Omega=0 \tag{3.13}\] and is known as a part of the Friedrich equations, see e.g. [78]. After taking into account the above equations and introducing \(n^{a}\equiv g^{ab}\Omega_{b}\) in place of \(\Omega_{a}\), the action of \(Q\) on \(\Omega,n^{a},\rho\) takes the form: \[Q\Omega=\xi^{a}g_{ab}n^{b}+\lambda\Omega,\quad Qn^{b}=-\xi^{b}\rho-n^{a}C_{a} ^{\ b}+\lambda^{b}\Omega-\lambda n^{b},\quad Q\rho=-\lambda\rho-\lambda_{a}n^{ a}. \tag{3.14}\] The results of this subsection can be summarized in the following proposition: **Proposition 3.4**.: _For \(d\,{\geqslant}\,4\) the gPDE defined in 3.2 is equivalent to its sub-gPDE \((E,Q,T[1]X)\) with the following overcomplete set of fiber coordinates \(\{g_{bc},\Omega,n^{b},\rho,\xi^{b},C_{b}{}^{c},\lambda,\lambda^{b},\mathrm{W}^{b}{}_{cde;(a)},|a|\,{\geqslant}\,0\}\), which are understood modulo the ideal generated by the following constraints:_ \[\begin{split}&\Omega\rho+\frac{1}{2}g_{ab}n^{a}n^{b}=-\frac{ \Lambda}{(d-1)(d-2)},\\ &\nabla_{a_{1}}\cdots\nabla_{a_{n}}(\mathrm{W}^{b_{3}}{}_{cb_{1}b_{2}}n^{c}-\mathrm{C}^{b_{3}}{}_{b_{1}b_{2}}\Omega)=0,\quad n\,{\geqslant}\,0.\end{split} \tag{3.15}\] _The action of \(Q\) on all coordinates except the curvatures \(\mathrm{W}^{b}{}_{cde;(a)}\) is given by (3.11), (3.14)._ ## 4 Boundary systems and asymptotic symmetries ### Asymptotically simple GR as a gPDE with boundaries Having obtained a description of gravity in the bulk as a gPDE one can immediately construct the induced gPDE \((i^{*}E,Q,T[1]\mathcal{J})\) on the boundary \(\mathcal{J}\). More specifically, we start with the gPDE defined in Proposition 3.4, which encodes the conformal-like on-shell GR in the bulk. A slight but important modification is that the fibers of \(E\) over the boundary are extended by their own boundary by allowing \(\Omega\) to take the value \(0\) (recall that in the bulk \(\Omega>0\)). In what follows we restrict ourselves to the local analysis and hence do not discuss the global geometry of the space-time and its boundary. More specifically, we restrict to a neighbourhood of the boundary of topology \(S^{d-2}\times\mathbb{R}\). Now we identify a gPDE with boundaries which describes asymptotically simple GR. To this end we impose additional conditions which implement Penrose's definition of asymptotically simple spacetime in gPDE terms.
More specifically, we take \(E_{B}\subset i^{*}E\) to be a sub-gPDE of \(i^{*}E\) determined by \[\Omega=0\,,\qquad Q\Omega=0\,,\qquad D_{a}\Omega\neq 0\,. \tag{4.1}\] This gives a gPDE with boundaries \((E,Q,T[1]X,E_{B},T[1]\mathcal{J})\) which we refer to as _asymptotically simple GR_. Here we keep using \(Q\) to denote the homological vector field on \(i^{*}E\) as well as on \(E_{B}\), as these are restrictions of the initial \(Q\) on \(E\) to the respective submanifolds. Analogous systems for asymptotically simple spacetimes are obtained by not imposing the Einstein equations. It is important to stress that if \(D_{a}\Omega\) were nonvanishing everywhere in \(i^{*}E\), the functions \(\Omega\) and \(Q\Omega\) would be independent on \(i^{*}E\), so that setting \(\Omega=0\) and \(Q\Omega=0\) could be understood as an equivalent reduction. However, \(D_{a}\Omega\neq 0\) is imposed at \(\Omega=0\) only, so that it is better to regard (4.1) as the boundary conditions determining asymptotically simple GR. In any case, (4.1) effectively implements only minor restrictions on the moduli of solutions, which can be thought of as partial gauge conditions. Another remark is that, as we discussed in Section 2, the total space \(E\) can be extended to a manifold with corners by allowing \(\Omega\,{\geqslant}\,0\) everywhere. From this perspective \(E_{B}\) can be identified with the respective corner provided one also excludes the points where \(D_{a}\Omega=0\). As an (overcomplete) coordinate system on \(i^{*}E\) we use the coordinates on \(E\) restricted to \(i^{*}E\) seen as a submanifold in \(E\). In particular, \(\Omega\) in (4.1) is, strictly speaking, the restriction of the initial coordinate \(\Omega\) to \(i^{*}E\). Taking into account the constraints (4.1) in \((i^{*}E,Q,T[1]\mathcal{J})\) results in the boundary gPDE \((E_{B},Q,T[1]\mathcal{J})\). The overcomplete set of fiber coordinates can be obtained by restricting the coordinates from Proposition **3.4** to \(E_{B}\) and is given by \[\{g_{bc},n^{b},\rho,\xi^{b},C_{b}^{\;c},\lambda,\lambda^{b},{\rm W}^{b}_{\;cde ;(a)},|a|\geqslant 0\}\,. \tag{4.2}\] The action of \(Q\) on some of the coordinates is easily obtained by restricting (3.11), (3.14): \[\begin{split}& Qg_{bc}=C_{b}^{\;a}g_{ac}+C_{c}^{\;a}g_{ba}+2 \lambda g_{bc},\quad Q\xi^{b}=\xi^{a}C_{a}^{\;b},\\ & Qn^{b}=-\xi^{b}\rho-n^{a}C_{a}^{\;b}-\lambda n^{b},\quad Q \lambda^{b}=C^{b}_{\;a}\lambda^{a}+\frac{1}{2}\xi^{a}\xi^{c}{\rm C}^{b}_{\;ac },\\ & Q\rho=-\lambda\rho-\lambda_{a}n^{a},\quad Q\lambda=\xi^{a} \lambda_{a},\\ & QC_{b}^{\;c}=C_{b}^{\;a}C_{a}^{\;c}+\lambda_{b}\xi^{c}-\lambda ^{c}\xi_{b}+\delta_{b}^{c}\lambda_{a}\xi^{a}+\frac{1}{2}\xi^{a}\xi^{d}{\rm W}^ {c}{}_{bad}\,.\end{split} \tag{4.3}\] At the same time the constraints (3.15) take the form: \[\begin{split}& g_{ab}n^{a}n^{b}=-\frac{2}{(d-1)(d-2)}\Lambda,\\ &\left(\nabla_{a_{1}}\cdots\nabla_{a_{n}}({\rm W}^{b_{3}}{}_{cb_{1}b_{2}}n^{c}-{\rm C}^{b_{3}}{}_{b_{1}b_{2}}\Omega)\right)\big{|}_{\Omega =0}=0,\quad n\geqslant 0\,,\end{split} \tag{4.4}\] and, finally, the last constraint is given by \(\xi^{a}g_{ab}n^{b}=0\) and originates from \(Q\Omega=0\). **Definition 4.1**.: _The above gPDE \((E_{B},Q,T[1]\mathcal{J})\) is referred to as the boundary gPDE for asymptotically simple GR._ As we are going to see it is very convenient to work in terms of the minimal model of the system **4.1**.
However, the respective minimal gPDE crucially depends on the value of the cosmological constant. As we are mostly interested in null infinity we assume \(\Lambda=0\) unless otherwise specified. ### Minimal model for the boundary gPDE of asymptotically simple GR We now take \(\Lambda=0\) and find a minimal model of the boundary gPDE obtained in the previous Section. We have the following: **Proposition 4.2**.: _In the case \(\Lambda=0\) and \(g_{ab}\) of Lorentz signature, the gPDE \((E_{B},Q,T[1]\mathcal{J})\) defined in **4.1** is equivalent to its subbundle determined by the following conditions:_ \[g_{ab}=\begin{pmatrix}0&1&0\\ 1&0&0\\ 0&0&-\delta_{AB}\end{pmatrix},\quad n^{a}=\begin{pmatrix}0\\ 1\\ 0\end{pmatrix}, \tag{4.5}\] \[C_{a}^{\;b}=\begin{pmatrix}-\lambda&0&C_{A}\\ 0&-\lambda&0\\ 0&-C_{A}&\rho_{A}{}^{\;B}-\lambda\delta_{A}^{B}\end{pmatrix}, \tag{4.6}\] \[\rho=0,\quad\xi^{\Omega}=0,\quad\lambda^{\Omega}=0, \tag{4.7}\] _where we used the adapted basis so that \(\{a\}=\{\Omega,u,A\}\), \(A=1,\ldots,d-2\) and introduced the following new coordinates: \(\rho_{AB}\equiv C_{[AB]}\), \(C_{A}\equiv C_{\Omega\,A}\). Among the constraints (4.4) on the degree-zero variables there only remain:_ \[\nabla_{a_{1}}\cdots\nabla_{a_{n}}{\rm W}_{b_{3}ub_{1}b_{2}}-\sum_{i=1}^{n}g_{ua_ {i}}\nabla_{a_{1}}\cdots\widehat{\nabla}_{a_{i}}\cdots\nabla_{a_{n}}{\rm C}_{ b_{3}b_{1}b_{2}}=0,\quad n\geqslant 0. \tag{4.8}\] _Note that \(g_{ua_{i}}=\delta_{a_{i}\Omega}\) on the subbundle._ We denote the minimal model introduced in the above Proposition by \((E_{B}^{\rm min},Q,T[1]\mathcal{J})\). This gPDE is explicitly defined as a sub-gPDE of \((E_{B},Q,T[1]\mathcal{J})\) which, in its turn, is a sub-gPDE of \(i^{*}E\). Proof.: As usual, the proof is based on the identification of contractible pairs. Using \[Qn^{b}=-\xi^{b}\rho-n^{a}C_{a}{}^{b}-\lambda n^{b} \tag{4.9}\] and taking into account \(n^{b}\neq 0\) one can set: \[n^{\Omega}=0,\quad C_{u}{}^{\Omega}=-\xi^{\Omega}\rho,\qquad n^{u}=1,\quad C_{ u}{}^{u}=-\xi^{u}\rho-\lambda,\qquad n^{A}=0,\quad C_{u}{}^{A}=-\xi^{A}\rho\,, \tag{4.10}\] which also gives \(g_{uu}=0\) thanks to the first constraint in (4.4). Using then \[Qg_{ua}=-\xi^{\Omega}\rho g_{\Omega a}-\xi^{u}\rho g_{ua}-\xi^{B}\rho g_{Ba}+C _{a}{}^{b}g_{ub}+\lambda g_{ua}\,, \tag{4.11}\] we can eliminate \(g_{ua}\) as well as \(C_{a}{}^{b}g_{ub}\). Note that \(g_{ub}\neq 0\) because of \(\det(g_{ab})\neq 0\). More precisely, we set \[\begin{split} g_{u\Omega}=1,\quad C_{\Omega}{}^{\Omega}=\xi^{u} \rho-\lambda,\\ g_{uA}=0,\quad C_{A}{}^{\Omega}=\xi^{\Omega}\rho g_{\Omega A}+\xi^{ B}\rho g_{BA}.\end{split} \tag{4.12}\] The second constraint in (4.4) then gives \(\xi^{\Omega}=0\). Using \[Q\rho=-\lambda\rho-\lambda^{\Omega}, \tag{4.13}\] allows us to set \(\rho=0\) and \(\lambda^{\Omega}=0\). Furthermore, using \[Qg_{\Omega\Omega}=2C_{\Omega}{}^{u}+2C_{\Omega}{}^{B}g_{B\Omega}+2\lambda g_{ \Omega\Omega} \tag{4.14}\] we can set \(g_{\Omega\Omega}=0\) and \(C_{\Omega}{}^{u}=-C_{\Omega}{}^{B}g_{B\Omega}\). Similarly, \[Qg_{\Omega A}=\lambda g_{\Omega A}+C_{\Omega}{}^{B}g_{BA}+C_{A}{}^{u} \tag{4.15}\] allows us to set \(g_{\Omega A}=0\) and \(C_{A}{}^{u}=-C_{\Omega}{}^{B}g_{BA}\). Finally, eliminating the remaining components \(g_{AB}\) of the metric we set \[g_{AB}=-\delta_{AB},\quad C_{(AB)}=\lambda\delta_{AB}. \tag{4.16}\] To summarize, we explicitly found a minimal model of the boundary gPDE for asymptotically simple GR.
Its overcomplete fiber coordinates are \(\{\xi^{u},C^{A},\xi^{A},\rho_{A}{}^{B},\lambda,\lambda^{u},\lambda^{A},{\rm W }^{b}{}_{cde;(a)},|a|\geqslant 0\}\). The action of \(Q\) on the degree \(1\) coordinates \(\xi^{A},\lambda,\lambda_{A},\rho_{A}{}^{B}\) is given by: \[\begin{split}& Q\xi^{A}=\xi^{B}\rho_{B}{}^{A}-\xi^{A}\lambda,\\ & Q\rho_{A}{}^{B}=\rho_{A}{}^{C}\rho_{C}{}^{B}+\lambda_{A}\xi^{B} -\lambda^{B}\xi_{A}+\frac{1}{2}\xi^{C}\xi^{D}{\rm W}^{B}{}_{ACD},\\ & Q\lambda=\xi^{A}\lambda_{A},\\ & Q\lambda^{A}=\rho^{A}{}_{B}\lambda^{B}-\lambda\lambda^{A}+ \frac{1}{2}\xi^{C}\xi^{D}{\rm C}^{A}{}_{CD}+\xi^{u}\xi^{D}{ \rm C}^{A}{}_{uD}.\end{split} \tag{4.17}\] Setting the curvatures (these enter the right hand sides multiplied by \(\xi^{a}\xi^{b}\)) to zero gives the Chevalley-Eilenberg differential of the \(so(d-1,1)\) subalgebra of the \(iso(d-1,1)\) algebra. This subalgebra can be identified with the conformal algebra associated to a \((d-2)\)-dimensional submanifold of the boundary. Moreover, setting to zero only the components \({\rm C}^{A}{}_{uD}\) gives the respective sector of the minimal model of the conformal geometry in \(d-2\) dimensions. Note, however, that the entire system differs from that of conformal geometry. In particular, extra curvatures are present and the action of \(Q\) on the curvatures is different. The action of \(Q\) on the remaining degree-1 coordinates reads as: \[\begin{split}& Q\xi^{u}=-\xi^{u}\lambda-\xi^{A}C_{A},\\ & QC^{A}=C^{B}{\rho_{B}}^{A}+\lambda^{u}\xi^{A}-\lambda^{A}\xi^{u}+ \frac{1}{2}\xi^{C}\xi^{D}{\rm W}^{A}{}_{\Omega CD},\\ & Q\lambda^{u}=C^{A}\lambda_{A}-\lambda\lambda^{u}+\frac{1}{2} \xi^{C}\xi^{D}{\rm C}_{\Omega CD}+\xi^{u}\xi^{D}{\rm C}_{\Omega uD}.\end{split} \tag{4.18}\] With all the curvatures set to zero, the action of \(Q\) is that of the Chevalley-Eilenberg differential of \(iso(d-1,1)\), where (4.17) corresponds to \(so(d-1,1)\) while (4.18) to the \(iso(d-1,1)\) translations. Fields parameterizing solutions of the above minimal model are the \(iso(d-1,1)\) connection on the boundary along with a set of degree-zero fields (curvatures), some of which are expressed in terms of the connection through the equations of motion (and generally impose some differential equations on the connection) while the remaining ones are independent fields. It is of course natural that the boundary system can be formulated in terms of the Poincare connection because the minimal model of the bulk gravity has an analogous formulation. Similar but not identical formulations of the asymptotically simple GR were considered in [79],[59]. More details on the field theory encoded in the above minimal model are given in Section **4.4**. In the next two sections, in order to agree with the standard conventions for connections and curvatures, we redefine all fiber coordinates \(\varphi\) such that \({\rm gh}(\varphi)=1\) as \(\varphi\to-\varphi\). In particular, this affects the explicit formulas for the action of \(Q\) on fiber coordinates (an alternative way is to reverse the sign at the vertical part of \(Q\)). ### Boundary conditions and BMS symmetries Now we plan to identify a proper counterpart of the BMS boundary conditions in this setup. Strictly speaking the minimal model constructed in the previous section is too "minimal" to incorporate a sub-gPDE of boundary conditions as a regular submanifold.
Nevertheless it is not difficult to identify, generally non-regular, constraints which do the job, giving a rather concise description of the boundary conditions and asymptotic symmetries. As explained in Section **2.3** in this setup we are forced to allow \(Y\) to depend on jets of sections of \(E^{min}_{B}\) (recall that we treat the gPDE of boundary conditions as a subbundle in \(E^{min}_{B}\)). Here we use \(D^{\theta}_{\mu}\) to denote the total derivative in \(\theta^{\mu}\) direction (it's a total derivative in the super-jet bundle of \(E^{min}_{B}\) and should not be confused with the total derivative in the initial jet-bundle from which \(E^{min}_{B}\) has been constructed). For instance, if \(\sigma\) is a section and \(\sigma^{*}(\lambda_{A})=\lambda_{A\mu}(x)\theta^{\mu}\) then \(\sigma^{*}(D^{\theta}_{\mu}\lambda_{A})=\lambda_{A\mu}(x)\). Note that \(\sigma^{*}(D^{\theta}_{\mu}D^{\theta}_{\nu}\lambda_{A})=0\) by the degree reasoning. We define \(E_{\mathcal{J}}\subset E^{min}_{B}\) as a zero locus of the constraints defined on \(E^{min}_{B}\). We first introduce constraints which set the frame field encoded in \(\xi^{a}\) to be a fixed frame: \[\xi^{A}-e^{A}\sim 0,\qquad\xi^{u}-\theta^{u}\sim 0\,, \tag{4.19}\] where we use adapted coordinates \(y^{\alpha}\) and \(u\) on the boundary \(\mathcal{J}\) and assumed for simplicity that \(e^{A}=e^{A}{}_{\alpha}(u,y)\theta^{\alpha}\). It is easy to see that \(Q\) is not tangent to the surface and hence extra boundary conditions are necessary. Consider the following extra constraints: \[\lambda\sim 0\,,\quad\lambda_{A}\xi^{A}\sim 0\,,\quad C_{A}\xi^{A}\sim 0\,, \quad{\rm d}_{\mathcal{J}}e^{A}+\xi^{B}{\rho_{B}}^{A}\sim 0\,, \tag{4.20}\] where the last three ones coincide with \(Q\lambda\), \(Q(\xi^{u}-\theta^{u})\), and \(Q(\xi^{A}-e^{A})\) modulo terms proportional to \(\lambda\). This can be easily seen using (4.17) and (4.18) as well as the following representation: \[C_{A}\xi^{A}=Q\xi^{u}-\xi^{u}\lambda\,,\qquad\rho^{A}{}_{B}\xi^{B}=Q\xi^{A}- \xi^{A}\lambda\,. \tag{4.21}\] Constraints (4.20) and (4.19) define an ideal \(\mathcal{I}_{\mathcal{J}}\) in the algebra of functions on \(E^{min}_{B}\) introduced in Proposition **4.2**. It is easy to see that \(Q\) is well defined on the quotient as \(\mathcal{I}_{\mathcal{J}}\) is \(Q\)-invariant. Because some of the constraints are quadratic, the quotient is not an algebra of functions on a regular subbundle. However, we can still think of it as determining a \(Q\)-subbundle \(E_{\mathcal{J}}\) which is defined in the algebraic sense only. This does not really lead to problems because its prolongation to jets of supersections is a regular submanifold, provided we restrict to sections such that the frame field is invertible. In this sense working in terms of \(E_{\mathcal{J}}\) only gives an economical framework to analyse asymptotic symmetries. All the steps can be repeated in terms of its jet-prolongation which is a genuine subbundle of the jet-bundle. Disregarding the above subtlety, constraints (4.20) and (4.19) define a gauge PDE with boundary in the sense of Definition 2.6. Indeed, \(E_{\mathcal{J}}\) is a sub-gPDE of the \(E^{min}_{B}\) which, in turn, is defined as a sub-gPDE of \(i^{*}E\). Now we are ready to study gauge symmetries that preserve the gPDE of boundary conditions. 
Consider a gauge parameter vector field \[Y=\epsilon^{u}\frac{\partial}{\partial\xi^{u}}+\epsilon^{A}\frac{\partial}{ \partial\xi^{A}}+\bar{\lambda}\frac{\partial}{\partial\lambda}+\bar{\lambda}_ {A}\frac{\partial}{\partial\lambda_{A}}+\bar{\rho}^{AB}\frac{\partial}{ \partial\rho^{AB}}+\bar{C}^{A}\frac{\partial}{\partial C^{A}}+\bar{\lambda}^{ u}\frac{\partial}{\partial\lambda^{u}}\;, \tag{4.22}\] where \(\epsilon^{u},\epsilon^{A},\bar{\lambda},\ldots\) are functions in \(x\) while \(\bar{\lambda}_{A},\bar{C}^{A},\bar{\rho}^{AB}\) are also allowed to depend on the \(\theta\)-jets of \(\lambda_{A},C^{A},\rho^{AB}\). Note that the component \(\bar{\lambda}^{u}\frac{\partial}{\partial\lambda^{u}}\) clearly preserves the constraints and hence corresponds to trivial asymptotic symmetries. We are interested in \(Y\) such that the respective symmetry transformation preserves the ideal and hence induces a symmetry transformation that takes solutions of \(E_{\mathcal{J}}\) to solutions. We have: \[\mathrm{d}_{\mathcal{J}}\sigma^{*}(Yf)+\sigma^{*}(YQf)=0\quad\forall f\in \mathcal{I}_{\mathcal{J}} \tag{4.23}\] This should hold for all sections of the gPDE of boundary conditions, i.e. sections of \(E^{min}_{B}\) such that \(\sigma^{*}(\text{``constraints''})=0\). Taking \(f=\lambda\) gives \[\sigma^{*}(\mathrm{d}_{\mathcal{J}}\bar{\lambda}-\epsilon^{A}\lambda_{A}+ \xi^{A}\bar{\lambda}_{A})=0 \tag{4.24}\] This implies \(\partial_{u}\bar{\lambda}=0\) because we assumed \(\theta^{u}\) unconstrained and because \(\sigma^{*}(\lambda_{A})=\lambda_{AB}(x)e^{B}\) for some \(\lambda_{AB}(x)=\lambda_{BA}(x)\) thanks to \(\sigma^{*}(\xi^{A}\lambda_{A})=0\). Furthermore, (4.24) also implies: \[e_{B}{}^{\alpha}\partial_{\alpha}\bar{\lambda}-\epsilon^{A}\lambda_{AB}+ \sigma^{*}(\bar{\lambda}_{B})=0\,. \tag{4.25}\] This can be solved for \(\bar{\lambda}_{B}\) by e.g. \(\bar{\lambda}_{B}=-e_{B}{}^{\alpha}\partial_{\alpha}\bar{\lambda}+e_{B}{}^{ \alpha}\epsilon^{A}D^{\theta}_{\alpha}\lambda_{A}\). Indeed, \(\sigma^{*}(D^{\theta}_{\alpha}\lambda_{A})=\lambda_{AB}(x)e^{B}{}_{\alpha}\), and hence (4.25) imposes no restrictions on \(\partial_{\alpha}\bar{\lambda}\). Taking \(f=\xi^{u}-\theta^{u}\) in (4.23) one finds \[\mathrm{d}_{\mathcal{J}}\epsilon^{u}+\sigma^{*}(\epsilon^{u}\lambda-\xi^{u} \bar{\lambda}+\epsilon^{A}C_{A}-\xi^{A}\bar{C}_{A})=0 \tag{4.26}\] This implies \(\epsilon^{u}=u\bar{\lambda}(y)+T(y)\), with \(T\) unconstrained. Moreover, the remaining equation can be satisfied by taking \(\bar{C}_{A}=e_{A}{}^{\alpha}(\partial_{\alpha}\epsilon^{u}+\epsilon^{B}D^{ \theta}_{\alpha}C_{B})\). Taking \(f=\xi^{A}-e^{A}\) one gets \[\mathrm{d}_{\mathcal{J}}\epsilon^{A}+\sigma^{*}(-\epsilon^{B}\rho_{B}{}^{A}+ \xi^{B}\bar{\rho}_{B}{}^{A}+\epsilon^{A}\lambda-\xi^{A}\bar{\lambda})=0 \tag{4.27}\] Thanks to \(\sigma^{*}(\mathrm{d}_{\mathcal{J}}e^{A}+\rho^{A}{}_{B}\xi^{B})=0\) one finds that \(\sigma^{*}(\rho^{A}{}_{B})=\omega^{A}{}_{B\mu}\theta^{\mu}\), where \(\omega^{A}{}_{B\alpha}(u,y)\) can be expressed in terms of \(e^{A}=e^{A}{}_{\alpha}(u,y)\theta^{\alpha}\) through the standard formulas for the Levi-Civita connection and \(\omega_{B}{}^{A}{}_{u}=\sigma^{*}(e^{\alpha}{}_{B}\partial_{u}e^{A}{}_{ \alpha})\). Then, in terms of \(\epsilon^{\alpha}\equiv e^{\alpha}{}_{A}\epsilon^{A}\), the equation (4.27) implies \[\partial_{u}\epsilon^{\alpha}=0,\quad\partial_{\alpha}\epsilon_{\beta}-\Gamma^{ \gamma}_{\alpha\beta}(e)\epsilon_{\gamma}+e^{A}{}_{\alpha}e^{B}{}_{\beta} \sigma^{*}(\bar{\rho}_{AB})-g_{\alpha\beta}\bar{\lambda}=0.
\tag{4.28}\] Here, similarly to the standard formulas, \(g_{\alpha\beta}\equiv e_{A\alpha}e^{A}{}_{\beta}\) and \(\Gamma^{\gamma}{}_{\alpha\beta}\equiv e^{\gamma}{}_{A}(\partial_{\alpha}e^{A}{} _{\beta}-e^{B}{}_{\beta}\omega_{B}{}^{A}{}_{\alpha})\). The antisymmetric part of the second equation in (4.28) can be solved for \(\bar{\rho}_{AB}\), and the symmetric part is nothing but a conformal Killing equation. In particular, this fixes \(\bar{\lambda}\) in terms of \(\epsilon^{A}\). Finally, it remains to check (4.23) for the last three constraints from (4.20). However, these three are all of the form \(Qg\), modulo terms proportional to \(\lambda\), with \(g\) being \(\lambda\) or \(\xi^{u}-\theta^{u}\) or \(\xi^{A}-e^{A}\). It follows that (4.23) always holds because \[\mathrm{d}_{\mathcal{J}}\sigma^{*}(YQg)+\sigma^{*}(YQQg)=\mathrm{d}_{\mathcal{J} }\sigma^{*}(YQg)=-\mathrm{d}_{\mathcal{J}}(\mathrm{d}_{\mathcal{J}}\sigma^{*}(Yg ))=0\,, \tag{4.29}\] where in the last equality we made use of (4.23), with \(f\) replaced by \(g\), and the fact that \(Y\) was chosen in such a way that (4.23) holds for \(f\) being \(\lambda\) or \(\xi^{u}-\theta^{u}\) or \(\xi^{A}-e^{A}\). In this way we are left with \(Y\) parameterized by \(u\)-independent \(T\) and \(\epsilon^{\alpha}\). Interpreting \(\epsilon^{u}(u,y),\epsilon^{\alpha}(y)\) as components of a vector field on the boundary, it is easy to check that this is precisely the BMS vector field on the boundary, which encodes conformal isometries of the \((d-2)\)-dimensional space and supertranslations. More specifically, the BMS vector field reads as \[\epsilon^{BMS}=(u\bar{\lambda}+T(y))\frac{\partial}{\partial u}+\epsilon^{ \alpha}(y)\frac{\partial}{\partial y^{\alpha}}\,, \tag{4.30}\] where we use adapted coordinates \(u,y^{\alpha}\) on \(\mathcal{J}\) and where \(T(y)\) is a generic function in \(y^{\alpha}\), \(\epsilon^{\alpha}(y)\) are components of a conformal Killing vector in \(d-2\) dimensions, and \(\bar{\lambda}\) is determined by (4.28). This is precisely how the infinitesimal BMS transformations act as symmetries of the conformal Carrollian geometry, see e.g. [80] for more details. To make sure we are dealing with nontrivial asymptotic symmetries one should, strictly speaking, show that these symmetries are not equivalent to trivial ones, i.e. that \([Q,Y]|_{E_{\mathcal{J}}}\) cannot be represented as \([Q|_{E_{\mathcal{J}}},Y^{\prime}]\) for some vertical vector field \(Y^{\prime}\) on \(E_{\mathcal{J}}\). Considering \(Y^{\prime}\) as a representative of an equivalence class of vertical vector fields tangent to \(E_{\mathcal{J}}\) modulo those vanishing on \(E_{\mathcal{J}}\), one can assume that \(Y^{\prime}\lambda=Y^{\prime}\xi^{A}=Y^{\prime}\xi^{u}=0\). Repeating the analysis of this section for such \(Y^{\prime}\) one concludes that \(\sigma^{*}(\bar{\lambda}_{A})=\sigma^{*}(\bar{C}_{A})=\sigma^{*}(\bar{\rho}_{ B}{}^{A})=0\). Considering, for instance, \(\sigma^{*}([Q,Y^{\prime}]C_{A})\) one finds that \((\delta_{Y^{\prime}}\sigma)^{*}(C_{A})=\sigma^{*}(e^{A}\bar{\lambda}^{u})\). Then, introducing components \(C_{AB}\) via \(\sigma^{*}(C_{A})=e^{B}C_{BA}(x)\), the transformation takes the form: \[\delta_{Y^{\prime}}C_{AB}=\eta_{AB}\bar{\lambda}^{u} \tag{4.31}\] so that it cannot affect the trace-free part of \(C_{AB}\). As will be shown in the next section, these components parameterize the asymptotic shear.
At the same time, transformations with nontrivial \(\epsilon^{u},\epsilon^{\alpha}\) do affect the asymptotic shear. ### Field-theoretical interpretation of the minimal model As we have seen, the minimal model \((E_{B}^{min},Q,T[1]\mathcal{J})\) of the boundary gPDE for asymptotically simple GR, defined in Section 4.2, plays a crucial role in our approach to asymptotic symmetries. In this section we study its solutions and gauge symmetries and explain how the BMS symmetries can be derived in these terms. We now study the space of solutions, i.e. sections of \((E_{B}^{min},Q,T[1]\mathcal{J})\) satisfying \(\mathrm{d}_{\mathcal{J}}\circ\sigma^{*}=\sigma^{*}\circ Q\). By some abuse of notation we introduce the following parameterization of sections: \[\sigma^{*}\rho_{A}{}^{B}=\omega_{A}{}^{B},\quad\sigma^{*}\xi^{A}=e^{A},\quad \sigma^{*}\xi^{u}=l\,, \tag{4.32}\] where all the new functions are linear in \(\theta^{\mu}\), i.e. can be seen as 1-forms on \(X\), by the degree reasoning. For the remaining fiber coordinates we take \(\sigma^{*}\phi=\phi(x,\theta)\). The equations of motion in the sector of degree-1 fiber coordinates read as: \[\begin{split}\mathrm{d}_{\mathcal{J}}e^{A}+\omega^{A}{}_{B}e^{B }+\lambda e^{A}=0,\qquad\mathrm{d}_{\mathcal{J}}\lambda+e^{A}\lambda_{A}=0, \qquad\mathrm{d}_{\mathcal{J}}l+\lambda l-e^{A}C_{A}=0\,,\\ \mathrm{d}_{\mathcal{J}}\omega_{A}{}^{B}+\omega_{A}{}^{C}\omega _{C}{}^{B}+\lambda_{A}e^{B}-\lambda^{B}e_{A}=\frac{1}{2}e^{C}e^{D}{\rm W}_{A }{}^{B}{}_{CD},\\ \mathrm{d}_{\mathcal{J}}\lambda^{A}+\omega^{A}{}_{B}\lambda^{B}- \lambda\lambda^{A}=-le^{D}{\rm C}^{A}{}_{uD}-\frac{1}{2}e^{C}e^{D}{\rm C}^{A} {}_{CD},\\ \mathrm{d}_{\mathcal{J}}C^{A}+\omega^{A}{}_{B}C^{B}+\lambda^{u}e^{ A}-\lambda^{A}l=\frac{1}{2}e^{C}e^{D}{\rm W}_{\Omega}{}^{A}{}_{CD},\\ \mathrm{d}_{\mathcal{J}}\lambda^{u}+C_{A}\lambda^{A}-\lambda \lambda^{u}=-le^{D}{\rm C}_{\Omega uD}-\frac{1}{2}e^{C}e^{D}{\rm C}_{\Omega CD }\,.\end{split} \tag{4.33}\] These are the Cartan structure equations for the \(iso(1,d-1)\) connection written in a special basis, where the \(so(1,d-1)\) subalgebra is made explicit as the conformal algebra in \(d-2\) dimensions; in contrast to the usual Cartan description of Riemannian or Einstein geometry, these equations are defined on a \((d-1)\)-dimensional space rather than a \(d\)-dimensional one. Moreover, the curvatures appearing in the right hand sides of the above equations are subject to specific constraints. For instance, the components of the curvature in the sector of variables \(e^{A},l\) and \(\lambda\) vanish. In the case of \(d=4\) the curvature of this connection contains 5 independent components, namely \(\mathrm{C}_{\Omega cd}\) and \(\mathrm{C}_{Bcd}\) (other components vanish in \(d=4\)), which can be identified with the Newman-Penrose coefficients \(\Psi_{4},\Psi_{3},\mathrm{Im}\,\Psi_{2}\) encoding the gravitational radiation. At the same time, the fields \(\mathrm{W}_{A\Omega\Omega B}\), \(\mathrm{W}_{\Omega uA\Omega}\), and \(\mathrm{W}_{\Omega uu\Omega}\) also contain 5 independent components which correspond to the remaining Newman-Penrose coefficients \(\Psi_{0},\Psi_{1}\) and \(\mathrm{Re}\,\Psi_{2}\), see e.g. [81]. Let us stress that, in contrast to the former, the latter 5 components do not enter the Cartan structure equations and hence cannot be interpreted as components of the curvature of the \(iso(1,d-1)\)-connection on the boundary.
These are known to capture the longitudinal information and indeed are not described by the curvature [82], see also [57, 59] for more details.3 Footnote 3: They are analogous to the components of subleading modes appearing in the near-boundary analysis of critical fields in the AdS/CFT context, see e.g. [83, 84]. For generic fields, these modes were described in [41, 42] within a version of gPDE approach. Let us introduce the components of the dual frame according to \(e^{A}=\sigma^{*}(\xi^{A})=e^{A}{}_{\mu}\theta^{\mu}\) and \(l=\sigma^{*}(\xi^{u})=l_{\mu}\theta^{\mu}\) and restrict to sections with invertible frame. Components \((e^{\mu}{}_{A},n^{\mu})\) of the frame are introduced via \[n^{\mu}l_{\mu}=1,\quad n^{\mu}e^{A}{}_{\mu}=0,\quad l_{\mu}e^{\mu}{}_{A}=0, \quad e^{A}{}_{\mu}e^{\mu}{}_{B}=\delta^{A}_{B}. \tag{4.34}\] Taking into account the constraints on the curvature one can check that as independent components of the connection one can take \(\{l_{\mu},e^{A}{}_{\mu},e^{\nu}{}_{A}\lambda_{\nu},C_{(AB)}\}\) because the remaining components can be expressed through them. The parameterization of the space of solutions to (4.33) can be described more efficiently if one makes use of the gauge freedom (2.2). Introducing gauge parameter vector field \(Y\) as in (4.22) and assuming coefficients to depend on \(x\) only the gauge transformation for \(\lambda_{\mu}\) reads as \[\lambda_{\mu}\to\lambda_{\mu}+\partial_{\mu}\bar{\lambda}-\epsilon^{A}\lambda _{A\mu}+e^{A}{}_{\mu}\bar{\lambda}_{A}\,, \tag{4.35}\] so that the following gauge condition can be imposed: \[e^{\nu}{}_{A}\lambda_{\nu}=0\,. \tag{4.36}\] In this gauge the components of the gauge parameter satisfy \(\bar{\lambda}_{A}=-e^{\mu}{}_{A}(\partial_{\mu}\bar{\lambda}-\epsilon^{B} \lambda_{B\mu})\). In a similar way, we can achieve \(C^{A}{}_{A}=0\), leading to further relations between gauge parameters: \[\bar{\lambda}^{u}=\frac{1}{d-2}e^{\mu}{}_{A}(\partial_{\mu}\bar{C}^{A}-\bar{C} ^{B}\omega_{B}{}^{A}{}_{\mu}+C^{B}{}_{\mu}\bar{\rho}{}_{B}{}^{A}+\lambda^{u}{} _{\mu}\epsilon^{A}-\lambda^{A}{}_{\mu}\epsilon^{u}). \tag{4.37}\] Furthermore, using \[\delta l_{\mu}=\partial_{\mu}\epsilon^{u}+\epsilon^{u}\lambda_{\mu}-l\bar{ \lambda}+\epsilon^{A}C_{A\mu}-e^{A}{}_{\mu}\bar{C}_{A}\,, \tag{4.38}\] the following gauge can be reached \(l_{\mu}=\partial_{\mu}u\), where \(u\) is a function of \(x^{\mu}\) satisfying \(n^{\mu}\partial_{\mu}u=1\). Function \(u\) is often employed in the literature on BMS symmetries and it is convenient to take it as one of the coordinate functions \(\{x^{\mu}\}\to\{u,y^{\alpha}\}\), \(\alpha=1,\ldots,d-2\). Let us also list the constraints on gauge parameters, which ensure preservation of \(l_{\mu}=\partial_{\mu}u\): \[\bar{C}_{B}=e^{\mu}{}_{B}(\partial_{\mu}\epsilon^{u}+\epsilon^{u}\lambda_{\mu} +\epsilon^{A}C_{A\mu}),\qquad\bar{\lambda}=n^{\mu}(\partial_{\mu}\epsilon^{u}+ \epsilon^{u}\lambda_{\mu}+\epsilon^{A}C_{A\mu})\,. \tag{4.39}\] To summarize: by imposing gauge condition as explained above one can parameterize the connection in terms of algebraically independent components \(\{e^{A}{}_{\mu},C_{(AB)}|_{tf}\}\) (of course there can be nontrivial differential constraints following from the constraints on the curvature). 
In so doing \(e^{A}{}_{\mu}\) encodes the degenerate metric \(g_{\mu\nu}\equiv e^{A}{}_{\mu}g_{AB}e^{B}{}_{\nu}\) whose kernel is generated by \(n=\frac{\partial}{\partial u}\), while \(-\frac{1}{2}C_{(AB)}|_{tf}\) is the so-called asymptotic shear, see e.g. [85], which parameterizes the torsion-free and metric-compatible affine connections on the boundary. Recall that such a connection is not unique if the metric is degenerate. The geometry determined by \(g_{\mu\nu}\) and \(n^{\mu}\), defined up to overall Weyl-like rescalings, is often referred to as conformal Carroll geometry. The setup of this section gives an alternative framework to study asymptotic symmetries which, in contrast to the more algebraic approach of Section **4.3**, is somewhat analogous to the standard analysis, see e.g. [79] for the derivation in the first-order formalism. Let us sketch how this can be done. First of all one imposes boundary conditions on sections of \(E_{B}^{min}\) and then, in order to simplify the system, one imposes partial gauge conditions, e.g. the ones discussed above. In the next step one studies gauge transformations that preserve these boundary conditions. For instance, to arrive at BMS symmetries in the present framework it is enough to fix a concrete frame \(e^{A}=e^{A}{}_{\mu}(x)\theta^{\mu}\) and set \(\lambda=0\). BMS symmetries are then obtained as the residual symmetries preserving these boundary conditions. Note that these boundary conditions correspond to only a subset of the conditions (4.19) and (4.20) of Section **4.3**. The remaining conditions correspond to solving some of the equations of motion and imposing partial gauge conditions, cf. Remark **2.1**. ### Asymptotically (A)dS spaces In the above analysis we concentrated on asymptotically flat spacetimes. It turns out that the boundary system \((E_{B},Q,T[1]\mathcal{J})\) defined in **4.1** works equally well for asymptotically (A)dS spacetimes4. In this case the metric induced on the boundary is nondegenerate and hence the respective minimal model differs substantially from the case of \(\Lambda=0\). More precisely, we have Footnote 4: In our analysis \(\Lambda>0\) for asymptotically AdS spacetimes or \(\Lambda<0\) for asymptotically dS spacetimes, since we work in the signature \((+,-,\ldots,-)\). See e.g. [78] **Proposition 4.3**.: _For \(\Lambda\neq 0\) Gauge PDE (4.1) is equivalent to its sub-gPDE defined as follows:_ \[\begin{split}& g_{ab}=\begin{pmatrix}-\widetilde{\Lambda}&0\\ 0&\eta^{\varepsilon}_{AB}\end{pmatrix},\quad n^{a}=\begin{pmatrix}1\\ 0\end{pmatrix},\quad C_{a}{}^{b}=\begin{pmatrix}-\lambda&0\\ 0&\rho_{A}{}^{B}-\lambda\delta^{B}_{A}\end{pmatrix},\\ &\rho=0,\quad\xi^{\Omega}=0,\quad\lambda^{\Omega}=0,\end{split} \tag{4.40}\] _where the adapted partition of indexes, e.g. \(\{a\}=\{\Omega,A\}\), \(A=0,\ldots,d-1\) has been employed and_ \[\eta^{\varepsilon}_{AB}\equiv\mathrm{diag}(\varepsilon,-1,\ldots,-1),\quad\rho_{AB}\equiv C _{[AB]},\quad\widetilde{\Lambda}\equiv\frac{2}{(d-1)(d-2)}\Lambda,\quad \varepsilon\equiv\operatorname{sign}\Lambda\;. \tag{4.41}\] _Among the constraints on the degree-zero variables (4.4) there only remain:_ \[\nabla_{a_{1}}\cdots\nabla_{a_{n}}\mathrm{W}_{b_{3}\Omega b_{1}b_{2}}-\sum_{i= 1}^{n}g_{\Omega a_{i}}\nabla_{a_{1}}\cdots\widehat{\nabla}_{a_{i}}\cdots \nabla_{a_{n}}\mathrm{C}_{b_{3}b_{1}b_{2}}=0,\quad n\mathop{\geqslant}0, \tag{4.42}\] _where the hatted symbols are assumed omitted and \(g_{\Omega a_{i}}=-\widetilde{\Lambda}\delta_{\Omega a_{i}}\)._ The proof is fully analogous to that of **4.2**.
As an overcomplete coordinate system on the above sub-gPDE we can take the restrictions of \(\{\xi^{A},\rho_{A}{}^{B},\lambda,\lambda^{A},\mathrm{W}^{m}{}_{nkp;(a)},|a| \mathop{\geqslant}0\}\). In these coordinates the action of \(Q\) on the degree \(1\) coordinates reads as: \[\begin{split}& Q\xi^{A}=\xi^{B}\rho_{B}{}^{A}-\xi^{A}\lambda,\\ & Q\rho_{A}{}^{B}=\rho_{A}{}^{C}\rho_{C}{}^{B}+\lambda_{A}\xi^{B} -\lambda^{B}\xi_{A}+\frac{1}{2}\xi^{C}\xi^{D}\mathrm{W}^{B}{}_{ACD},\\ & Q\lambda=\xi^{A}\lambda_{A},\\ & Q\lambda^{A}=\rho^{A}{}_{C}\lambda^{C}-\lambda\lambda^{A}+\frac {1}{2}\xi^{B}\xi^{C}\mathrm{C}^{A}{}_{BC}\,.\end{split} \tag{4.43}\] It is easy to see that this coincides with the definition of the CE differential of \(o(d-1,1)\) for dS and \(o(d-1,2)\) for AdS respectively, written in the conformal-like basis. This of course signals that in the case at hand the boundary is naturally equipped with a conformal structure. More precisely, solutions to the above sub-gPDE in the sector of degree \(1\) coordinates define a Cartan connection of the respective conformal geometry. However, the gauge theory encoded in this sub-gPDE is not generally equivalent to conformal geometry. For instance, in the case of \(d=5\) the respective conformal geometry is Bach-flat. The Bach flatness condition is encoded in the equations on curvatures arising in the sector of degree \(0\) variables. This is the realization in our approach of the well-known Fefferman-Graham analysis [86, 83] (see also [41, 42, 43, 45] for the analogous considerations for generic gauge fields within a version of the gPDE framework). As for asymptotic symmetries, one can consider an analogous boundary condition \(\sigma^{*}(\xi^{A})=e^{A}\), where \(e^{A}\) is a fixed frame on the boundary. The analysis of Section 4.4 can be easily repeated in the case at hand, giving the conformal Killing vectors of the boundary metric \(e^{A}{}_{\mu}\eta^{\varepsilon}_{AB}e^{B}{}_{\nu}\) as a basis in the algebra of asymptotic symmetries. Of course, one can equally well repeat the analysis of Section 4.3, in which case together with \(\xi^{A}-e^{A}\sim 0\) and \(Q(\xi^{A}-e^{A})\sim 0\) one should also impose additional boundary conditions \(\lambda\sim 0\) and \(Q\lambda\sim 0\). ## Acknowledgments We wish to thank I. Dneprov and Th. Popelensky for fruitful discussions. M.G. is also grateful to G. Barnich, X. Bekaert, M. Henneaux, and J. Herfray for useful exchanges. The work of M. M. was supported by the Russian Science Foundation grant No 22-72-10122 ([https://rscf.ru/en/project/22-72-10122/](https://rscf.ru/en/project/22-72-10122/)). Part of this work was done when MG participated in the thematic program "Emergent Geometries from Strings and Quantum Fields" at the Galileo Galilei Institute for Theoretical Physics, Florence, Italy. ## Appendix A Symmetries Let us restrict to the local analysis. Because locally a gauge PDE can be equivalently represented by a non-Lagrangian local BV system, it is enough to give a proof in this setup. In this case \(E\) is \(J^{\infty}(\mathcal{E})\to X\) pulled back to \(T[1]X\). In particular, functions on \(E\) can be identified with horizontal forms on \(J^{\infty}(\mathcal{E})\). \(E\) is equipped with the evolutionary homological vector field \(s\) of ghost degree \(1\) and the homological vector field \(\mathrm{d}_{\mathrm{h}}=\theta^{a}D_{a}\) of \(\theta\)-homogeneity \(1\).
We have the following: **Proposition A.1**.: _Locally, the cohomology of \([\mathrm{d}_{\mathrm{h}},\cdot]\) in the space of vertical vector fields is trivial in positive form degree (homogeneity in \(\theta\)). In the space of vanishing \(\theta\)-homogeneity vertical vector fields, it is given by evolutionary vector fields on \(E\)._ Proof.: To give an idea of the proof let us work in local coordinates \(x^{a},\theta^{b},\phi^{i}_{(a)}\) and consider a \(\theta\)-homogeneity 1 vector field to begin with. It reads as \[V=\theta^{a}V^{i}_{a}\frac{\partial}{\partial\phi^{i}}+\theta^{a}V^{i}_{a|b}\frac{\partial}{\partial\phi^{i}_{b}}+\ldots\] (A.1) The cocycle condition reads as \([\mathrm{d}_{\mathrm{h}},V]=0\) and implies, in particular, \[[\mathrm{d}_{\mathrm{h}},V]\phi^{i}=\theta^{a}\theta^{b}(D_{b}V^{i}_{a}-V^{i}_{a|b})=0\] (A.2) At the same time the coboundary \([\mathrm{d}_{\mathrm{h}},W]\) acting on 0-th jets has the following structure: \[[\mathrm{d}_{\mathrm{h}},W]\phi^{i}=\theta^{a}(D_{a}W^{i}-W^{i}_{a})\] (A.3) It follows that, by adding a coboundary, one can always set the coefficient \(V^{i}_{a}=\frac{\partial}{\partial\theta^{a}}(V\phi^{i})\) to zero. Then the cocycle condition implies that \(V^{i}_{a|b}-V^{i}_{b|a}=0\) so that \(V^{i}_{b|a}\) can also be set to zero by adding \([\mathrm{d}_{\mathrm{h}},W]\) such that the only nonvanishing coefficient is \(W^{i}_{ab}=W\phi^{i}_{(ab)}=-V^{i}_{(a|b)}\). The proof can be completed by induction. The analysis for higher \(\theta\)-homogeneity vector fields is analogous. Let us now turn to the cohomology of \([Q,\cdot]\), \(Q=\mathrm{d_{h}}+s\) in the space of vertical vector fields. Expanding the cocycle condition in the \(\theta\)-homogeneity one gets \[\begin{split}&[s,V_{0}]=0\,,\quad[s,V_{1}]+[\mathrm{d_{h}},V_{0}]=0\,,\quad\ldots\\ &[s,V_{k}]+[\mathrm{d_{h}},V_{k-1}]=0\,,\quad[\mathrm{d_{h}},V_{k}]=0\end{split}\] (A.4) where we assumed that \(V_{l}=0\) for all \(l>k\), with \(0<k\leqslant n\). Applying the above Proposition we conclude that \(V_{k}=[\mathrm{d_{h}},W_{k-1}]\). Subtracting the trivial cocycle \([Q,W_{k-1}]\) we arrive at a new representative \(V^{\prime}\) for which \(V^{\prime}_{l}=0\) for all \(l>k-1\). Applying the same procedure again we arrive at an equivalent representative for which \(V_{l}=0\) for \(l>0\). The cocycle condition then implies that \([s,V_{0}]=0\) and \([\mathrm{d_{h}},V_{0}]=0\). In other words we have arrived at the standard representative of a global symmetry. Note that in the above we did not assume \(\mathrm{gh}(V)=0\) so it applies to generalized symmetries as well.
2302.09426
Security of IT/OT Convergence: Design and Implementation Challenges
IoT is widely considered the future of the Internet. Many sectors are moving towards the use of these devices to aid monitoring and control of the surrounding environment and of manufacturing processes. The Industrial Internet of Things is a sub-domain of IoT and serves as an enabler of industry. IIoT provides valuable services to Industrial Control Systems in sectors such as logistics, manufacturing, healthcare, and industrial surveillance. Although offering IIoT services to ICS is tempting, it comes with greater risk. ICS systems are traditionally protected by isolation, with an air gap separating their network from the outside world, whereas an IIoT device is by definition a connected device. This creates multiple points of entry to an otherwise closed system. In this study, we examine the first automated risk assessment system designed specifically to perform automated risk assessment and to identify potential threats associated with IT/OT convergence, based on the OCTAVE Allegro and ISO/IEC 27030 frameworks.
Bassam Zahran, Adamu Hussaini, Aisha Ali-Gombe
2023-02-18T21:40:57Z
http://arxiv.org/abs/2302.09426v1
# Security of IT/OT Convergence: Design and Implementation Challenges ###### Abstract IoT is widely considered the future of the Internet. Many sectors are moving towards the use of these devices to aid monitoring and control of the surrounding environment and of manufacturing processes. The Industrial Internet of Things is a sub-domain of IoT and serves as an enabler of industry. IIoT provides valuable services to Industrial Control Systems in sectors such as logistics, manufacturing, healthcare, and industrial surveillance. Although offering IIoT services to ICS is tempting, it comes with greater risk. ICS systems are traditionally protected by isolation, with an air gap separating their network from the outside world, whereas an IIoT device is by definition a connected device. This creates multiple points of entry to an otherwise closed system. In this study, we examine the first automated risk assessment system designed specifically to perform automated risk assessment and to identify potential threats associated with IT/OT convergence, based on the OCTAVE Allegro and ISO/IEC 27030 frameworks. IIoT, ICS, cybersecurity, IoT, risk analysis ## I Introduction Industrial Control Systems (ICS) are combinations of software and hardware designed to execute and manage industrial operations. The Operational Technologies (OT) are the networking devices and protocols serving the ICS internally. The most widely used technologies in Industrial Control Systems are Supervisory Control and Data Acquisition (SCADA), Programmable Logic Controllers (PLCs), and Plant Distributed Control Systems (DCSs). IIoT is a sub-domain of the Internet of Things (IoT): interconnected devices intended to improve access, productivity, and decision making in ICS systems. The Industrial Internet of Things (IIoT) provides functionalities such as observing power consumption and controlling leakage, among many others including safety and security. Recently, Information Technology and Operational Technology have been converging on a larger scale. This creates a union of IT/OT technologies that improves connectivity, data analytics, and real-time information reporting, which supports decision making. Since security is considered a major concern with regard to IT/OT convergence, research must continue to develop better ways to prepare for the steadily growing threat. In this study, we focus on evaluating a solution to assess the vulnerabilities and threats facing IT/OT convergence. In this paper, we present the basic functionalities of the newly developed automated risk assessment system for ICS-IIoT systems, titled the Industrial Internet of Things Automated Risk Assessment System (IIoT-ARAS). Our tool performs generic information security assessment, vulnerability analysis, and penetration testing based on the OCTAVE Allegro [3] and ISO/IEC 27030 [4] (draft, expected to be published in 2022) risk assessment methodologies. Currently used risk assessment systems are typically built for OT or IoT systems only. To the best of our knowledge, this is the first automated risk assessment system designed specifically to deal with the potential threats associated with IT/OT convergence based on the OCTAVE Allegro and ISO/IEC 27030 methodologies. ## II Background ### _ICS Existing Threats and Potential Impacts_ The exceptional nature of ICS systems makes it a challenging task to modify or suggest improvements to security. Most Industrial Control Systems are built to operate for long periods of time without disruption. 
If a malfunction occurs, a fail-safe or a reserve system comes into place instantly to ensure continuity. The main business goal for ICS is to maximize productivity and eliminate any chance of overhead or delay. This explains the rejection of, or at least the resistance to, any effort to advise changes. System stability and air-gapping are important requirements in an ICS environment; air-gapping isolates the production network from all other external networks. However, with the growing demand for connectivity to the outside world, data collection, cloud computing, and emerging technologies, OT struggles to cope with these changes. It is well known that risks have a larger impact on ICS/OT than on IoT/IT. Over the years, ICS defenses have relied on being inaccessible systems, creating an air gap to separate the OT system from the outside world, in addition to investing in physical security. Several studies have suggested ensuring redundancy, implementation diversity, and hardening and reinforcing components to avoid tampering [8]. Today, this risk-avoidance technique is no longer adequate. In fact, even air-gapped setups have become vulnerable to attacks such as AirHopper, BitWhisper, GSMem, OOB-CCs, Ramsay, and Stuxnet [9]. Figure 1 (the blue section) maps existing challenges/threats in ICS systems to impact/risk based on a survey of the related literature [2]. ### _IIoT Existing Threats and Potential Impacts_ IIoT inherits all vulnerabilities that come with IoT setups, and it introduces additional dangers and risks because of the integration with Industrial Control Systems (ICS). While protecting confidentiality, integrity, and availability is the ultimate security goal when bringing devices online, this is not straightforwardly achievable when it comes to IIoT. IIoT networks experience many limitations and constraints that make satisfying the CIA model a difficult and complicated task. This is particularly true since IIoT networks can span a large geographical space and be either indoor or outdoor. IIoT security requires vigilant consideration of the confidentiality of information, the accuracy and availability of all objects in the network, the security protocols, and the capability to connect to objects and devices from a variety of vendors and specifications. Each of the IIoT layers suffers from several risks. Threats or attacks can come from within the network or from outside, and can proceed through exploiting a vulnerability, misuse, or even human error. In addition to common and developing threats to several of these devices, there is also the possibility of infection through cross-platform malware that might expand the risk to other surrounding networks [1]. Efficient defense and mitigation of possible malware infections and other threats can only be achieved after an in-depth understanding, assessment, and evaluation of the security risk in the IIoT environment. Figure 1 (the red section) lists the most common threats and potential risks in the IIoT network. ## III IIoT/OT Introduced Threats and Risks The idea of security in Industrial Control Systems is based on a risk-avoidance tactic, where critical systems are isolated from other networks. As a defense mechanism, the OT in ICS is created as an inaccessible system that is air-gapped to assure safe and uninterruptible processes. 
This technique was adequate and served the needs of traditional ICS for a long time without the risk of compromise or security violation, except in the case of a physical attack on equipment. The key objective of ICSs is to enhance productivity while reducing processing overheads. Unfortunately, security is often considered an overhead in the OT environment. The insertion of IIoT into the industrial arena and the merger between IT and OT have led to an alteration of industrial models. The union of IIoT connectivity and data-oriented techniques with the ICS's process-oriented isolated system has introduced threats into a highly productive ICS network, creating multiple entry points to the supposedly closed environment. Moreover, ICS/OT and IIoT/IT have different business objectives. Devices in the ICS environment are designed to operate for a lengthy period of time, and a whole domain of legacy equipment is still in active use. Systems in the ICS network typically do not support basic protection methods applied in the implementation of IIoT systems. For example, authentication and cryptography methodologies are not supported in older, difficult-to-replace legacy devices and software. Since most ICS devices are special-purpose rather than general-purpose machines, it is challenging to introduce customization and implement competent security measures. Installing or updating a security patch on a running ICS system is considered a major task and would not be welcomed in such an environment. More importantly, recent security specifications are more like business practices than binding policies. Therefore, this special nature of IT/OT convergence requires a distinctive and tactical approach to general security and risk assessment [2]. Fig. 1: Potential Risks in IT/OT Convergence. [2] ## IV Methodology Risk is considered the product of the probability of occurrence of an issue and the associated impact on a given entity. A comprehensive comparison shows that every existing risk assessment methodology has its own strengths and weaknesses. Risk assessment is crucial in identifying vulnerabilities and potential threats to the system. Risk assessment methodologies are many, but few tackle the area of IT/OT convergence. Our methodology for IIoT-ARAS is based on a customized subset of the OCTAVE Allegro and ISO/IEC 27030 frameworks. Combining segments of both guidelines is an attempt to circumvent issues related to the heterogeneous IT/OT environment. We have decided to select OCTAVE Allegro and ISO/IEC 27030 since the two frameworks complement each other. ### _Risk Assessment Frameworks_ #### Iii-A1 Octave Allegro The OCTAVE Allegro approach is designed to allow a wide-ranging assessment of the operational risk environment and to harvest more reliable results without the need for extensive risk assessment knowledge. The approach focuses on information assets in the context of usability, storage areas, data transport, information processing, probability of exposure to threats, vulnerability logging, disruptions, and impacts. OCTAVE Allegro consists of guidance, worksheets, and questionnaires. The assessment is done manually by users. Our main focus is to map OCTAVE Allegro assessment processes into a guideline for an automated risk assessment system. The OCTAVE Allegro methodology involves eight steps that are organized into four phases, as shown in Figure 2. In phase 1, the risk measurements are defined and mapped to organizational drivers. 
In the second phase, the prioritization of information assets is based on criticality and importance. The process of profiling assets creates clear boundaries, identifies the security requirements, and records all exact locations where information assets are stored, transported, or processed. In phase 3, based on the previously located information assets, threats to each information asset are identified. In the final phase, risks to information assets are acknowledged and considered and, consequently, risk mitigation plans are developed. OCTAVE Allegro demands that organizations develop asset profiles to enable a more accurate description of the boundaries of an information asset by ensuring consistency, clarity, and approved definitions for the asset. In the process of asset profiling, the company assigns ownership, sets security requirements, and captures the information asset's value. A newly created asset profile can be reused, modified, and updated to match other assets, which enhances simplicity and reduces the amount of work for future assessments. Since the Industrial Internet of Things communicates with other devices in the network, and with the Industrial Control Systems linked to it, mostly over the TCP stack, asset discovery and profiling is done through a ping sweep, uPnP, etc. The main target of OCTAVE Allegro is to eliminate uncertainty in security requirements. The drawback is that this approach requires human involvement to apply solutions to the resulting risk analysis and identification. The current OCTAVE methods exploit threat trees as a guide for identifying threats. While this approach provides a regulated resource for identifying and recording different threat scenarios, users might find the threat trees difficult and confusing to use, particularly users with limited risk management experience. The manual process of going through OCTAVE worksheets [3] to identify vulnerabilities to enable risk identification is very lengthy and can considerably delay the assessment plan. In practice, users find that performing tool-based vulnerability identification does not provide important additional information that cannot be obtained through scenario identification [3]. As a solution, a fully customizable automated risk assessment tool that requires minimum interference from users is highly desirable. #### Iii-A2 ISO/IEC 27030 ISO/IEC 27030 -- Information technology -- Security techniques -- Guidelines for security and privacy in Internet of Things (IoT) -- is being developed to support and guide the assessment of information risk and controls for the Internet of Things, which is in turn partially applicable to the Industrial Internet of Things. The standard will be specific to IoT, covering both information security and privacy. IoT devices have the ability to connect to the internet, which might have an impact on the security of the network; hence, proper security measures and privacy controls are essential. The standard will provide security and privacy guidance for IoT systems, services, and solutions. The standard may also cover device and network trustworthiness and is expected to align with other IoT standards. [4] It is expected that ISO/IEC 27030 will be comprehensive and will cover many areas that are not covered by other frameworks. The standard will be based on the ISO/IEC 27005:2011 series, which is considered one of the most mature and complete risk assessment methodologies [10]. Fig. 2: IIoT-ARAS: OCTAVE Allegro Segments. 
Examples of risks to be addressed by the standard: * High probability of risk impacts, potentially including privacy violation, damage to property, health issues, and more; * Many IoT devices are built to last for a long time. The long lifecycle of some devices presents major issues for the ability to secure communication, apply patches, etc.; * A shortage of regulating standards in the Internet of Things domain, leading to difficulties in managing, monitoring, and controlling devices from different vendors; * Interoperability in a heterogeneous environment and with other external networks; * The reduced instruction sets and performance of IoT devices; * Incompatibility among different IoT vendors and manufacturers; * The use of a specific IoT device may change with time to adapt to a new scenario, and security measures must adapt with the change. [4] IoT vendors and manufacturers, and to a certain extent users, may be unaware of the information risks and the required basic controls; hence ISO/IEC 27030 aims to raise awareness and push maturity on both the provider and the user sides. IoT is changing the world in ways that are difficult to predict. This indeed presents a massive privacy threat that has to be controlled to avoid major issues in the future. The standard is due to be published in 2022. Sensitive and critical sites are susceptible to incidents that would lead to catastrophic impacts. The use of the Industrial Internet of Things to monitor, analyze, and sometimes control industrial systems has an endless number of advantages. On the other hand, ICS systems have traditionally secured their environment by isolation or air-gapping [2], and IIoT creates multiple entry points to a secured, closed system. IIoT is increasingly used to support and monitor power grids, nuclear plants, the energy sector, and more. IIoT is susceptible to cross-platform malware attacks as well [1]. These networks have no tolerance for incidents or failure. As the infamous Stuxnet incident confirmed, even isolated, air-gapped critical systems do not completely guarantee their security. ### _System Overview_ Given the aforementioned concerns about IT/OT convergence, there is a need for a customized approach to assess vulnerabilities, threats, and potential risks. Therefore, this research's contribution is a risk assessment solution titled IIoT-ARAS that combines and automates a segment of the best practices suggested in OCTAVE Allegro and ISO/IEC 27030. Current frameworks designed for OT security, such as those from the Industrial Internet Consortium, ISACA, the International Society of Automation, NIST, the Technical Support Working Group, etc., all provide necessary details on securing the operational technology areas of industrial control systems; however, they give minimum consideration to IT/OT convergence. IIoT-ARAS, as shown in Figure 3, on the other hand, is designed to execute automated, regular asset inventory checks custom-designed to comply with the heterogeneous nature of IT/OT. IIoT-ARAS is designed with four major phases: the Preparation, Information Gathering, Analysis and Assessment, and Reporting and Closure phases. While these subsystems are highly dependent on each other, each performs a unique task, as itemized below. * Preparation: This phase is primarily for the preliminary assessment and asset discovery. Best practices from ISO/IEC 27030 and OCTAVE Allegro are modeled to extract the approved security standards such as thresholds, optimization, power consumption, etc. 
These standards are used as tuning criteria that are then passed to the network discovery component to identify the network's available devices. The information discovered is then passed as input to the next phase. Data such as IP addresses, MAC addresses, IDs, user IDs, wired or wireless networks, operating systems, and network switches and routers are organized in a log file, as illustrated in Figure 9. These data are later used in the analysis and assessment phase, e.g. as an input vector for anomaly detection. * Information Gathering: This phase is designed to assist in classifying assets and risk criteria. The assets discovered in phase 1 are scanned for vulnerabilities, and the results are classified as low, medium, or high vulnerabilities based on the modeled security standards as well as the priority and sensitivity of the data. Although the asset-containers principle that comes from OCTAVE Allegro identifies the priority of an asset from the type of information it passes or stores, in certain scenarios this phase may require some manual effort from network admins to determine and prioritize asset containers based on the importance and sensitivity of data (a minimal illustrative sketch of such a record and its scoring is given below, before the individual framework descriptions). * Analysis and Assessment: Threats on the IIoT network are analyzed and assessed in this phase. The components are responsible for detecting additional unknown or undiscoverable devices and for exploring whether the detected vulnerabilities are exploitable vulnerabilities that can potentially threaten the system. * Reporting and Closure: Finally, the last phase is reporting and closure. In this phase, the components provide comprehensive and detailed reporting to aid decision-making. By applying the right filtering, we produce detailed log files and illustrative diagrams for better insight into network status (e.g. Figures 9 and 10). Fig. 3: IIoT-ARAS System Overview. [2] Furthermore, IIoT-ARAS has an agentless implementation, resulting in minimal interruption to the OT environment while still utilizing the best security practices to collect data and optimize an acceptable threshold. The tool is under continuous development to support optimization, probability computation, risk evaluations, and contingency plan configuration. ## V Implementation ### _Environment Setup_ The environment is built on the OMNeT++ discrete event simulator. OMNeT++ is an object-oriented, modular, discrete event network simulation framework with a generic architecture. OMNeT++ by itself does not serve as a comprehensive simulator; instead, it provides the tools and structure to write simulations. [5] After careful study, a number of simulation frameworks were selected to be the base of IIoT-ARAS (Table I). The heterogeneous environment resulting from IT/OT convergence requires different tools from different domains. #### V-A1 Infrastructure Due to the large number of heterogeneous networks, the different frameworks, and compatibility issues among several installations, the setup is distributed and is being tested over different operating system environments. #### V-A2 Limitations The setup is intended to perform experiments on simulated networks. IIoT-ARAS supports simulated networks and has not been tested on actual physical networks, due to the extensive resources required to build a physical ICS/IIoT network. ### _Frameworks Implemented_ OMNeT++ is a powerful tool that enables simulation frameworks for network connections, communications, attack scenarios, etc. Figure 4 illustrates some of the capabilities of OMNeT++ used to connect to physical networks through sockets. 
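Before turning to the individual frameworks, the following minimal Python sketch illustrates the kind of asset-inventory record produced in the Preparation phase and the low/med/high scoring applied in the Information Gathering phase, treating risk as the product of likelihood and impact as in Section IV. The field names, weights, and thresholds are illustrative placeholders and are not IIoT-ARAS internals.

```python
from dataclasses import dataclass

# Illustrative severity weights (hypothetical values, not IIoT-ARAS settings).
LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
IMPACT = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Asset:
    """One discovered device, as recorded in the Preparation-phase log file."""
    ip: str
    mac: str
    os: str
    container: str   # e.g. "SCADA historian", "IIoT sensor", "switch"
    likelihood: str  # exposure estimated from the vulnerability scan
    impact: str      # criticality/sensitivity of the data the asset handles

def risk_score(asset: Asset) -> int:
    """Risk as the product of probability of occurrence and impact."""
    return LIKELIHOOD[asset.likelihood] * IMPACT[asset.impact]

def classify(score: int) -> str:
    """Map a numeric score onto the low/med/high classes used in phase 2."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "med"
    return "low"

inventory = [
    Asset("192.168.10.5", "00:1A:2B:3C:4D:5E", "embedded-rtos", "IIoT sensor", "high", "medium"),
    Asset("192.168.10.9", "00:1A:2B:3C:4D:60", "windows-server", "SCADA historian", "medium", "high"),
]

# Rank the inventory so that the riskiest asset containers are reviewed first.
for a in sorted(inventory, key=risk_score, reverse=True):
    print(f"{a.ip:15s} {a.container:16s} risk={risk_score(a)} ({classify(risk_score(a))})")
```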
To simulate IIoT and assess risks, several existing frameworks, as itemized in Table I, are integrated, and new customized tools that provide the needed environment for building and testing IIoT-ARAS are created. #### V-A1 INET Framework The INET component is an open-source framework designed for the OMNeT++ simulation environment. It provides a set of protocols and other models for research in communication networks. INET is specifically valuable when designing and validating new protocols, or exploring new or unusual scenarios. INET contains most of the TCP-stack protocols. Also, to support the simulation of many other protocols, several other simulation approaches take INET as a base and extend it in specific directions, such as vehicular networks, overlay/peer-to-peer networks, or LTE. [6] #### V-A2 NETA Framework NETwork Attacks (NETA) is an attempt to simulate common attacks in heterogeneous networks using OMNeT++ and the INET Framework. NETA is aimed to be a practical tool in the network security arena. This tool makes it easy to validate the effectiveness of defensive security methods or solutions against network attacks, as well as to compare the capabilities of various defense practices. [6][12] Figure 6 presents an example of a simulated SinkHole attack. Fig. 4: Socket example. Fig. 5: NETA attacks. #### V-A3 Castalia Framework Castalia is a simulation tool used to design Wireless Sensor Networks (WSN), Body Area Networks (BAN), and generally networks of low-power embedded devices such as IoT, as shown in Figure 8. Castalia is designed to support researchers in examining distributed algorithms and protocols with realistic wireless channel and radio models and realistic node behavior, especially relating to access to the radio. Castalia's notable features include the exploration of path-loss variation, channel interference and RSSI calculation, physical process modeling, node clock drift, and several popular MAC protocol implementations. Castalia is highly parametric. It provides tools to help run large simulation analyses and to develop and present the results graphically. [6][13] #### Iii-B4 INetMANET Framework INETMANET 4.x is a branch of the INET Framework 4.x simulator, maintained by Alfonso Ariza Quintana [6]. INETMANET is kept up to date with INET, and extends its functionality with several extra experimental features and protocols. It is mainly used for mobile ad hoc networks. [6][16] #### Iii-B5 ANSA Framework The ANSA (Automated Network Simulation and Analysis) project is dedicated to the development of a variety of protocol models, based on RFC specifications and/or reference implementations. The ANSA package extends the INET Framework with several protocol models. ANSA may be publicly used as the routing/switching baseline for further research initiatives, i.e., in simulations proving (or disproving) certain aspects of networking technologies (e.g., finding bottlenecks and single points of failure, configuration errors, faulty network states, etc.). ANSA is a long-term project carried out by researchers and students at Brno University of Technology, Czech Republic. [6][15] #### Iii-B6 FiCo4OMNET Framework FiCo4OMNeT implements fieldbus communication. Currently, this framework contains two known communication technologies (CAN and FlexRay). Both technologies are applied according to the specification, with some adjustments to fit in the simulation platform. 
It was implemented by the CoRE (Communication over Realtime Ethernet) research group with support from the INET (Internet Technologies) research group at the HAW Hamburg (Hamburg University of Applied Sciences). [7] #### Iii-B7 FLoRa Framework FLoRa is a specialized framework for LoRa. It is a simulation platform for handling end-to-end simulations for LoRa networks. It is based on the OMNeT++ network simulator and exploits components from the INET framework. FLoRa utilizes OMNeT++ 5.x and INET 3.x. FLoRa is authored by Mariusz Slabicki and Gopika Premsankar. [6][14] FLoRa basically allows the creation of LoRa networks with specific modules designed for LoRa nodes, gateways, and network servers. Application logic can be arranged as standalone modules that connect to the network server. The network server and connected nodes support dynamic management of configuration parameters through Adaptive Data Rate (ADR). Furthermore, energy consumption statistics can be collected. Fig. 6: SinkHole Attack example. Fig. 7: Change in Threshold. Fig. 8: IoT wireless nodes communicate with a server. #### Iv-B1 Preparation Phase The network description file (.ned) contains the topology of the network as well as the modeled ISO/OCTAVE Allegro security standards. Subsequently, an initialization file (.ini) is created to control the parameters and execution of the network description file. Furthermore, we integrate a customized component in the .ini file to perform a ping sweep. At this point the simulated network is ready to perform network discovery using the given parameters in the (.ned) file. These parameters can further be fine-tuned using an optimization component written in C++. #### Iv-B2 Information Gathering Phase In this phase, a combination of frameworks is used for much deeper stress testing and vulnerability scanning of the discovered assets. For instance, ANSA is used to enable further configuration of active devices to predict network behavior. ANSA contains many improvements to the networking protocols used in the INET framework. Supported protocols include the Gateway Load-Balancing Protocol (GLBP), Intermediate System to Intermediate System (IS-IS), and many layer 2 (L2) management protocols. This allows gathering further information about routing and switching [6][15]. The FLoRa framework is used in this phase to support end-to-end simulations for gateways and low-power devices like IIoT. Also, FLoRa supports gathering information about power consumption, which is considered valuable in risk analysis [14]. To be able to simulate Wireless Sensor Networks (WSN), Body Area Networks (BAN), and embedded devices, the Castalia framework is implemented in this phase to provide realistic information about path loss and interference and insight into the MAC protocols in use. In the Information Gathering phase, support for Industrial Control Systems is also required [13]. The FiCo4OMNET framework is designed to give support for CAN and FlexRay technologies to enable simulation of remote mobility [7]. Finally, the Fieldbus framework is utilized due to its capabilities of simulating ICS networks and collecting information [6]. #### Iv-B3 Analysis and Assessment Phase In the IIoT-ARAS implementation, the component in this phase examines network strength and availability by examining network bandwidth, bottlenecks, points of failure, and network configuration errors in general, using the ANSA framework. In addition, simulated attacks are performed to test compliance with the CIA triad and to examine whether a vulnerability will result in a threat, using the NETA framework. 
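To make the kind of check performed in this phase concrete, the short sketch below computes the Packet Delivery Ratio and Dropping Ratio defined in Section VI from simulated packet counts and flags a run whose delivery ratio falls below a threshold. The counts and the threshold are illustrative placeholders, not values produced by IIoT-ARAS.

```python
def delivery_metrics(transmitted: int, delivered: int) -> tuple[float, float]:
    """Packet Delivery Ratio (PDR) and Dropping Ratio (DR), cf. Eqs. (1)-(2) in Section VI."""
    lost = transmitted - delivered
    return delivered / transmitted, lost / transmitted

# Hypothetical counts read from a simulated run (e.g. a NETA IP-dropping scenario).
transmitted, delivered = 10_000, 6_200
pdr, dr = delivery_metrics(transmitted, delivered)

# Illustrative acceptance threshold: flag the run if more than 10% of packets are dropped.
PDR_FLOOR = 0.90
status = "anomalous" if pdr < PDR_FLOOR else "normal"
print(f"PDR = {pdr:.2%}, DR = {dr:.2%} -> {status}")
```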
#### Iv-B4 Reporting and Closure Phase In this phase, the use of all frameworks is necessary to guarantee detailed reporting and closure. The phase is under development to calculate probabilities and impacts and to produce threat and risk predictions, in addition to identifying vulnerable configurations and setups. Each of the previously used frameworks (INET, INETMANET, ANSA, NETA, FLoRa, Castalia, FiCo4OMNET, Fieldbus) provides a number of logs and data that can be used to present the status of the network. Integrating and analyzing the collected data will aid in successful risk assessment and prediction. Fig. 9: Log files for predicting anomalies. ## VI IIoT-ARAS Initial Testing ### _IP Dropping Attack_ In an IP dropping attack, affected nodes drop received packets instead of forwarding them to the intended party. The attack can compromise bandwidth, quality of service (QoS), and the availability of network resources. Figure 10 shows the ratio between sent and received packets. It is clear that the packet loss ratio is significant. For evaluation purposes, the simulation of the attack succeeded and the expected results were generated. For future testing and evaluation, we will add more testing and performance metrics, such as the Packet Delivery Ratio (PDR) and the Dropping Ratio (DR): \[\mathrm{PDR}=P_{d}/P_{t} \tag{1}\] \[\mathrm{DR}=P_{l}/P_{t} \tag{2}\] where \(P_{d}\) denotes packets delivered, \(P_{t}\) packets transmitted, and \(P_{l}\) packets lost. ## VII Limitation and Future Work The main challenge we face is to test and assess a heterogeneous network built from different components that communicate via several protocols. The ever-changing nature of, and incompatibility issues in, the IT/OT convergence world will not go away. For testing purposes, we ran and analyzed sample attacks on a simulated network to ensure validity; nonetheless, we will explore the performance of IIoT-ARAS on a much larger-scale real network. In addition, we will work on integrating all the different components of IIoT-ARAS into one structured system, introduce socket programming to be able to connect the tool to real physical networks, create more simulated network attacks for testing, and work on enhancing asset discovery by adopting enhanced versions of known protocols such as uPnP, SNMP, SolarWinds Ping Sweep, and others. Furthermore, we will work on creating a simulation framework for the BACnet protocol, one of the most commonly used ICS protocols [17]. As a next phase, we are considering introducing machine learning principles to IIoT-ARAS to aid self-decision and protection by utilizing in-memory objects. ## VIII Conclusion IIoT-ARAS is a tool used to evaluate and assess vulnerabilities and the associated risks based on OCTAVE Allegro and the initial ISO/IEC 27030 guidelines. The main purpose is to examine IT/OT convergence, where information technology meets industrial operations. A basic IT/OT network is evaluated against a number of known attacks to record the change in threshold and behavior. This will provide input to further study of malware detection and prevention. The work is a continuing effort and will be tested against physical IT/OT networks. ## Acknowledgement This work is supported by the National Science Foundation (NSF) under Grant Number 1850054.
2301.08268
Nonlocality and entanglement in measured critical quantum Ising chains
We study the effects of measurements, performed with a finite density in space, on the ground state of the one-dimensional transverse-field Ising model at criticality. Local degrees of freedom in critical states exhibit long-range entanglement, and as a result, local measurements can have highly nonlocal effects. Our analytical investigation of correlations and entanglement in the ensemble of measured states is based on properties of the Ising conformal field theory (CFT), where measurements appear as (1+0)-dimensional defects in the (1+1)-dimensional Euclidean spacetime. So that we can verify our predictions using large-scale free-fermion numerics, we restrict ourselves to parity-symmetric measurements. To describe their averaged effects analytically we use a replica approach, and we show that the defect arising in the replica theory is an irrelevant perturbation to the Ising CFT. Strikingly, the asymptotic scalings of averaged correlations and entanglement entropy are therefore unchanged relative to the ground state. In contrast, the defect generated by postselecting on the most likely measurement outcomes is exactly marginal. We then find that the exponent governing postmeasurement order parameter correlations, as well as the ''effective central charge'' governing the scaling of entanglement entropy, vary continuously with the density of measurements in space. Our work establishes new connections between the effects of measurements on many-body quantum states and of physical defects on low-energy equilibrium properties.
Zack Weinstein, Rohith Sajith, Ehud Altman, Samuel J. Garratt
2023-01-19T19:03:37Z
http://arxiv.org/abs/2301.08268v2
# Nonlocality and entanglement in measured critical quantum Ising chains ###### Abstract We study the effect of measurements, performed with a finite density in space, on the ground state of the one-dimensional transverse-field Ising model (TFIM) at criticality. Local degrees of freedom in critical states exhibit long-range entanglement and, as a result, local measurements can have highly nonlocal effects. Our analytical investigation of correlations and entanglement in the ensemble of measured states is based on properties of the Ising conformal field theory (CFT), where measurements appear as (1+0)-dimensional defects in the (1+1)-dimensional Euclidean spacetime. So that we can verify our predictions using large-scale free-fermion numerics, we restrict ourselves to parity-symmetric measurements. To describe their averaged effect analytically we use a replica approach, and we show that the defect arising in the replica theory is an irrelevant perturbation to the Ising CFT. Strikingly, the asymptotic scaling of averaged correlations and entanglement entropy are therefore unchanged relative to the ground state. In contrast, the defect generated by postselecting on the most likely measurement outcomes is exactly marginal. We then find that the exponent governing post-measurement order parameter correlations, as well as the "effective central charge" governing the scaling of entanglement entropy, vary continuously with the density of measurements in space. Our work establishes new connections between the effects of measurements on many-body quantum states and of physical defects on low-energy equilibrium properties. + Footnote †: ZW and RS contributed equally to this work. ## I Introduction Measuring one of the qubits in a Bell pair nonlocally alters the state of the unmeasured qubit. In many-body systems, where the entanglement of a state can be highly complex, the nonlocal effects of measurements can give rise to a remarkable variety of different structures [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. Strikingly, even when starting from a state with short-ranged entanglement, one can use measurements to create topological order and other long-range entangled states [10; 11; 12; 13; 3; 5; 6; 3; 14; 3]. In the context of the measurement-induced phase transition in quantum circuits [15; 16; 17; 18; 19; 20; 21; 22; 23], the nonlocal effects of measurements are known to be crucial for the emergence of conformal symmetry at the critical point [2]. Given such a diverse range of phenomena, it is important to seek unifying principles underlying the effects of measurements on many entangled degrees of freedom. Critical ground states [24] in one spatial dimension here offer a high degree of theoretical control because universal structures are described at long distances by (1+1)-dimensional conformal field theories (CFTs) [25; 26; 27]. Moreover, as shown in Ref. [9], studies of the effects of measurements on these states are closely related to problems arising in the theory of surface critical phenomena [28]. This connection has more recently appeared in studies of the effects of local decoherence on topological [29; 30; 31] and critical states [32; 31]. In this work we set out to understand the effects of measurements on the ground state of the transverse-field Ising model (TFIM) at criticality, which is described at long distances by the Ising CFT [24], and to initiate the study of the entanglement entropy of the post-measurement quantum states. 
The structure of these states can be understood by considering the introduction of (1+0)-dimensional defects to the Ising CFT, a problem which has been studied extensively both in and out of equilibrium [33; 34; 35; 36; 37; 38]. Our study of the TFIM is motivated in part by the aim of observing measurement-induced collective phenomena in experimental quantum simulators. As discussed in Ref. [9] (see also Refs. [11; 39; 40]), the effects of large numbers of measurements can, quite generally, be observed when experimental data is complemented by results from a simulation, thereby avoiding the infamous 'postselection problem' [39; 18; 41]. This raises the question of which phenomena can be observed in exact simulations. The TFIM is a natural setting to explore this because, within certain measurement schemes, the many-body state can be represented exactly with polynomial computational resources. Figure 1: Schematic depiction of the measurement protocol considered in this work. The ground-state \(\ket{\psi_{\mathrm{g.s.}}}\) of the critical transverse-field Ising model (1) is measured using the measurement operator \(\hat{M}_{\mathbf{m}}\), which is a product of an extensively large set of local projectors. The remaining state \(\ket{\psi_{\mathbf{m}}}\propto\hat{M}_{\mathbf{m}}\ket{\psi_{\mathrm{g.s.}}}\) retains nontrivial long-range correlations and entanglement scaling. With this hybrid of quantum and classical simulation in mind, throughout this work we will emphasize the connection between the effects of projective measurements on a lattice model, which bears a close relation to the situation in experiment, and that of defects in a CFT. We focus on parity-preserving local measurements performed with a finite density in space. We first consider their averaged effect, weighting the contributions from different outcomes according to the Born rule. To characterize a post-measurement state one can compute the expectation value of an observable but, since this object is linear in the post-measurement density matrix, a naive average over runs of the experiment converts our local measurements into a dephasing channel [42]. In order to diagnose the effects of measurement on average, it is instead necessary to consider quantities post-measurement that are nonlinear in the density matrix, such as connected correlation functions and the entanglement entropy of a subregion. Following Ref. [9] we use a replica approach to study averages of these nonlinear objects. These observables can then be studied at long distances using a replicated Ising CFT, where measurements give rise to an inter-replica coupling at a fixed imaginary time (a 'spacelike' defect). For averages over parity-preserving measurements this defect is irrelevant under the renormalization group (RG), and consequently long-distance properties of the ensemble of post-measurement states are not significantly modified relative to the ground state. Remarkably, critical correlations are therefore robust to measurements: the exponents governing power-law correlations between unmeasured qubits, and the logarithmic scaling of entanglement entropy, are unchanged. Following this, we consider the effects of 'forced' measurements, which in practice would correspond to postselecting for a particular set of outcomes. 
We focus our attention on the single most likely measurement outcome for a given set of measurement locations; these generate a marginal defect which, in the Ising CFT, appears as energy operators inserted along a line of fixed imaginary time. This type of defect has been analyzed in the context of classical statistical mechanics [33; 34; 35; 36], and is known to result in order-parameter correlations with a continuously-varying power-law exponent, which we observe numerically. A defect of this kind at a fixed point in space (i.e. a 'timelike' defect) has also been shown to result in a half-system entanglement entropy with a continuously-varying effective central charge [37; 43; 44; 45; 46; 47; 48].
where \(x\) is the spatial coordinate and \(\tau\) is the imaginary time. Here \(\psi=[\psi_{1},\psi_{2}]^{T}\) is a two-component Grassmann field, with \(\psi_{1}(x)\) and \(\psi_{2}(x)\) reproducing correlations of \(\gamma_{2j-1}\) and \(\gamma_{2j}\) respectively, and \(\sigma^{x}\) and \(\sigma^{y}\) are the Pauli matrices acting on the two components of \(\psi\). The Ising CFT is obtained upon setting \(m\propto g-1\) to zero. It is useful to note that \(\psi\) has scaling dimension \([\psi]=[x^{-1/2}]=1/2\). We provide more details in Appendix B on the correspondence between the lattice model and the continuum field theory. Throughout this work, we primarily focus on three observables: namely, the order-parameter correlation function \(C(r)\), the connected energy density correlation function \(G(r)\), and the entanglement entropy \(S(r)\) of a contiguous subregion \(A=[0:r)\) of \(r\) sites. In the ground state, these are [52; 26] \[\begin{split} C_{\text{g.s.}}(r)&\equiv\left\langle Z_{0}Z_{r}\right\rangle_{\text{g.s.}}\sim r^{-1/4},\\ G_{\text{g.s.}}(r)&\equiv\left\langle X_{0}X_{r}\right\rangle_{\text{g.s.}}-\left\langle X_{0}\right\rangle_{\text{g.s.}}\left\langle X_{r}\right\rangle_{\text{g.s.}}\sim r^{-2},\\ S_{\text{g.s.}}(r)&\equiv-\operatorname{tr}\rho_{\text{g.s.}}^{A}\log\rho_{\text{g.s.}}^{A}\sim\frac{1}{6}\log r+b_{0},\end{split} \tag{4}\] where \(\sim\) indicates the asymptotic scaling behavior of these three observables for large \(r\) at the critical point, \(\left\langle\cdot\right\rangle_{\text{g.s.}}=\left\langle\psi_{\text{g.s.}}\right|\cdot\left|\psi_{\text{g.s.}}\right\rangle\) denotes ground-state correlations, and \(\rho_{\text{g.s.}}^{A}=\operatorname{tr}_{A^{c}}\left|\psi_{\text{g.s.}}\right\rangle\!\!\left\langle\psi_{\text{g.s.}}\right|\) is the reduced density matrix of subsystem \(A\). In the last equation, the coefficient \(1/6\) corresponds to a central charge \(c=1/2\), while \(b_{0}\) is a nonuniversal constant [52]. ## III Born ensemble projective measurements Because of the algebraic correlations and long-range entanglement of the critical ground state, local measurements performed on the ground state \(\left|\psi_{\text{g.s.}}\right\rangle\) of the critical TFIM can potentially exhibit highly nonlocal effects [9]. To determine the effect of projective measurements on the ground state, we randomly perform a projective measurement of \(X_{j}\) at each site \(j\) with probability \(p\), with measurement outcomes sampled according to the Born rule. The post-measurement states remain nontrivial on the \(\sim(1-p)N\) unmeasured qubits, and in this section we aim to characterize the average long-distance behavior of correlations and entanglement in the ensemble of such measured states. Our measurement protocol is conveniently described using a measurement operator \(\hat{M}_{\mathbf{m}}=\prod_{j=1}^{N}\hat{M}_{m_{j},j}\), which is a product of the local measurement operators \[\hat{M}_{0,j}=\sqrt{1-p},\quad\hat{M}_{\pm 1,j}=\sqrt{p}\frac{1\pm X_{j}}{2}. \tag{5}\] Here \(m_{j}=0\) corresponds to a measurement outcome in which no measurement is performed on site \(j\), while \(m_{j}=\pm 1\) corresponds to a projective measurement of \(X_{j}\) with result \(\pm 1\). 
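Spelled out for a single site, these operators resolve the identity, since the two projectors are idempotent and sum to one:
\[
\sum_{m_{j}}\hat{M}_{m_{j},j}^{\,2}=(1-p)+p\left(\frac{1+X_{j}}{2}\right)^{2}+p\left(\frac{1-X_{j}}{2}\right)^{2}=(1-p)+p\,\frac{1+X_{j}}{2}+p\,\frac{1-X_{j}}{2}=1 .
\]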
Naturally, the full set of measurement operators satisfy the probability-conserving condition \(\sum_{\mathbf{m}}\hat{M}_{\mathbf{m}}^{2}=1\) (in other words, the set of all \(\hat{M}_{\mathbf{m}}^{2}\) constitute a positive operator-valued measure [42]). The measurement outcome \(\mathbf{m}\) occurs with Born probability \(p_{\mathbf{m}}\) and results in the post-measurement state \(\left|\psi_{\mathbf{m}}\right\rangle\), where \[\left|\psi_{\mathbf{m}}\right\rangle=\frac{\hat{M}_{\mathbf{m}}\left|\psi_{ \text{g.s.}}\right\rangle}{\sqrt{\left\langle\hat{M}_{\mathbf{m}}^{2}\right\rangle _{\text{g.s.}}}},\quad p_{\mathbf{m}}=\left\langle\hat{M}_{\mathbf{m}}^{2} \right\rangle_{\text{g.s.}}. \tag{6}\] In Appendix G we additionally discuss projective measurements of \(Z_{j}Z_{j+1}\) for each bond. The results are qualitatively similar to case of \(X_{j}\) measurements discussed here; in the continuum limit, both operators are given to leading order by the Ising CFT energy operator [53]. We would like to determine the typical behavior of long-range correlations in the states \(\left|\psi_{\mathbf{m}}\right\rangle\). As has been elaborated elsewhere [9], although a given set of measurements can have nonlocal effects on the ground state, the averaged behavior of linear observables \(\overline{\left\langle O\right\rangle_{\mathbf{m}}}=\sum_{\mathbf{m}}p_{ \mathbf{m}}\left\langle\psi_{\mathbf{m}}\right|O\left|\psi_{\mathbf{m}}\right\rangle\) is identical to the behavior of observables following a series of local quantum channels. Since local quantum channels can only exhibit local effects on the ground state, the nonlocality of measurements is hidden from these averages. Instead, we focus on the measurement-averaged behavior of observables which are nonlinear in the density matrix \(\rho_{\mathbf{m}}\equiv\left|\psi_{\mathbf{m}}\right\rangle\!\!\left\langle \psi_{\mathbf{m}}\right|\): namely, the _squared_ order-parameter correlation function \(C_{\mathbf{m}}^{2}(r)\), as well as the connected energy density correlation function \(G_{\mathbf{m}}(r)\) and the entanglement entropy \(S_{\mathbf{m}}(r)\), the latter two of which are already nonlinear observables. Here, the subscripts \(\mathbf{m}\) indicate that these observables are computed with respect to the post-measurement state \(\left|\psi_{\mathbf{m}}\right\rangle\), rather than \(\left|\psi_{\text{g.s.}}\right\rangle\) as in Eq. (4). Explicitly, \[\begin{split} C_{\mathbf{m}}^{2}(r)&\equiv\left\langle Z _{0}Z_{r}\right\rangle_{\mathbf{m}}^{2},\\ G_{\mathbf{m}}(r)&\equiv\left\langle X_{0}X_{r} \right\rangle_{\mathbf{m}}-\left\langle X_{0}\right\rangle_{\mathbf{m}}\left \langle X_{r}\right\rangle_{\mathbf{m}},\\ S_{\mathbf{m}}(r)&\equiv-\operatorname{tr}\rho_{ \mathbf{m}}^{A}\log\rho_{\mathbf{m}}^{A}.\end{split} \tag{7}\] We now describe how averages of these objects, with weights given by the Born probabilities \(p_{\mathbf{m}}\), can be studied analytically. ### Replica field theory To analyze the average effect of measurements on these nonlinear observables, we develop a replica approach analogous to the one employed in Ref. [9]. The resulting replica observables are described by a replicated Ising CFT in the continuum limit, and we show that the average effect of projective measurements on the ground state is to couple the replicas together along the \(\tau=0\) axis in Euclidean spacetime. A simple scaling analysis will then suggest that this coupling is irrelevant. 
For purposes of illustration, consider the average of \(G_{\mathbf{m}}(r)\). We will comment on \(C_{\mathbf{m}}^{2}(r)\) and \(S_{\mathbf{m}}(r)\) at the end of this section. Starting with just the disconnected piece, the average is given by (8) where we have used \(\left[X_{0},\hat{M}_{\mathbf{m}}\right]=0\). The difficulty in averaging this quantity directly lies in the nontrivial denominator arising from the normalization of \(\left|\psi_{\mathbf{m}}\right\rangle\). In order to compute observables of this form, we employ the following replica scheme: (9) In this scheme, we effectively weight each set of measurement outcomes \(\mathbf{m}\) by the alternative probability distribution \(p_{\mathbf{m}}^{n}/\sum_{\mathbf{m}^{\prime}}p_{\mathbf{m}^{\prime}}^{n}\), thereby biasing the distribution towards the most likely measurement outcomes. By writing the product of expectation values as a single expectation value over an \(n\)-fold replicated Hilbert space, we obtain \(\overline{G_{\mathbf{m}}(r)}\) as the \(n\to 1\) replica limit of \[\overline{G_{\mathbf{m}}^{(n)}(r)}=\frac{\left\langle\psi_{\text{g.s.}}^{ \otimes n}\right|(X_{0}^{(0)}X_{r}^{(0)}-X_{0}^{(0)}X_{r}^{(1)})\hat{M}_{\text{ avg}}\left|\psi_{\text{g.s.}}^{\otimes n}\right\rangle}{\left\langle\psi_{\text{g.s.}}^{ \otimes n}\right|\hat{M}_{\text{avg}}\left|\psi_{\text{g.s.}}^{\otimes n} \right\rangle}. \tag{10}\] Here \(X_{j}^{(\alpha)}\) denotes the \(X_{j}\) operator in replica \(\alpha\), while \(\hat{M}_{\text{avg}}\) is given by \[\hat{M}_{\text{avg}} \equiv\sum_{\mathbf{m}}[\hat{M}_{\mathbf{m}}^{2}]^{\otimes n} \tag{11}\] \[=\prod_{j=1}^{N}\Bigg{\{}(1-p)^{n}+p^{n}\sum_{m_{j}=\pm 1}\left( \frac{1+m_{j}X_{j}}{2}\right)^{\otimes n}\Bigg{\}}\] \[\propto\prod_{j=1}^{N}\Bigg{\{}1+\mu\sum_{r=1}^{\lfloor n/2 \rfloor}\sum_{1\leq\alpha_{1}<\ldots<\alpha_{2r}\leq n}\!\!\!X_{j}^{(\alpha_{1 })}\ldots X_{j}^{(\alpha_{2r})}\Bigg{\}},\] where \(\mu=[1+2^{n-1}(p^{-1}-1)^{n}]^{-1}\) is a monotonic function of \(p\), and we have neglected an overall constant which cancels between the numerator and denominator. Acting on \(\left|\psi_{\text{g.s.}}^{\otimes n}\right\rangle\), \(\hat{M}_{\text{avg}}\) has the effect of weakly locking the multiple replicas together by favoring spin configurations in which \(X_{j}^{(1)}=\ldots=X_{j}^{(n)}\). As in Ref. [9], we now interpret the insertion of \(\hat{M}_{\text{avg}}\) as a spacelike defect in Euclidean spacetime. Towards this end, we rewrite both the numerator and denominator of Eq. (10) using an imaginary-time path integral of Majorana fermions, and perform a continuum limit; technical details are contained in Appendix D. The denominator of Eq. (10) is then given by the partition function \(\mathcal{Z}_{M}^{(n)}\) of a multi-replica Ising field theory with an inter-replica coupling along the \(\tau=0\) line, defined by: \[\mathcal{Z}_{M}^{(n)}\equiv\int\prod_{\alpha=1}^{n}D\psi^{(\alpha)}\,e^{- \sum_{\alpha=1}^{n}\mathcal{S}_{0}[\psi^{(\alpha)}]-\mathcal{S}_{M}^{(n)}[ \{\psi^{(\alpha)}\}]}, \tag{12}\] where \(\mathcal{S}_{M}^{(n)}[\{\psi^{(\alpha)}\}]\) gives the coupling between replicas due to measurements: \[\mathcal{S}_{M}^{(n)}=-\mu\sum_{\alpha<\beta}\int\mathrm{d}x\,(\psi^{T}\sigma ^{y}\psi)^{(\alpha)}(\psi^{T}\sigma^{y}\psi)^{(\beta)}+\ldots, \tag{13}\] where the ellipsis denotes four-replica terms and higher, which are less relevant than the two-replica term written explicitly. The numerator of Eq. (10) is given by a multi-replica correlation function having the same action. 
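As a concrete illustration of the inter-replica coupling, for the smallest nontrivial replica number \(n=2\) the average measurement operator (11) collapses, up to an overall constant, to a single term,
\[
\hat{M}_{\text{avg}}\propto\prod_{j=1}^{N}\Big[1+\mu\,X_{j}^{(1)}X_{j}^{(2)}\Big],\qquad \mu=\big[1+2\,(p^{-1}-1)^{2}\big]^{-1},
\]
which favors configurations with \(X_{j}^{(1)}=X_{j}^{(2)}\) and, in the continuum, produces the two-replica term displayed in Eq. (13).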
Note that the fields in \(\mathcal{S}_{M}^{(n)}\) are evaluated strictly at \(\tau=0\). From a simple scaling analysis, one immediately finds that \(\mu\) has dimension \(-1\), and is therefore irrelevant. Furthermore, we show in Appendix D that higher-order corrections in the perturbative RG cannot generate relevant or marginal terms; more precisely, we show that any possible marginal terms generated by the perturbative RG are inconsequential to observables in the \(n\to 1\) replica limit. ### Correlation Functions Having developed a field-theoretical framework for analyzing the effect of measurements on the TFIM ground state, we now discuss the consequences for the nonlinear observables of Eq. (7). In the previous section we showed that the average effect of \(X_{j}\) measurements on the correlation function \(\overline{G_{\mathbf{m}}(r)}\), with outcomes sampled according to the Born rule, is to contribute an irrelevant defect-like perturbation to the replicated Ising CFT. We therefore expect \(\overline{G_{\mathbf{m}}(r)}\) to exhibit the same asymptotic scaling as in the unmeasured ground state. Specifically, we expect \[\overline{G_{\mathbf{m}}(r)}\sim r^{-2}\quad(r\gg 1). \tag{14}\] On the other hand, the preceding analysis does not immediately apply to \(\overline{C_{\mathbf{m}}^{2}(r)}\), since \(Z_{j}\) does not commute with the measurement operator \(\hat{M}_{\mathbf{m}}\) whenever site \(j\) is measured. Instead, it is useful to note that both \(G_{\mathbf{m}}(r)\) and \(C_{\mathbf{m}}(r)\) vanish for every measurement realization in which either site \(0\) or site \(r\) is measured. We can therefore freely replace our measurement averages in both quantities with a restricted ensemble in which sites \(0\) and \(r\) are unmeasured. The resulting average measurement operator in this case then commutes with both \(X\) and \(Z\) observables, and the above mapping follows identically for both cases. We elaborate on this discussion in more detail in Appendix E, where we show explicitly that \(\overline{C_{\mathbf{m}}^{2}(r)}\) is given at long distances by \[\overline{C_{\mathbf{m}}^{2}(r)}\simeq\lim_{n\to 1}\frac{\left\langle\psi_{\text{g.s.}}^{\otimes n}\right|Z_{0}^{(0)}Z_{r}^{(0)}Z_{0}^{(1)}Z_{r}^{(1)}\hat{M}_{\text{avg}}\left|\psi_{\text{g.s.}}^{\otimes n}\right\rangle}{\left\langle\psi_{\text{g.s.}}^{\otimes n}\right|\hat{M}_{\text{avg}}\left|\psi_{\text{g.s.}}^{\otimes n}\right\rangle}. \tag{15}\] Whereas (10) is exact, Eq. (15) is expected to hold asymptotically at long distances. We may now immediately apply the analysis of the preceding section: since the contribution (13) to the action due to measurements is irrelevant, we again expect \(\overline{C_{\mathbf{m}}^{2}(r)}\) to asymptotically recover its ground-state scaling at long distances: \[\overline{C_{\mathbf{m}}^{2}(r)}\sim r^{-1/2}\quad(r\gg 1). \tag{16}\] We now numerically verify these analytical predictions. A crucial benefit of focusing on parity-preserving projective measurements of the TFIM is that our analytical predictions can be confirmed using large-scale free-fermion numerics [49; 50; 51].
We provide explicit details of our numerical approach in Appendix A; in short, since arbitrary \(k\)-point correlations of the quadratic Hamiltonian (2) can be obtained using Wick's theorem, the full physical content of the state \(\ket{\psi_{\text{g.s.}}}\) is contained in the \(N(2N-1)\) independent entries of the covariance matrix \(G_{ij}=\left\langle i\gamma_{i}\gamma_{j}\right\rangle_{\text{g.s.}}-i\delta_{ij}\), rather than in \(2^{N}\) complex amplitudes (as would be the case in a generic nonintegrable system). Figure 2 depicts the ensemble-averaged correlation functions \(\overline{G_{\mathbf{m}}(r)}\) and \(\overline{C_{\mathbf{m}}^{2}(r)}\) for several measurement probabilities and system sizes, computed numerically using Monte-Carlo sampling of both the measurement locations and outcomes. Utilizing conformal invariance, we show in Appendix C that both of these correlation functions are predicted to be functions of the single parameter \[s=\frac{N}{\pi}\sin\Big{(}\frac{\pi r}{N}\Big{)}. \tag{17}\] From the analysis of Section III.1, we expect \(\overline{G_{\mathbf{m}}(r)}\sim s^{-2}\) and \(\overline{C_{\mathbf{m}}^{2}(r)}\sim s^{-1/2}\) at sufficiently large values of \(s\). Figure 2 supports this conclusion, with excellent finite-size scaling collapses of both correlation functions. Interestingly, \(\overline{C_{\mathbf{m}}^{2}(r)}\) does not exhibit any pronounced crossover behavior at any measurement probability: even at \(p=0.8\), measurements reduce the power-law prefactor without altering the \(s^{-1/2}\) scaling. In contrast, \(\overline{G_{\mathbf{m}}(r)}\) exhibits stronger crossover behavior at short distances, and it would be interesting to understand the origin of this effect. Note that we have omitted the \(p=0.8\) curve in \(\overline{G_{\mathbf{m}}(r)}\) since this exhibits strong finite-size effects. We nevertheless expect that the \(s^{-2}\) decay observed at smaller measurement probabilities will be recovered at sufficiently large values of \(s\).

Figure 2: Ensemble-averaged correlation functions \(\overline{C_{\mathbf{m}}^{2}(r)}\) (left) and \(\overline{G_{\mathbf{m}}(r)}\) (right) as defined in Eq. (7), for measurement probabilities \(p=0.2\) (blue), \(0.5\) (green), and \(0.8\) (red), and for system sizes \(N=32\), \(64\), \(128\), and \(256\) (light to dark). Data is plotted as a function of \(s=\frac{N}{\pi}\sin\Big{(}\frac{\pi r}{N}\Big{)}\) to achieve scaling collapse of the various system sizes. Dotted lines depict the behavior in the unmeasured system. Both correlation functions exhibit excellent scaling collapses with the power-law exponents of the unmeasured system at sufficiently large distances.

### Entanglement Entropy Finally, we address the average behavior of the entanglement entropy by noting that it can be obtained via the replica limit [20] \[\overline{S_{\mathbf{m}}(r)}=\lim_{n\to 1}\frac{1}{1-n}\log\Bigg{\{}\frac{\sum_{\mathbf{m}}p_{\mathbf{m}}^{n}\operatorname{tr}\big{[}(\rho_{\mathbf{m}}^{A})^{n}\big{]}}{\sum_{\mathbf{m}}p_{\mathbf{m}}^{n}}\Bigg{\}}, \tag{18}\] where \(\rho_{\mathbf{m}}^{A}=\operatorname{tr}_{A^{c}}[\hat{M}_{\mathbf{m}}\ket{\psi_{\text{g.s.}}}\!\bra{\psi_{\text{g.s.}}}\hat{M}_{\mathbf{m}}]/p_{\mathbf{m}}\). Following Refs. [52; 54], the numerator within the logarithm can be understood as the partition function of the same model defined on an \(n\)-sheeted Riemann surface with a branch cut running from \((\tau,x)=(0,0)\) to \((\tau,x)=(0,r)\). The impurity (13) due to measurements, which couples fields
between sheets of the Riemann surface, can be taken to lie at \(\tau=0^{-}\) just below the branch cut. Given that the impurity is irrelevant, we expect that the asymptotic logarithmic scaling \(\frac{1}{6}\log r\) will be recovered at sufficiently large \(r\), up to a renormalization of the nonuniversal constant \(b_{0}\). Figure 3 depicts the ensemble-averaged entanglement entropy \(\overline{S_{\mathbf{m}}(r)}\) for several measurement probabilities and system sizes, again plotted as a function of \(s\). We find \(\overline{S_{\mathbf{m}}(r)}\simeq\frac{1}{6}\log s+b_{1}(p)\) for an \(r\)-independent constant \(b_{1}(p)\) which decreases with \(p\). Remarkably, we see from Fig. 3 that the logarithmic scaling of the entanglement entropy, and the prefactor \(1/6\), are unaffected by measurements, even at large measurement strengths. Here we have shown that, on average, the correlations characteristic of the critical TFIM are robust to parity-preserving measurements. However, as we show in the next section, for the most likely measurement outcomes correlations are altered radically relative to the ground state. ## IV Forced projective measurements In the previous section, we found that parity-preserving projective measurements sampled according to the Born rule fail to alter the asymptotic scaling of correlations or entanglement of the critical TFIM ground state \(\ket{\psi_{\text{g.s.}}}\). It is natural to ask whether an alternative measurement scheme can exhibit a larger effect on these observables. In previous work [9] we found that postselected "no-click" density measurements performed uniformly throughout a Luttinger liquid are relevant (irrelevant) for Luttinger parameters \(K<1\) (\(K>1\)). Motivated by this result, we now consider postselecting on a particular set of measurement outcomes in the TFIM. Since the \(K=1\) Luttinger liquid is related to two copies of the Ising CFT via bosonization [55], it is particularly interesting to consider postselected measurements in the TFIM: if a finite density of postselected projective measurements is believed to behave qualitatively similarly to a uniform strength of weak measurements, then the postselected measurements are expected to contribute marginally. Our measurement scheme is as follows: we again perform \(X_{j}\) measurements on each site with probability \(p\), but we now force the outcome \(X_{j}=+1\) for each measured site. This outcome corresponds to qubits aligned along the local fields, and is the single most likely outcome given the chosen measurement locations. It is convenient to describe such a measurement protocol with a measurement operator \(\hat{K}_{\mathbf{k}}=\prod_{j=1}^{N}\hat{K}_{k_{j},j}\) given by a product of local measurement operators \(\hat{K}_{k_{j},j}\), defined here as \[\hat{K}_{0,j}=1,\quad\hat{K}_{1,j}=\frac{1+X_{j}}{2}. \tag{19}\] Unlike the previous measurement scheme, where \(m_{j}=0,\pm 1\) is sampled according to the Born rule, in this scheme we simply choose to measure site \(j\) (\(k_{j}=1\)) or leave site \(j\) unmeasured (\(k_{j}=0\)) with probabilities \(p\) and \(1-p\), respectively. The state \(\ket{\psi_{\mathbf{k}}}\) is obtained with probability \(p_{\mathbf{k}}\), where \[\ket{\psi_{\mathbf{k}}}=\frac{\hat{K}_{\mathbf{k}}\ket{\psi_{\text{g.s.}}}}{\sqrt{\left<\hat{K}_{\mathbf{k}}\right>_{\text{g.s.}}}},\quad p_{\mathbf{k}}=p^{|\mathbf{k}|}(1-p)^{N-|\mathbf{k}|}, \tag{20}\] where \(|\mathbf{k}|=\sum_{j=1}^{N}k_{j}\) is the number of measurements performed.
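A matching brute-force sketch of this forced-measurement protocol may again be useful; as before it uses exact statevectors for a small chain rather than the free-fermion approach of Appendix A, and \(N\), \(p\), \(r\), the sample count, and the helper names are illustrative choices of ours. It differs from the Born-rule sketch of Sec. III only in that measured sites are projected onto \(X_{j}=+1\) with no outcome sampling.

```python
# Forced projective measurements, Eqs. (19)-(20): each site is measured with probability p
# and the outcome X_j = +1 is imposed by projection. Exact statevectors; illustrative only.
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
op_at = lambda op, j, N: reduce(np.kron, [op if i == j else I2 for i in range(N)])

N, p, r = 8, 0.5, 4
H = sum(-op_at(X, j, N) - op_at(Z, j, N) @ op_at(Z, (j + 1) % N, N) for j in range(N))
psi0 = np.linalg.eigh(H)[1][:, 0]                       # critical (g = 1) TFIM ground state

rng = np.random.default_rng(1)
Z0Zr = op_at(Z, 0, N) @ op_at(Z, r, N)
samples = []
for _ in range(200):
    psi = psi0.copy()
    for j in range(N):
        if rng.random() < p:                            # k_j = 1 with probability p
            psi = 0.5 * (psi + op_at(X, j, N) @ psi)    # apply K_{1,j} = (1 + X_j)/2
            psi /= np.linalg.norm(psi)                  # normalize as in Eq. (20)
    samples.append(float(psi @ Z0Zr @ psi))             # one sample of C_k(r)
print("forced-measurement average of <Z_0 Z_r>:", np.mean(samples))
```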
Our focus here will be on correlation functions \(G_{\mathbf{k}}(r)\) and \(C_{\mathbf{k}}(r)\), as well as the entanglement entropy \(S_{\mathbf{k}}(r)\), in the post-measurement states \(\ket{\psi_{\mathbf{k}}}\). These correlation functions are defined in analogy with \(G_{\mathbf{m}}(r)\), \(C_{\mathbf{m}}(r)\), and \(S_{\mathbf{m}}(r)\) [see Eq. (7)], respectively, differing only in the fact that they are evaluated for states \(\ket{\psi_{\mathbf{k}}}\) rather than \(\ket{\psi_{\mathbf{m}}}\). We now show how averages of these objects with respect to \(p_{\mathbf{k}}\) can be studied analytically. ### Replica Field Theory Since measurement outcomes are not sampled according to the Born rule, even observables linear in the post-measurement density matrix \(\rho_{\mathbf{k}}\equiv\ket{\psi_{\mathbf{k}}}\bra{\psi_{\mathbf{k}}}\) can be sensitive to the nonlocal effect of measurements. Due to the nontrivial denominator appearing in correlation functions arising from the normalization of \(\ket{\psi_{\mathbf{k}}}\), we nevertheless require a replica approach to average over measurement locations. In the following we show how the forced projective measurements appear in the field theory [see Eq. (25)]. Taking \(G_{\mathbf{k}}(r)\) again as an example, the average over disorder realizations is given by \[\overline{G_{\mathbf{k}}(r)}=\sum_{\mathbf{k}}p_{\mathbf{k}}\Bigg{[}\frac{\left\langle X_{0}X_{r}\hat{K}_{\mathbf{k}}\right\rangle_{\text{g.s.}}}{\left\langle\hat{K}_{\mathbf{k}}\right\rangle_{\text{g.s.}}}-\frac{\left\langle X_{0}\hat{K}_{\mathbf{k}}\right\rangle_{\text{g.s.}}\left\langle X_{r}\hat{K}_{\mathbf{k}}\right\rangle_{\text{g.s.}}}{\left\langle\hat{K}_{\mathbf{k}}\right\rangle_{\text{g.s.}}^{2}}\Bigg{]}, \tag{21}\] where we have used \(\left[X_{j},\hat{K}_{\mathbf{k}}\right]=0\). Due to the absence of Born factors in the sampling probabilities \(p_{\mathbf{k}}\), we can employ a replica approach directly analogous to those used in the classical statistical mechanics of disordered systems [56]. We obtain \(\overline{G_{\mathbf{k}}(r)}\) from the \(n\to 0\) limit of the replica quantity \(\overline{G_{\mathbf{k}}^{(n)}(r)}\), defined as \[\overline{G_{\mathbf{k}}^{(n)}(r)} =\frac{1}{\sum_{\mathbf{k}}p_{\mathbf{k}}\left<\hat{K}_{\mathbf{k}}\right>_{\text{g.s.}}^{n}}\sum_{\mathbf{k}}p_{\mathbf{k}}\Big{[}\left<\hat{K}_{\mathbf{k}}\right>_{\text{g.s.}}^{n-1}\left<X_{0}X_{r}\hat{K}_{\mathbf{k}}\right>_{\text{g.s.}}-\left<\hat{K}_{\mathbf{k}}\right>_{\text{g.s.}}^{n-2}\left<X_{0}\hat{K}_{\mathbf{k}}\right>_{\text{g.s.}}\left<X_{r}\hat{K}_{\mathbf{k}}\right>_{\text{g.s.}}\Big{]}=\frac{\left<\psi_{\text{g.s.}}^{\otimes n}\right|\left(X_{0}^{(0)}X_{r}^{(0)}-X_{0}^{(0)}X_{r}^{(1)}\right)\hat{K}_{\text{avg}}\ket{\psi_{\text{g.s.}}^{\otimes n}}}{\left<\psi_{\text{g.s.}}^{\otimes n}\right|\hat{K}_{\text{avg}}\ket{\psi_{\text{g.s.}}^{\otimes n}}}. \tag{22}\] As in the Born ensemble case, we have written the product of expectation values as an expectation value over an \(n\)-fold replicated ground state \(\ket{\psi_{\text{g.s.}}^{\otimes n}}\). The average measurement operator \(\hat{K}_{\text{avg}}\) is given by \[\hat{K}_{\text{avg}} \equiv\sum_{\mathbf{k}}p_{\mathbf{k}}[\hat{K}_{\mathbf{k}}]^{\otimes n} \tag{23}\] \[=\prod_{j=1}^{N}\Bigg{\{}(1-p)+p\bigg{(}\frac{1+X_{j}}{2}\bigg{)}^{\otimes n}\Bigg{\}}\] \[\propto\prod_{j=1}^{N}\Bigg{\{}1+\nu\sum_{r=1}^{n}\sum_{1\leq\alpha_{1}<\ldots<\alpha_{r}\leq n}X_{j}^{(\alpha_{1})}\ldots X_{j}^{(\alpha_{r})}\Bigg{\}},\] where \(\nu\) is a monotonically increasing function of \(p\) and we have again neglected an overall constant. The average effect of forced measurements on the multi-replica ground-state \(\ket{\psi_{\text{g.s.}}^{\otimes n}}\) is once again to weakly lock the replicas together. However, unlike \(\hat{M}_{\text{avg}}\), \(\hat{K}_{\text{avg}}\) contains terms with an odd number of \(X_{j}^{(\alpha)}\) factors. These terms bias towards amplitudes for which \(X_{j}^{(\alpha)}=+1\), as expected from the measurement scheme.
We can again interpret the insertion of \(\hat{K}_{\text{avg}}\) as a defect along the \(\tau=0\) line in Euclidean spacetime. The denominator of (22) is given by a partition function \(\mathcal{Z}_{K}^{(n)}\), analogous to that of Eq. (12): \[\mathcal{Z}_{K}^{(n)}\equiv\int\prod_{\alpha=1}^{n}D\psi^{(\alpha)}\,e^{-\sum_{\alpha=1}^{n}\mathcal{S}_{0}[\psi^{(\alpha)}]-\mathcal{S}_{K}^{(n)}[\{\psi^{(\alpha)}\}]}, \tag{24}\] where \(\mathcal{S}_{K}^{(n)}[\{\psi^{(\alpha)}\}]\) is given by \[\mathcal{S}_{K}^{(n)}=\nu\sum_{\alpha=1}^{n}\int\mathrm{d}x\,(\psi^{T}\sigma^{y}\psi)^{(\alpha)}+\ldots, \tag{25}\] and the ellipsis denotes irrelevant terms, including those listed explicitly in Eq. (13). The translation-invariant perturbation in Eq. (25) represents the dominant averaged effect of the forced projective measurements. In fact, this perturbation also arises from a forced weak measurement scheme that is manifestly translation invariant [9], as we discuss in Ref. [13]. Notably, this perturbation is _exactly_ marginal, and as we show below it has interesting consequences for the behavior of both correlation functions and the entanglement entropy. Since the leading term decouples across replicas, it will in fact be sufficient to focus on the single-replica theory in discussions of long-distance properties. We then arrive at a continuum theory identical to one arising in studies of lines of weakened bonds in two-dimensional classical Ising models [35], and therefore of local defects in the Hamiltonians of TFIMs. ### Correlation Functions First we discuss the effects of the perturbation (25) on the few-body correlation functions \(C_{\mathbf{k}}\) and \(G_{\mathbf{k}}\). Exact lattice calculations for two-dimensional classical Ising models, as well as field-theoretic analyses [35], have shown that energy-density correlators along the defect line retain the same scaling form as in the homogeneous Ising CFT. This can be understood simply by noting that the quadratic perturbation (25) does not modify the scaling dimension of the fermion operators \(\psi(\tau,x)\), and therefore cannot modify the scaling form of observables which are local in the fermion representation. We therefore once again expect at sufficiently long distances \[\overline{G_{\mathbf{k}}(r)}\sim r^{-2}\quad(r\gg 1), \tag{26}\] as in the unmeasured case. The order-parameter correlations \(C_{\mathbf{k}}(r)\), on the other hand, are nonlocal in the fermionic representation, and can be strongly modified by the defect (25). In particular, Refs. [33; 34] demonstrated that order-parameter correlations along the defect line of a classical Ising model exhibit nonuniversal scaling with a continuously varying exponent. We therefore similarly expect \(\overline{C_{\mathbf{k}}(r)}\) to exhibit a continuously varying power law: \[\overline{C_{\mathbf{k}}(r)}\sim r^{-2\Delta(p)}\quad(r\gg 1), \tag{27}\] where \(\Delta(p)\) defines the power-law scaling of \(\overline{C_{\mathbf{k}}(r)}\), with \(\Delta(0)=1/8\). Heuristically, the asymptotic limit of \(\Delta(p)\) as \(p\to 1\) can be inferred by writing \(C_{\mathbf{k}}(r)\) as \[\left\langle Z_{j}Z_{j+r}\right\rangle_{\mathbf{k}}=\left\langle\gamma_{2j-1}\Bigg{[}\prod_{i=j}^{j+r-1}X_{i}\Bigg{]}\gamma_{2j+2r-1}\right\rangle_{\mathbf{k}}. \tag{28}\] At large values of \(p\), \(X_{j}\ket{\psi_{\mathbf{k}}}=+\ket{\psi_{\mathbf{k}}}\) for a large fraction of sites \(j\).
The leading contribution to \(C_{\mathbf{k}}(r)\) then comes from \(\left\langle i\gamma_{2j-1}\gamma_{2j+2r-1}\right\rangle_{\text{g.s.}}\sim r^{-1}\), and we therefore expect \(\lim_{p\to 1}\Delta(p)=1/2\). The analytically predicted behavior of these two correlation functions can again be verified numerically. Figure 4 depicts the averaged correlation functions \(\overline{G_{\mathbf{k}}(r)}\) and \(\overline{C_{\mathbf{k}}(r)}\) for various measurement probabilities and system sizes, once again plotted as a function of the single parameter \(s\) [see Eq. (17)]. As in the case of measurements sampled from the Born ensemble, we observe an excellent finite-size scaling collapse of both correlation functions. We observe as predicted that \(\overline{G_{\mathbf{k}}(r)}\) retains its \(s^{-2}\) scaling for each measurement probability, while \(\overline{C_{\mathbf{k}}(r)}\sim s^{-2\Delta(p)}\) obtains a continuously varying critical exponent \(\Delta(p)\). With increasing \(p\), \(\Delta(p)\) increases monotonically towards an asymptotic value of \(1/2\). ### Entanglement Entropy Whereas the correlation functions \(\overline{G_{\mathbf{k}}(r)}\) and \(\overline{C_{\mathbf{k}}(r)}\) have natural interpretations in terms of analogous observables in either classical Ising models with defect lines or the TFIM with an ordinary timelike defect, the entanglement entropy \(\overline{S_{\mathbf{k}}(r)}\) has no immediately obvious analogue in either of these models. In this section, we will utilize conformal invariance of the Ising CFT to demonstrate a nontrivial connection between \(\overline{S_{\mathbf{k}}(r)}\) and the entanglement entropy of a model with ordinary timelike defects. In particular, we will show that the average entanglement entropy following forced projective measurements retains its logarithmic scaling, but with an _effective_ central charge \(c_{\text{eff}}(p)\) which continuously decreases with increasing measurement probability: \[\overline{S_{\mathbf{k}}(r)}\sim\frac{c_{\text{eff}}(p)}{3}\log r+b_{2}(p)\quad (r\gg 1). \tag{29}\] Here \(c_{\text{eff}}(p)\) is a monotonically decreasing function, with \(c_{\text{eff}}(0)=1/2\) and \(c_{\text{eff}}(1)=0\), and \(b_{2}(p)\) is a \(r\)-independent contribution that is generically different from \(b_{1}(p)\) in Sec. III.3, which we also expect to continuously decrease with increasing measurement probability. We present the basic qualitative argument here, and leave certain technical details for Appendix F. The irrelevance of inter-replica couplings in Eq. (25) indicates that it is sufficient to work directly at the fixed point [see also Appendix H]. We therefore consider the entanglement entropy of a contiguous subregion \(A\) of length \(r\) of the Ising CFT with a measurement defect along the \(\tau=0\) line, with the action \[\mathcal{S}^{*}[\psi]=\mathcal{S}_{0}[\psi]+\nu\int\mathrm{d}x\,\psi^{T} \sigma^{y}\psi, \tag{30}\] where \(\mathcal{S}_{0}\) is given by Eq. (3) with \(m=0\), and in the latter term \(\psi(\tau,x)\) is taken along the line \(\tau=0\). Following Refs. [52; 54], the entanglement entropy can be computed from the \(n\to 1\) limit of the ratio of two partition functions: \[S^{*}(r)=\lim_{n\to 1}\frac{1}{1-n}\log\bigg{\{}\frac{\mathcal{Z}_{n}}{ \mathcal{Z}_{1}^{n}}\bigg{\}}. 
\tag{31}\] Here \(S^{*}(r)\) denotes the entanglement entropy in the fixed-point model (30), \(\mathcal{Z}_{1}=\int D\psi\,e^{-\mathcal{S}^{*}[\psi]}\) is the single-replica partition function, and \(\mathcal{Z}_{n}\) is the partition function of an \(n\)-fold replicated theory subjected to the boundary conditions \[\psi^{(\alpha)}(\tau=0^{-},x)=\begin{cases}\psi^{(\alpha)}(\tau=0^{+},x),&x \not\in A\\ \psi^{(\alpha+1)}(\tau=0^{+},x),&x\in A.\end{cases} \tag{32}\] Alternatively, one can consider the \(n\) fields \(\psi^{(\alpha)}\) as a single field defined on an \(n\)-sheeted Riemann surface. The Riemann surface has a branch cut along the \(\tau=0^{+}\) axis, just above the measurement defect, running from \(x=-r/2\) to \(x=r/2\). Utilizing conformal invariance, we are free to perform a scaling transformation so as to set \(r=2\). The entanglement branch cut then lies along the \(x\)-axis with branch points located at \(x=\pm 1\), as depicted in Fig. 5(ai). We can now continuously deform the branch cut from the real line to the unit semicircle in the upper-half plane, as shown in Fig. 5(aii). As a theory defined on an \(n\)-sheeted Riemann surface, the precise location of the branch cut is unphysical and can be freely deformed, so long as the branch points at \(x=\pm 1\) are left unmodified. Equivalently, as a theory of \(n\) replicated fields subjected to the boundary conditions (32), the deformation of the branch cut amounts to defining a new set of fields \(\tilde{\psi}^{(\alpha)}(\tau,x)\) via \[\tilde{\psi}^{(\alpha)}(\tau,x)=\begin{cases}\psi^{(\alpha)}(\tau,x),&(\tau,x) \not\in\mathcal{D}\\ \psi^{(\alpha-1)}(\tau,x),&(\tau,x)\in\mathcal{D},\end{cases} \tag{33}\] where \(\mathcal{D}\) is the filled semicircle in the upper-half plane, shown in Fig. 5(aii). By deforming the entanglement cut onto the unit semicircle, we can relate the entanglement entropy in our measurement problem to that of a problem with ordinary timelike defects. Letting \(z=x+i\tau\) and \(z^{\prime}=x^{\prime}+i\tau^{\prime}\), we use the conformal mapping \[z\mapsto z^{\prime}=f(z)=-i\frac{L}{2\pi}\log z \tag{34}\] to map the infinite plane to a cylinder of circumference \(L\). The measurement defect maps to _two_ timelike defects at locations \(x^{\prime}=0\) and \(x^{\prime}=L/2\), while the deformed entanglement cut maps to a spacelike entanglement cut from \(x^{\prime}=0\) to \(x^{\prime}=L/2\), as shown in Fig. 5(aiii). We therefore obtain a relation between the average entanglement entropy \(S^{*}(r)\) of the Ising CFT in the presence of forced measurements, and the half-system entanglement entropy \(S^{*}_{d}(L/2)\) of the Ising CFT on a cylinder with ordinary defects at the entangling boundaries. Having established this connection, we can now make contact with previous studies of the effects of physical defects on the entanglement entropy [37; 38; 44; 45; 46]. For the case presented here with exactly marginal defect lines, these works suggest that the entanglement entropy should maintain its logarithmic growth in \(L\), but with an effective central charge \(c_{\text{eff}}^{*}(\nu)\) which continuously decreases with increasing defect strength: \[S_{d}^{*}(L/2)=\frac{c_{\text{eff}}^{*}(\nu)}{3}\log L+b_{d}(\nu). \tag{35}\] Using the transformation properties of correlation functions under conformal transformations, the results of the above sequence of mappings suggest the form (29) for the entanglement entropy of a subregion of length \(r\) in the original problem with forced measurements. 
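As a quick check of the geometry just described (using the principal branch \(\ln(-|x|)=\ln|x|+i\pi\)), the map of Eq. (34) sends
\[f(e^{i\theta})=\frac{L\theta}{2\pi}\quad(0\leq\theta\leq\pi),\qquad f(x)=-i\frac{L}{2\pi}\ln x\quad(x>0),\qquad f(x)=\frac{L}{2}-i\frac{L}{2\pi}\ln|x|\quad(x<0),\]
so the deformed entanglement cut (the unit semicircle) indeed lands on the spacelike segment \(\tau^{\prime}=0\), \(0\leq x^{\prime}\leq L/2\), while the two halves of the \(\tau=0\) measurement defect become the timelike lines \(x^{\prime}=0\) and \(x^{\prime}=L/2\).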
If the microscopic measurement probability \(p\) results in a fixed-point defect strength \(\nu(p)\), then \(c_{\text{eff}}(p)=c_{\text{eff}}^{*}(\nu(p))\). We can once again numerically verify the predicted behavior (29) for the entanglement entropy following forced projective measurements. Figure 5(b) depicts the average entanglement entropy \(\overline{S_{\mathbf{k}}(r)}\) for various measurement probabilities and system sizes; we again obtain an excellent finite-size scaling collapse by plotting as a function of the parameter \(s\) [see Eq. (17)]. As predicted, \(\overline{S_{\mathbf{k}}(r)}\) retains its logarithmic scaling at all observed measurement probabilities \(p\), with a continuously decreasing effective central charge \(c_{\text{eff}}(p)\).

Figure 5: (a): Spacetime diagrams depicting the relation between the average entanglement entropy \(\overline{S_{\mathbf{k}}(r)}\) of the TFIM following forced projective measurements and the entanglement entropy \(S_{d}(N/2)\) of a dual impurity problem. (ai): One sheet of the Riemann surface used to compute the partition function \(\mathcal{Z}_{n}\) in Eq. (31). Red line denotes the measurement defect along the \(\tau=0\) line, while the blue line denotes the entanglement branch cut. (aii): By redefining the replica fields \(\psi^{(\alpha)}(\tau,x)\) in region \(\mathcal{D}\) as in Eq. (33), the branch cut is continuously deformed from the real axis onto the semicircle. (aiii): Using the conformal mapping of Eq. (34), the infinite plane is mapped to a cylinder of circumference \(L\). The spacelike measurement defect is mapped to two timelike impurities at \(x^{\prime}=0\) and \(x^{\prime}=L/2\), while the deformed entanglement cut is mapped to a spacelike cut along the \(\tau^{\prime}=0\) axis. (b): Ensemble-averaged entanglement entropy \(\overline{S_{\mathbf{k}}(r)}\) of a contiguous subregion of \(r\) sites in the forced-measurement ensemble, for measurement probabilities \(p=0.2\) (blue), \(0.5\) (green), and \(0.8\) (red), and for system sizes \(N=32\), \(64\), \(128\), and \(256\) (light to dark). Data is plotted as a function of \(s=\frac{N}{\pi}\sin\left(\frac{\pi r}{N}\right)\) to achieve scaling collapse of the various system sizes. Dotted line depicts the behavior in the unmeasured system. We observe that \(\overline{S_{\mathbf{k}}(r)}\sim\frac{c_{\text{eff}}(p)}{3}\log s+b_{2}(p)\), i.e., logarithmic scaling at all measurement probabilities with a continuously decreasing effective central charge \(c_{\text{eff}}(p)\). For the measurement probabilities shown, \(c_{\text{eff}}(0.2)\simeq 0.478\), \(c_{\text{eff}}(0.5)\simeq 0.339\), and \(c_{\text{eff}}(0.8)\simeq 0.105\). (c): Numerical comparison between the effective central charge \(c_{\text{eff}}(p)\) in the average entanglement entropy \(\overline{S_{\mathbf{k}}(r)}\) in the forced-measurement ensemble, and the effective central charge \(c_{\text{eff},d}(g_{d})\) of the half-system entanglement entropy \(S_{d}(N/2)\) of a dual TFIM with defects described by the Hamiltonian (36). Purple dots: effective central charge \(c_{\text{eff}}(p)\) for several measurement probabilities \(p\) between \(0\) and \(0.95\) in increments of \(0.05\), as a function of the effective scaling dimension \(\Delta(p)\) governing the decay of \(\overline{C_{\mathbf{k}}(r)}\). Black curve: effective central charge \(c_{\text{eff},d}(g_{d})\) as a function of the scaling dimension \(\Delta_{d}(g_{d})\) governing the decay of order-parameter correlations \(\bra{\psi_{d}}Z_{1}Z_{N/2}\ket{\psi_{d}}\) between the two impurities.

To verify the proposed connection to the Ising CFT with ordinary timelike defects, we additionally numerically simulate the ground state \(\ket{\psi_{d}}\) of the critical TFIM with two defect transverse fields, for several system sizes \(N\). The Hamiltonian is \[H_{d}=H-g_{d}\big{[}X_{1}+X_{N/2}\big{]}, \tag{36}\] where \(H\) is the TFIM Hamiltonian in Eq. (1) with \(g=1\), and \(g_{d}\) gives an enhancement of the transverse field at the defect sites \(j=1\) and \(j=N/2\). Using free-fermion numerics [see Appendix A], we compute order-parameter correlations \(\bra{\psi_{d}}Z_{1}Z_{N/2}\ket{\psi_{d}}\) between the two defect sites and the entanglement entropy \(S_{d}(N/2)=-\operatorname{tr}\rho_{d}^{A_{d}}\log\rho_{d}^{A_{d}}\) of the subregion \(A_{d}=[1:N/2]\) containing \(N/2\) sites, including both defect sites. As expected from previous works on the TFIM with defects [37; 38; 44; 45; 46], we find \[\begin{split}&\bra{\psi_{d}}Z_{1}Z_{N/2}\ket{\psi_{d}}\sim N^{-2\Delta_{d}(g_{d})},\\ &S_{d}(N/2)\sim\frac{c_{\text{eff},d}(g_{d})}{3}\log N+b_{3}(g_{d}).\end{split} \tag{37}\] A priori, it is difficult to directly compare the effective central charge \(c_{\text{eff},d}(g_{d})\) in the defect model (36) with the effective central charge \(c_{\text{eff}}(p)\) following forced measurements; although both models are described at long distances by the same Ising CFT with a defect line, there is no simple relation between the microscopic parameters \(p\) and \(g_{d}\) and the defect strength \(\nu\) at the fixed point. Instead, noting that both the order-parameter scaling dimension \(\Delta(p)\) and the effective central charge \(c_{\text{eff}}(p)\) are controlled by the fixed-point defect strength \(\nu\) (and similarly for \(\Delta_{d}(g_{d})\) and \(c_{\text{eff},d}(g_{d})\)), we eliminate \(\nu\) altogether by plotting \(c_{\text{eff}}(p)\) as a function of \(\Delta(p)\) and \(c_{\text{eff},d}(g_{d})\) as a function of \(\Delta_{d}(g_{d})\). The result is shown in Fig. 5(c), with the black line denoting data obtained from the defect model (36), and with purple dots depicting data obtained from the large-\(s\) behavior of \(\overline{S_{\mathbf{k}}(r)}\) and \(\overline{C_{\mathbf{k}}(r)}\) for measurement probabilities \(p\) between \(0\) and \(0.95\) in increments of \(0.05\). We find a remarkable agreement between the data of the two models, providing strong numerical support for the analytical mapping discussed in this section. ## V Discussion Measuring part of a many-body quantum state can give rise to surprising new correlations. In this work we have studied the effects of local measurements on the critical one-dimensional TFIM, a highly-entangled system for which exact numerical calculations are possible. Our focus has been on the partial collapse of the ground state that arises from parity-preserving measurements of a finite fraction \(\sim p\) of the degrees of freedom. We have shown that, although measuring all degrees of freedom (\(p=1\)) certainly destroys quantum correlations, if a finite fraction \(\sim(1-p)\) remains unmeasured then the original critical correlations survive on average at long distances. The origin of this robustness can be understood from properties of the Ising CFT. We have developed the replica framework of Ref.
[9] to include the physically realistic case of projective measurements, and in this way we have established a direct link between a microscopic lattice description of measurements of the TFIM and of defects in the Ising CFT. In particular, parity-preserving measurements with outcomes sampled according to the Born rule fail to alter long-distance correlations (for \(p<1\)) because they correspond to an irrelevant perturbation in the replica theory: while the unperturbed Ising theory in \((1+1)\) dimensions is quadratic in the fermion field \(\psi(\tau,x)\), the perturbation is quartic in \(\psi(\tau,x)\) and acts only on a \((1+0)\)-dimensional surface of fixed imaginary time [see Eq. (13)]. However, postselecting on certain outcomes of parity-preserving measurements does lead to interesting new correlations. Measuring \(X_{j}\) (or \(Z_{j}Z_{j+1}\), see Appendix G) and forcing the outcome \(X_{j}=+1\) generates a marginal perturbation in the field theory (quadratic in fermionic fields rather than quartic). We have shown in Sec. IV that the exponents governing the post-measurement power laws vary continuously with the fraction of measured sites. Continuously varying power laws of this kind were identified some time ago in studies of the statistical mechanics of two-dimensional classical Ising models with modified couplings along a line [33; 34; 35], and the continuum description of that system is essentially the same as for our measured ground state with forced measurements. A quantity that is meaningful in the problem we have considered, but which does not arise naturally in classical statistical mechanics, is the entanglement entropy. In addition to modifying correlation functions, we have shown in Fig. 5(b) that forced measurements lead to a variation of the effective central charge. A key contribution of this work is to show that the entanglement entropy of a finite subregion in this measurement problem can be mapped, through a conformal transformation, to an entanglement entropy of a system with two physical defects; the latter problem has been the subject of a number of previous studies [37; 38; 43; 44; 45; 46]. By comparing long-distance properties of lattice models corresponding to the two sides of this duality transformation, we have confirmed numerically that the effective central charges coincide. An advantage when working with the integrable TFIM is that its ground state, and the effects of parity-preserving measurements, can be described exactly with polynomial computational resources. This has allowed us to verify the above predictions numerically. While our numerical method relies on the fact that the system is integrable, aspects of our field-theoretic analysis do not. For example, if we introduce to the Hamiltonian an irrelevant integrability-breaking perturbation then we expect that correlations will be modified at short but not at long distances. The change in short-distance correlations could lead to a renormalization of the effective measurement probability, but we nevertheless expect long-distance post-measurement correlations to decay with the same exponents as in the ground state. If one moves away from free-fermion simulations, it is natural to consider the effects of measurements which do not preserve the parity of the state. In particular, one can perform local measurements of the order parameter, i.e. of the \(Z_{j}\) operators. Within the replica description of the Born ensemble in Sec.
III one immediately finds that, since the scaling dimension of the order parameter is \(1/8\) in the Ising CFT, measuring these operators generates a relevant perturbation. This suggests that measuring \(Z_{j}\) typically causes the post-measurement field theory to flow to a 'strong measurement' fixed point, where the long-distance properties of correlation functions are modified relative to the ground state. The possibility for simulating the effects of measurements has important implications for experiments. This is because the effects of measurement can be observed without postselection provided one has access to an appropriate simulation on a classical computer [9; 11; 39; 40]. Only averages of quantities nonlinear in the post-measurement density matrix, such as \(\langle Z_{0}Z_{r}\rangle_{\mathbf{m}}^{2}\), are sensitive to the effects of measurement as distinct from dephasing, but these cannot be determined directly since each outcome \(\mathbf{m}\) occurs at most once [see discussion in e.g. Ref. [9]]. Instead of trying to determine averages such as \(\overline{\langle Z_{0}Z_{r}\rangle_{\mathbf{m}}^{2}}\), which suffer from a postselection problem, one can weight the results of measurements of the operator \(Z_{0}Z_{r}\) by estimates for its expectation value coming from a simulation on a classical computer, \(\langle Z_{0}Z_{r}\rangle_{\mathbf{m}}^{\mathrm{cl}}\). In this way one can obtain the 'quantum-classical estimator' [9] \(\overline{\langle Z_{0}Z_{r}\rangle_{\mathbf{m}}^{\mathrm{cl}}\langle Z_{0}Z_{r}\rangle_{\mathbf{m}}}\) (or 'computationally-assisted observable' [11]), which is the cross-correlation between the experiment and our prediction, and the 'classical-classical estimator' \(\overline{(\langle Z_{0}Z_{r}\rangle_{\mathbf{m}}^{\mathrm{cl}})^{2}}\), which is simply the prediction. Coincidence between these two objects provides a necessary condition that the quantum system studied in experiment has exhibited the same behavior as the classical simulation. With regard to experimental platforms, Rydberg quantum simulators have proved to be a highly controllable setting for the study of quantum Ising models [57; 58], with the important caveat that the long-range van der Waals interactions render the effective Ising models nonintegrable. However, since these interactions decay as the sixth power of the separation between qubits, they are an irrelevant perturbation to the Ising CFT, and certain coarse-grained features of a classical simulation of the integrable TFIM should match those of a quantum simulation using Rydberg atoms. It is natural to ask whether, by cross-correlating results from a Rydberg quantum simulator with the results of exact free-fermion numerics, the effects of measurements can be observed without postselection. One can also address this kind of question numerically: given two different lattice simulations of the same critical theory, to what extent are coarse-grained post-measurement correlations sensitive to differences on short length scales? ## Appendix A Free Fermion Simulation Here we summarize some of the technical details required for simulations based on fermionic Gaussian states.
On a finite system of size \(N\) with periodic boundary conditions, the Hamiltonian reads \[H=-J\sum_{j=1}^{N}\left\{gX_{j}+Z_{j}Z_{j+1}\right\}=-iJg\sum_{j=1}^{N}\gamma_{ 2j-1}\gamma_{2j}-iJ\sum_{j=1}^{N-1}\gamma_{2j}\gamma_{2j+1}+iJ\Pi\gamma_{2N} \gamma_{1}, \tag{10}\] where we have used the Jordan-Wigner transformation \[\gamma_{2j-1}=\left[\prod_{i=1}^{j-1}X_{i}\right]Z_{j},\quad\gamma_{2j}= \left[\prod_{i=1}^{j-1}X_{i}\right]Y_{j}. \tag{11}\] The Majorana fermions \(\gamma_{j}\) satisfy the anticommutation relations \(\{\gamma_{i},\gamma_{j}\}=2\delta_{ij}\), as well as the identities \(X_{j}=i\gamma_{2j-1}\gamma_{2j}\) and \(Z_{j}Z_{j+1}=i\gamma_{2j}\gamma_{2j+1}\). We have also defined the total parity operator \[\Pi=\prod_{j=1}^{N}X_{j}=i^{N}\prod_{j=1}^{2N}\gamma_{j}, \tag{12}\] which appears, with periodic boundary conditions, in the bond connecting sites \(j=1\) and \(j=N\). Thus, the Majorana representation of the TFIM has antiperiodic boundary conditions in the parity-even (\(\Pi=+1\)) sector of the Hilbert space, while it has periodic boundary conditions in the parity-odd (\(\Pi=-1\)) sector. Since the exact ground state of \(H\) lies in the parity-even sector [48], we can freely set \(\Pi=+1\) so long as we consider measurements and observables which preserve the parity of the ground state. The Hamiltonian (10) with \(\Pi=1\) is quadratic, and therefore its ground-state correlations can be efficiently computed [49; 50; 51; 60]. Let us briefly review the method for a generic quadratic Hamiltonian of the form \[H=\frac{i}{4}\sum_{i,j=1}^{2N}\gamma_{i}A_{ij}\gamma_{j}. \tag{11}\] Here \(A\) is a \(2N\times 2N\) real antisymmetric matrix, and so can be block-diagonalized into blocks of the form \(\varepsilon_{\alpha}(i\sigma^{y})\) by a matrix \(R\in\mathrm{SO}\) (\(2N\)) [61]. To do this one can first diagonalize the (fully imaginary) Hermitian matrix \(-iA\), whose (real) eigenvalues come in oppositely-signed pairs. Note then that the diagonalized \(-iA\) is expressed in terms of \(2\times 2\) blocks \(\varepsilon_{\alpha}\sigma^{z}\), which are unitarily related to \(\varepsilon_{\alpha}\sigma^{y}\). Finally we identify \(\varepsilon_{\alpha}(i\sigma^{y})\) as the real antisymmetric blocks of the transformed matrix \(A\), and from this procedure we extract \(R\). If we then apply the orthogonal transformation \(R\) to the Majoranas (which preserves the anticommutation relations) we obtain \[H=\frac{i}{2}\sum_{\alpha=1}^{N}\varepsilon_{\alpha}\eta_{2\alpha-1}\eta_{2 \alpha},\quad\eta_{\alpha}=\sum_{i=1}^{2N}R_{i\alpha}\gamma_{i}. \tag{12}\] Choosing each \(\varepsilon_{\alpha}\) to be positive, the ground state and ground-state energy are immediately found by demanding \(i\eta_{2\alpha-1}\eta_{2\alpha}=-1\). In particular, the two-point correlations in the ground state are \[G_{ij}\equiv\left\langle i\gamma_{i}\gamma_{j}\right\rangle-i\delta_{ij}=\sum _{\beta,\gamma=1}^{2N}R_{i\beta}R_{j\gamma}[\left\langle i\eta_{\beta}\eta_{ \gamma}\right\rangle-i\delta_{\beta\gamma}]=-\sum_{\alpha=1}^{N}(R_{i,2\alpha -1}R_{j,2\alpha}-R_{i,2\alpha}R_{j,2\alpha-1}), \tag{13}\] where we have used the fact that \(\left\langle\eta_{\beta}\eta_{\gamma}\right\rangle=0\) unless \(\beta,\gamma=2\alpha-1,2\alpha\) (in either order). The ground state of a quadratic Hamiltonian (11), or more generally a thermal state of any inverse temperature1\(\beta\), is called a Gaussian state [51]. 
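The construction above is straightforward to implement. The following minimal sketch (our own function names; \(J=g=1\) and \(N=64\) are illustrative) builds the single-particle matrix \(A\) for the critical TFIM in the even-parity sector \(\Pi=+1\) and obtains the ground-state covariance matrix via the equivalent closed form \(G=-A\,(A^{T}A)^{-1/2}\), which agrees with the rotation-based expression above without constructing \(R\) explicitly; the transverse magnetization \(\langle X_{j}\rangle\simeq 2/\pi\) at criticality provides a convenient check.

```python
# Ground-state covariance matrix of the critical TFIM from its Majorana representation.
# Minimal sketch: builds A for H = (i/4) gamma^T A gamma in the Pi = +1 sector and uses
# G = -A (A^T A)^{-1/2}, equivalent to the rotation-based construction in the text.
import numpy as np

def tfim_majorana_A(N, g=1.0, J=1.0):
    """Single-particle matrix A (2N x 2N, real antisymmetric), even-parity sector."""
    A = np.zeros((2 * N, 2 * N))
    for j in range(N):                       # transverse-field term on site j
        A[2 * j, 2 * j + 1] = -2 * J * g
        A[2 * j + 1, 2 * j] = +2 * J * g
    for j in range(N - 1):                   # bond term between sites j and j+1
        A[2 * j + 1, 2 * j + 2] = -2 * J
        A[2 * j + 2, 2 * j + 1] = +2 * J
    A[2 * N - 1, 0] = +2 * J                 # boundary bond with Pi = +1 (antiperiodic fermions)
    A[0, 2 * N - 1] = -2 * J
    return A

def ground_state_covariance(A):
    """Covariance matrix G_ij = <i gamma_i gamma_j> - i delta_ij of the ground state."""
    S = A.T @ A                              # symmetric, positive definite (no zero modes here)
    w, V = np.linalg.eigh(S)
    S_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    return -A @ S_inv_sqrt

N = 64
G = ground_state_covariance(tfim_majorana_A(N))
print("<X_j> =", G[0, 1])                    # approx 2/pi ~ 0.6366 at criticality
```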
Once the covariance matrix \(G_{ij}\) of a Gaussian state has been obtained, all higher-order correlations of the Majoranas are determined via Wick's theorem [49]. For example, Footnote 1: To be precise, the set of Gaussian states consist of density matrices of the form \(\rho=\frac{1}{2}e^{-\beta H}\), where \(H\) is of the form (11) and \(\mathcal{Z}=\mathrm{tr}\,e^{-\beta H}\). In this equation, \(\beta=\infty\) recovers the ground-state density matrix \(\rho=\left|\psi\right\rangle\!\!\left\langle\psi\right|\). Even more generally, \(H\) is allowed to have individual single-particle energies \(\varepsilon_{\alpha}=\pm\infty\), corresponding to definite fermion parities \(\left\langle i\eta_{2\alpha-1}\eta_{2\alpha}\right\rangle=\mp 1\) amongst other indefinite fermion parities. \[i^{2}\left\langle\gamma_{i}\gamma_{j}\gamma_{k}\gamma_{\ell}\right\rangle= \left\langle i\gamma_{i}\gamma_{j}\right\rangle\left\langle i\gamma_{k} \gamma_{\ell}\right\rangle-\left\langle i\gamma_{i}\gamma_{k}\right\rangle \left\langle i\gamma_{j}\gamma_{\ell}\right\rangle+\left\langle i\gamma_{i} \gamma_{\ell}\right\rangle\left\langle i\gamma_{j}\gamma_{k}\right\rangle=G_{ ij}G_{k\ell}-G_{ik}G_{j\ell}+G_{i\ell}G_{jk}. \tag{14}\] In general, a \(2n\)-point correlation function \(i^{n}\left\langle\gamma_{i_{1}}\ldots\gamma_{i_{2n}}\right\rangle\) can be computed using the Pfaffian of a submatrix of \(G_{ij}\), containing only the rows and columns \(i_{1}\) through \(i_{2n}\). Explicitly, \[i^{n}\left\langle\gamma_{i_{1}}\ldots\gamma_{i_{2n}}\right\rangle =\frac{1}{2^{n}n!}\sum_{\sigma\in S_{2n}}(-1)^{\sigma}\left\langle i \gamma_{i_{\sigma(1)}}\gamma_{i_{\sigma(2)}}\right\rangle\ldots\left\langle i \gamma_{i_{\sigma(2n-1)}}\gamma_{i_{\sigma(2n)}}\right\rangle \tag{15}\] \[=\frac{1}{2^{n}n!}\sum_{\sigma\in S_{2n}}(-1)^{\sigma}G_{i_{\sigma( 1)}i_{\sigma(2)}}\ldots G_{i_{\sigma(2n-1)}i_{\sigma(2n)}}\] \[\equiv\mathrm{Pf}_{i_{1},\ldots,i_{2n}}[G],\] where \(S_{2n}\) is the permutation group of \(2n\) elements, \((-1)^{\sigma}=\pm 1\) is the sign of the permutation \(\sigma\), and the indices of Pf denote the subset of rows and coulums of \(G\) appearing in the third line. Such a Pfaffian can be computed efficiently using the algorithm of Ref. [62]. We also note that Gaussian states necessarily commute with parity, which immediately implies that odd (\(2n+1\))-point correlators vanish. As an application of the equations (14) and (15), we provide explicit formulae for the correlators \(\left\langle X_{j}X_{j+r}\right\rangle-\left\langle X_{j}\right\rangle\left\langle X _{j+r}\right\rangle\) and \(\left\langle Z_{j}Z_{j+r}\right\rangle\) employed in the main text. The former correlator is local in the Majorana representation and therefore has a simple representation in terms of the covariance matrix: \[\left\langle X_{j}X_{j+r}\right\rangle-\left\langle X_{j}\right\rangle\left\langle X _{j+r}\right\rangle=G_{2j-1,2j+2r}G_{2j,2j+2r-1}-G_{2j-1,2j+2r-1}G_{2j,2j+2r}. \tag{16}\] On the other hand, the latter expression is nonlocal in the Majorana representation, and requires computing a Pfaffian of a \(2r\times 2r\) submatrix of \(G\): \[\langle Z_{j}Z_{j+r}\rangle=\langle i^{r}\gamma_{2j}\ldots\gamma_{2j+2r-1} \rangle=\mathrm{Pf}_{2j,\ldots,2j+2r-1}[G]. 
\tag{10}\] Projective measurements of the pairing operators \(i\gamma_{k}\gamma_{\ell}\) preserve the Gaussianity of the ground state [49; 51]; in particular, measurements of both \(X_{j}=i\gamma_{2j-1}\gamma_{2j}\) and \(Z_{j}Z_{j+1}=i\gamma_{2j}\gamma_{2j+1}\) preserve Gaussianity. Up to a normalization factor, the effect of such a projective measurement on a state \(\ket{\psi}\) is \[\ket{\psi}\mapsto P_{k\ell}^{\pm}\ket{\psi},\quad P_{k\ell}^{\pm} =\frac{1\pm i\gamma_{k}\gamma_{\ell}}{2}, \tag{11}\] where the outcomes \(i\gamma_{k}\gamma_{\ell}=\pm 1\) occur with probability \(\bra{\psi}P_{k\ell}^{\pm}\ket{\psi}\) respectively, according to the Born rule. Following the measurement, the covariance matrix evolves to \[G_{ij}\mapsto G_{ij}^{\prime}=\frac{\bra{\psi}P_{k\ell}^{\pm}i \gamma_{i}\gamma_{j}P_{k\ell}^{\pm}\ket{\psi}}{\bra{\psi}P_{k\ell}^{\pm}\ket{ \psi}}, \tag{12}\] which can be evaluated using Wick's theorem if the initial state \(\ket{\psi}\) is Gaussian. Finally, computation of the entanglement entropy \(S(A)\) of a subregion \(A\) can be performed efficiently using the covariance matrix. We first note that the reduced density matrix \(\rho^{A}=\mathrm{tr}_{A^{e}}\ket{\psi}\!\bra{\psi}\) is automatically Gaussian if \(\ket{\psi}\) is Gaussian, since all of its correlations can be obtained using Wick's theorem. Its correlation matrix \(G_{ij}^{A}\) is simply a submatrix of \(G_{ij}\) with entries from region \(A\). We can then infer the spectrum of \(\rho^{A}\) directly from the spectrum of \(G_{ij}^{A}\): block-diagonalizing \(G_{ij}^{A}\) using an orthogonal matrix \(R_{i\alpha}^{A}\), \[G_{\alpha\beta}^{A}=\sum_{i,j\in A}R_{\alpha i}^{A}R_{\beta j}^{A }G_{ij}^{A}=\bigoplus_{\alpha=1}^{N_{A}}\begin{pmatrix}0&\lambda_{\alpha}\\ -\lambda_{\alpha}&0\end{pmatrix}, \tag{13}\] where \(\ket{\lambda_{\alpha}}<1\). The unique Gaussian reduced density matrix reproducing these correlations is then \[\rho^{A}=\prod_{\alpha=1}^{N_{A}}\bigg{(}\frac{1+i\lambda_{\alpha }\xi_{2\alpha-1}\xi_{2\alpha}}{2}\bigg{)},\quad\xi_{\alpha}=\sum_{i\in A}R_{i \alpha}^{A}\gamma_{i}, \tag{14}\] where \(N_{A}\) is the number of sites in region \(A\). From this expression we can immediately read off the spectrum of \(\rho^{A}\), and thereby compute the entanglement entropy: \[S(A)=-\operatorname{tr}\rho^{A}\log\rho^{A}=-\sum_{\alpha=1}^{N _{A}}\bigg{[}\bigg{(}\frac{1+\lambda_{\alpha}}{2}\bigg{)}\log\bigg{(}\frac{1+ \lambda_{\alpha}}{2}\bigg{)}+\bigg{(}\frac{1-\lambda_{\alpha}}{2}\bigg{)}\log \bigg{(}\frac{1-\lambda_{\alpha}}{2}\bigg{)}\bigg{]}. \tag{15}\] We can therefore numerically compute \(S(A)\) simply by block-diagonalizing \(G_{ij}^{A}\), or equivalently by diagonalizing \(iG_{ij}^{A}\). ## Appendix B Ising Conformal Field Theory In this section, we provide a brief account on the relation between the microscopic lattice Hamiltonian (1) for the TFIM and the continuum action (3) for the Ising CFT. Starting from the Majorana representation (2), we can trivially rewrite the Hamiltonian as \[H=-\frac{iJ}{2}\sum_{j=1}^{N}\Big{\{}\gamma_{2j}(\gamma_{2j+1}- \gamma_{2j-1})+\gamma_{2j-1}(\gamma_{2j}-\gamma_{2j-2})+(g-1)(\gamma_{2j-1} \gamma_{2j}-\gamma_{2j}\gamma_{2j-1})\Big{\}}. \tag{16}\] In the scaling limit \(g\to 1\), the correlation length diverges and the lattice Hamiltonian can be traded for a continuum description. 
We introduce a lattice spacing \(a\to 0\) and a two-component spinor \(\hat{\psi}(x=ja)=\frac{1}{\sqrt{2a}}[\gamma_{2j-1},\gamma_{2j}]^{T}\), whose components satisfy \(\Big{\{}\hat{\psi}_{a}(x),\hat{\psi}_{b}(x^{\prime})\Big{\}}=\frac{1}{a} \delta_{ab}\delta_{jj^{\prime}}\rightarrow\delta_{ab}\delta(x-x^{\prime})\). Up to irrelevant terms, the Hamiltonian is written in terms of \(\hat{\psi}\) as \[H=\frac{v}{2}\int\mathrm{d}x\,\hat{\psi}^{T}[-i\sigma^{x}\partial_{x}+m\sigma ^{y}]\hat{\psi}, \tag{17}\] where \(v=2Ja\) is the Fermi velocity from the exact solution of the TFIM, and \(m=(g-1)/a\) vanishes at the critical point. To derive the path integral representation of the above continuum model [55], we introduce a second copy of the same system, written in terms of a Majorana spinor \(\hat{\chi}(x)\). We can then combine \(\hat{\psi}\) and \(\hat{\chi}\) into a single Dirac spinor, \(\hat{D}=\frac{1}{\sqrt{2}}(\hat{\psi}+i\hat{\chi})\), which is an ordinary complex Dirac fermion. We then write the partition function using the usual Grassmann coherent-state path integral, trading the fermion operators \(\hat{D}_{a}\) and \(\hat{D}_{a}^{\dagger}\) for Grassmann numbers \(D_{a}\) and \(\bar{D}_{a}\). Finally, we re-express the "complex" Grassmann spinor \(D=\frac{1}{\sqrt{2}}(\psi+i\chi)\) in terms of "real" Grassmann spinors \(\psi\) and \(\chi\), which are decoupled from each other, and integrate over \(\chi\). Absorbing \(v\) into the definition of the imaginary time \(\tau\), the result of this computation is the imaginary-time action \[\mathcal{S}_{0}[\psi]=\frac{1}{2}\int\mathrm{d}\tau\,\mathrm{d}x\,\psi^{T}[ \partial_{\tau}-i\sigma^{x}\partial_{x}+m\sigma^{y}]\psi. \tag{12}\] Following this same procedure for arbitrary correlation functions of \(\hat{\psi}\), one finds that each such correlation function is obtained in the path integral representation simply by replacing operators \(\hat{\psi}\) with Grassmann numbers \(\psi\). At the critical point \(m=0\), \(S_{0}[\psi]\) is one of the simplest examples of a CFT [26]. The Ising CFT in particular is characterized by two scaling operators \(\sigma(\tau,x)\) and \(\varepsilon(\tau,x)\), with scaling dimensions \(\Delta_{\sigma}=1/8\) and \(\Delta_{\varepsilon}=1\) respectively. These represent the two relevant perturbations to the Ising critical point, and respectively reproduce the correlations of the operators \(Z_{j}\) and \(X_{j}\) at long distances: \[\left\langle Z_{0}Z_{r}\right\rangle_{\mathrm{g.s.}}\sim\left\langle\sigma(0) \sigma(x)\right\rangle=\frac{1}{x^{1/4}},\quad\left\langle X_{0}X_{r}\right\rangle _{\mathrm{g.s.}}-\left\langle X_{0}\right\rangle_{\mathrm{g.s.}}\left\langle X _{r}\right\rangle_{\mathrm{g.s.}}\sim\left\langle\varepsilon(0)\varepsilon(x) \right\rangle=\frac{1}{x^{2}}, \tag{13}\] where we are restricted always to equal-\(\tau\) correlations, and we have set \(x=ra\). The energy operator2\(\varepsilon=2\pi i:\!\psi_{1}\psi_{2}\!\cdot\!\) is invariant under the \(\mathbb{Z}_{2}\) Ising symmetry, and can therefore be expressed locally in the fermionic representation. On the other hand, the spin operator \(\sigma\) is nonlocal in the fermionic representation. Nevertheless, the correlators (13) can be obtained both directly within the fermionic representation [63] or by utilizing bosonization techniques [26; 64]. Footnote 2: Here \(i:\psi_{1}\psi_{2}:=i\psi_{1}\psi_{2}-\left\langle i\psi_{1}\psi_{2}\right\rangle\) denotes normal-ordering. 
Normal-ordering is a priori unnecessary at the fixed point, since the symmetry \(\psi\to\sigma^{x}\psi\) implies \(\left\langle i\psi_{1}\psi_{2}\right\rangle=0\). However, since this symmetry is broken by irrelevant perturbations, we keep normal-ordering to ensure \(\left\langle\varepsilon(x)\right\rangle=0\) and \(\varepsilon\to-\varepsilon\) under Kramers-Wannier duality. It is well-known that critical one-dimensional systems exhibit logarithmic-scaling entanglement entropy. In particular, it can be shown quite generally that the entanglement entropy of a contiguous region \([0:r)\) of length \(r\) in the ground state of a one-dimensional CFT is given by [52] \[S(r)=\frac{c}{3}\log r+b_{0}, \tag{14}\] where \(b_{0}\) is a nonuniveral constant, and \(c\) is the so-called _central charge_ of the CFT. In the Ising CFT, \(c=1/2\). ## Appendix C Finite-Size Scaling In numerical simulations of finite-sized systems, it is convenient to work with periodic boundary conditions, \(j\cong j+N\). Conformal invariance of the low-energy theory then allows for a precise prediction of the behavior of correlation functions \(C(r)\) and \(G(r)\) as a function of the system size \(N\)[26]. Specifically, if the continuum model (12) is taken at the critical point \(m=0\), we expect that correlation functions will transform covariantly under holomorphic mappings \(z\mapsto z^{\prime}=f(z)\) of the complex variable \(z=x+i\tau\). If the model is initially defined on the cylinder, such that \(x\cong x+L\) with \(L\equiv Na\), then we can obtain correlation functions on the cylinder from correlation functions on the infinite plane using the mapping \[z^{\prime}=f(z)=L\tan\Big{(}\frac{\pi z}{L}\Big{)}, \tag{15}\] which maps the cylinder to the infinite plane. This particular mapping is especially useful for our purposes, since it preserves the \(\tau=0\) line. Since measurements in our models appear as defects along the \(\tau=0\) line of Euclidean spacetime, we can therefore predict the numerically observed effect of measurements on finite-sized systems using analytical calculations in the thermodynamic limit. Specifically, the above mapping suggests that the correlators (43) on the cylinder are given by3 Footnote 3: In CFT, \(\sigma\) and \(\varepsilon\) are examples of so-called “primary operators.” Under a conformal transformation \(z\mapsto f(z)\), a generic primary operator \(\phi(z,\bar{z})\) transforms inside correlation functions as \(\phi(z,\bar{z})\mapsto\phi^{\prime}(z^{\prime},\bar{z}^{\prime})=[f^{\prime}(z)]^ {-h}[\bar{f}^{\prime}(\bar{z})]^{-\bar{h}}\phi(z,\bar{z})\), where \((h,\bar{h})\) are the so-called “conformal dimensions” of \(\phi\). Using \(h_{\sigma}=h_{\sigma}=1/16\) and \(h_{\varepsilon}=h_{\varepsilon}=1/2\), one immediately obtains the given expressions for the correlations \(\langle\sigma(0)\sigma(x)\rangle_{\mathrm{g.s.}}^{\mathrm{cycl}}\) and \(\langle\varepsilon(0)\sigma(x)\rangle_{\mathrm{g.s.}}^{\mathrm{cycl}}\) on the cylinder. \[\langle\sigma(0)\sigma(x)\rangle_{\mathrm{g.s.}}^{\mathrm{cycl}}=\left[\frac{L }{\pi}\sin\left(\frac{\pi x}{L}\right)\right]^{-1/4},\quad\langle\varepsilon( 0)\varepsilon(x)\rangle_{\mathrm{g.s.}}^{\mathrm{cycl}}=\left[\frac{L}{\pi} \sin\left(\frac{\pi x}{L}\right)\right]^{-2}. \tag{45}\] We therefore expect the correlators \(G(r)\) and \(C(r)\), which are a priori functions of \(r\) and \(N\) separately, to be functions of the single variable \(s=\frac{N}{\pi}\sin\left(\frac{\pi r}{N}\right)\). 
The infinite-plane behavior is recovered in the limit \(N\to\infty\), upon which \(s\to r\) for any finite \(r\). A similar result holds for the entanglement entropy of a finite system with periodic boundary conditions [52]: \[S^{\mathrm{cycl}}(r)=\frac{c}{3}\log\left[\frac{N}{\pi}\sin\left(\frac{\pi r}{ N}\right)\right]+b^{\prime}_{0}. \tag{46}\] These finite-size expressions allow for excellent scaling collapses of various numerically computed observables across several system sizes, as demonstrated in the main text. ## Appendix D Continuum Limit of \(\hat{M}_{\mathrm{avg}}\) and \(\hat{K}_{\mathrm{avg}}\) In this section, we explain the continuum descriptions of the averaged measurement operators \(\hat{M}_{\mathrm{avg}}\) and \(\hat{K}_{\mathrm{avg}}\) defined in Eqs. (11) and (23) respectively. In particular, we show that these two measurement operators result in defects along the \(\tau=0\) line in Euclidean spacetime; the former of these defects is irrelevant, while the latter contains an exactly marginal perturbation to the Ising CFT. We also show that the irrelevant contributions to the former averaged measurement operator cannot generate marginal terms at higher orders of the perturbative RG - or, more precisely, that any generated marginal terms are inconsequential to observables in the replica limit. Starting with \(\hat{M}_{\mathrm{avg}}\), the denominator of \(n\)-replica correlation functions of the form (10) takes the form of a partition function \[\mathcal{Z}_{M}^{(n)}\equiv\bra{\psi_{\mathrm{g.s.}}^{\otimes n}}\hat{M}_{ \mathrm{avg}}\ket{\psi_{\mathrm{g.s.}}^{\otimes n}}\propto\left\langle\prod_{ j=1}^{N}\left\{1+\mu\sum_{r=1}^{\lfloor n/2\rfloor}\sum_{1\leq\alpha_{1}<\ldots< \alpha_{2r}\leq n}(i\gamma_{2j-1}\gamma_{2j})^{(\alpha_{1})}\ldots(i\gamma_{2j -1}\gamma_{2j})^{(\alpha_{2r})}\right\}\right\rangle_{\mathrm{g.s.}}. \tag{47}\] Note that from the definition of \(\mathcal{Z}_{M}^{(n)}\) we have the normalization \(\mathcal{Z}_{M}^{(n)}=1\) for \(p=0\). The notation \((i\gamma_{2j-1}\gamma_{2j})^{(\alpha)}\) simply means \(i\gamma_{2j-1}^{(\alpha)}\gamma_{2j}^{(\alpha)}\), the product of Majorana operators within replica \(\alpha\). We can write \(\mathcal{Z}_{M}^{(n)}\) within the path integral formalism by simply replacing each \(\gamma_{j}\) with the Grassmann field4\(\sqrt{2}\psi_{j}(\tau)\), evaluated at \(\tau=0\): Footnote 4: The factor of \(\sqrt{2}\) arises from our normalization convention for the Majorana operators, \(\{\gamma_{i},\gamma_{j}\}=2\delta_{ij}\), rather than \(\{\gamma_{i},\gamma_{j}\}=\delta_{ij}\). It can be obtained by retracing the steps outlined in Appendix B for deriving the path integral representation of the Majorana system. \[\begin{split}\mathcal{Z}_{M}^{(n)}&=\int\prod_{ \alpha=1}^{n}D\psi^{(\alpha)}\,e^{-\sum_{\alpha=1}^{n}\mathcal{S}_{0}[\psi^{( \alpha)}]}\prod_{j=1}^{N}\Bigg{\{}1+\mu\sum_{r=1}^{\lfloor n/2\rfloor}\sum_{1 \leq\alpha_{1}<\ldots<\alpha_{2r}\leq n}(2i\psi_{2j-1}(0)\psi_{2j}(0))^{( \alpha_{1})}\ldots(2i\psi_{2j-1}(0)\psi_{2j}(0))^{(\alpha_{2r})}\Bigg{\}}\\ &=\int\prod_{\alpha=1}^{n}D\psi^{(\alpha)}\,e^{-\sum_{\alpha=1}^ {n}\mathcal{S}_{0}[\psi^{(\alpha)}]}\exp\Bigg{\{}\mu\sum_{j=1}^{N}\sum_{1\leq \alpha<\beta\leq n}(2i\psi_{2j-1}(0)\psi_{2j}(0))^{(\alpha)}(2i\psi_{2j-1}(0) \psi_{2j}(0))^{(\beta)}+\ldots\Bigg{\}},\end{split} \tag{48}\] where the ellipsis denotes four-replica terms and higher. 
Finally, we take the continuum limit by constructing the continuum Grassmann spinor \(\psi(\tau,x=ja)=\frac{1}{\sqrt{a}}[\psi_{2j-1}(\tau),\psi_{2j}(\tau)]^{T}\). Rewriting \(2i\psi_{2j-1}\psi_{2j}=-a\psi^{T}\sigma^{y}\psi\), we obtain the result \[\begin{split}\mathcal{Z}_{M}^{(n)}&=\int\prod_{\alpha=1}^{n}D\psi^{(\alpha)}\,e^{-\sum_{\alpha=1}^{n}\mathcal{S}_{0}[\psi^{(\alpha)}]}\exp\Bigg{\{}\tilde{\mu}\sum_{1\leq\alpha<\beta\leq n}\int\mathrm{d}x\,(\psi^{T}\sigma^{y}\psi)^{(\alpha)}(\psi^{T}\sigma^{y}\psi)^{(\beta)}+\ldots\Bigg{\}}\\ &\equiv\int\prod_{\alpha=1}^{n}D\psi^{(\alpha)}\,e^{-\sum_{\alpha=1}^{n}\mathcal{S}_{0}[\psi^{(\alpha)}]-\mathcal{S}_{M}^{(n)}[\{\psi^{(\alpha)}\}]},\end{split} \tag{49}\] where \(\tilde{\mu}=\mu a\) has dimension \(-1\); in the main text we set \(a=1\) for simplicity. In the first equation, the field \(\psi\) in the latter exponential must be understood as \(\psi(\tau=0,x)\), and in the second line we have defined \(\mathcal{S}_{M}^{(n)}[\{\psi^{(\alpha)}\}]\) as the exponent appearing in braces \(\{\cdots\}\) in the first. Considering the quantity in braces as a perturbation to the Ising CFT localized to the \(\tau=0\) line, one immediately finds from dimensional analysis that the parameter \(\tilde{\mu}\) is irrelevant. A similar analysis applies to \(\hat{K}_{\rm avg}\). Performing the same continuum limit as above, we obtain \[\left\langle\psi_{\rm g.s.}^{\otimes n}\right|\hat{K}_{\rm avg}\left|\psi_{\rm g.s.}^{\otimes n}\right\rangle \equiv\mathcal{Z}_{K}^{(n)}\propto\left\langle\prod_{j=1}^{N}\left\{1+\nu\sum_{r=1}^{n}\sum_{1\leq\alpha_{1}<\ldots<\alpha_{r}\leq n}(i\gamma_{2j-1}\gamma_{2j})^{(\alpha_{1})}\ldots(i\gamma_{2j-1}\gamma_{2j})^{(\alpha_{r})}\right\}\right\rangle_{\rm g.s.}\] \[=\int\prod_{\alpha=1}^{n}D\psi^{(\alpha)}\,e^{-\sum_{\alpha=1}^{n}\mathcal{S}_{0}[\psi^{(\alpha)}]}\prod_{j=1}^{N}\left\{1+\nu\sum_{r=1}^{n}\sum_{1\leq\alpha_{1}<\ldots<\alpha_{r}\leq n}(2i\psi_{2j-1}\psi_{2j})^{(\alpha_{1})}\ldots(2i\psi_{2j-1}\psi_{2j})^{(\alpha_{r})}\right\}\] \[=\int\prod_{\alpha=1}^{n}D\psi^{(\alpha)}\,e^{-\sum_{\alpha=1}^{n}\mathcal{S}_{0}[\psi^{(\alpha)}]}\exp\Biggl{\{}-\tilde{\nu}\sum_{\alpha=1}^{n}\int{\rm d}x\,(\psi^{T}\sigma^{y}\psi)^{(\alpha)}+\ldots\Biggr{\}}\] \[\equiv\int\prod_{\alpha=1}^{n}D\psi^{(\alpha)}e^{-\sum_{\alpha=1}^{n}\mathcal{S}_{0}[\psi^{(\alpha)}]-\mathcal{S}_{L}^{(n)}[\{\psi^{(\alpha)}\}]},\] where the ellipsis again denotes higher-order irrelevant terms, including those written explicitly in (30). As above, in the final line we have defined the perturbation to the action, here \(\mathcal{S}_{L}^{(n)}[\{\psi^{(\alpha)}\}]\), as the exponent appearing in braces in the previous line. An important question is whether higher orders in the perturbative RG can generate relevant or marginal terms in (30). It is immediately clear that no relevant terms can be generated: since any Grassmann-even contribution to the action contains at minimum two \(\psi\)'s with scaling dimension \(1/2\), power-counting suggests that there are no relevant perturbations upon restricting to the \(\tau=0\) line. There are, on the other hand, marginal perturbations of the form \[\delta\mathcal{S}_{M,2}=-\mu_{2}\sum_{\alpha=1}^{n}\int{\rm d}x\,(\psi^{T}\sigma^{y}\psi)^{(\alpha)}. \tag{34}\] While this term properly respects the replica symmetry, it does not respect the symmetry \(\psi^{(\alpha)}\to\sigma^{x}\psi^{(\alpha)}\) present at the unperturbed critical point, and so it cannot be generated under the perturbative RG.
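To spell out the power counting invoked here and below (a standard dimensional-analysis bookkeeping, recorded for convenience): each component of \(\psi\) carries scaling dimension \(1/2\) at the Ising fixed point, so a defect operator built from \(2k\) fermions has dimension \(k\), and a coupling multiplying it on the one-dimensional \(\tau=0\) line has dimension \(1-k\). Thus \[\big[\tilde{\mu}\big]=1-4\times\tfrac{1}{2}=-1\quad\text{(irrelevant)},\qquad\qquad\big[\tilde{\nu}\big]=1-2\times\tfrac{1}{2}=0\quad\text{(marginal at tree level)},\] in agreement with the statements above; whether the marginal single-replica term can actually appear is then controlled by the \(\psi\to\sigma^{x}\psi\) symmetry just discussed.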
An important subtlety, however, is that this symmetry is _explicitly broken_ by irrelevant perturbations to \(\mathcal{S}_{0}\). If we intend to understand measurements of the microscopic lattice TFIM, rather than measurements of the Ising CFT, then these irrelevant terms must be taken into consideration. These irrelevant perturbations result in a nonzero expectation value \(\left\langle\psi^{T}\sigma^{y}\psi\right\rangle_{\rm g.s.}\). As a result, performing a single step of first-order perturbative RG on \(\mathcal{S}_{M}^{(n)}\) yields \[\delta\mathcal{S}_{M,2}=-\tilde{\mu}\sum_{1\leq\alpha<\beta\leq n}\int{\rm d}x\,\Bigl{[}B(\psi^{T}\sigma^{y}\psi)^{(\beta)}+(\psi^{T}\sigma^{y}\psi)^{(\alpha)}B\Bigr{]}=-\tilde{\mu}B(n-1)\sum_{\alpha=1}^{n}\int{\rm d}x\,(\psi^{T}\sigma^{y}\psi)^{(\alpha)}, \tag{35}\] where \(B\) is a constant originating from the integration of fast modes5. We therefore find, at first order in the perturbative RG, that \(\mu_{2}=\tilde{\mu}B(n-1)\). Footnote 5: To avoid unnecessary details, we have not explained the perturbative RG explicitly. The constant \(B\) is given by the expectation value of the fast modes with momenta within a shell of width \({\rm d}\ell\). The rescaling step is inconsequential, since the term is marginal at tree level. While it seems that the RG has generated an exactly marginal term, it is important to note that its prefactor \(\mu_{2}\) is proportional to \(n-1\). As a result, it is inconsequential to any correlation function upon taking the replica limit \(n\to 1\): performing a perturbative expansion in \(\delta\mathcal{S}_{M,2}\), each term containing a factor \(\mu_{2}\) will vanish upon taking the replica limit. This observation can be understood simply on physical grounds: since probability conservation requires \(\sum_{\bf m}\hat{M}_{\bf m}^{2}=1\), we must have \(\mathcal{Z}_{M}^{(1)}=\langle\psi_{\rm g.s.}|\psi_{\rm g.s.}\rangle=1\). As a result, all single-replica contributions to the action of the form (35) must vanish in the \(n\to 1\) limit. The vanishing of \(\mu_{2}\) as \(n\to 1\) is therefore completely general, to all orders of the perturbative RG, and this term can be discarded for purposes of computing correlation functions in the replica limit. ## Appendix E Noncommuting Measurements and Observables In the main text, our analytical approach relied on assuming that the measurement operator commuted with the observable being investigated. In this section we show how to analyze observables which do not commute with the measurement operators using the same analytical mapping. The details are slightly more technically involved than the approach followed in the main text, but are conceptually similar. We demonstrate our approach explicitly in the ensemble-average measurement scheme; other cases, such as in the forced measurement scheme, or under \(ZZ\) measurements (see Appendix G), follow similarly. To start, it is useful to note that \(C_{\mathbf{m}}(r)=\left\langle Z_{0}Z_{r}\right\rangle_{\mathbf{m}}\) vanishes whenever sites \(0\) or \(r\) are measured. Using the replica scheme of Eq. (9), the numerator reads (22) To recover the form (11) for the averaged measurement operator on the right, we simply multiply and divide by the local terms in braces on the right corresponding to sites \(0\) and \(r\). We note that these operators have strictly positive spectra, and therefore have well-defined inverses. In particular, (23) where \(\omega\) is a monotonically increasing function of \(p\).
We therefore obtain (24) This equation is thus far exact, and is simply a sum of higher-order multi-replica correlation functions with respect to the multi-replica ground state coupled by the defect \(\hat{M}_{\text{avg}}\). At sufficiently long distances, higher-order terms proportional to \(\omega\) or \(\omega^{2}\) decay more rapidly than the zeroth order term proportional to \(\omega^{0}\). Since we are anyway primarily concerned with the long-range behavior, we are justified in dropping all of these higher-order terms. Altogether, we find (25) ## Appendix F Forced Measurement Entanglement Entropy In Sec. IV.3, we used the conformal invariance of the Ising CFT to provide a qualitative argument relating the average entanglement entropy \(\overline{S_{\mathbf{k}}(r)}\) of the TFIM following forced projective measurements to the half-system entanglement entropy \(S_{d}(N/2)\) of a dual ground-state defect problem. Here we provide some of the technical details required to complete the argument. As in Sec. IV.3, we are interested in computing the entanglement entropy \(S^{*}(r)\) of the fixed-point model (30), for a subregion of length \(r\). This can be computed from the \(n\to 1\) limit of the following ratio of two partition functions: \[S^{*}(r,\epsilon_{0})\equiv\lim_{n\to 1}\frac{1}{1-n}\log\bigg{\{}\frac{\mathcal{Z}_{n}(r,\epsilon_{0})}{\mathcal{Z}_{1}^{n}}\bigg{\}}. \tag{26}\] Here \(\mathcal{Z}_{1}\equiv\int D\psi\,e^{-\mathcal{S}[\psi]}\) is the single-replica partition function, while \(\mathcal{Z}_{n}(r,\epsilon_{0})\) is the partition function of an \(n\)-fold replicated theory subjected to the boundary conditions given in Eq. (32). Alternatively, we can think of the \(n\) replica fields \(\psi^{(\alpha)}(\tau,x)\) as a single field defined on an \(n\)-sheeted Riemann surface, with a branch cut running from \(x=-r/2\) to \(x=+r/2\) along the \(\tau=0\) axis. Both partition functions contain a defect running along the \(\tau=0\) axis6, as given in (30). In defining (F1), we have introduced explicit dependence on a short-distance cutoff \(\epsilon_{0}\) into the definition of both \(S^{*}(r,\epsilon_{0})\) and \(\mathcal{Z}_{n}(r,\epsilon_{0})\), which widens the branch points at \(x=\pm r/2\) into circles of radius \(\epsilon_{0}\). The value of \(\epsilon_{0}\) is fixed but otherwise arbitrary, and is necessary for obtaining a finite expression for the entanglement entropy [52; 54]. Footnote 6: To avoid the branch cut lying at exactly the same imaginary time as the defect, we can take the second term in the action (30) to be evaluated at \(\tau=0^{-}\). By returning to the properly discretized expression for the entanglement entropy and applying the cyclicity of the trace, it is clear that the entanglement cut can be chosen to lie just above the measurement defect. In order to exploit conformal invariance of the Ising CFT, it is useful to introduce the complex coordinate \(z\equiv x+i\tau\). Then, using the scale invariance of the action (30), we first rescale \(z\to 2z/r\). This transformation maps the branch points from \(z=\pm r/2\) to \(z=\pm 1\), but modifies the short-distance cutoff from \(\epsilon_{0}\) to \(2\epsilon_{0}/r\). We therefore have \(\mathcal{Z}_{n}(r,\epsilon_{0})=\mathcal{Z}_{n}(2,2\epsilon_{0}/r)\). Without loss of generality, we may therefore set \(r=2\) and compute \(\mathcal{Z}_{n}(2,\epsilon)\), from which we obtain the result for general \(r\) by setting \(\epsilon=2\epsilon_{0}/r\).
We next deform the entanglement branch cut from the real line onto the unit semicircle in the upper-half plane, as depicted in Fig. 5(a). As discussed in Sec. IV.3, the precise location of the branch cut on a Riemann surface is immaterial and can be deformed freely so long as its endpoints remain fixed. Alternatively, as a theory of \(n\) replica fields, the field redefinitions \(\psi^{(\alpha)}(\tau,x)\rightarrow\tilde{\psi}^{(\alpha)}(\tau,x)\) of Eq. (33) can be used to move the branch cut. Explicitly, the boundary condition \(\psi^{(\alpha)}(0^{-},x)=\psi^{(\alpha+1)}(0^{+},x)\) for \(|x|<1\) (i.e., at the bottom boundary of the semicircle \(\mathcal{D}\)) is equivalent to the continuity condition7\(\tilde{\psi}^{(\alpha)}(0^{-},x)=\tilde{\psi}^{(\alpha)}(0^{+},x)\). Similarly, the continuity of \(\tilde{\psi}^{(\alpha)}(\tau,x)\) at the top boundary of region \(\mathcal{D}\) becomes a matching condition analogous to that of Eq. (32). We emphasize that this transformation of fields is exactly what is done to deform a branch cut in the analysis of an ordinary complex function defined on a Riemann surface. Footnote 7: Note that we are being somewhat cavalier about \(\psi^{(\alpha)}(\tau,x)\) being a Grassmann field; strictly speaking, the field \(\psi^{(\alpha)}(\tau,x)\) is indeterminate, and its continuity is ill-defined. Instead one should consider a properly discretized path integral, where the continuity condition is realized by couplings between lattice points across the \(\tau=0\) boundary. Next, we use the conformal transformation \[z\mapsto z^{\prime}=f(z)\equiv-i\frac{L}{2\pi}\log z\] (F2) which maps the complex plane to a cylinder of circumference \(L\), with complex coordinate \(z^{\prime}=x^{\prime}+i\tau^{\prime}\) with \(x^{\prime}\cong x^{\prime}+L\) [see Fig. 5(aiii)]. The defect due to measurements, which lies along the \(\tau=0\) line of the original complex plane, maps to _two_ timelike defects on the cylinder: the positive real axis maps to the line \(x^{\prime}=0\), while the negative real axis maps to the line \(x^{\prime}=L/2\). Meanwhile, the branch cut along the unit semicircle in the upper-half plane is mapped to the semicircle from \(x^{\prime}=0\) to \(x^{\prime}=L/2\) along the \(\tau^{\prime}=0\) line. Under the transformation (F2), a short-distance cutoff of size \(\epsilon\) on the complex plane becomes a cutoff of size \(L\epsilon/2\pi\) on the cylinder; explicitly, an infinitesimal rectangle of area \(\epsilon^{2}\) centered on \(z=\pm 1\) is mapped to an infinitesimal rectangle of area \((\epsilon L/2\pi)^{2}\) on the cylinder. Let \(\mathcal{Z}_{n}^{\rm cyl}(L,\epsilon)\) denote the partition function of the \(n\)-fold replicated cylinder, with defects along \(x^{\prime}=0\) and \(x^{\prime}=L/2\), and with branch cut between \(x^{\prime}=0\) and \(x^{\prime}=L/2\) along \(\tau^{\prime}=0\). Then, the above sequence of mappings has shown the following equivalence of partition functions: \[\mathcal{Z}_{n}(r,\epsilon_{0})=\mathcal{Z}_{n}(2,2\epsilon_{0}/r)=\mathcal{Z}_{n}^{\rm cyl}(L,L\epsilon_{0}/\pi r).\] (F3) It is also useful to note that the same transformation implies \(\mathcal{Z}_{1}=\mathcal{Z}_{1}^{\rm cyl}\), where \(\mathcal{Z}_{1}^{\rm cyl}\) is the partition function of the single-replica cylinder with the given defects. Having formally established the connection between partition functions, we may now make connection with known results from the literature.
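Before doing so, we record a quick consistency check on the map (F2). Writing \(z=e^{i\theta}\) on the unit circle and \(z=\pm t\) with \(t>0\) on the real axis, \[f(e^{i\theta})=\frac{L\theta}{2\pi},\qquad f(t)=-i\frac{L}{2\pi}\log t,\qquad f(-t)=\frac{L}{2}-i\frac{L}{2\pi}\log t,\] so the unit semicircle \(0\leq\theta\leq\pi\) is indeed carried to the half-circle \(0\leq x^{\prime}\leq L/2\) at \(\tau^{\prime}=0\), while the positive and negative real axes become the timelike lines \(x^{\prime}=0\) and \(x^{\prime}=L/2\). Moreover \(|f^{\prime}(z)|=L/2\pi|z|\), which equals \(L/2\pi\) at \(|z|=1\), so a cutoff of linear size \(\epsilon\) near \(z=\pm 1\) is converted into one of size \(L\epsilon/2\pi\) on the cylinder, as used above.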
From previous studies of the entanglement entropy of the TFIM and the Ising CFT in the presence of exactly marginal defects [37; 38; 44; 45; 46], it is known that the half-system entanglement entropy with defects at the entangling boundaries exhibits the following form: \[S_{d}^{*}(L/2,\epsilon)\equiv\lim_{n\to 1}\frac{1}{1-n}\log\left\{\frac{\mathcal{Z}_{n}^{\rm cyl}(L,\epsilon)}{[\mathcal{Z}_{1}^{\rm cyl}]^{n}}\right\}=\frac{c_{\rm eff}^{*}(\nu)}{3}\log\frac{L}{\epsilon}+b_{d}(\nu),\] (F4) where \(c_{\rm eff}^{*}(\nu)\) is a continuously varying effective central charge which depends on the defect strength \(\nu\), while \(b_{d}(\nu)\) is an \(L\)-independent constant which depends on the precise regularization scheme used. Substituting \(\epsilon=L\epsilon_{0}/\pi r\), we obtain the following result for the original entanglement entropy (F1): \[S^{*}(r,\epsilon_{0})=S_{d}^{*}(L/2,L\epsilon_{0}/\pi r)=\frac{c_{\rm eff}^{*}(\nu)}{3}\log\left(\frac{L}{L\epsilon_{0}/\pi r}\right)+b_{d}(\nu)=\frac{c_{\rm eff}^{*}(\nu)}{3}\log r+b^{\prime}(\nu),\] (F5) where we have absorbed the constant \(\log(\pi/\epsilon_{0})\) into the definition of \(b^{\prime}(\nu)\). Notably, all dependence on the cylinder circumference \(L\) has been eliminated from the final expression; since \(L\) can be chosen arbitrarily, this is to be expected. We therefore arrive at the desired result: the entanglement entropy of a subregion of length \(r\) in the Ising CFT, in the presence of a marginal defect along the \(\tau=0\) line (such as that arising due to forced projective measurements), scales logarithmically with \(r\) with an effective central charge \(c_{\text{eff}}^{*}(\nu)\) that continuously varies with the defect strength. ## Appendix G \(Z_{j}Z_{j+1}\) Measurements Throughout the main text, we considered measurements of the observable \(X_{j}\) throughout the Ising chain. An alternative scheme is to consider measurements of \(Z_{j}Z_{j+1}\). Since \(X_{j}\) and \(Z_{j}Z_{j+1}\) are related by the Kramers-Wannier duality [65], it is natural to expect that measurements of these observables have similar effects on the ground state. There are, however, important differences in the behavior of observables. #### Born Projective \(Z_{j}Z_{j+1}\) Measurements We first consider a measurement protocol analogous to that of Sec. III: for each site \(j\), we perform a projective measurement of \(Z_{j}Z_{j+1}\) with probability \(p\), and sample the measurement outcome according to the Born rule. Such a protocol is described by a measurement operator \(\hat{M}_{\mathbf{m}}^{Z}\), which is a product of local measurement operators: \[\hat{M}_{\mathbf{m}}^{Z}=\prod_{j=1}^{N}\hat{M}_{m_{j},j}^{Z},\quad\hat{M}_{0,j}^{Z}=\sqrt{1-p},\quad\hat{M}_{\pm 1,j}^{Z}=\sqrt{p}\frac{1\pm Z_{j}Z_{j+1}}{2}. \tag{11}\] We obtain the state \(\left|\psi_{\mathbf{m}}^{Z}\right\rangle\) with probability \(p_{\mathbf{m}}^{Z}\), where \[\left|\psi_{\mathbf{m}}^{Z}\right\rangle=\frac{\hat{M}_{\mathbf{m}}^{Z}\left|\psi_{\text{g.s.}}\right\rangle}{\sqrt{\left\langle[\hat{M}_{\mathbf{m}}^{Z}]^{2}\right\rangle_{\text{g.s.}}}},\quad p_{\mathbf{m}}^{Z}=\left\langle[\hat{M}_{\mathbf{m}}^{Z}]^{2}\right\rangle_{\text{g.s.}}. \tag{12}\] The discussion of Sec.
III.1 then follows identically, with the caveat that we focus on the correlator \([C_{\mathbf{m}}^{Z}(r)]^{2}\equiv[\left\langle Z_{0}Z_{r}\right\rangle_{\mathbf{m}}^{Z}]^{2}\) in developing our replica scheme, where \(\left\langle\cdot\right\rangle_{\mathbf{m}}^{Z}\equiv\left\langle\psi_{\mathbf{m}}^{Z}\right|\cdot\left|\psi_{\mathbf{m}}^{Z}\right\rangle\). To study \(G_{\mathbf{m}}^{Z}(r)\equiv\left\langle X_{0}X_{r}\right\rangle_{\mathbf{m}}^{Z}-\left\langle X_{0}\right\rangle_{\mathbf{m}}^{Z}\left\langle X_{r}\right\rangle_{\mathbf{m}}^{Z}\), we use the approach of Appendix E to handle noncommuting measurements and observables. Following the same steps, we arrive at an averaged measurement operator \(\hat{M}_{\text{avg}}^{Z}\) which couples the \(n\) replicas: \[\hat{M}_{\text{avg}}^{Z}\equiv\sum_{\mathbf{m}}([\hat{M}_{\mathbf{m}}^{Z}]^{2})^{\otimes n}\propto\prod_{j}\Bigg{\{}1+\mu\sum_{r=1}^{\lfloor n/2\rfloor}\sum_{1\leq\alpha_{1}<\ldots<\alpha_{2r}\leq n}Z_{j}^{(\alpha_{1})}Z_{j+1}^{(\alpha_{1})}\ldots Z_{j}^{(\alpha_{2r})}Z_{j+1}^{(\alpha_{2r})}\Bigg{\}}. \tag{13}\] The derivation of the continuum limit of \(\hat{M}_{\text{avg}}^{Z}\) then follows nearly identically to that of \(\hat{M}_{\text{avg}}\) in Appendix D. The only difference lies in the replacement of the operators \(X_{j}=i\gamma_{2j-1}\gamma_{2j}\) with \(Z_{j}Z_{j+1}=i\gamma_{2j}\gamma_{2j+1}\). The partition function \(\mathcal{Z}_{M,Z}^{(n)}\equiv\left\langle\psi_{\text{g.s.}}^{\otimes n}\right|\hat{M}_{\text{avg}}^{Z}\left|\psi_{\text{g.s.}}^{\otimes n}\right\rangle\) is then given by the same expression as Eq. (12), with the replacement8 Footnote 8: In the final term, we have integrated by parts and dropped a total derivative, which vanishes under the integral \(\int\mathrm{d}x\). \[2i\psi_{2j-1}\psi_{2j}\to 2i\psi_{2j}\psi_{2j+1}=-2i\psi_{2j-1}\psi_{2j}+2i\psi_{2j}(\psi_{2j+1}-\psi_{2j-1})=a\psi^{T}\sigma^{y}\psi+ia^{2}\psi^{T}\sigma^{x}\partial_{x}\psi. \tag{14}\] Altogether, the defect created by \(Z_{j}Z_{j+1}\) measurements sampled according to the Born rule is described by a contribution to the action of the form \[\mathcal{S}_{M,Z}^{(n)}=-\mu\sum_{\alpha<\beta}\int\mathrm{d}x\left[\psi^{T}\sigma^{y}\psi+ia\psi^{T}\sigma^{x}\partial_{x}\psi\right]^{(\alpha)}\!\left[\psi^{T}\sigma^{y}\psi+ia\psi^{T}\sigma^{x}\partial_{x}\psi\right]^{(\beta)}+\ldots. \tag{15}\] The extra derivative terms \(i\psi^{T}\sigma^{x}\partial_{x}\psi\) are furthermore irrelevant compared to the term \(\psi^{T}\sigma^{y}\psi\). We therefore arrive at the same conclusion as in Sec. III: a nonzero density of \(Z_{j}Z_{j+1}\) measurements performed on the ground state \(\left|\psi_{\text{g.s.}}\right\rangle\) of the TFIM, with outcomes sampled according to the Born rule, does not affect the asymptotic structure of correlation functions or entanglement. Figure 6 gives the numerically computed correlation functions \(\overline{[C_{\mathbf{m}}^{Z}(r)]^{2}}\) and \(\overline{G_{\mathbf{m}}^{Z}(r)}\) and the entanglement entropy \(\overline{S_{\mathbf{m}}^{Z}(r)}\), each defined analogously to Eq. (7) with the replacement \(\left|\psi_{\mathbf{m}}\right\rangle\rightarrow\left|\psi_{\mathbf{m}}^{Z}\right\rangle\). As in the case of \(X_{j}\) measurements, we find excellent scaling collapse by plotting each observable as a function of \(s=\frac{N}{\pi}\sin\left(\frac{\pi r}{N}\right)\).
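Numerics of the sort shown in Fig. 6 are naturally formulated in the Majorana language used throughout these appendices. As a rough indication of the ingredients involved, the following minimal, self-contained Python sketch (ours; the Born-sampling and measurement-update layer is deliberately omitted, and all function names are our own) constructs the ground-state covariance matrix \(\Gamma_{ab}=\langle i\gamma_{a}\gamma_{b}\rangle_{\rm g.s.}\) of the critical chain and evaluates the entanglement entropy of a block from its restriction.

```python
import numpy as np

def majorana_hamiltonian(N, g=1.0, J=1.0):
    """Antisymmetric h with H = (i/4) sum_{ab} h_{ab} gamma_a gamma_b for the
    TFIM H = -sum_j [ g X_j + J Z_j Z_{j+1} ], using X_j = i g_{2j-1} g_{2j}
    and Z_j Z_{j+1} = i g_{2j} g_{2j+1}."""
    h = np.zeros((2 * N, 2 * N))
    for j in range(N):
        a, b = 2 * j, 2 * j + 1                   # gamma_{2j-1}, gamma_{2j}
        h[a, b], h[b, a] = -2.0 * g, 2.0 * g
        c = (2 * j + 2) % (2 * N)                 # gamma_{2j+1} of site j+1
        sgn = -1.0 if j == N - 1 else 1.0         # extra sign on the wrap-around
        # bond (antiperiodic fermions), avoiding the exact zero mode of the
        # naive periodic Majorana ring at criticality
        h[b, c], h[c, b] = -2.0 * J * sgn, 2.0 * J * sgn
    return h

def ground_state_covariance(h):
    """Gamma_{ab} = <i gamma_a gamma_b> (a != b) in the ground state of
    H = (i/4) gamma^T h gamma, via Gamma = i * sign(i h)."""
    vals, vecs = np.linalg.eigh(1j * h)
    sign = (vecs * np.sign(vals)) @ vecs.conj().T
    return np.real(1j * sign)

def block_entropy(gamma, r):
    """Entanglement entropy of sites 0..r-1 from the restricted covariance.
    Eigenvalues of i*Gamma_A come in pairs +/- nu; summing the binary entropy
    of (1+lam)/2 over all eigenvalues double-counts, hence the factor 1/2."""
    lam = np.linalg.eigvalsh(1j * gamma[: 2 * r, : 2 * r])
    p = np.clip((1.0 + lam) / 2.0, 1e-12, 1.0 - 1e-12)
    return float(-0.5 * np.sum(p * np.log(p) + (1.0 - p) * np.log(1.0 - p)))

N = 64
gamma = ground_state_covariance(majorana_hamiltonian(N))
print([round(block_entropy(gamma, r), 3) for r in (4, 8, 16, 32)])
```

Plotting such block entropies (and the correlators built from \(\Gamma\) via Wick's theorem) against the chord length \(s\) reproduces the unmeasured curves to which the post-measurement data in the figures are compared.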
As expected from the above discussion, we find \(\overline{[C_{\mathbf{m}}^{Z}(r)]^{2}}\sim s^{-1/2}\) and \(\overline{G_{\mathbf{m}}^{Z}(r)}\sim s^{-2}\) at sufficiently large \(s\), as well as \(\overline{S_{\mathbf{m}}^{Z}(r)}\sim\frac{1}{6}\log s+b_{4}(p)\). Two differences appear between the numerical results of \(Z_{j}Z_{j+1}\) measurements and \(X_{j}\) measurements. First, whereas the power-law coefficient in Fig. 2(a) decreases with increasing \(X_{j}\) measurement probability, the power-law coefficient of Fig. 6(a) increases with increasing \(Z_{j}Z_{j+1}\) measurement probability. This feature is easily understood on physical grounds: by projecting a large fraction of the ground state onto \(X_{j}=\pm 1\), the short-range ferromagnetic correlations are reduced; in particular, since \(\left\langle Z_{0}Z_{r}\right\rangle_{\mathbf{m}}=0\) whenever \(m_{0}=\pm 1\) or \(m_{r}=\pm 1\), \(\overline{C_{\mathbf{m}}^{2}(r)}\) is bounded above by the probability \((1-p)^{2}\) that both sites \(0\) and \(r\) remain unmeasured. In contrast, projecting a large fraction of the ground state onto \(Z_{j}Z_{j+1}=\pm 1\) gives the resulting post-measurement state \(\left|\psi_{\mathbf{m}}^{Z}\right\rangle\) short-range 'spin-glass' structure; in particular, \(\left[\left\langle Z_{0}Z_{r}\right\rangle_{\mathbf{m}}^{Z}\right]^{2}=+1\) with probability at least \(p^{r}\). Remarkably, even at large \(p\), when typical states \(\left|\psi_{\mathbf{m}}^{Z}\right\rangle\) feature such spin-glass structure throughout the majority of the system, at sufficiently long distances the power-law scaling of the ground state is recovered. Second, whereas the contribution \(b_{1}(p)\) to the entanglement entropy \(\overline{S_{\mathbf{m}}(r)}\) strictly decreases with increasing measurement probability [see Fig. 3], the analogous contribution \(b_{4}(p)\) to \(\overline{S_{\mathbf{m}}^{Z}(r)}\) exhibits non-monotonic behavior [see Fig. 6(c)]. In contrast to the \(X_{j}\) measurement scheme of Sec. III, where measurements are localized within subsystem \(A=[0:r)\) or its complement and strictly decrease the entanglement entropy, here measurements of \(Z_{r-1}Z_{r}\) or \(Z_{-1}Z_{0}\) are capable of increasing the entanglement entropy between the two subsystems. #### Forced Projective \(Z_{j}Z_{j+1}\) Measurements We can similarly analyze the effect of forced \(Z_{j}Z_{j+1}\) measurements, in which we postselect on the outcome \(+1\) for each measurement of \(Z_{j}Z_{j+1}\). Analogously to the discussion of Sec. IV, we describe this measurement protocol with the measurement operator \[\hat{K}_{\mathbf{k}}^{Z}=\prod_{j=1}^{N}\hat{K}_{k_{j},j}^{Z},\quad\hat{K}_{0,j}^{Z}=1,\quad\hat{K}_{1,j}^{Z}=\frac{1+Z_{j}Z_{j+1}}{2}. \tag{10}\] Figure 6: Ensemble-averaged correlation functions \(\overline{[C_{\mathbf{m}}^{Z}(r)]^{2}}\) and \(\overline{G_{\mathbf{m}}^{Z}(r)}\) and entanglement entropy \(\overline{S_{\mathbf{m}}^{Z}(r)}\) following \(Z_{j}Z_{j+1}\) measurements with outcomes sampled according to the Born rule, for measurement probabilities \(p=0.2\) (blue), \(0.5\) (green), and \(0.8\) (red), and for system sizes \(N=32\), \(64\), \(128\), and \(256\) (light to dark). Data is plotted as a function of \(s=\frac{N}{\pi}\sin\left(\frac{\pi r}{N}\right)\) to achieve scaling collapse of the various system sizes. Similar to the ensemble with \(X_{j}\) measurements [see Sec.
III], both correlation functions retain their power-law scaling with exponents of the unmeasured system at sufficiently long distances, while the entanglement entropy retains its logarithmic scaling with central charge \(c=1/2\) of the unmeasured system. We then obtain the state \(\left|\psi_{\mathbf{k}}^{Z}\right\rangle\) with probability \(p_{\mathbf{k}}\), where \[\left|\psi_{\mathbf{k}}^{Z}\right\rangle=\frac{\hat{K}_{\mathbf{k}}^{Z}\left|\psi_{\text{g.s.}}\right\rangle}{\sqrt{\langle\hat{K}_{\mathbf{k}}^{Z}\rangle_{\text{g.s.}}}},\quad p_{\mathbf{k}}=p^{|\mathbf{k}|}(1-p)^{N-|\mathbf{k}|}, \tag{100}\] where \(|\mathbf{k}|=\sum_{j=1}^{N}k_{j}\) is the number of measurements performed. The analysis of Sec. IV.1 then follows identically, with the replacement (101) in the averaged measurement operator. The end result is a defect described by the action \[\mathcal{S}_{K,Z}^{(n)}=-\nu\sum_{\alpha=1}^{n}\int\mathrm{d}x\,(\psi^{T}\sigma^{y}\psi)^{(\alpha)}+\dots, \tag{101}\] where as usual \(\psi\) is here evaluated strictly at \(\tau=0\), and the ellipsis again contains irrelevant terms, including the derivative term of Eq. (101). Notably, \(Z_{j}Z_{j+1}\) measurements yield an exactly marginal term identical to that of Eq. (25), but crucially with the opposite sign. This change of sign does not affect the scaling dimension of \(\psi\) and therefore is not expected to affect the asymptotic scaling of \(\overline{G_{\mathbf{k}}^{Z}(r)}\). On the other hand, it has crucial effects on the behavior of the correlations \(C_{\mathbf{k}}^{Z}(r)=\left\langle Z_{0}Z_{r}\right\rangle_{\mathbf{k}}^{Z}\). As a continuum analogue of a two-dimensional classical Ising model with a defect line, the perturbation (25) corresponds to weakened bonds along the defect line, and results in weaker ferromagnetic correlations along the defect. In contrast, Eq. (101) corresponds to _strengthened_ bonds along the defect line, and results in enhanced ferromagnetic correlations. Following the results of Refs. [33; 34], we expect \(\overline{C_{\mathbf{k}}^{Z}(r)}\sim r^{-2\Delta^{Z}(p)}\) to again exhibit a continuously varying power law, but with a scaling dimension \(\Delta^{Z}(p)\) which _decreases_ with increasing measurement strength. Asymptotically as \(p\to 1\), we expect that projecting \(Z_{j}Z_{j+1}=+1\) almost everywhere results in near-perfect long-range order, so that \(\Delta^{Z}(p)\to 0\) as \(p\to 1\). Figure 7 depicts the numerically computed correlation functions \(\overline{C_{\mathbf{k}}^{Z}(r)}\) and \(\overline{G_{\mathbf{k}}^{Z}(r)}\) and the entanglement entropy \(\overline{S_{\mathbf{k}}^{Z}(r)}\), defined analogously to Eq. (7) with the replacement \(\left|\psi_{\mathbf{m}}\right\rangle\rightarrow\left|\psi_{\mathbf{k}}^{Z}\right\rangle\). As expected, \(\overline{G_{\mathbf{k}}^{Z}(r)}\sim s^{-2}\) exhibits the same power-law scaling as in the unmeasured system, while \(\overline{C_{\mathbf{k}}^{Z}(r)}\sim s^{-2\Delta^{Z}(p)}\) features a continuously varying power law characterized by a scaling dimension \(\Delta^{Z}(p)\). As \(p\) increases, \(\Delta^{Z}(p)\) decreases towards zero, resulting in longer-ranged order parameter correlations. Similar to the entanglement entropy \(\overline{S_{\mathbf{k}}(r)}\) in Sec.
IV.3, the entanglement entropy \(\overline{S_{\mathbf{k}}^{Z}(r)}\sim\frac{c_{\text{eff}}^{Z}(p)}{3}\log s+b_{5}(p)\) exhibits a continuously decreasing effective central charge, which can be understood by mapping to a problem with ordinary timelike impurities. ## Appendix H 'No-Click' Measurements Throughout the main text, we have considered two random projective measurement schemes. Focusing on projective measurements provides a closer comparison to typical experimental platforms, while performing measurements randomly throughout space restores translation invariance on average and effectively softens the average strength of the a priori completely disentangling projective measurements. One conceptual downside to random measurement schemes, however, is the requirement of replicas in order to perform statistical averages. Random measurement schemes also impose an additional computational overhead due to Monte Carlo sampling, which becomes especially severe in the case of non-self-averaging observables. An alternative deterministic measurement scheme, which retains translation invariance, is to consider the effect of postselected weak measurements which only partially collapse the ground state. Here we consider a particular postselected weak measurement scheme which we call "no-click" measurements. A similar measurement scheme was used in the previous work of Ref. [9]. To derive the no-click measurement of the observable \(X_{j}\), we imagine introducing an ancillary qubit initialized in the state \(\ket{0}\). We then couple this ancillary qubit to the \(j\)th spin of the system via the unitary \[U_{\text{n.c.}}=\exp\!\left\{-i\alpha\!\left(\frac{1-X_{j}}{2}\right)\otimes\sigma^{y}\right\}=\left(\frac{1+X_{j}}{2}\right)+\left(\frac{1-X_{j}}{2}\right)\otimes e^{-i\alpha\sigma^{y}}, \tag{11}\] where \(\sigma^{y}\) acts on the ancillary qubit, and \(0\leq\alpha\leq\pi/2\). Following the evolution by \(U_{\text{n.c.}}\), the state of the system plus ancilla is given by \[U_{\text{n.c.}}\ket{\psi_{\text{g.s.}}}\otimes\ket{0}=\left[1+(\cos\alpha-1)\!\left(\frac{1-X_{j}}{2}\right)\right]\ket{\psi_{\text{g.s.}}}\otimes\ket{0}+\sin\alpha\!\left(\frac{1-X_{j}}{2}\right)\ket{\psi_{\text{g.s.}}}\otimes\ket{1}. \tag{12}\] Finally, we measure the ancilla qubit in the computational basis. A 'click' of the measurement apparatus corresponds to the outcome \(1\), which projects \(X_{j}\) into the eigenstate \(-1\). In the absence of a click, corresponding to the outcome \(0\), the amplitude for \(X_{j}=-1\) is only partially suppressed. Postselecting on the latter outcome, the effect of the no-click measurement is given by \[\ket{\psi_{\text{g.s.}}}\mapsto\frac{e^{\lambda X_{j}}\ket{\psi_{\text{g.s.}}}}{\left\langle e^{\lambda X_{j}}\right\rangle_{\text{g.s.}}}, \tag{13}\] where \(\lambda\) is a monotonic function of \(\alpha\), with \(\lambda=\infty\) corresponding to the projective measurement \(\alpha=\pi/2\). Performing the same no-click measurement on each qubit, we altogether obtain the state \[\ket{\psi_{\text{n.c.}}}=\frac{\hat{K}_{\text{n.c.}}\ket{\psi_{\text{g.s.}}}}{\left\langle\hat{K}_{\text{n.c.}}^{2}\right\rangle_{\text{g.s.}}},\quad\hat{K}_{\text{n.c.}}=\prod_{j=1}^{N}e^{\lambda X_{j}}. \tag{14}\] We are interested in comparing the behavior of observables in the no-click state \(\ket{\psi_{\text{n.c.}}}\) to that of the unmeasured ground state \(\ket{\psi_{\text{g.s.}}}\).
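Because the no-click protocol is deterministic, its effect can be checked directly on small chains by exact diagonalization, with no replicas or outcome sampling. The following minimal Python sketch (our own illustration, far too small to resolve the asymptotic exponents; all function names are ours) builds the critical TFIM ground state, applies \(\hat{K}_{\text{n.c.}}=\prod_{j}e^{\lambda X_{j}}\), and evaluates \(\langle Z_{0}Z_{r}\rangle_{\text{n.c.}}\).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, j, N):
    """Operator acting as `op` on site j of an N-site chain."""
    mats = [I2] * N
    mats[j] = op
    return reduce(np.kron, mats)

def tfim_ground_state(N, g=1.0, J=1.0):
    """Ground state of H = -sum_j [ g X_j + J Z_j Z_{j+1} ] with PBC."""
    H = np.zeros((2**N, 2**N))
    for j in range(N):
        H -= g * site_op(X, j, N)
        H -= J * site_op(Z, j, N) @ site_op(Z, (j + 1) % N, N)
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]

def no_click_state(psi, lam, N):
    """Apply K = prod_j exp(lam X_j), using exp(lam X) = cosh(lam) + sinh(lam) X."""
    for j in range(N):
        psi = np.cosh(lam) * psi + np.sinh(lam) * (site_op(X, j, N) @ psi)
    return psi / np.linalg.norm(psi)

def zz_correlator(psi, r, N):
    op = site_op(Z, 0, N) @ site_op(Z, r, N)
    return float(psi @ (op @ psi))

N = 10
psi0 = tfim_ground_state(N)
for lam in (0.0, 0.25, 0.5, 1.0):
    psi = no_click_state(psi0, lam, N)
    print(lam, [round(zz_correlator(psi, r, N), 4) for r in (1, 2, 3, 4, 5)])
```

Such brute-force checks are limited to small \(N\), but they provide an independent benchmark for the continuum predictions derived below.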
For example, the connected energy density correlator is given by \[G_{\text{n.c.}}(r)\equiv\left\langle X_{0}X_{r}\right\rangle_{\text{n.c.}}-\left\langle X_{0}\right\rangle_{\text{n.c.}}\left\langle X_{r}\right\rangle_{\text{n.c.}}=\frac{\left\langle X_{0}X_{r}\hat{K}_{\text{n.c.}}^{2}\right\rangle_{\text{g.s.}}}{\left\langle\hat{K}_{\text{n.c.}}^{2}\right\rangle_{\text{g.s.}}}-\frac{\left\langle X_{0}\hat{K}_{\text{n.c.}}^{2}\right\rangle_{\text{g.s.}}}{\left\langle\hat{K}_{\text{n.c.}}^{2}\right\rangle_{\text{g.s.}}}\frac{\left\langle X_{r}\hat{K}_{\text{n.c.}}^{2}\right\rangle_{\text{g.s.}}}{\left\langle\hat{K}_{\text{n.c.}}^{2}\right\rangle_{\text{g.s.}}}, \tag{109}\] where we have used \([X_{j},\hat{K}_{\text{n.c.}}]=0\). Order-parameter correlations \(C_{\text{n.c.}}(r)=\left\langle Z_{0}Z_{r}\right\rangle_{\text{n.c.}}\) and entanglement entropy \(S_{\text{n.c.}}(r)=-\operatorname{tr}\rho_{\text{n.c.}}^{A}\log\rho_{\text{n.c.}}^{A}\) are defined similarly, the former of which can be analyzed within the same framework using the method of Appendix E. From this expression, we see that correlations in the post-measurement state can be obtained from correlations of the following partition function: \[\mathcal{Z}_{\text{n.c.}}\equiv\bra{\psi_{\text{g.s.}}}\hat{K}_{\text{n.c.}}^{2}\ket{\psi_{\text{g.s.}}}=\int D\psi\,\exp\biggl{\{}-\mathcal{S}_{0}[\psi]+\tilde{\lambda}\int\mathrm{d}x\,\psi^{T}\sigma^{y}\psi\biggr{\}}, \tag{110}\] where \(\tilde{\lambda}\) is a monotonic function of \(\lambda\), and in the latter term \(\psi\) is evaluated at \(\tau=0\). We immediately see that no-click measurements result in an exactly marginal defect along the \(\tau=0\) line, of exactly the same form as in the forced measurement scheme [see Eq. (25)]. We therefore expect identical phenomenology for the long-distance correlations: in particular, we expect \(G_{\text{n.c.}}(r)\sim r^{-2}\) to retain the same scaling as in the unmeasured state, while \(C_{\text{n.c.}}(r)\sim r^{-2\Delta_{\text{n.c.}}(\alpha)}\) obtains a continuously varying power-law exponent with \(\Delta_{\text{n.c.}}(0)=1/8\) and \(\lim_{\alpha\to\pi/2}\Delta_{\text{n.c.}}(\alpha)=1/2\). We additionally expect the entanglement entropy to exhibit a continuously varying effective central charge, \(S_{\text{n.c.}}(r)\sim\frac{c_{\text{eff,n.c.}}(\alpha)}{3}\log r+b_{\text{n.c.}}(\alpha)\), such that \(c_{\text{eff,n.c.}}(0)=1/2\) and \(c_{\text{eff,n.c.}}(\alpha)\) decreases towards zero as \(\alpha\to\pi/2\). These qualitative features are verified numerically in Fig. 8. We may alternatively consider no-click measurements of \(Z_{j}Z_{j+1}\). The unitary \(U_{\text{n.c.}}^{Z}\) which implements such a measurement is obtained simply by replacing \(X_{j}\) with \(Z_{j}Z_{j+1}\) in Eq. (105). The resulting state following \(Z_{j}Z_{j+1}\) no-click measurements for each site \(j\) is \[\ket{\psi_{\text{n.c.}}^{Z}}=\frac{\hat{K}_{\text{n.c.}}^{Z}\ket{\psi_{\text{g.s.}}}}{\left\langle[\hat{K}_{\text{n.c.}}^{Z}]^{2}\right\rangle_{\text{g.s.}}},\quad\hat{K}_{\text{n.c.}}^{Z}=\prod_{j=1}^{N}e^{\lambda Z_{j}Z_{j+1}}. \tag{111}\] Similarly to the discussion of Appendix G, replacing \(X_{j}\) by \(Z_{j}Z_{j+1}\) results in the same type of defect as in (110) (up to irrelevant terms), but with an altered sign. Explicitly, the partition function used to evaluate correlation functions is given by \[\mathcal{Z}_{\text{n.c.}}^{Z}\equiv\bra{\psi_{\text{g.s.}}}[\hat{K}_{\text{n.c.}}^{Z}]^{2}\ket{\psi_{\text{g.s.}}}=\int D\psi\,\exp\biggl{\{}-\mathcal{S}_{0}[\psi]-\tilde{\lambda}_{Z}\int\mathrm{d}x\,\psi^{T}\sigma^{y}\psi\biggr{\}}, \tag{112}\] where \(\tilde{\lambda}_{Z}\) is a monotonic function of \(\lambda\), the latter term in the exponential is again evaluated strictly at \(\tau=0\), and we have neglected irrelevant terms. Identically to the case of forced \(Z_{j}Z_{j+1}\) measurements, we expect \(C^{Z}_{\text{n.c.}}(r)\sim r^{-2\Delta^{Z}_{\text{n.c.}}(\alpha)}\) to exhibit a continuously _decreasing_ power-law exponent \(\Delta^{Z}_{\text{n.c.}}(\alpha)\) with increasing measurement strength, while \(S^{Z}_{\text{n.c.}}(r)\sim\frac{c^{Z}_{\text{eff,n.c.}}(\alpha)}{3}\log r+b^{Z}_{\text{n.c.}}(\alpha)\) exhibits a continuously decreasing effective central charge \(c^{Z}_{\text{eff,n.c.}}(\alpha)\). These features are again verified numerically in Fig. 9.
2301.06245
Deformations of $\mathbb Z_2$-Harmonic Spinors on 3-Manifolds
A $\mathbb Z_2$-harmonic spinor on a 3-manifold $Y$ is a solution of the Dirac equation on a bundle that is twisted around a submanifold $\mathcal Z$ of codimension 2 called the singular set. This article investigates the local structure of the universal moduli space of $\mathbb Z_2$-harmonic spinors over the space of parameters $(g,B)$ consisting of a metric and perturbation to the spin connection. The main result states that near a $\mathbb Z_2$-harmonic spinor with $\mathcal Z$ smooth, the universal moduli space projects to a codimension 1 submanifold in the space of parameters. The analysis is complicated by the presence of an infinite-dimensional obstruction bundle and a loss of regularity in the first variation of the Dirac operator with respect to deformations of the singular set $\mathcal Z$, necessitating the use of the Nash-Moser Implicit Function Theorem.
Gregory J. Parker
2023-01-16T03:31:24Z
http://arxiv.org/abs/2301.06245v1
# Deformations of \(\mathbb{Z}_{2}\)-Harmonic Spinors on 3-Manifolds ###### Abstract A \(\mathbb{Z}_{2}\)-harmonic spinor on a 3-manifold \(Y\) is a solution of the Dirac equation on a bundle that is twisted around a submanifold \(\mathcal{Z}\) of codimension 2 called the singular set. This article investigates the local structure of the universal moduli space of \(\mathbb{Z}_{2}\)-harmonic spinors over the space of parameters \((g,B)\) consisting of a metric and perturbation to the spin connection. The main result states that near a \(\mathbb{Z}_{2}\)-harmonic spinor with \(\mathcal{Z}\) smooth, the universal moduli space projects to a codimension 1 submanifold in the space of parameters. The analysis is complicated by the presence of an infinite-dimensional obstruction bundle and a loss of regularity in the first variation of the Dirac operator with respect to deformations of the singular set \(\mathcal{Z}\), necessitating the use of the Nash-Moser Implicit Function Theorem. ###### Contents * 1 Introduction * 1.1 Main Results * 1.2 Relations to Gauge Theory * 1.3 Outline * 2 Semi-Fredholm Properties * 2.1 Function Spaces * 2.2 Mapping Properties * 2.3 Higher Regularity * 3 Local Expressions * 3.1 The Model Operator * 3.2 Local Expressions * 3.3 Asymptotic Expansions * 4 The Obstruction Space * 4.1 The Model Obstruction * 4.2 Fredholm Properties * 4.3 The Index via Concentration * 4.4 The Obstruction Map * 4.5 The Higher Regularity Obstruction * 5 The Universal Dirac Operator * 5.1 Trivializations * 5.2 Universal Linearization * 5.3 First Variation Formula Fredholmness of Deformations * 6.1 Conormal Regularity * 6.2 Obstruction Component of Deformations * 6.3 The Index of \(\mathcal{L}_{\Phi_{0}}\) * 7 Nash-Moser Theory * 7.1 Tame Frechet Spaces * 7.2 The Implicit Function Theorem * 8 Tame Estimates * 8.1 The Obstruction Bundle * 8.2 Invertibility on a Neighborhood * 8.3 Quadratic and Error Terms * 8.4 Tame Frechet Spaces * 8.5 Tame Estimates for the Linearization * 8.6 Proofs of Theorem 1.4 and Corollary 1.5 * A Appendix I: Exponential Decay * B Appendix II: Boundary and Edge Regularity ## 1 Introduction The notion of a \(\mathbb{Z}_{2}\)-harmonic spinor was introduced by C. Taubes to describe the limits of renormalized sequences of solutions to generalized Seiberg-Witten equations. \(\mathbb{Z}_{2}\)-harmonic spinors are also the simplest type of Fueter section, and are therefore of interest in the study of gauge theories and enumerative theories on manifolds with special holonomy. Beyond these connections, \(\mathbb{Z}_{2}\)-harmonic spinors are intrinsic objects on low-dimensional manifolds and can be studied without reference to any of these theories. This article investigates the local structure of the universal moduli space of \(\mathbb{Z}_{2}\)-harmonic spinors over the space of parameters on a compact 3-manifold. The main result states that this universal moduli space locally projects to a codimension 1 submanifold, i.e. a "wall", in the space of parameters. This provides a key step toward confirming expectations that \(\mathbb{Z}_{2}\)-harmonic spinors should enter into the above theories via wall-crossing formulas. Results in this direction have also been obtained by R. Takahashi using different techniques [49]. The present work grew out of attempts to develop a more robust analytic framework for these results, with an eye towards applications to gluing problems [42] and other deformation problems. As observed by S. 
Donaldson [9], the same analytic issues appear in many distinct geometric contexts, most of which remain unexplored [26]. ### Main Results Let \((Y,g)\) be a closed, oriented, Riemannian 3-manifold, and fix a spin structure with spinor bundle \(S\to Y\). Given a closed submanifold \(\mathcal{Z}\subset Y\) of codimension 2, choose a real line bundle \(\ell\to Y-\mathcal{Z}\). The spinor bundle \(S\otimes_{\mathbb{R}}\ell\) carries a Dirac operator denoted \(\not{D}_{\mathcal{Z}}\) formed from the spin connection and the unique flat connection on \(\ell\) with holonomy in \(\mathbb{Z}_{2}\). A \(\mathbb{Z}_{2}\)**-harmonic spinor** is a solution \(\Phi\in\Gamma(S\otimes_{\mathbb{R}}\ell)\) of the twisted Dirac equation on \(Y-\mathcal{Z}\) satisfying \[\not{D}_{\mathcal{Z}}\Phi=0\qquad\qquad\text{ and }\qquad\qquad\nabla\Phi\in L^{2}. \tag{1.1}\] The submanifold \(\mathcal{Z}\) is called the **singular set**. When \(\mathcal{Z}\) has sufficient regularity, the latter requirement implies that \(|\Phi|\) extends continuously to the closed manifold \(Y\) with \(\mathcal{Z}\subseteq|\Phi|^{-1}(0)\). The existence (and abundance) of \(\mathbb{Z}_{2}\)-harmonic spinors with \(\mathcal{Z}\neq\emptyset\) on closed 3-manifolds with \(b_{1}>1\) was established by Doan-Walpuski in [8]. The definition of the Dirac operator relies on a background choice of a Riemannian metric \(g\) on \(Y\) and possibly a perturbation \(B\) to the spin-connection. Let \(\mathcal{P}=\{(g,B)\}\) denote the parameter space of possible choices. Given a pair \((g_{0},B_{0})\) and a \(\mathbb{Z}_{2}\)-harmonic spinor \((\mathcal{Z}_{0},\ell_{0},\Phi_{0})\) with respect to this pair, the goal of the present work is to study the local deformation problem, i.e. to describe the structure of the set of nearby pairs \((g,B)\in\mathcal{P}\) for which there exists a \(\mathbb{Z}_{2}\)-harmonic spinor. This problem cannot be addressed with the standard elliptic theory used for classical harmonic spinors [27, 32]. If \(\ell\) has a non-trivial twist around \(\mathcal{Z}_{0}\), the Dirac operator \(\not{D}_{\mathcal{Z}_{0}}\) is degenerate along the singular set \(\mathcal{Z}_{0}\) and is therefore not a uniformly elliptic operator on a closed manifold. Instead, it is an **elliptic edge operator** - a class of operators well-studied in microlocal analysis [34, 37, 45]. For such operators, elliptic regularity fails and the extension to Sobolev spaces need not be Fredholm. In particular, for natural function spaces \(\not{D}_{\mathcal{Z}_{0}}\) possesses an infinite-dimensional cokernel. As a result, the problem of deforming a solution to a solution for a nearby parameter seemingly carries an infinite-dimensional obstruction. The following key idea, first described by Takahashi in [49], addresses this issue. _Key Idea: the infinite-dimensional obstruction is cancelled by deformations of the singular set \(\mathcal{Z}\)._ Since the Dirac equation \(\not{D}_{\mathcal{Z}}\) depends on \(\mathcal{Z}\), but \(\mathcal{Z}\) is in turn determined by the vanishing of the norm \(|\Phi|\) of a spinor solving (1.1), the singular set and the spinor are coupled and must be solved for simultaneously. The problem thus has a similar character to a free-boundary-value problem, where the domain and solution must be found concurrently, though the "boundary" here has codimension 2. In particular, this analysis requires an understanding of the derivative of the Dirac operator with respect to deformations of the singular set \(\mathcal{Z}\). 
Upgrading the singular set \(\mathcal{Z}\) to a variable, we define the **universal Dirac operator** to be the operator acting on pairs \((\mathcal{Z},\Phi)\) of a singular set and spinor with reference to a background parameter \(p\in\mathcal{P}\) by \[\not{\mathbb{D}}_{p}(\mathcal{Z},\Phi):=\not{D}_{\mathcal{Z}}\Phi\] where the choice of parameter \(p=(g,B)\) is implicit on the right-hand side. **Definition 1.1**.: Given a parameter pair \(p=(g,B)\in\mathcal{P}\) the **moduli space of \(\mathbb{Z}_{2}\)-harmonic spinors** is the space \[\mathscr{M}_{\mathbb{Z}_{2}}(p):=\left\{(\mathcal{Z},\ell,\Phi)\ \Big{|}\ \not{\mathbb{D}}_{p}(\mathcal{Z},\Phi)=0\ \,\ \ w_{1}(\ell)\in H^{1}(Y-\mathcal{Z};\mathbb{Z}_{2})\ \,\ \ \|\Phi\|_{L^{2}}=1\ \right\}\Big{/}\mathbb{Z}_{2} \tag{1.2}\] and the **universal moduli space of \(\mathbb{Z}_{2}\)-harmonic spinors** is the union \[\widetilde{\mathscr{M}}_{\mathbb{Z}_{2}}:=\bigcup_{p\in\mathcal{P}}\mathscr{M }_{\mathbb{Z}_{2}}(p).\] The middle condition in (1.2) means that the real line bundle \(\ell\to Y-\mathcal{Z}_{0}\) is considered up to its topological isomorphism class. Because \(\not{D}_{\mathcal{Z}}\) is \(\mathbb{R}\)-linear and \(\mathbb{Z}_{2}\) acts by \(\Phi\mapsto-\Phi\), the fiber of the moduli space \(\mathscr{M}_{\mathbb{Z}_{2}}(p)\) is a real projective space for each choice of \((\mathcal{Z},\ell)\). Here, the singular set \(\mathcal{Z}\subset Y\) is assumed to be closed and rectifiable subset of (Hausdorff) codimension 2. It is conjectured that the singular set is generically a differentiable submanifold of codimension 2 when the metric \(g\) is smooth, though at present is only known to be rectifiable in general [23, 56, 71]. Taubes and Wu have constructed examples for which the singular set \(\mathcal{Z}\) is modeled by a collection of rays from the origin, which are expected to model non-generic behavior [60]. We do not attempt to address these regularity issues here. We now state the main results. The first result, Theorem 1.3 describes the linearized deformation theory near a \(\mathbb{Z}_{2}\)-harmonic spinor; the next result, Theorem 1.4, address the non-linear version. Throughout, we fix a central parameter \(p_{0}=(g_{0},B_{0})\) such that there exists a \(\mathbb{Z}_{2}\)-harmonic spinor \((\mathcal{Z}_{0},\ell_{0},\Phi_{0})\) with respect to \(p_{0}\) meeting the following requirements. **Definition 1.2**.: A \(\mathbb{Z}_{2}\)-harmonic spinor \((\mathcal{Z}_{0},\ell_{0},\Phi_{0})\) is said to be **regular** if the following three assumptions hold: **Assumption 1**.: The singular set \(\mathcal{Z}_{0}\subset Y\) is a smooth, embedded link, and the real line bundle \(\ell_{0}\) restricts to the mobius bundle on every disk normal to \(\mathcal{Z}_{0}\). **Assumption 2**.: The spinor \(\Phi_{0}\) has non-vanishing leading-order, i.e. there is a constant \(c_{1}\) such that \[|\Phi_{0}|\geqslant c_{1}\mathrm{dist}(-,\mathcal{Z}_{0})^{1/2}\] **Assumption 3**.: \(\Phi_{0}\) is isolated, i.e. it is the unique \(\mathbb{Z}_{2}\)-harmonic spinor for the pair \((\mathcal{Z}_{0},\ell_{0})\) with respect to \((g_{0},B_{0})\) up to normalization and sign. With these assumptions, the Dirac operator \[\not{D}_{\mathcal{Z}_{0}}:H^{1}(S\otimes_{\mathbb{R}}\ell)\to L^{2}(S \otimes_{\mathbb{R}}\ell) \tag{1.3}\] has closed range and infinite-dimensional cokernel, where \(H^{1}\) is the Sobolev space of sections whose covariant derivative is \(L^{2}\). 
Let \(\Pi_{0}\) denote the \(L^{2}\)-orthogonal projection to the orthogonal complement of the range, which is naturally isomorphic to the cokernel. The first result gives a precise manifestation of the key idea explained above: **Theorem 1.3**.: The projection of the first-variation of the universal Dirac operator with respect to deformations of the singular set \(\mathcal{Z}\) \[\Pi_{0}\circ\mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\not{\mathbb{D}}:L^{2,2}( \mathcal{Z}_{0};N\mathcal{Z}_{0})\longrightarrow\mathrm{Coker}(\not{D}_{ \mathcal{Z}_{0}}) \tag{1.4}\] is an elliptic pseudo-differential operator of order \(\frac{1}{2}\) and its Fredholm extension has index \(-1\). Here, sections of the normal bundle \(N\mathcal{Z}_{0}\) is the tangent space to the space of embeddings of \(\mathcal{Z}_{0}\). In Section 4, it is shown there is an isomorphism \(\mathrm{Coker}(\not{D}_{\mathcal{Z}_{0}})\simeq L^{2}(\mathcal{Z}_{0}; \mathcal{S})\) of the infinite-dimensional cokernel with a space of sections on \(\mathcal{Z}_{0}\); composing with this isomorphism, (1.4) is a map of sections of vector bundles on \(\mathcal{Z}_{0}\) and the meaning of pseudodifferential operator is the standard one. Theorem 1.3 shows that the operator \(\not{\mathbb{D}}\) displays a **loss of regularity** of order \(\frac{3}{2}\). While the non-linear operator \(\not{\mathbb{D}}\) is only bounded into \(L^{2}\), the fact that (1.4) is elliptic of order \(\frac{1}{2}\) means it has the following properties: 1. the kernel is finite-dimensional, and the closure of the range has finite codimension, 2. the range is dense and not closed, but is closed as a map into \(\mathrm{Coker}(\not{D}_{\mathcal{Z}_{0}})\cap H^{3/2}\) with the \(\frac{3}{2}\)-norm, 3. as a map into \(\mathrm{Coker}(\not{D}_{\mathcal{Z}_{0}})\cap H^{3/2}\), it is Fredholm with index equal to \(-1\). Loss of regularity is an intriguing phenomenon intrinsic to many types of PDE [1, 18, 29]. More details are given during the proof, but for our immediate purposes a loss of regularity means that there are no function spaces for which the universal Dirac operator is simultaneously bounded and has Fredholm derivative. For every natural function space \(\mathcal{X}\) for the domain, the codomain \(\mathcal{Y}\) of the universal Dirac operator \(\not{\mathbb{D}}:\mathcal{X}\rightarrow\mathcal{Y}\) may be chosen _either_ so that the non-linear part of \(\not{\mathbb{D}}\) is bounded, in which case the derivative \(\mathrm{d}\not{\mathbb{D}}\) does not have closed range, _or_ it may be chosen so that the derivative is Fredholm, in which case non-linear part is unbounded. Deformation problems for equations displaying a loss of regularity cannot be addressed using the standard Implicit Function Theorem on Banach spaces; instead they must be solved using various versions of the Nash-Moser Implicit Function Theorem on tame Frechet manifolds, denoted in our case by \(\mathcal{X}\) and \(\mathcal{Y}\). Using the linearized result Theorem 1.3 and the Nash-Moser Implicit Function Theorem leads to our main result: **Theorem 1.4**.: There exists an open neighborhood \(\mathscr{U}_{0}\) of the universal moduli space \(\widetilde{\mathscr{M}}_{\mathbb{Z}_{2}}\) centered at \((p_{0},(\mathcal{Z}_{0},\ell_{0},\Phi_{0}))\) such that the projection \(\pi\) to the parameter space restricts to a homeomorphism from \(\mathscr{U}_{0}\) to \(\pi(\mathscr{U}_{0})\), and the image \(\pi(\mathscr{U}_{0})\) posseses a Kuranishi chart of virtual codimension 1. 
The same conclusion holds replacing \(\mathcal{P}\) by any tame Frechet submanifold \(\mathcal{P}^{\prime}\subseteq\mathcal{P}\). To possess a Kuranishi chart of virtual codimension 1 means that the set is locally modeled by the zero-locus of a smooth map \(\kappa:\mathcal{P}\to\mathbb{R}\) (see e.g. Section 3.3 of [7]). In particular, if the map (1.4) has empty kernel, then \(\kappa\) is transverse to \(0\) and \(\pi(\mathscr{U}_{0})\) is a smooth Frechet submanifold of codimension 1. In either case, \(\mathscr{U}_{0}\) also consists of regular \(\mathbb{Z}_{2}\)-harmonic spinors. More generally, the universal eigenvalue problem has a spectral crossing along \(\pi(\mathscr{U}_{0})\): **Corollary 1.5**.: There is a set \(V_{0}\subseteq\mathcal{P}\) centered at \(p_{0}\) possessing a Kuranishi chart of virtual codimension 0 such that for \(p\in V_{0}\) there exist triples \((\mathcal{Z}_{p},\Phi_{p},\Lambda_{p})\) defined implicitly as smooth functions of \(p\) satisfying \[\not{D}_{\mathcal{Z}_{p}}\Phi_{p}=\Lambda_{p}\Phi_{p} \tag{1.5}\] for \(\Lambda_{p}\in\mathbb{R}\) and such that \(\pi(\mathscr{U}_{0})=\Lambda^{-1}(0)\). Of course, the triple coincides with \((\mathcal{Z}_{0},\Phi_{0},0)\) at \(p_{0}\). Analogous to Theorem 1.4, \(V_{0}\) consists of regular \(\mathbb{Z}_{2}\)-harmonic eigenvectors, and if the map (1.4) has empty kernel then \(V_{0}\) is an open neighborhood of \(p_{0}\) and \(\Lambda:V_{0}\to\mathbb{R}\) is transverse to \(0\). Once again, the conclusion holds replacing \(\mathcal{P}\) by any tame Frechet submanifold \(\mathcal{P}^{\prime}\subseteq\mathcal{P}\). **Remark 1.6**.: Assumption 3 can be shown to hold generically. Assumption 2 is known to be generic in analogous situations (see [25]), though we do not attempt to prove such a statement here. It is conjectured that Assumption 1 also holds generically. The genericity of embeddings is currently under investigation in ongoing work of Haydys-Mazzeo-Takahashi [23]. The genericity of smoothness and other questions on the regularity of the singular set \(\mathcal{Z}_{0}\) involve significant detours into geometric measure theory (see [22, 71]) and are beyond the scope of the present article. Theorem 1.4 implies that smoothness is stable under smooth variations of the metric and perturbation. ### Relations to Gauge Theory As stated at the beginning of the article, the motivation for the study of \(\mathbb{Z}_{2}\)-harmonic spinors comes from gauge theory. \(\mathbb{Z}_{2}\)-harmonic spinors appear as limiting objects in two distinct settings in gauge theory: i) generalized Seiberg-Witten theory in 2, 3, and 4 dimensions, and ii) Yang-Mills theory on manifolds with special holonomy in 6, 7, and 8 dimensions. #### 1.2.1 Gauge Theory in Low-Dimensions Most equations in mathematical gauge theory fit into the framework of "generalized Seiberg-Witten equations" [4, 67]. Generalized Seiberg-Witten equations are systems of non-linear first-order PDEs on low-dimensional manifolds, whose variables typically include a connection \(A\) on a principal \(G\)-bundle for \(G\) a compact Lie group, and a spinor \(\Psi\). Examples include the standard Seiberg-Witten equations [30, 38], the Vafa-Witten equations [50, 51, 63], the Kapustin-Witten equations [35, 36, 68, 69], the complex ASD equations [20, 54], and the ADHM-Seiberg-Witten equations [6, 21]. In each case, one wishes to understand the moduli space of all solutions modulo gauge transformations.
In the nicest cases, such as the standard Seiberg-Witten equations, the moduli space is compact. In general, however, there are sequences of solutions for which the \(L^{2}\)-norm of \(\Psi\) diverges. A variety of convergence theorems following pioneering work of Taubes [55] have shown that after renormalizing the spinor \(\Psi\) to have unit \(L^{2}\)-norm, such sequences converge to a type of \(\mathbb{Z}_{2}\)-harmonic spinor for many equations. In this sense, \(\mathbb{Z}_{2}\)-harmonic spinors are limiting objects appearing at the boundary of the moduli space. The simplest example of this behavior is the following: **Example 1.7**.: **(Two spinor Seiberg-Witten Equations)** Let \((Y,g)\) be a Riemannian 3-manifold, \(S\to Y\) a spin\({}^{c}\)-structure, and \(E\) an auxiliary \(SU(2)\)-bundle with connection \(B\). The two-spinor Seiberg-Witten equations are the following pair of equations for a \(U(1)\)-connection \(A\) on \(\det(S)\) and a spinor \(\Psi\in\Gamma(S\otimes_{\mathbb{C}}E)\): \[\begin{cases}\not{D}_{AB}\Psi=0\\ \star F_{A}+\frac{1}{2}\mu(\Psi,\Psi)=0,\end{cases} \tag{1.6}\] where \(F_{A}\) is the curvature of \(A\). By a theorem of Haydys-Walpuski [24], sequences of solutions to (1.6) subconverge (modulo gauge) to either another solution or, after renormalization of the spinor, to a \(\mathbb{Z}_{2}\)-harmonic spinor. See Section 2 of [41] for details. The perturbation \(B\) in Theorem 1.3 and Theorem 1.4 is the remanent of the \(SU(2)\)-connection denoted by the same symbol. This type of convergence is quite general. As a second example, the _limiting configurations_ at the boundary of Hitchin moduli space are the square roots of holomorphic quadratic differentials on Riemann surfaces, which are a dimensional reduction of \(\mathbb{Z}_{2}\)-harmonic spinors (see [55], Theorem 1.2). \(\mathbb{Z}_{2}\)-harmonic spinors are therefore a generalization of objects that have been well-studied in the context of Higgs bundles and the geometry of Hitchin moduli space near the boundary [15, 16, 33]. More generally, convergence theorems of a similar type have been proved by Taubes for **(i)** flat \(SL(2,\mathbb{C})\) connections on a 3-manifold [55], **(ii)** the complex ASD equations on a 4-manifold [54], **(iii)** the Kapustin-Witten equations [59], **(iv)** the Vafa-Witten equations [58], and **(v)** the Seiberg-Witten equations with multiple spinors on a 4-manifold [57], and by Haydys-Walpuski and Walpuski-Zhang respectively for **(vi)** the Seiberg-Witten equations with multiple spinors on a 3-manifold [24] and **(vii)** the ADHM\({}_{1,2}\) Seiberg-Witten equations on a 3-manifold [67]. In many of these cases the \(\mathbb{Z}_{2}\)-harmonic spinors that arise are \(\mathbb{Z}_{2}\)-harmonic 1-forms, i.e. "spinors" for the Dirac-type operator \((d+d^{*})\). Theorem 1.4 applies directly to the \(\mathbb{Z}_{2}\)-harmonic spinors in Example 1.7; while the deformation theory for many of these more general cases does not follow from Theorem 1.4, the differences from the present situation are expected to be primarily topological with the results following a similar analytic framework. A result similar to Theorem 1.4 for \(\mathbb{Z}_{2}\)-harmonic 1-forms follows from a result of S. Donaldson for multi-valued harmonic functions [9]. #### 1.2.2 Fueter Sections The _Fueter equation_ is a non-linear generalization of the Dirac equation on 3 and 4-manifolds for spinors taking values in a bundle of hyperkahler orbifolds rather than a Clifford module [43, 53]. 
Solutions of the Fueter equation are called **Fueter Sections**. The Fueter equation arises naturally in the study of gauge theory on manifolds with special holonomy in dimensions 6, 7, or 8. On such manifolds, sequences of Yang-Mills instantons may converge with bubbling along a calibrated submanifold \(Y\) of codimension 4 [52, 62]. The bubbling behavior is expected to be captured by the data of a Fueter section of the bundle \(\mathscr{M}_{ASD}\to Y\) whose fibers are the moduli spaces of framed anti-self-dual instantons on the fibers of the normal bundle to \(Y\)[65, 66]. Consequently, Fueter sections play key role in proposals for constructing gauge-theoretic invariants on these manifolds. In a closely related direction, Fueter sections also govern the deformation theory of calibrated submanifolds [26] and should therefore play a role in enumerative theories of these [6]. In particular, in both cases they are expected to contribute terms to wall-crossing formulas which relate these theories to generalized Seiberg-Witten theories on low-dimensional calibrated submanifolds and compensate for losses of compactness as parameters vary. For more in-depth expositions, see [4, 6, 10, 21, 24]. In other directions, there are putative applications of Fueter sections to symplectic geometry [5, 28, 47, 64], and to constructing generalized Floer theories on 3-manifolds [11, 12]. In all these cases, a well-developed theory of Fueter sections is lacking and many aspects remain speculative. In at least the contexts of coming from gauge theory, it is expected that Fueter sections with singularities are unavoidable. Singularities arise when a Fueter section intersects the orbifold locus of the target hyperkahler orbifold. \(\mathbb{Z}_{2}\)-harmonic spinors are the simplest examples of Fueter sections with singularities, corresponding to the hyperkahler orbifold \(X=\mathbb{H}/\mathbb{Z}_{2}\). The data of a \(\mathbb{Z}_{2}\)-harmonic spinors as defined in (1.1) is equivalent to that of Fueter section valued in a bundle with fiber \(\mathbb{H}/\mathbb{Z}_{2}\) via choosing local lifts to a bundle with fiber \(\mathbb{H}\). The line bundle \(\ell\) captures the sign ambiguity in the choice of local lift, and the singular set \(\mathcal{Z}\) arises from where the section intersects the singular stratum \(\{0\}\in\mathbb{H}/\mathbb{Z}_{2}\) (see [41] Section 2 or [7] Section 4 for details). For more general hyperkahler orbifolds \(X\) there is a stratification by stabilizer subgroups into subsets of codimension 4k, and a singular set arises where a Fueter section hits these strata. The reader is cautioned that even though these strata are codimension at least 4 and we consider a base manifold \(Y\) of dimension 3, solutions of the Fueter equation do not behave generically, and the existence of a codimension 2 singular set \(\mathcal{Z}\) is stable under perturbation [8] in all known cases. Much of the work involving Fueter sections (e.g. [7, 19, 44, 65, 66]) has dealt only with the case that \(\mathcal{Z}=\emptyset\). This article contributes a step toward understanding Fueter sections with singularities. ### Outline The paper is divided into three main parts: Sections 2-4 study the semi-Fredholm theory of the Dirac operator with a fixed singular set. Sections 5-6 study deformations of the singular set and prove Theorem 1.3. The non-linear deformation result, Theorem 1.4, is proved in Sections 7-8 using Nash-Moser theory. 
Throughout, we endeavor to give a largely self-contained exposition that does not assume previous familiarity with Nash-Moser theory or the microlocal analysis of singular elliptic operators. We now outline these three parts in more detail and explain the strategy for proving Theorems 1.3 and 1.4. Section 2 begins with semi-Fredholm analogues of several standard results from elliptic theory for the Dirac operator with a fixed singular set \(\mathcal{Z}_{0}\). Although Fredholmness and elliptic bootstrapping in the usual sense fail, one still obtains various "semi"-elliptic estimates that display several properties analogous to the standard elliptic case. Many of the results in this section are particular cases of general results from microlocal analysis on elliptic edge operators [34, 45]. Section 3 studies the local expressions of solutions; while a \(\mathbb{Z}_{2}\)-harmonic spinor need not extend smoothly across \(\mathcal{Z}_{0}\), the standard notion of regularity is replaced with the existence of an asymptotic expansion dictating the behavior along \(\mathcal{Z}_{0}\). These asymptotic expansions play a key role in all the local analysis in later sections. Section 4 then investigates the infinite-dimensional cokernel of the Dirac operator in more detail. As asserted in the introduction following (1.4), an isomorphism with a space of sections along \(\mathcal{Z}_{0}\) is established. It is also shown that the cokernel concentrates along \(\mathcal{Z}_{0}\) with exponential decay in the normal directions. Although the results of Section 4 are in some sense preliminary to the main purpose of the paper, the reader is cautioned that this section contains many of the more technical points of the article. Some readers may prefer to read only the statements in Section 4 on a first pass. With the semi-Fredholm theory for fixed singular set established, Sections 5 - 6 proceed to study deformations of the singular set. The key point is that via pulling back by diffeomorphisms moving \(\mathcal{Z}_{0}\) to nearby links, deformations of the singular set are equivalent to deformations of the metric along the family of pullback metrics while keeping the singular set fixed. Schematically, \[\begin{pmatrix}\text{varying }\mathcal{Z}\\ \text{fixed }g_{0}\end{pmatrix}\qquad\longleftrightarrow\qquad\begin{pmatrix}\text{fixed }\mathcal{Z}_{0}\\ \text{varying }g\end{pmatrix}.\] Differentiating the resulting family of Dirac operators in the direction of a deformation of \(\mathcal{Z}_{0}\) produces terms involving both the spinor _and_ its rate of vanishing along \(\mathcal{Z}_{0}\); the loss of regularity is then an unavoidable consequence of the asymptotics of \(\mathbb{Z}_{2}\)-harmonic spinors. Here, it is worth emphasizing that while there is a pleasing geometric reason for Theorem 1.3, the fact that the operator (1.4) is elliptic emerges quite miraculously from the formulas during the proof.
Since differentiating the symbol does not preserve ellipticity, Bourguignon-Gauduchon's formula leads to a highly non-elliptic operator on \(Y\); the content of Theorem 1.3 is to assert that under the isomorphisms from Section 4 associating this with an operator on sections of \(\mathcal{Z}_{0}\), ellipticity somewhat surprisingly emerges! Theorem 6.1 provides a more technical version of Theorem 1.3, and an explicit formula for the elliptic operator (1.4) is given during the proof. Sections 7-8 use Theorem 1.3 and a version of the Nash-Moser Implicit Function Theorem to prove Theorem 1.4. Section 7 gives a brief and practical introduction to Nash-Moser theory, and Section 8 shows that the universal Dirac operator satisfies the necessary hypotheses. The most challenging of these is to show that Theorem 1.3 persists on an open neighborhood of \((p_{0},\mathcal{Z}_{0},\Phi_{0})\). In this, the difficulty is ensuring that some of the more subtle aspects of Sections 4 and 6 are stable. ### Acknowledgements This article constitutes a portion of the author's Ph.D. thesis. The author is grateful to his advisors Clifford Taubes and Tomasz Mrowka for their insights and suggestions. The author would also like to thank Rafe Mazzeo, and Thomas Walpuski for many helpful discussions. This work was supported by a National Science Foundation Graduate Research Fellowship and by National Science Foundation Grant No. 2105512. It was also partially completed while the author was in residence at the Simons Laufer Mathematical Sciences Institute (previously known as MSRI) in Berkeley, California, during the Fall 2022 semester, supported by NSF Grant DMS-1928930. ## 2 Semi-Fredholm Properties Let \((Y,g_{0})\) denote a closed, oriented Riemannian 3-manifold, and fix a spin structure \(\mathfrak{s}_{0}\to Y\). Denote by \(S\) the associated spinor bundle, and Clifford multiplication by \(\gamma_{\circ}:T^{*}Y\to\operatorname{End}(S)\). \(S\) carries its spin connection \(\nabla^{\operatorname{spin}}\) with respect to which \(\gamma_{\circ}\) is parallel and a real inner product denoted \(\langle-,-\rangle\). More generally, consider the set of perturbations to the spin connection \(\nabla_{B}=\nabla^{\operatorname{spin}}+B\) where \(B\in\Omega^{1}(\mathfrak{so}(S))\) is a real-linear endomorphism commuting with Clifford multiplication. The motivation for introducing this class of perturbations comes from the relation to gauge theory described in Example 1.7. Fix a choice \(B_{0}\) of such a perturbation. Now let \(\mathcal{Z}_{0}\subset Y\) be a smoothly embedded link, i.e. a union of disjoint embedded copies of \(S^{1}\). Choose a real line bundle \(\ell_{0}\to Y-\mathcal{Z}_{0}\), and let \(A_{0}\) denote the unique flat connection on \(\ell\) with holonomy in \(\mathbb{Z}_{2}\). The Clifford module \((S_{0},\gamma,\nabla)\) defined using the fixed pair \((g_{0},B_{0})\) as \[S_{0}:=S\otimes_{\mathbb{R}}\ell_{0}\hskip 56.905512pt\gamma=\gamma_{\circ} \otimes 1\hskip 56.905512pt\nabla=\nabla_{B_{0}}\otimes\operatorname{Id}+1 \otimes\nabla_{A_{0}} \tag{2.1}\] carries a singular Dirac operator. **Definition 2.1**.: The \(\mathbb{Z}_{2}\)**-Dirac operator** associated to the Clifford module \((S_{0},\gamma,\nabla)\) is defined on sections \(\psi\in\Gamma(S_{0})\) by \[\not{D}_{\mathcal{Z}_{0}}\psi:=\gamma(\nabla\psi).\] In contexts where the singular set \(\mathcal{Z}_{0}\) is fixed and no ambiguity will arise, we omit the subscript and write \(\not{D}\). 
In the case that \(B_{0}=0\) and \(\ell_{0}\) extends over \(\mathcal{Z}_{0}\) (and _a fortiori_ if \(\mathcal{Z}_{0}=\emptyset\)), this is the classical spin Dirac operator associated to the spin structure obtained from twisting \(\mathfrak{s}_{0}\) by \(\ell_{0}\). The case of interest to us is that in which \(\ell_{0}\) does not extend over \(\mathcal{Z}_{0}\) and instead restricts to the mobius line-bundle on the normal planes of \(\mathcal{Z}_{0}\); Assumption 1 restricts to this case. As explained in the introduction, when Assumption 1 holds the Dirac operator \(\not{D}\) is not an elliptic operator in the standard sense--it is singular along \({\cal Z}_{0}\), and its extension to spaces of sections is only semi-Fredholm. In this section we introduce appropriate Sobolev spaces of sections and describe the semi-Fredholm mapping properties of this Dirac operator. More general versions of these results for larger classes of singular operators can be found in [9, 17, 37, 45, 70]. Here, we give a self-contained exposition. ### Function Spaces To begin, we introduce "edge" Sobolev spaces starting with the case of lowest regularity. These are the natural function spaces for the analysis of certain classes of singular elliptic operators (see [45]). Let \(r\) denote a smooth weight function equal to \(\mathrm{dist}(-,{\cal Z}_{0})\) on a tubular neighborhood of \({\cal Z}_{0}\) and equal to \(1\) away from a slightly larger tubular neighborhood. For smooth sections compactly supported in \(Y-{\cal Z}_{0}\), define the \(rH^{1}_{e}\) and \(L^{2}\) norms respectively by \[\|\varphi\|_{rH^{1}_{e}} := \left(\int_{Y\setminus{\cal Z}_{0}}|\nabla\varphi|^{2}+\frac{|\varphi|^{2}}{r^{2}}\ dV\right)^{1/2}\qquad\qquad\mbox{and}\qquad\qquad\|\psi\|_{L^{2}}:=\left(\int_{Y\setminus{\cal Z}_{0}}|\psi|^{2}\ dV\right)^{1/2},\] where \(\nabla\) is the connection on \(S_{0}\) defined above in (2.1), and \(dV\) denotes the volume form of the Riemannian metric \(g_{0}\). In addition, we use \(r^{-1}H^{-1}_{e}\) to denote the dual norm of \(rH^{1}_{e}\) with respect to the \(L^{2}\)-pairing: \[\|\xi\|_{r^{-1}H^{-1}_{e}}=\sup_{\|\varphi\|_{rH^{1}_{e}}=1}\langle\xi,\varphi\rangle_{L^{2}}.\] **Definition 2.2**.: The basic **edge Sobolev spaces** denoted \(rH^{1}_{e},L^{2}\), and \(r^{-1}H^{-1}_{e}\) are defined by \[rH^{1}_{e}(Y-{\cal Z}_{0};S_{0}) := \{\ \varphi\ |\ \ \|\varphi\|_{rH^{1}_{e}}<\infty\}\] \[L^{2}(Y-{\cal Z}_{0};S_{0}) := \{\ \psi\ |\ \ \|\psi\|_{L^{2}}<\infty\}\] \[r^{-1}H^{-1}_{e}(Y-{\cal Z}_{0};S_{0}) := \{\ \xi\ |\ \ \|\xi\|_{r^{-1}H^{-1}_{e}}<\infty\}\] i.e. as the completions of compactly supported smooth sections with respect to the above norms respectively. By construction, \(r^{-1}H^{-1}_{e}=(rH^{1}_{e})^{\star}\) is the dual space with respect to the \(L^{2}\)-pairing. When it is apparent from context, we will abbreviate these by \(rH^{1}_{e}\), \(L^{2}\), and \(r^{-1}H^{-1}_{e}\) respectively. These spaces are equivalent for different choices of the weight function \(r\). The former two, \(rH^{1}_{e}\) and \(L^{2}\), are Hilbert spaces with the inner products arising from the polarization of the above norms. Although \(Y-{\cal Z}_{0}\) is not compact, the weight ensures that the following version of Rellich's Lemma holds; it is proved by a standard diagonalization argument. **Lemma 2.3**.: The inclusion \[rH^{1}_{e}(Y-{\cal Z}_{0};S_{0})\ \hookrightarrow\ L^{2}(Y-{\cal Z}_{0};S_{0})\] is compact.
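For orientation, and purely as an illustration that is not used in the arguments below, note how the weight separates the two local behaviors \(r^{\pm 1/2}\) that appear throughout the paper. Near \(\mathcal{Z}_{0}\) the volume form is comparable to \(r\,dr\,d\theta\,dt\), so a spinor with \(|\psi|\sim r^{1/2}\) and \(|\nabla\psi|\sim r^{-1/2}\) satisfies \[\int\frac{|\psi|^{2}}{r^{2}}\,r\,dr\ \sim\ \int dr<\infty\qquad\text{and}\qquad\int|\nabla\psi|^{2}\,r\,dr\ \sim\ \int dr<\infty,\] so it can have finite \(rH^{1}_{e}\)-norm, whereas a spinor with \(|\psi|\sim r^{-1/2}\) satisfies \(\int|\psi|^{2}\,r\,dr\sim\int dr<\infty\) but \(\int\frac{|\psi|^{2}}{r^{2}}\,r\,dr\sim\int r^{-2}\,dr=\infty\), so it lies in \(L^{2}\) but not in \(rH^{1}_{e}\). This elementary computation is the local mechanism behind the difference between the \(rH^{1}_{e}\)-kernel and the \(L^{2}\)-kernel encountered below.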
### Mapping Properties The following proposition gives the fundamental mapping properties of the singular Dirac operator on the spaces defined in the previous subsection. **Proposition 2.4**.: The operator \[\not{D}:rH^{1}_{e}(Y-{\cal Z}_{0};S_{0})\longrightarrow L^{2}(Y-{\cal Z}_{0};S_{0})\] is (left) semi-Fredholm, i.e. it satisfies the following properties: * \(\ker(\not{D})\) is finite-dimensional, and * \(\operatorname{Range}(\not{D})\) is closed. Proof.: It is immediate from the definitions of \(rH^{1}_{e},L^{2}\) that \(\not{D}\) is a bounded operator. Given \(\varphi\in rH^{1}_{e}\), it suffices to show that there is a constant \(C\) such that the elliptic estimate \[\|\varphi\|_{rH^{1}_{e}}\leqslant C\Big{(}\|\not{D}\varphi\|_{L^{2}}+\|\varphi\|_{L^{2}}\Big{)} \tag{2.2}\] holds. Using the compactness of the embedding from Lemma 2.3, both conclusions of the proposition then follow from standard theory (see, e.g. [40] Section 10.4.1). The estimate (2.2) follows from the Weitzenbock formula and integration by parts, as we now show, though some caution must be taken about the boundary term along \(\mathcal{Z}_{0}\). Let \(\varphi\in rH^{1}_{e}\) be a spinor, and for each \(n\in\mathbb{N}\) let \(N_{1/n}(\mathcal{Z}_{0})\) denote a tubular neighborhood of \(\mathcal{Z}_{0}\) of radius \(1/n\). Additionally, let \(\chi_{n}\) denote a cut-off function equal to \(1\) on \(Y-N_{2/n}(\mathcal{Z}_{0})\) and vanishing on \(N_{1/n}(\mathcal{Z}_{0})\), so that \(d\chi_{n}\) is supported in \(N_{2/n}(\mathcal{Z}_{0})\) and satisfies \[|d\chi_{n}|\leqslant Cn\leqslant\frac{C^{\prime}}{r}.\] Then, integrating by parts and using that \(\not{D}\) is formally self-adjoint, \[\int_{Y\setminus\mathcal{Z}_{0}}|\not{D}\varphi|^{2}\ dV = \lim_{n\to\infty}\int_{Y\setminus\mathcal{Z}_{0}}\langle\not{D}\varphi,\not{D}\varphi\rangle\chi_{n}\ dV = \lim_{n\to\infty}\int_{Y\setminus\mathcal{Z}_{0}}\langle\varphi,\not{D}\not{D}\varphi\rangle\chi_{n}\ +\langle\varphi,\gamma(d\chi_{n})\not{D}\varphi\rangle\ dV.\] The Weitzenbock formula shows that \[\not{D}\not{D}=\nabla^{*}\nabla+F\] wherein \(F\) is a zeroth order term arising from the scalar curvature and the derivatives of the perturbation \(B_{0}\). Substituting this and integrating by parts again yields \[\int_{Y\setminus\mathcal{Z}_{0}}|\not{D}\varphi|^{2}\ dV=\int_{Y\setminus\mathcal{Z}_{0}}|\nabla\varphi|^{2}+\langle\varphi,F\varphi\rangle+\lim_{n\to\infty}\int_{Y\setminus\mathcal{Z}_{0}}\langle\varphi,d\chi_{n}\cdot\nabla\varphi+\gamma(d\chi_{n})\not{D}\varphi\rangle\ dV\] where \(\cdot\) denotes contraction of \(1\)-form indices. Since \(F\) is smooth on \(Y\), hence uniformly bounded, rearranging and using Young's inequality yields \[\int_{Y\setminus\mathcal{Z}_{0}}|\nabla\varphi|^{2}\ dV \leqslant C\left(\|\not{D}\varphi\|_{L^{2}}^{2}+\|\varphi\|_{L^{2}}^{2}+\lim_{n\to\infty}\int_{N_{2/n}(\mathcal{Z}_{0})}|\nabla\varphi|^{2}+|d\chi_{n}|^{2}|\varphi|^{2}\ dV\right) \tag{2.3}\] \[\leqslant C\left(\|\not{D}\varphi\|_{L^{2}}^{2}+\|\varphi\|_{L^{2}}^{2}+\lim_{n\to\infty}\int_{N_{2/n}(\mathcal{Z}_{0})}|\nabla\varphi|^{2}+\frac{|\varphi|^{2}}{r^{2}}\ dV\right). \tag{2.4}\] Provided \(\varphi\in rH^{1}_{e}\) the latter limit vanishes, hence \[\|\nabla\varphi\|_{L^{2}}\leqslant C\Big{(}\|\not{D}\varphi\|_{L^{2}}+\|\varphi\|_{L^{2}}\Big{)}. \tag{2.5}\] To conclude, we show the left-hand side of (2.5) dominates the \(rH^{1}_{e}\) norm.
For \(n\) sufficiently large, choose local coordinates \(N_{1/n}(\mathcal{Z}_{0})\simeq S^{1}\times D_{1/n}.\) Denote these by \((t,r,\theta)\) for \(t\) the coordinate in the \(S^{1}\) factor and \((r,\theta)\) polar coordinates on \(D_{1/n}\). For each fixed \(t_{0},r_{0}\), the fact that the holonomy around the loop \((t_{0},r_{0},\theta)\) for \(\theta\in[0,2\pi)\) is \(-1\) implies that the operator \(\nabla_{\theta}\) has lowest eigenvalue \(1/2\) in absolute value on this loop (see the local expressions in Section 3.1), so that \(\|\varphi\|_{L^{2}}\leqslant 2\|\nabla_{\theta}\varphi\|_{L^{2}}\) on each such loop. It follows that \[\int_{N_{1/n}({\cal Z}_{0})}\frac{|\varphi|^{2}}{r^{2}}\ dV\leqslant 4\int_{N_{1/n}({\cal Z}_{0})}\frac{1}{r^{2}}|\nabla_{\theta}\varphi|^{2}\ dV\leqslant 4\|\nabla\varphi\|_{L^{2}}^{2},\] and away from \(N_{1/n}({\cal Z}_{0})\), the weight \(r\) is uniformly bounded. Combining this estimate with (2.5) (possibly increasing the constant on the \(\|\varphi\|_{L^{2}}\) factor) yields (2.2), completing the proof. Given the above proposition, there is a finite-dimensional space of solutions. **Definition 2.5**.: A non-zero element \(\Phi\) in the kernel of the operator \[\not{D}:rH^{1}_{e}(Y-{\cal Z}_{0};S_{0})\longrightarrow L^{2}(Y-{\cal Z}_{0};S_{0}) \tag{2.6}\] is called a \({\mathbb{Z}}_{2}\)**-harmonic spinor**. Note that although the estimate (2.2) resembles the standard bootstrapping inequality, it does not imply that an \(L^{2}\) solution of \(\not{D}\psi=0\) necessarily lies in \(rH^{1}_{e}\). In order to establish (2.2) it was necessary to assume _a priori_ that \(\varphi\in rH^{1}_{e}\), else the boundary term along \({\cal Z}_{0}\) need not vanish and the proof fails. Since \(\not{D}\) is uniformly elliptic on any compact subset \(K\subset Y-{\cal Z}_{0}\), standard theory applies to show that \(\varphi\in rH^{1}_{loc}\) (in fact \(C^{\infty}_{loc}\)) but there is no guarantee it has finite \(rH^{1}_{e}\)-norm on \(Y-{\cal Z}_{0}\). Indeed, as we will see in Section 4, the \(rH^{1}_{e}\)-kernel and the \(L^{2}\)-kernel are genuinely different spaces, with the latter infinite-dimensional. These \(L^{2}\)-kernel elements not in \(rH^{1}_{e}\) are not called \({\mathbb{Z}}_{2}\)-harmonic spinors. #### 2.2.1 The Adjoint Operator. Although the cokernel of (2.6) is not necessarily finite-dimensional as in standard elliptic theory, it can still be described as the solutions of the formal adjoint operator. As in the proof of Proposition 2.4, integration by parts shows that the relation \[\langle\not{D}v,\varphi\rangle_{L^{2}}=\langle v,\not{D}\varphi\rangle_{L^{2}} \tag{2.7}\] holds for \(v,\varphi\in rH^{1}_{e}\). Here we have used that \(\not{D}\) is formally self-adjoint since the unperturbed Dirac operator is and \(B_{0}\in\Omega^{1}(\mathfrak{so}(S))\). As a consequence of (2.7), the Dirac operator extends to a bounded map \[\not{D}:L^{2}(Y-{\cal Z}_{0};S_{0})\longrightarrow r^{-1}H^{-1}_{e}(Y-{\cal Z}_{0};S_{0}),\] where for \(v\in L^{2}\), the spinor \(\not{D}v\in r^{-1}H^{-1}_{e}\) is the linear functional defined by the relation (2.7). To emphasize the domain of definition for various manifestations of the Dirac operator, we often write \(\not{D}|_{rH^{1}_{e}}\) or \(\not{D}|_{L^{2}}\). We then have the following: **Lemma 2.6**.: The extension \(\not{D}|_{L^{2}}\) defined by (2.7) is the (true) adjoint of \(\not{D}|_{rH^{1}_{e}}\), and there is a closed decomposition \[L^{2}(Y-{\cal Z}_{0};S_{0})=\ker(\not{D}|_{L^{2}})\ \oplus\ \mbox{Range}(\not{D}|_{rH^{1}_{e}}).\] Proof.: Suppose that \(\psi\in L^{2}\) is perpendicular to the range, i.e.
\(\langle\psi,\not{D}\varphi\rangle_{L^{2}}=0\) for all \(\varphi\in rH^{1}_{e}\). The definition of \(\not{D}|_{L^{2}}\) via (2.7) shows that as a linear functional on \(rH^{1}_{e}\), one has \(\not{D}\psi=0\). Conversely, any \(\psi\in\ker(\not{D}|_{L^{2}})\) is perpendicular to the range by (2.7); since the range is closed by Proposition 2.4, the stated decomposition follows. #### 2.2.2 The Second Order Operator. The (left) semi-Fredholmness of \(\not{D}\) implies that the second order operator \(\not{D}\not{D}\) is Fredholm for purely formal reasons. **Lemma 2.7**.: The second order operator \(\not{D}\not{D}:rH^{1}_{e}(Y-{\cal Z}_{0};S_{0})\longrightarrow r^{-1}H^{-1}_{e}(Y-{\cal Z}_{0};S_{0})\) is Fredholm and \(\ker(\not{D}\not{D})=\ker(\not{D}|_{rH^{1}_{e}})\simeq\mbox{coker}(\not{D}\not{D})\). In particular, there is an elliptic estimate \[\|\varphi\|_{rH^{1}_{e}}\leqslant C(\|\not{D}\not{D}\varphi\|_{r^{-1}H^{-1}_{e}}+\|\pi_{0}(\varphi)\|_{L^{2}}). \tag{2.8}\] where \(\pi_{0}(\varphi)\) is the \(L^{2}\)-orthogonal projection onto \(\ker(\not{D}|_{rH^{1}_{e}})\). Proof.: (Cf. [49] Proposition 4.4) By definition of \(\not{D}|_{L^{2}}\) via (2.7), if \(\varphi\in rH^{1}_{e}\) and \(\varphi\in\ker(\not{D}\not{D})\), then \[0=\langle\not{D}\not{D}\varphi,\varphi\rangle_{L^{2}}=\|\not{D}\varphi\|_{L^{2}}^{2}\] hence \(\varphi\in\ker(\not{D}|_{rH^{1}_{e}})\), which is finite dimensional by Proposition 2.4. To show that the range is closed and the cokernel finite-dimensional (and naturally isomorphic to \(\ker(\not{D}|_{rH^{1}_{e}})\)), let \(f\in r^{-1}H^{-1}_{e}\) and consider the functional \(E_{f}:rH^{1}_{e}\to\mathbb{R}\) given by \[E_{f}(\varphi):=\int_{Y\setminus\mathcal{Z}_{0}}\tfrac{1}{2}|\not{D}\varphi|^{2}-\langle\varphi,f\rangle\ dV.\] The Euler-Lagrange equations of \(E_{f}\) are \[\not{D}\not{D}\varphi=f\qquad\qquad\qquad\langle f,\Phi\rangle=0\ \ \ \forall\ \Phi\in\ker(\not{D}|_{rH^{1}_{e}})\] so it suffices to show that \(E_{f}\) admits a minimizer. By standard theory ([13] Chapter 8) this holds if \(E_{f}\) is (i) coercive, and (ii) weakly lower semi-continuous. The second of these is standard (see e.g. [13] Section 8.2.2). (i) means that \[E_{f}(\varphi)\geq c_{1}\|\varphi\|_{rH^{1}_{e}}^{2}-c_{2} \tag{2.9}\] holds for some constants \(c_{i}\) and all \(\varphi\) in the \(L^{2}\)-orthogonal complement of \(\ker(\not{D}|_{rH^{1}_{e}})\), which follows from the elliptic estimate (2.2) of Proposition 2.4 and Young's inequality. This establishes Fredholmness, and the estimate (2.8) is a routine consequence. As a consequence of the preceding lemma, we may let \(P:r^{-1}H^{-1}_{e}\to rH^{1}_{e}\) denote the solution operator defined by \[P(\xi)=\varphi\qquad\qquad\text{s.t.}\quad\quad\text{i)}\ \not{D}\not{D}\varphi=\xi\mod\ker(\not{D}|_{rH^{1}_{e}}) \tag{2.10}\] \[\text{and}\quad\quad\text{ii)}\ \langle\varphi,\Phi\rangle_{L^{2}}=0\ \forall\Phi\in\ker(\not{D}|_{rH^{1}_{e}}). \tag{2.11}\] To summarize, we have the following corollary: **Corollary 2.8**.: The following hold using the splitting \(L^{2}=\ker(\not{D}|_{L^{2}})\oplus\operatorname{Range}(\not{D}|_{rH^{1}_{e}})\) of Lemma 2.6. 1.
The second order operator \(\not{D}\not{D}\) factors through the \(\operatorname{Range}(\not{D}|_{rH^{1}_{e}})\) summand, i.e. \[\not{D}\not{D}\ :\ rH^{1}_{e}\ \xrightarrow{\ \not{D}\ }\ \operatorname{Range}(\not{D}|_{rH^{1}_{e}})\subseteq L^{2}\ \xrightarrow{\ \not{D}\ }\ r^{-1}H^{-1}_{e}.\] 2. The \(L^{2}\)-orthogonal projection onto the \(\operatorname{Range}(\not{D}|_{rH^{1}_{e}})\) summand is given by \(\Pi^{\operatorname{Rg}}=\not{D}P\not{D}\), where \(P\) is the solution operator (2.10)-(2.11). ### Higher Regularity This subsection extends the results of the previous two to "edge" and "boundary" Sobolev spaces of higher regularity (see [45] again for a more general exposition). Beginning with the "boundary" spaces, define the space of "boundary" vector fields \[\mathcal{V}_{\mathrm{b}}:=\{V\in C^{\infty}(Y;TY)\ \ |\ \ \ V|_{\mathcal{Z}_{0}}\in C^{\infty}(\mathcal{Z}_{0};T\mathcal{Z}_{0})\}\] as those tangent to \(\mathcal{Z}_{0}\) at the boundary. Let \(\nabla^{\mathrm{b}}\) denote the covariant derivative with respect to such vector fields, so that in local coordinates \((t,x,y)\) where \(t\) is a coordinate along \(\mathcal{Z}_{0}\) and \(x,y\) coordinates in the normal directions it is given by \[\nabla^{\mathrm{b}}=dx\otimes r\nabla_{x}\ +\ dy\otimes r\nabla_{y}\ +\ dt\otimes\nabla_{t}\] and is equal to the standard covariant derivative \(\nabla\) away from \(\mathcal{Z}_{0}\). For \(m\in\mathbb{N}\), define the \(H^{m}_{\mathrm{b}}\)-norm on compactly supported smooth sections by \[\|\psi\|_{H^{m}_{\mathrm{b}}} := \left(\int_{Y\setminus\mathcal{Z}_{0}}|(\nabla^{\mathrm{b}})^{m}\psi|^{2}\ +\ldots+|\nabla^{\mathrm{b}}\psi|^{2}\ +\ |\psi|^{2}\ dV\right)^{1/2}. \tag{2.12}\] **Definition 2.9**.: The **mixed boundary and edge Sobolev spaces** are defined as (the closures of) \[rH^{m,1}_{\mathrm{b},e}(Y-\mathcal{Z}_{0};S_{0}) := \left\{\varphi\ |\ \|(\nabla^{\mathrm{b}})^{m}\varphi\|_{rH^{1}_{e}}^{2}+\ldots+\|\nabla^{\mathrm{b}}\varphi\|_{rH^{1}_{e}}^{2}\ +\ \|\varphi\|_{rH^{1}_{e}}^{2}<\infty\right\}\] \[H^{m}_{\mathrm{b}}(Y-\mathcal{Z}_{0};S_{0}) := \left\{\psi\ |\ \|(\nabla^{\mathrm{b}})^{m}\psi\|_{L^{2}}^{2}+\ldots+\|\nabla^{\mathrm{b}}\psi\|_{L^{2}}^{2}\ +\ \|\psi\|_{L^{2}}^{2}=\|\psi\|_{H^{m}_{\mathrm{b}}}^{2}<\infty\right\}\] \[r^{-1}H^{m,-1}_{\mathrm{b},e}(Y-\mathcal{Z}_{0};S_{0}) := \left\{\xi\ |\ \|(\nabla^{\mathrm{b}})^{m}\xi\|_{r^{-1}H^{-1}_{e}}^{2}+\ldots+\|\nabla^{\mathrm{b}}\xi\|_{r^{-1}H^{-1}_{e}}^{2}\ +\ \|\xi\|_{r^{-1}H^{-1}_{e}}^{2}<\infty\right\}\] equipped with the norms given by the positive square root of the quantities required to be finite. As for \(m=0\), changing the weight \(r\) results in equivalent norms. More generally, one can define the spaces for \(m\in\mathbb{R}^{\geq 0}\) by interpolation. We have the following version of the standard interpolation inequalities: **Lemma 2.10**.: The following interpolation inequalities hold for \(m_{1}<m<m_{2}\): \[\|\psi\|_{H^{m}_{\mathrm{b}}}\leqslant C_{m_{1},m,m_{2}}\|\psi\|_{H^{m_{1}}_{\mathrm{b}}}^{\alpha}\|\psi\|_{H^{m_{2}}_{\mathrm{b}}}^{1-\alpha}\qquad\qquad\|\varphi\|_{H^{m,1}_{\mathrm{b},e}}\leqslant C_{m_{1},m,m_{2}}\|\varphi\|_{H^{m_{1},1}_{\mathrm{b},e}}^{\alpha}\|\varphi\|_{H^{m_{2},1}_{\mathrm{b},e}}^{1-\alpha}\] where \(\alpha=\frac{m_{2}-m}{m_{2}-m_{1}}\), and the constants may depend on the triple \(m_{1},m,m_{2}\).
Proof.: Choose local cylindrical coordinates \((t,r,\theta)\) on a tubular neighborhood of \(\mathcal{Z}_{0}\), where \(t\) a coordinate along \(\mathcal{Z}_{0}\) and \((r,\theta)\) polar coordinates in the normal directions. The coordinate change \(s=\log(r)\) is a diffeomorphism between \(Y-\mathcal{Z}_{0}\) and the manifold \(Y^{\circ}\) given by attaching a cylindrical end \(T^{2}\times(-\infty,r_{0})\) near \(\mathcal{Z}_{0}\). Under this coordinate change, \(H^{m}_{\mathrm{b}}\) is taken to the standard Sobolev spaces \(e^{-s}H^{m}\) with the an exponential weight. After multiplying by an exponential weight function, the inequalities for \(H^{m}_{\mathrm{b}}\) follow from the standard ones on \(Y^{\circ}\) (see, e.g. [14]). For the mixed boundary and edge spaces, note that \(\|[\nabla,\nabla^{\mathrm{b}}]\varphi\|_{L^{2}}\leqslant\|\nabla\varphi\|_{L^ {2}}\), and iterating these commutators shows that \[\|\varphi\|_{H^{m,1}_{\mathrm{b},e}}^{2}\sim\|\nabla\varphi\|_{H^{m}_{ \mathrm{b}}}^{2}+\|\tfrac{\varphi}{r}\|_{H^{m}_{\mathrm{b}}}^{2} \tag{2.13}\] is an equivalent expression for the norm, after which the interpolation inequalities for \(H^{m,1}_{\mathrm{b},e}\) follow from those for \(H^{m}_{\mathrm{b}}\) applied to \(\nabla\varphi\) and \(\tfrac{\varphi}{r}\). Applying the elliptic estimate (2.2) to \((\nabla^{\mathrm{b}})^{m}\) and iterating commutators \([\nabla,\nabla^{\mathrm{b}}]\) also establishes the following higher-regularity elliptic estimates: **Corollary 2.11**.: There are constants \(C_{m}\) depending on up to \(m+3\) derivatives of the pair \((g_{0},B_{0})\) such that the following elliptic estimates hold for \(\varphi\in rH^{m,1}_{\mathrm{b},e}\): \[\|\varphi\|_{rH^{m,1}_{\mathrm{b},e}} \leq C_{m}(\|\not{D}\varphi\|_{H^{m}_{\mathrm{b}}}+\|\varphi\|_{H^{m} _{\mathrm{b}}})\] \[\|\varphi\|_{rH^{m,1}_{\mathrm{b},e}} \leq C_{m}(\|\not{D}\not{D}\varphi\|_{r^{-1}H^{m-1}_{\mathrm{b},e}}+ \|\varphi\|_{r^{-1}H^{m,-1}_{\mathrm{b},e}})\] (note also that \(\|\varphi\|_{H^{m}_{\mathrm{b}}}\leq C\|\varphi\|_{rH^{m-1,1}_{\mathrm{b},e}}\)). Using this, we immediately deduce the higher-regularity version of the results of the previous subsection. **Corollary 2.12**.: For all \(m>0\), the following statements hold: 1. There is an \(H^{m}_{\mathrm{b}}\)-closed decomposition \[H^{m}_{\mathrm{b}}=\ker(\not{D}|_{H^{m}_{\mathrm{b}}})\oplus\mathrm{Range}( \not{D}|_{rH^{m,1}_{\mathrm{b},e}})\] orthogonal with respect to the \(L^{2}\)-inner product. Moreover, the latter two spaces coincide with \(\ker(\not{D}|_{H^{m}_{\mathrm{b}}})=\ker(\not{D}|_{L^{2}})\cap H^{m}_{ \mathrm{b}}\) and \(\mathrm{Range}(\not{D}|_{rH^{m,1}_{\mathrm{b},e}})=\mathrm{Range}(\not{D}|_{rH ^{1}_{e}})\cap H^{m}_{\mathrm{b}}\). 2. 
The second order operator \(\not{D}\not{D}\) factors through the \(\mathrm{Range}(\not{D}|_{rH^{1}_{e}})\cap H^{m}_{\mathrm{b}}\) summand, i.e. \[\not{D}\not{D}\ :\ rH^{m,1}_{\mathrm{b},e}\ \xrightarrow{\ \not{D}\ }\ \mathrm{Range}(\not{D}|_{rH^{1}_{e}})\cap H^{m}_{\mathrm{b}}\ \xrightarrow{\ \not{D}\ }\ r^{-1}H^{m,-1}_{\mathrm{b},e}.\] ### The Model Operator Let \(Y_{0}=S^{1}\times\mathbb{R}^{2}\) denote the product equipped with coordinates \((t,x,y)\) and the product metric \(g_{0}=dt^{2}+dx^{2}+dy^{2}\). Take \(\mathcal{Z}_{0}=S^{1}\times\{0\}\) and \(\ell_{0}\to Y_{0}-\mathcal{Z}_{0}\) the pullback of the mobius bundle on \(\mathbb{R}^{2}-\{0\}\). The twisted spinor bundle of the product spin structure can be identified with \(S=\underline{\mathbb{C}}^{2}\otimes_{\mathbb{R}}\ell_{0}\).
A section \(\psi\in\Gamma(\underline{\mathbb{C}}^{2}\otimes_{\mathbb{R}}\ell_{0})\) may be written as \[\psi=e^{i\theta/2}\begin{pmatrix}\psi^{+}\\ \psi^{-}\end{pmatrix} \tag{3.2}\] where \(\psi^{\pm}\) are \(\mathbb{C}\)-valued functions and \((r,\theta)\) are polar coordinates on \(\mathbb{R}^{2}\). Indeed, on each normal plane \(\mathbb{R}^{2}-\{0\}\), the bundle \(\underline{\mathbb{C}}\otimes_{\mathbb{R}}\ell_{0}\) can be constructed as the bundle with fiber \(\mathbb{C}\) glued along two (thickened) rays by the transition functions \(+1\) and \(-1\). Consequently, \(e^{i\theta/2}\) gives rise to a global nowhere-vanishing section of this bundle. Writing sections in the form (3.2), the connection arising from the spin connection and \(\nabla_{A_{0}}\) on \(\ell_{0}\) (with perturbation \(B_{0}=0\)) is simply \(\nabla=\mathrm{d}\). The Dirac operator then takes the form \[\not{D}=\begin{pmatrix}i\partial_{t}&-2\partial_{z}\\ 2\overline{\partial}_{z}&-i\partial_{t}\end{pmatrix} \tag{3.3}\] where \(z=x+iy\). That is to say, it is just the normal spin Dirac operator on \(Y_{0}\), but the spinors have an additional \(e^{i\theta/2}\) term which is differentiated as expected. **Remark 3.1**.: Although it is convenient for computation, the singular nature of the Dirac operator \(\not{D}_{\mathcal{Z}_{0}}\) is hidden in the expression (3.3). It can be written in the following equivalent way which makes the singular nature manifest. Multiplication by \(e^{-i\theta/2}:\underline{\mathbb{C}}^{2}\otimes\ell_{0}\simeq\underline{\mathbb{C}}^{2}\) provides an alternative trivialization, in which spinors are written \(\psi=(\psi^{+},\psi^{-})\) where \(\psi^{\pm}\) are still \(\mathbb{C}\)-valued functions. In this trivialization the Dirac operator is instead given by \[\not{D}=\begin{pmatrix}i\partial_{t}&-2\partial_{z}\\ 2\overline{\partial}_{z}&-i\partial_{t}\end{pmatrix}+\frac{1}{2}\gamma(d\theta)=\begin{pmatrix}i\partial_{t}&-2\partial_{z}\\ 2\overline{\partial}_{z}&-i\partial_{t}\end{pmatrix}+\frac{1}{4}\gamma\left(\frac{dz}{z}-\frac{d\overline{z}}{\overline{z}}\right)\] where \(\gamma\) denotes Clifford multiplication. The singular nature of the operator is now apparent in the zeroth order term which is unbounded on \(L^{2}\). Equivalently, \(r\not{D}\) is an elliptic operator with bounded zeroth order term, but _the symbol degenerates along_ \(\mathcal{Z}_{0}\). This type of operator is called an elliptic operator with edge-type degeneracies or simply an **elliptic edge operator**. The theory of operators of this type has been studied extensively in microlocal analysis and many results in Section 2 hold in considerable generality (see [9, 17, 37, 45, 70] and the references therein). **Example 3.2**.: Let us now identify the \(L^{2}\)-kernel of \(\not{D}\) on \(Y_{0}\) (Cf. [49] Section 3). As in Lemma 2.6, this also identifies the cokernel of the operator on \(rH^{1}_{e}\) since \(\mathrm{Coker}(\not{D}|_{rH^{1}_{e}})\simeq\mathrm{Ker}(\not{D}|_{L^{2}})\) continues to hold. Here, the weight function is given by \(r\) globally on \(Y_{0}\).
Writing a general section in Fourier series as \[\psi=\sum_{k,\ell}e^{i\ell t}e^{i(k+\frac{1}{2})\theta}\begin{pmatrix}\psi^{+}_{k,\ell}e^{-i\theta}\\ \psi^{-}_{k,\ell}\end{pmatrix}\] and using the polar expressions \[\partial_{z}=\frac{1}{2}e^{-i\theta}\Big{(}\partial_{r}-\frac{i}{r}\partial_{\theta}\Big{)}\qquad\qquad\overline{\partial}_{z}=\frac{1}{2}e^{i\theta}\Big{(}\partial_{r}+\frac{i}{r}\partial_{\theta}\Big{)},\] the Dirac equation (3.3) becomes the following system of ODEs for \(\psi^{\pm}_{k,\ell}(r)\) which decouple for distinct pairs \((k,\ell)\): \[\frac{d}{dr}\begin{pmatrix}\psi^{+}_{k,\ell}\\ \psi^{-}_{k,\ell}\end{pmatrix}=\begin{pmatrix}\frac{(k-\frac{1}{2})}{r}&-\ell\\ -\ell&-\frac{(k+\frac{1}{2})}{r}\end{pmatrix}\begin{pmatrix}\psi^{+}_{k,\ell}\\ \psi^{-}_{k,\ell}\end{pmatrix}. \tag{3.4}\] This system of equations can be solved by substituting the second equation into the first, after which the general solution is given in terms of modified Bessel functions. If \(k\neq 0\), the pair \((k,\ell)\) admits no solutions in \(L^{2}(S^{1}\times\mathbb{R}^{2})\); for the pairs \((0,\ell)\) with \(\ell\neq 0\), \[\Psi^{\text{Euc}}_{\ell}=\sqrt{|\ell|}e^{i\ell t}e^{-|\ell|r}\begin{pmatrix}\frac{1}{\sqrt{z}}\\ \frac{\text{sgn}(\ell)}{\sqrt{\overline{z}}}\end{pmatrix} \tag{3.5}\] is an infinite-dimensional set of orthonormalized solutions in \(L^{2}\), and \(\ker(\not{D}|_{L^{2}})\) is the \(L^{2}\)-closure of their span. Disregarding the issues of the integrability of the \(\ell=0\) solutions as \(r\to\infty\) (which is immaterial in the upcoming case of \(Y\) compact) and formally including this element leads to an isomorphism \[L^{2}(S^{1};\mathbb{C})\simeq\ker(\not{D}|_{L^{2}}) \tag{3.6}\] defined by the linear extension of \(e^{i\ell t}\mapsto\Psi^{\text{Euc}}_{\ell}\). In this example there are no \(\mathbb{Z}_{2}\)-harmonic spinors. There is a second choice of spin structure on \(Y_{0}=S^{1}\times\mathbb{R}^{2}\) which also has monodromy \(-1\) around the \(S^{1}\) factor. In this case, spinors may be written with half-integer Fourier modes \(e^{i\ell t}e^{it/2}\), and the calculation is identical but the solutions are indexed by \(\ell^{\prime}\in\mathbb{Z}+\frac{1}{2}\). Just as in this model example, the \(L^{2}\)-kernel on a general closed \(3\)-manifold is infinite-dimensional, and the failure to prove Fredholmness was not simply a shortcoming of the techniques in Section 2. In the model case, the kernel spanned by \(\Psi^{\text{Euc}}_{\ell}\) displays the following salient properties which generalize to the case of \(Y\) closed: **Expansion**: Solutions \(\Psi^{\text{Euc}}_{\ell}\) have asymptotic expansions with terms \(r^{k-\frac{1}{2}}\) for \(k\in\mathbb{Z}\). **Isomorphism**: There is a natural isomorphism \(\ker(\not{D}|_{L^{2}})\simeq L^{2}(\mathcal{Z}_{0};\mathbb{C})\) given by associating a kernel element to each eigenfunction of the Dirac operator \(i\partial_{t}\) on \(\mathcal{Z}_{0}\). **Rapid Decay**: For eigenvalues \(|\ell|\gg 0\), solutions \(\Psi^{\text{Euc}}_{\ell}\) decay exponentially away from \(\mathcal{Z}_{0}\). In the model case, the first item follows from the power series expansion of \(e^{-|\ell|r}\). The remainder of Section 3 establishes the first item in general, and the remaining two properties are the subject of Section 4.
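The following short computer-algebra sketch is included only as a sanity check on (3.3) and (3.5) and is not part of the argument; the variable names and the sample value \(\ell=2\) are ad hoc choices.

```python
# Sanity check (sympy): the spinor (3.5) with ell = 2 solves the model Dirac
# equation (3.3).  Components: psi_plus ~ 1/sqrt(z), psi_minus ~ 1/sqrt(zbar).
import sympy as sp

t, r, theta = sp.symbols('t r theta', real=True, positive=True)
ell = 2                      # sample positive eigenvalue, so sgn(ell) = 1
I = sp.I

psi_plus  = sp.exp(I*ell*t) * sp.exp(-ell*r) / sp.sqrt(r) * sp.exp(-I*theta/2)
psi_minus = sp.exp(I*ell*t) * sp.exp(-ell*r) / sp.sqrt(r) * sp.exp(I*theta/2)

def d_z(f):      # d/dz    = (1/2) e^{-i theta} (d/dr - (i/r) d/dtheta)
    return sp.exp(-I*theta)/2 * (sp.diff(f, r) - I/r * sp.diff(f, theta))

def d_zbar(f):   # d/dzbar = (1/2) e^{+i theta} (d/dr + (i/r) d/dtheta)
    return sp.exp(I*theta)/2 * (sp.diff(f, r) + I/r * sp.diff(f, theta))

row1 = I*sp.diff(psi_plus, t) - 2*d_z(psi_minus)     # first row of (3.3)
row2 = 2*d_zbar(psi_plus) - I*sp.diff(psi_minus, t)  # second row of (3.3)

print(sp.simplify(row1), sp.simplify(row2))          # both simplify to 0
```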
**Remark 3.3**.: Although there are no \(\mathbb{Z}_{2}\)-harmonic spinors in \(rH^{1}_{e}\) in Example 3.2, there are explicit solutions given in terms of modified Bessel functions for \((k,\ell)=(\pm 1,\ell)\) which have leading order \(z^{1/2}\) and \(\overline{z}^{1/2}\) and thus lie in \(rH^{1}_{loc}\) near \(\mathcal{Z}_{0}\). All these solutions, however, grow exponentially as \(r\to\infty\). Therefore, intuitively, the existence of a \(\mathbb{Z}_{2}\)-harmonic spinor on a closed manifold \(Y\) is a non-generic phenomenon and occurs only when one of these exponentially growing solutions can be patched together with a bounded solution on the complement of a neighborhood of \(\mathcal{Z}_{0}\) in \(Y\). ### Local Expressions From here on, we return to the case that \((Y,g_{0})\) is a closed, oriented Riemannian \(3\)-manifold and \(\mathcal{Z}_{0}\) a smoothly embedded link. In order to write local expressions, we endow a tubular neighborhood \(N_{r_{0}}(\mathcal{Z}_{0})\) of a component of \(\mathcal{Z}_{0}\) diffeomorphic to a solid torus with a particular set of coordinates. Let \(\gamma:S^{1}\to\mathcal{Z}_{j}\) denote an arclength parameterization of a chosen component \(\mathcal{Z}_{i}\) of \(\mathcal{Z}_{0}\) whose length is denoted by \(|\mathcal{Z}_{j}|\), and fix a global orthonormal frame \(\{n_{1},n_{2}\}\) of the pullback \(\gamma^{*}N\mathcal{Z}_{0}\) of the normal bundle to \(\mathcal{Z}_{0}\). We are free to arrange that \(\{\dot{\gamma},n_{1},n_{2}\}\) is an oriented frame of \(TY\) along \(\mathcal{Z}_{j}\). **Definition 3.4**.: A system of **Fermi coordinates** for \(r_{0}<r_{\text{inj}}\) where \(r_{\text{inj}}\) is the injectivity radius of \(Y\) is the diffeomorphism \(S^{1}\times D_{r_{0}}\simeq N_{r_{0}}(\mathcal{Z}_{0})\) for a chosen component of \(\mathcal{Z}_{0}\) defined by \[(t,x,y)\mapsto\text{Exp}_{\gamma(t)}(xn_{1}+yn_{2}).\] Here \(t\) is the normalized coordinate on the \(S^{1}\) with \(t\in\mathbb{R}/|\mathcal{Z}_{i}|\mathbb{Z}\). In these coordinates the Riemannian metric \(g_{0}\) can be written \[g_{0}=dt^{2}+dx^{2}+dy^{2}\ +\ O(r) \tag{3.7}\] Given such a coordinate system, \((t,r,\theta)\) are used to denote the corresponding cylindrical coordinates, and \((t,z,\overline{z})\) the complex ones on the \(D_{r_{0}}\) factor. **Remark 3.5**.: There are different conventions on the usage of "Fermi coordinates", with some requiring that the curve is question is a geodesic. In this case, \(n_{x}\) and \(n_{y}\) can be chosen to locally solve an ODE so that \(g=dt^{2}+dx^{2}+dy^{2}+O(r^{2})\). Here, we make no such assumption and the difference from the product metric is \(O(r)\). Explicitly, the correction to the product metric is \[[2x\mathfrak{m}_{x}(t)+2y\mathfrak{m}_{y}(t)]dt^{2}\ +\ [\mu(t)y]dtdx\ +\ [-\mu(t)x]dtdy\ +\ O(r^{2})\] where \(\mathfrak{m}_{\alpha}(t)=\langle\nabla_{\dot{\gamma}}\dot{\gamma},n_{\alpha}\rangle\) for \(\alpha=x,y\) and \(\mu(t)=\langle\nabla_{\dot{\gamma}}n_{x},n_{y}\rangle=-\langle\nabla_{\dot{ \gamma}}n_{y},n_{x}\rangle\). A choice of coordinates induces a trivialization of the frame bundle of \(Y\) on \(N_{r_{0}}(\mathcal{Z}^{i})\) by a global orthonormal frame \(\{e_{t},e_{1},e_{2}\}\) which restricts to \(\{\partial_{t},\partial_{x},\partial_{y}\}\) along \(\mathcal{Z}_{i}\). 
We may now distinguish two cases: **Case 1:**: The spin structure restricts to \(N_{r_{0}}(\mathcal{Z}_{i})\) as the product \(\mathfrak{s}_{0}|_{N_{r_{0}}(\mathcal{Z}_{i})}\simeq N_{r_{0}}(\mathcal{Z}_{i})\times\mathrm{Spin}(3)\), so that \[S|_{N_{r_{0}}(\mathcal{Z}_{i})}\simeq\underline{\mathbb{C}}^{2}\otimes\ell_{0} \tag{3.8}\] **Case 2:**: The spin structure restricts to \(N_{r_{0}}(\mathcal{Z}_{i})\) as the double cover of \(Fr(Y)|_{N_{r_{0}}(\mathcal{Z}_{i})}\simeq N_{r_{0}}(\mathcal{Z}_{i})\times SO(3)\) which is non-trivial in the \(\mathcal{Z}_{i}\) factor, so that \[S|_{N_{r_{0}}(\mathcal{Z}_{i})}\simeq\underline{\mathbb{C}}^{2}\otimes\ell_{t}\otimes\ell_{0} \tag{3.9}\] where \(\ell_{t}\) is the pullback of the mobius bundle on \(\mathcal{Z}_{i}\). We note that, in general, there are some rather subtle topological restrictions on which combinations of Case 1 and Case 2 can occur for the different components of \(\mathcal{Z}_{0}\). For instance, if \(Y=S^{3}\) and \(\mathcal{Z}_{0}\) has a single component, then the unique spin structure on \(S^{3}\) always restricts to Case 2 on a tubular neighborhood of \(\mathcal{Z}_{0}\); if \(\mathcal{Z}_{0}\) has multiple components then the number which fall in Case 1 must be even. First consider Case 1. It may be assumed that the identification (3.8) is chosen so that the factors of \(\mathbb{C}^{2}\) are given by the \(\pm i\) eigenspaces of \(\gamma(e_{t})\), hence Clifford multiplication is given by \[\gamma(e^{t})=\sigma_{t}=\begin{pmatrix}i&0\\ 0&-i\end{pmatrix}\qquad\qquad\gamma(e^{1})=\sigma_{x}=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\qquad\qquad\gamma(e^{2})=\sigma_{y}=\begin{pmatrix}0&i\\ i&0\end{pmatrix}.\] As in the model case (Example 3.2), spinors can be written in this trivialization in the form (3.2) where \(\psi^{\pm}\) are \(\mathbb{C}\)-valued functions on \(N_{r_{0}}(\mathcal{Z}_{i})\). In Case 2, the same holds after changing the trivialization by \(e^{it/2}\), which alters the Dirac operator by \(\frac{i}{2}\gamma(dt)\). This leads to the following: **Lemma 3.6**.: In both Case (1) and Case (2), the \(\mathbb{Z}_{2}\)-Dirac operator in local coordinates around a component \(\mathcal{Z}_{i}\subseteq\mathcal{Z}_{0}\) and the above trivialization takes the form \[\not{D}=\not{D}_{0}+\mathfrak{d}\] where * \(\not{D}_{0}\) is the Dirac operator in the product metric on \(N_{r_{0}}(\mathcal{Z}_{i})\), given by (3.3) * \(\mathfrak{d}\) is a first order perturbation arising from the \(O(r)\) terms of \(g_{0}\), the perturbation \(B_{0}\) and \(\frac{i}{2}\gamma(dt)\) in Case 2, so that \[|\mathfrak{d}\psi|\leq C(r|\nabla\psi|+|\psi|).\] ### Asymptotic Expansions This subsection establishes that \(\mathbb{Z}_{2}\)-harmonic spinors have local power series expansions by half-integer powers of \(r\). This follows from invoking general regularity results for elliptic edge operators from [45], and should be seen as the appropriate form of elliptic regularity for the singular operator \(\not{D}\). Fix a choice of Fermi coordinates near \(\mathcal{Z}_{0}\).
**Definition 3.7**.: A spinor \(\psi\in L^{2}(Y-\mathcal{Z}_{0};S_{0})\) is said to admit a **polyhomogenous expansion** with index set \(\mathbb{Z}^{+}+\frac{1}{2}\) if \[\psi\sim\sum_{n\geq 0}\sum_{k,p\in\mathbb{Z}}\left(\begin{array}{c}c_{n,k,p}(t)e^{ik\theta}\\ d_{n,k,p}(t)e^{ik\theta}\end{array}\right)r^{n+1/2}\log(r)^{p}e^{-i\theta/2}\] where \(c_{n,k,p}(t),d_{n,k,p}(t)\in C^{\infty}(S^{1};\mathbb{C})\), and where \(\sim\) denotes convergence in the following sense: for every \(N\in\mathbb{N}\), the partial sums \[\psi_{N}=\sum_{n\leq N}\sum_{k=-2n}^{2n+1}\sum_{p=0}^{n-1}\left(\begin{array}{c}c_{n,k,p}(t)e^{ik\theta}\\ d_{n,k,p}(t)e^{ik\theta}\end{array}\right)r^{n+1/2}\log(r)^{p}e^{-i\theta/2}\] satisfy the pointwise bounds \[|\psi-\psi_{N}| \leqslant C_{N}r^{N+1+\frac{1}{4}} \qquad\qquad |\nabla_{t}^{\alpha}\nabla^{\beta}(\psi-\psi_{N})|\leqslant C_{N,\alpha,\beta}r^{N+1+\frac{1}{4}-|\beta|} \tag{3.10}\] for constants \(C_{N,\alpha,\beta}\) determined by the background data and choice of local coordinates and trivialization. Here, \(\beta\) is a multi-index of derivatives in the directions normal to \(\mathcal{Z}_{0}\). The work of Mazzeo [45] implies the following regularity result about \(\mathbb{Z}_{2}\)-harmonic spinors (see also Appendix A of [26]). **Proposition 3.8**.: Suppose that \(\Phi_{0}\in rH^{1}_{e}(Y-\mathcal{Z}_{0};S_{0})\) is a \(\mathbb{Z}_{2}\)-harmonic spinor. Then \(\Phi_{0}\) admits a polyhomogenous expansion with index set \(\mathbb{Z}^{+}+\frac{1}{2}\). Moreover, \(c_{n,k,p}\) and \(d_{n,k,p}\) vanish unless \(-2n\leqslant k\leqslant 2n+1\) and \(p\leqslant n-1\). Thus \(\Phi_{0}\) has a local expression \[\Phi_{0} \sim \begin{pmatrix}c(t)\sqrt{z}\\ d(t)\sqrt{\overline{z}}\end{pmatrix}+\sum_{n\geq 1}\sum_{k=-2n}^{2n+1}\sum_{p=0}^{n-1}\left(\begin{array}{c}c_{n,k,p}(t)e^{ik\theta}\\ d_{n,k,p}(t)e^{ik\theta}\end{array}\right)r^{n+1/2}\log(r)^{p}e^{-i\theta/2} \tag{3.11}\] where \(c(t),d(t),c_{n,k,p}(t),d_{n,k,p}(t)\in C^{\infty}(S^{1};\mathbb{C})\). In this form, Assumption 2 is the requirement that \(|c(t)|^{2}+|d(t)|^{2}\) is nowhere-vanishing. The same result holds for an \(rH^{1}_{e}\)-solution of the operator \(\not{D}-\lambda\mathrm{Id}\). Proof.: The existence of such an expansion is a consequence of the regularity theory in [45] (Section 7, Proposition 7.17) and the fact that the indicial roots are \(j+\frac{1}{2}\) for \(j\in\mathbb{Z}\) in this case. See also [23, 26]. The constraints on the expansion compared to Definition 3.7 then follow from writing the equation \(\not{D}\Phi_{0}-\lambda\Phi_{0}=0\) in Fermi coordinates as \[\begin{pmatrix}0&-2\partial_{z}\\ 2\overline{\partial}_{z}&0\end{pmatrix}\Phi_{0}=-\mathfrak{d}\Phi_{0}-\begin{pmatrix}i\partial_{t}&0\\ 0&-i\partial_{t}\end{pmatrix}\Phi_{0}+\lambda\Phi_{0}\] with \(\mathfrak{d}\) as in Lemma 3.6, and formally solving term by term. The expression (3.11) depends on the choice of Fermi coordinates in the following way. Another choice of Fermi coordinates arises from an alternative choice of normal frame \(n_{1},n_{2}\), differing by a transformation induced by a change of trivialization of the spin structure. Such a change of frame is given in complex coordinates on \(N\mathcal{Z}_{0}\) by \[n_{1}+in_{2}\mapsto e^{-2i\sigma(t)}(n_{1}+in_{2})\] where \(\sigma:\mathcal{Z}_{0}\to S^{1}\) (the minus sign in the exponent is due to the convention that Clifford multiplication is by cotangent vectors).
The new complex coordinates \((t,z^{\prime},\overline{z}^{\prime})\) resulting from such a transformation are likewise related to the original coordinates by \[(t,z^{\prime},\overline{z}^{\prime})=(t,e^{-2i\sigma(t)}z,e^{2i\sigma(t)}\overline{z}).\] This shows the following: **Corollary 3.9**.: For a term of a polyhomogenous expansion \[\psi(t,z,\overline{z})=\left(\begin{array}{c}a(t)e^{ik\theta}\\ b(t)e^{ik\theta}\end{array}\right)r^{n+1/2}\log(r)^{p}e^{-i\theta/2}\] the coefficients are naturally sections \(a(t)\in C^{\infty}(\mathcal{Z}_{0};N\mathcal{Z}_{0}^{-k})\) and \(b(t)\in C^{\infty}(\mathcal{Z}_{0};N\mathcal{Z}_{0}^{-k+1})\). In particular, the leading coefficients \(c(t),d(t)\) of (3.11) are sections of \(N\mathcal{Z}_{0}^{-1},N\mathcal{Z}_{0}\) respectively. **Remark 3.10**.: More generally, elements of the \(L^{2}\)-kernel have similar asymptotic expansions, but it is no longer necessarily the case that the coefficients are smooth. In general, the coefficients only make sense as distributions (see Section 7 of [45] for a more general discussion). If \(\psi\in\ker(\not{D})\cap L^{2}(Y-\mathcal{Z}_{0};S_{0})\), then it admits a **weak asymptotic expansion** of the form \[\psi\sim\begin{pmatrix}\frac{c_{0}(t)}{\sqrt{z}}\\ \frac{d_{0}(t)}{\sqrt{\overline{z}}}\end{pmatrix}\ +\ \sum_{n\geqslant 1}\sum_{k=-2n-1}^{2n+2}\sum_{p=0}^{n-1}\left(\begin{array}{c}c_{n,k,p}(t)e^{ik\theta}\\ d_{n,k,p}(t)e^{ik\theta}\end{array}\right)r^{n-1/2}\log(r)^{p}e^{-i\theta/2}\] where \(c_{n,k,p},d_{n,k,p}\in H^{-1/2-n}(S^{1};\mathbb{C})\) are understood in a distributional sense and are sections of an appropriate power of \(N\mathcal{Z}_{0}\) as in Corollary 3.9. _There is no nice sense in which these weak expansions converge_. In particular, if \(\psi\in L^{2}\) has such an expansion, then the difference \(|\psi-\psi_{N}|\) will not necessarily lie in \(L^{2}\). Consequently, there is no robust notion in which the later terms are "smaller" than the earlier ones. If there were stronger notions of convergence for such expansions, it is possible that the use of Nash-Moser Theory could be eliminated in the proof of Theorem 1.4. ## 4 The Obstruction Space This section studies the infinite-dimensional cokernel of the operator \[\not{D}:rH^{1}_{e}(Y-\mathcal{Z}_{0};S_{0})\longrightarrow L^{2}(Y-\mathcal{Z}_{0};S_{0}), \tag{4.1}\] which coincides with \(\ker(\not{D}|_{L^{2}})\) by Lemma 2.6. The main results are Propositions 4.2 and 4.3, which generalize the "isomorphism" and "rapid decay" properties from Example 3.2 respectively. The first of these, Proposition 4.2, explicitly identifies the cokernel of (4.1) with a space of spinors on \(\mathcal{Z}_{0}\). The second, Proposition 4.3, gives an explicit description of the cokernel elements, showing that they concentrate exponentially along \(\mathcal{Z}_{0}\). Both propositions are used heavily in the upcoming sections. An essential and frustrating point is that any weakening of Proposition 4.3 leads to several error terms being unbounded in Section 6. **Definition 4.1**.: We define the **Obstruction Space** associated to the data \((\mathcal{Z}_{0},g_{0},B_{0})\) by \[\mathbf{Ob}(\mathcal{Z}_{0}):=\{\psi\in L^{2}\ |\ \psi\in\ker(\not{D}|_{L^{2}})\}.\] Although this definition appears to be a redundant renaming of \(\ker(\not{D}|_{L^{2}})\), it is stated this way in preparation for the later generalization in Section 8.
There, we consider varying the tuple \((\mathcal{Z}_{0},g_{0},B_{0})\) and must deal with the fact that \(\ker(\not{D}|_{L^{2}})\) may not form a vector bundle. By Lemma 2.6 there is a closed orthogonal decomposition \[L^{2}=\mathbf{Ob}(\mathcal{Z}_{0})\oplus\mathrm{Range}(\not{D}|_{rH^{1}_{e}}).\] There is also a further \(L^{2}\)-orthogonal decomposition \(\mathbf{Ob}(\mathcal{Z}_{0})=\mathbf{Ob}(\mathcal{Z}_{0})^{\perp}\oplus\ker(\not{D}|_{rH^{1}_{e}})\), where \(\mathbf{Ob}(\mathcal{Z}_{0})^{\perp}\) denotes the orthogonal complement of the space \(\ker(\not{D}|_{rH^{1}_{e}})\) of \(\mathbb{Z}_{2}\)-harmonic spinors within the obstruction space. Let \(S_{0}|_{\mathcal{Z}_{0}}\) denote the spinor bundle restricted to \(\mathcal{Z}_{0}\). Clifford multiplication \(\gamma(dt)\) by the unit tangent vector induces a splitting \(S|_{\mathcal{Z}_{0}}\simeq\mathcal{S}_{\mathcal{Z}_{0}}\oplus\mathcal{S}_{\mathcal{Z}_{0}}\) where \(\mathcal{S}_{\mathcal{Z}_{0}}\) is a rank 1 complex spinor bundle on \(\mathcal{Z}_{0}\). **Proposition 4.2**.: There is an isomorphism \[\mathrm{ob}\oplus\iota:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\oplus\ker(\not{D}|_{rH^{1}_{e}})\longrightarrow\mathbf{Ob}(\mathcal{Z}_{0}), \tag{4.2}\] where \(\iota\) is the inclusion of \(\ker(\not{D}|_{rH^{1}_{e}})\). The spinor bundle \(\mathcal{S}_{\mathcal{Z}_{0}}\) carries a 1-dimensional Dirac operator denoted \(\not{\ell}_{\mathcal{Z}_{0}}\) defined as follows. Using an arclength parameterization, identify each component \(\mathcal{Z}_{(j)}\) of \(\mathcal{Z}_{0}\) with a circle with coordinate \(t_{j}\) and length \(|\mathcal{Z}_{(j)}|\). This induces a trivialization \(\mathcal{S}_{\mathcal{Z}_{0}}\simeq\underline{\mathbb{C}}\) as the \(+i\) eigenspace of \(\gamma(dt_{j})\), and we define \(\not{\ell}_{\mathcal{Z}_{0}}\) to be \(i\partial_{t}\). This Dirac operator \(\not{\ell}_{\mathcal{Z}_{0}}\) may be diagonalized on \(L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\) with \[\mathrm{Spec}(\not{\ell}_{\mathcal{Z}_{0}})=\bigsqcup_{j}\tfrac{2\pi}{|\mathcal{Z}_{(j)}|}\mathbb{Z}\] and eigenvectors \(\phi^{(j)}_{\ell}=e^{i\ell t_{j}}\) for \(\ell\in 2\pi\mathbb{Z}/|\mathcal{Z}_{(j)}|\). These eigenvectors provide a convenient basis for \(L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\). **Proposition 4.3**.: (A) When \(\not{D}\) is complex linear, there is a complex basis \(\Psi^{(j)}_{\ell}\) of \(\mathbf{Ob}(\mathcal{Z}_{0})^{\perp}\) indexed by \((j,\ell)\in\mathrm{Spec}(\not{\ell}_{\mathcal{Z}_{0}})\) such that the \(\mathbf{Ob}(\mathcal{Z}_{0})\)-component of a spinor \(\psi\in L^{2}\) under (4.2) is given by \[\mathrm{ob}^{-1}(\Pi^{\mathrm{Ob}}\psi)=\sum_{(j,\ell)\in\mathrm{Spec}(\not{\ell}_{\mathcal{Z}_{0}})}\langle\psi,\Psi^{(j)}_{\ell}\rangle_{\mathbb{C}}\,\phi^{(j)}_{\ell}\qquad\qquad\iota^{-1}(\psi)=\sum_{k}\langle\psi,\Phi_{k}\rangle\Phi_{k}, \tag{4.3}\] where \(\langle-,-\rangle_{\mathbb{C}}\) is the hermitian inner product, and \(\Phi_{k}\) is a (real) orthonormal basis of \(\ker(\not{D}|_{rH^{1}_{e}})\). Moreover, \[\Psi^{(j)}_{\ell}=\chi_{j}\Psi^{\mathrm{Euc}}_{\ell}+\zeta^{(j)}_{\ell}+\xi^{(j)}_{\ell}\] where * \(\Psi^{\mathrm{Euc}}_{\ell}\) are the \(L^{2}\)-orthonormalized Euclidean obstruction elements from Example 3.2 (in the trivialization of Lemma 3.6) and \(\chi_{j}\) is a cutoff function supported on a tubular neighborhood of \(\mathcal{Z}_{(j)}\).
* \(\zeta^{(j)}_{\ell}\) is a perturbation with \(L^{2}\)-norm \(O(\frac{1}{|\ell|})\) which decays exponentially away from \(\mathcal{Z}_{0}\) in the following sense: \[\|\zeta^{(j)}_{\ell}\|_{L^{2}(A^{(j)}_{n\ell})}\leq\frac{C}{|\ell|}\mathrm{Exp}\left(-\frac{n}{c_{1}}\right).\] (4.4) where \(A^{(j)}_{n\ell}\) denotes the collection of annuli \[A^{(j)}_{n\ell}=\left\{\tfrac{n}{|\ell|}R_{0}\leq r^{(j)}\leq\tfrac{n+1}{|\ell|}R_{0}\right\}\] (4.5) for some constant \(R_{0}\), and \(r^{(j)}\) denotes the geodesic distance to \(\mathcal{Z}_{(j)}\). Additionally, in Fermi coordinates on \(N_{r_{0}}(\mathcal{Z}_{(j)})\) and in the trivialization of Lemma 3.6, \(\zeta^{(j)}_{\ell}\) is a linear combination of only Fourier modes \(e^{ipt}\) in the range \(\ell-|\ell|/2\leq p\leq\ell+|\ell|/2\). * \(\xi^{(j)}_{\ell}\) is a perturbation of \(L^{2}\)-norm \(O(\frac{1}{|\ell|^{2}})\) i.e. satisfying \[\|\xi^{(j)}_{\ell}\|_{L^{2}}\leq\frac{C}{|\ell|^{2}}\] for a universal constant \(C\). (B) In the case that \(\not{D}\) is only \(\mathbb{R}\)-linear, there is a real basis \[\Psi^{\mathrm{Re},j}_{\ell}=\chi_{j}\Psi^{\mathrm{Euc}}_{\ell}+\zeta^{\mathrm{Re},j}_{\ell}+\xi^{\mathrm{Re},j}_{\ell}\qquad\qquad\Psi^{\mathrm{Im},j}_{\ell}=i(\chi_{j}\Psi^{\mathrm{Euc}}_{\ell})+\zeta^{\mathrm{Im},j}_{\ell}+\xi^{\mathrm{Im},j}_{\ell}\] satisfying identical bounds, where the inner product in (4.3) is replaced by \[\langle\psi,\Psi^{(j)}_{\ell}\rangle_{\mathbb{C}}=\langle\psi,\Psi^{\mathrm{Re},j}_{\ell}\rangle\ +\ i\langle\psi,\Psi^{\mathrm{Im},j}_{\ell}\rangle.\] Propositions 4.2-4.3 are proved concurrently, with the proof occupying the remainder of the section. The proof has several steps, and goes roughly as follows. Because the Euclidean obstruction elements \(\Psi^{\mathrm{Euc}}_{\ell}\) decay exponentially away from \(\mathcal{Z}_{0}\), and the Dirac operator differs from the Euclidean one by \(O(r)\), these Euclidean obstruction elements are very good approximations to those on \(Y\) once \(|\ell|\) is sufficiently large. Intuitively, pasting \(\Psi^{\mathrm{Euc}}_{\ell}\) onto \(Y\) using a cut-off function and correcting them by projection provides a Fredholm map \[\mathbf{Ob}^{\mathrm{Euc}}\to\mathbf{Ob}(\mathcal{Z}_{0}) \tag{4.6}\] whose index can be shown to be zero. Then, up to replacing a finite-dimensional space for small indices \(|\ell|\), this yields an isomorphism with the model Euclidean case, which is identified with \(L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\) by Example 3.2. The reason for the caveat "intuitively" is that simply pasting the Euclidean elements \(\Psi^{\mathrm{Euc}}_{\ell}\) onto \(Y\) does not immediately yield good enough approximations to show (4.6) is Fredholm. To deal with this, we first construct better approximations \(\Psi^{N}_{\ell}\) on the normal bundle \(N\mathcal{Z}_{0}\) using a metric \(g_{N}\) equal to that on \(Y\) on a tubular neighborhood, after which the above argument proceeds to show that (4.6) is an isomorphism. The map ob is obtained from (4.6) after a compact correction that ensures (4.3) holds (see Remark 4.4). Finally, it must be shown that all these constructions have appropriate higher-regularity analogues on \(H^{m}_{\mathrm{b}}\). Thus the five steps of the proof are the following, each of which occupies a subsection: * Construction of model obstruction elements \(\Psi^{N}_{\ell}\) on the normal bundle \((N\mathcal{Z}_{0},g_{N})\).
* Patching argument to show that pasting \(\Psi^{N}_{\ell}\) onto \((Y,g_{0})\) gives a Fredholm map onto \(\mathbf{Ob}\). * Index Calculation. * Finite-dimensional correction for small \(|\ell|\) and construction of ob. * Higher-Regularity. We prove the propositions in the case that \(\not{D}\) is complex linear, and use \(\langle-,-\rangle\) to denote the Hermitian inner product in this subsection. The real-linear case, which is needed later, differs only notationally. Additionally, since everything in the construction is local, we will tacitly assume **Assumption 4*.** _\(\mathcal{Z}_{0}\) is smooth with a single component_, and omit the superscript \((j)\) from the notation. The general case is a trivial extension. **Remark 4.4**.: The reader is cautioned that the basis \(\Psi_{\ell}\) is not in general orthonormal. The reason for this is that orthonormalizing disrupts the decay properties of \(\zeta_{\ell}\). Additionally, the map ob is not given by the obvious linear extension of \(\phi_{\ell}\mapsto\Psi_{\ell}\). Because inner products against a basis that is not orthonormal do not compute the coefficients of a spinor \(\psi\) in that basis, this naive map must be altered in order for the projection of \(\psi\) to still be calculated by its inner products as in (4.3). This is done in Section 4.4. ### The Model Obstruction This section begins the proof of Propositions 4.2-4.3 by constructing a model basis for the obstruction on the normal bundle \(N\mathcal{Z}_{0}\). Let \(r_{0}>0\) be small, and take \(\chi_{0}\) a cut-off function vanishing for \(r>r_{0}\) and equal to \(1\) for \(r<r_{0}/2\). Set \[(N,g_{N}):=(N\mathcal{Z}_{0},\chi_{0}g_{0}+(1-\chi_{0})(dt^{2}+dx^{2}+dy^{2}))\] where \(N\mathcal{Z}_{0}\simeq S^{1}\times\mathbb{R}^{2}\) is the normal bundle of \(\mathcal{Z}_{0}\) equipped with coordinates \((t,x,y)\) as in Section 3.2. The spinor bundle \(S_{N}\to N\) may be identified with \(S_{N}=\underline{\mathbb{C}}^{2}\otimes\ell_{\mathcal{Z}_{0}}\), after which the Dirac operator on \((N,g_{N})\) may be written \[\not{D}_{N}:=\not{D}_{0}+\mathfrak{d}\] with \(\not{D}_{0},\mathfrak{d}\) as in Lemma 3.6. If the spin structure falls in Case 2 as in (3.9), then we use \(\chi_{0}\gamma(idt/2)\) in place of \(\gamma(idt/2)\); additionally, we replace \(B_{0}\) with \(\chi_{0}B_{0}\) so that this operator is equal to the Dirac operator \(\not{D}\) on \(Y\) for \(r\leq r_{0}/2\) and equal to \(\not{D}_{0}\) for \(r\geq r_{0}\). The following lemma completely describes the obstruction on \((N,g_{N})\) for \(r_{0}\) sufficiently small. **Lemma 4.5**.: For \(r_{0}\ll 1\), \(\ker(\not{D}_{N}|_{rH^{1}_{e}})=\{0\}\) and there is a basis \[\Psi^{N}_{\ell}=\Psi^{\rm Euc}_{\ell}+\zeta^{N}_{\ell}+\xi^{N}_{\ell}\] of \(\ker(\not{D}_{N}|_{L^{2}})\) satisfying the bounds of Proposition 4.3 such that \[L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})_{0} \to \ker(\not{D}_{N}|_{L^{2}})\qquad\qquad\phi_{\ell} \mapsto \Psi^{N}_{\ell}\] is an isomorphism, where \(L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})_{0}\) means the orthogonal complement of the \(0\)-eigenspace. **Remark 4.6**.: The \(\ell=0\) element is omitted simply because the \(r^{-1/2}\) asymptotics fail to be \(L^{2}\) on the non-compact space \(N\). Fredholmness is insensitive to discarding the \(\ell=0\) mode, but it must be accounted for in the index calculation of Section 4.3.
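Concretely (a one-line check, included only for illustration, where \(\Psi_{0}\) denotes the formal \(\ell=0\) element): \(|\Psi_{0}|^{2}\sim 1/r\) with no exponential decay, so on the non-compact space \(N\) \[\int_{N}|\Psi_{0}|^{2}\,dV\ \sim\ \int_{S^{1}}\int_{0}^{2\pi}\int_{0}^{\infty}\frac{1}{r}\,r\,dr\,d\theta\,dt\ =\ \infty,\] with the divergence coming from \(r\to\infty\); on the closed manifold \(Y\) the weight \(r\) is bounded and this issue disappears.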
Proof of the isomorphism in Lemma 4.5.: As in Lemma 3.6, we have bounds \[|\mathfrak{d}\varphi|\leqslant C(r|\nabla\varphi|+|\varphi|)\] on \(\operatorname{supp}(\chi_{0})\). It follows that \(\|\mathfrak{d}\varphi\|_{L^{2}}\leqslant Cr_{0}\|\varphi\|_{rH^{1}_{e}}\). Thus for \(r_{0}\) sufficiently small, \[\|\varphi\|_{rH^{1}_{e}}\leqslant C\|\not{D}_{0}\varphi\|_{L^{2}}\leqslant C\| (\not{D}_{0}+\mathfrak{d})\varphi\|_{L^{2}}+C\|\mathfrak{d}\|_{L^{2}}\qquad \quad\Rightarrow\qquad\quad\|\varphi\|_{rH^{1}}\leqslant C^{\prime}\|\not{D} _{N}\varphi\|_{L^{2}}\] which shows \(\ker(\not{D}|_{rH^{1}_{e}})=\emptyset\). Since \(\ker(\not{D}|_{rH^{1}_{e}})=\emptyset\), it follows as in Lemma 2.7 that \[\not{D}_{N}\not{D}_{N}:rH^{1}_{e}\to rH^{-1}_{e}\] is an isomorphism. Denote by \(P_{N}\) its inverse. Then, as in Corollary 2.8, the projection to the range factor in \(L^{2}=\ker(\not{D}_{N}|_{L^{2}})\oplus\operatorname{Range}(\not{D}_{N}|_{rH^{1} _{e}})\) is given by \(\Pi^{\rm Rg}=\not{D}_{N}P_{N}\not{D}_{N}\). Define \[\Psi^{N}_{\ell}:=\Psi^{\rm Euc}_{\ell}-\Pi^{\rm Rg}(\Psi^{\rm Euc}_{\ell})= \Psi^{\rm Euc}_{\ell}-\not{D}_{N}P_{N}\not{D}_{N}(\Psi^{\rm Euc}_{\ell}) \tag{4.7}\] so that \(\Psi^{N}_{\ell}\in\ker(\not{D}_{N}|_{L^{2}})\). We claim that the map defined by \(\phi_{\ell}\mapsto\Psi^{N}_{\ell}\) is an isomorphism. Clearly \(\phi_{\ell}\mapsto\Psi^{\rm Euc}_{\ell}\) is an isomorphism to \(\ker(\not{D}_{0}|_{L^{2}})\) by Example 3.2, so it suffices to show that \(Id-\Pi^{\rm Rg}:\ker(\not{D}_{0}|_{L^{2}})\to\ker(\not{D}_{N}|_{L^{2}})\) is an isomorphism. For injectivity, note that for any \(\Psi\in L^{2}\), integrating by parts show \[\|\mathfrak{d}\Psi^{\rm Euc}\|_{rH^{-1}_{e}}=\sup_{\|\varphi\|_{rH^{1}}=1} \langle\mathfrak{d}\Psi,\varphi\rangle\leqslant Cr_{0}\|\Psi\|_{L^{2}}. \tag{4.8}\] Hence for any \(\Psi^{\text{Euc}}\in\ker(\not{D}_{0}|_{L^{2}})\) one has \[\|\Psi^{\text{Euc}}\|_{L^{2}}\leq\|(Id-\not{D}_{N}P_{N}\not{D}_{N})\Psi^{\text{Euc }}\|_{L^{2}}+\|\not{D}_{N}P_{N}\not{D}_{N}\Psi^{\text{Euc}}\|_{L^{2}}\] and \(\not{D}_{N}P_{N}\not{D}_{N}\Psi^{\text{Euc}}=\not{D}_{N}P_{N}\mathfrak{d}\Psi^{ \text{Euc}}\), hence (4.8) shows that \(\|\not{D}_{N}P_{N}\not{D}_{N}\Psi^{\text{Euc}}\|_{L^{2}}\leq Cr_{0}\|\Psi^{ \text{Euc}}\|_{L^{2}}\). It follows that for \(r_{0}\) sufficiently small, \(Id-\Pi^{\text{Rg}}:\ker(\not{D}_{0}|_{L^{2}})\to\ker(\not{D}_{N}|_{L^{2}})\) is injective with closed range. For surjectivity, we argue by contraction: suppose that \(\eta\in\ker(\not{D}_{N}|_{L^{2}})\) were orthogonal to the image of \(Id-\Pi^{\text{Rg}}\) and normalized so that \(\|\eta\|_{L^{2}}=1\). Since \(\eta\in\ker(\not{D}_{N}|_{L^{2}})=\text{Range}(\not{D}_{N}|_{r_{e}^{H_{e}^{1}}} )^{\perp}\) we have \[0=\langle\not{D}_{N}\varphi,\eta\rangle=\langle(\not{D}_{0}+\mathfrak{d}) \varphi,\eta\rangle\] for all \(\varphi\in rH_{e}^{1}\), hence \(|\langle\not{D}_{0}\varphi,\eta\rangle|=|\langle\mathfrak{d}\varphi,\eta \rangle|\leq Cr_{0}\|\varphi\|_{rH_{e}^{1}}\), which is to say that component of \(\eta\) in \(\text{Range}(\not{D}_{0})\) is small. 
Consequently, there is a \(\Psi^{\text{Euc}}_{\eta}\) so that we may write \(\eta=\Psi^{\text{Euc}}_{\eta}+w\) with \(\Psi^{\text{Euc}}_{\eta}\in\ker(\not{D}_{0}|_{L^{2}})\) and \[1-Cr_{0}\leq\|\Psi^{\text{Euc}}_{\eta}\|_{L^{2}}\leq 1\hskip 56.905512pt\|w\|_{L^{2}} \leq Cr_{0}\] But this would imply \[0=\langle Id-\Pi^{\text{Rg}}(\Psi^{\text{Euc}}),\eta\rangle=\langle\Psi^{ \text{Euc}}_{\eta}+\not{D}_{N}P_{N}\mathfrak{d}\Psi^{\text{Euc}}_{\eta},\Psi^{ \text{Euc}}_{\eta}+w\rangle\geq 1-C^{\prime}r_{0},\] a contradiction after possibly decreasing \(r_{0}\). #### 4.1.1 Exponential Decay: We now turn to the issue of showing that there is a decomposition \[\Psi^{N}_{\ell}=\Psi^{\text{Euc}}_{\ell}+\zeta^{N}_{\ell}+\xi^{N}_{\ell}\] satisfying the desired exponential decay properties for \(\zeta^{N}_{\ell}\). To begin, we state a general lemma about solutions of the equation \[\not{D}_{0}\not{D}_{0}u=f\] is proved which is proved in Appendix A. Here, \(\not{D}_{0}\) denotes the Dirac operator in the product metric. In the setting where \(f\) has restricted Fourier modes in the \(S^{1}\) direction, then solutions with only high Fourier modes should enjoy good exponential decay properties away from \(\mathcal{Z}_{0}\). To state the lemma, we denote by \(A_{n\ell}\) the sequence of annuli 4.5 from Part (B) of Proposition 4.2, and define \(B_{n\ell}=A_{(n-1)\ell}\cup A_{n\ell}\cup A_{(n+1)\ell}\). **Lemma 4.7**.: Let \(m\) be a non-negative integer, and assume that \(|\ell|\geq 2m\). Suppose that \(u_{\ell}\in rH_{e}^{1}(N)\) is the unique solution of \[\not{D}_{0}\not{D}_{0}u_{\ell}=f_{\ell} \tag{4.9}\] where \(f_{\ell}\in r^{-1}H_{e}^{-1}\) satisfies the following two properties: 1. \(f_{\ell}\) has only Fourier modes in \(t\) in the range \[\ell-L_{0}\leq p\leq\ell+L_{0}\] (4.10) where \(|L_{0}|\leq|\ell|/2\). 2. For \(m\) as above, there are constants \(C_{m},c_{m}\) independent of \(\ell\) such that \(f_{\ell}\) satisfies the bounds \[\|f_{\ell}\|_{r^{-1}H_{e}^{-1}(B_{n\ell})}^{2}\leq\frac{C_{m}}{|\ell|^{2+2m}} \text{Exp}\left(-\frac{2n}{c_{m}}\right)\] (4.11) on the sequence of annuli \(B_{n\ell}\). Then there are constants \(C^{\prime}_{m},c^{\prime}_{m}\) independent of \(\ell\) such that \(u_{\ell}\) satisfies \[\|u_{\ell}\|^{2}_{rH^{1}_{z}(A_{nt})}\leq\frac{C^{\prime}_{m}}{|\ell|^{2+2m}} \mathrm{Exp}\left(-\frac{2n}{c^{\prime}_{m}}\right). \tag{4.12}\] Moreover, \(u_{\ell}\) has only Fourier modes in the same range as \(f_{\ell}\). Given this lemma, we now complete the proof of Lemma 4.5 by showing that \(\Psi^{N}_{\ell}\) admits a decomposition satisfying the desired bounds: Proof of the form of \(\Psi^{N}_{\ell}\) in Lemma 4.5.: In (4.7), we defined \[\Psi^{N}_{\ell}:=\Psi^{\mathrm{Euc}}_{\ell}-\not{D}_{N}P_{N}\not{D}_{N}(\Psi^{ \mathrm{Euc}}_{\ell})\] thus it suffices to show that there is a decomposition \(-\not{D}_{N}P_{N}\not{D}_{N}(\Psi^{\mathrm{Euc}}_{\ell})=\zeta^{N}_{\ell}+\xi^ {N}_{\ell}\) with the latter two satisfying the desired estimates of Proposition 4.3. We may write \(\not{D}_{N}=\not{D}_{0}+\mathfrak{d}\) where \(\mathfrak{d}\) can be explicitly written in the form \[\mathfrak{d}=\sum_{ij=1}^{3}a_{i,j}(t,x,y)\sigma_{i}\partial_{j}+\sum_{k=0}^{3 }\Gamma_{k}(t,x,y)\sigma_{k}\] where \(|a_{ij}|\leq Cr\) and \(|\Gamma|\leq C\) and \(\sigma_{0}=Id\) in the second sum. 
Decomposing \(a_{ij}(t,x,y),\Gamma_{k}(t,x,y)\) into the Fourier modes in the \(t\)-direction on \(N\simeq S^{1}\times\mathbb{R}^{2}\), this operator can be written as \[\mathfrak{d}=\mathfrak{d}^{\mathrm{low}}+\mathfrak{d}^{\mathrm{high}}\] where \(\mathfrak{d}^{\mathrm{low}}\) consists of the Fourier modes of \(a_{ij},\Gamma_{k}\) with Fourier index \(|p|\leq|\ell|/4\). We now define \(\zeta^{N}_{\ell},\xi^{N}_{\ell}\) as follows. Since \(\not{D}_{0}\Psi^{\mathrm{Euc}}_{\ell}=0\) by definition, one has \[\not{D}_{N}P_{N}\not{D}_{N}(\Psi^{N}_{\ell})=\not{D}_{N}P_{N}(\mathfrak{d}^{ \mathrm{low}}\Psi^{N}_{\ell}+\mathfrak{d}^{\mathrm{high}}\Psi^{N}_{\ell})= \not{D}_{N}P_{N}(f^{\mathrm{low}}_{\ell}+f^{\mathrm{high}}_{\ell}).\] where \(f^{\mathrm{low}}_{\ell}:=\mathfrak{d}^{\mathrm{low}}\Psi^{N}_{\ell}\) and \(f^{\mathrm{high}}_{\ell}:=\mathfrak{d}^{\mathrm{high}}\Psi^{N}_{\ell}\). Recall that \(P_{0},P_{N}\) denote the solution operators so that \(u=P_{0}(f)\) denotes the unique solution of \(\not{D}_{0}\not{D}_{0}u=f\) and likewise for \(P_{N}\) with \(\not{D}_{N}\). Set \[\zeta^{N}_{\ell} := \not{D}_{N}u_{\ell}\qquad\qquad\qquad u_{\ell}:=-P_{0}(f^{ \mathrm{low}}_{\ell})\] \[\xi^{N}_{\ell} := \not{D}_{N}v_{\ell}\qquad\qquad\qquad v_{\ell}:=-P_{N}(f^{ \mathrm{high}}_{\ell}-(\not{D}_{N}\not{D}_{N}-\not{D}_{0}\not{D}_{0})u_{\ell})\] By construction, we have \[\not{D}_{N}(\zeta^{N}_{\ell}+\xi^{N}_{\ell}) = \not{D}_{N}\not{D}_{N}v_{\ell}+(\not{D}_{N}\not{D}_{N}-\not{D}_{0 }\not{D}_{0})u_{\ell}+\not{D}_{0}\not{D}_{0}u_{\ell}\] \[= -f^{\mathrm{high}}_{\ell}+(\not{D}_{N}\not{D}_{N}-\not{D}_{0} \not{D}_{0})u_{\ell}-(\not{D}_{N}\not{D}_{N}-\not{D}_{0}\not{D}_{0})u_{\ell}-f^ {\mathrm{low}}_{\ell}\] \[= -\not{D}_{N}(\Psi^{N}_{\ell})\] And clearly \(\zeta^{N}_{\ell}+\xi^{N}_{\ell}\in\mathrm{Range}(\not{D}_{N})\), thus \(\zeta^{N}_{\ell}+\xi^{N}_{\ell}\) is the unique \(\mathrm{ker}(\not{D}_{N}|_{L^{2}})\)-perpendicular solution of \(\not{D}_{N}\psi=-\not{D}_{N}\Psi^{\mathrm{Euc}}_{\ell}\). Equivalently, \(\zeta^{N}_{\ell}+\xi^{N}_{\ell}:=-\not{D}_{N}P_{N}\not{D}_{N}(\Psi^{N}_{\ell})\) is the desired correction. To see that \(\zeta_{\ell}\) satisfies the desired decay properties, we apply Lemma 4.7 in the case that \(m=0\). The first hypothesis of that lemma is satisfied by construction, as \(f^{\mathrm{low}}_{\ell}=\mathfrak{d}^{\mathrm{low}}\Psi^{N}_{\ell}\) has Fourier modes in the desired range by definition. To verify the second hypothesis, note that since \(f^{\mathrm{low}}_{\ell}\in L^{2}\) we have \[\|f^{\mathrm{low}}_{\ell}\|_{r^{-1}H^{-1}_{z}(B_{n,\ell})}\leq\sup_{\|u\|=1} \langle u,f^{\mathrm{low}}_{\ell}\rangle_{L^{2}}\leq\sup_{\|u\|=1}\|u\|_{rH^{1}_{ z}}|rf^{\mathrm{low}}_{\ell}\|_{L^{2}(B_{nt})}\leq\|rf^{\mathrm{low}}_{\ell}\|_{L^{2}(B_{nt })}\] hence using the bounds \(|a_{ij}|\leq Cr\) and \(|\Gamma_{k}|\leq C\) for \(\mathfrak{d}^{\mathrm{low}}\), \[\int_{B_{n\ell}}r^{2}|f_{\ell}^{\rm low}|^{2}\ dV \leqslant C\frac{n^{2}}{|\ell|^{2}}R_{0}^{2}\int_{B_{n\ell}}|r\nabla_{j} \psi_{\ell}^{\rm Euc}|^{2}+|r\nabla_{t}\psi_{\ell}^{\rm Euc}|^{2}+|\psi_{\ell}^ {\rm Euc}|^{2}\ rdrd\theta dt \tag{4.13}\] \[\leqslant C\frac{n^{2}}{|\ell|^{2}}R_{0}^{2}\int_{B_{n\ell}}(1+r^{2}|\ell| ^{2}+1)\frac{e^{-2|\ell|r}}{r}|\ell|\ rdrd\theta dt\] (4.14) \[\leqslant C\frac{n^{5}}{|\ell|^{2}}R_{0}^{5}e^{-nR_{0}}\leqslant\frac{C^{ \prime}}{|\ell|^{2}}e^{-2n/c_{1}}. 
\tag{4.15}\] Thus we conclude from Lemma 4.7 that \[\|u_{\ell}\|_{rH_{e}^{1}(A_{n\ell})}\leqslant\frac{C_{0}}{|\ell|}{\rm Exp} \left(-\frac{n}{c_{0}}\right)\qquad\qquad\Rightarrow\qquad\qquad\|\zeta_{ \ell}^{N}\|_{L^{2}(A_{n\ell})}\leqslant\frac{C_{0}}{|\ell|}{\rm Exp}\left(- \frac{n}{c_{0}}\right)\] as desired. To finish, we show the desired bound on \(\xi_{\ell}^{N}\) as well. Since \(\not{D}_{N}:rH_{e}^{1}\to L^{2}\) and \(P_{N}:rH_{e}^{-1}\to rH_{e}^{1}\) are bounded, it suffices to show that \[\|f_{\ell}^{\rm high}-(\not{D}_{N}\not{D}_{N}-\not{D}_{0}\not{D}_{0})u_{\ell} \|_{r^{-1}H_{e}^{-1}}\leqslant\frac{C}{|\ell|^{2}}. \tag{4.16}\] Addressing the two terms on the left separately, one has \(\not{D}_{N}\not{D}_{N}-\not{D}_{0}\not{D}_{0}=\mathfrak{d}\not{D}_{0}+\not{D}_ {0}\mathfrak{d}+\mathfrak{d}^{2}\) which shows \[\|(\not{D}_{N}\not{D}_{N}-\not{D}_{0}\not{D}_{0})u_{\ell}\|_{r^{-1}H_{e}^{-1}} ^{2}\leqslant C\sum_{n}\underset{A_{n\ell}}{\sup}(r^{2}\|u_{\ell}\|_{rH_{e}^{1 }(A_{n\ell})}^{2})\leqslant\frac{C}{|\ell|^{4}}. \tag{4.17}\] For the first term, we use the fact that the coefficients \(a_{ij},\Gamma_{k}\) are smooth and \(\mathfrak{d}^{\rm high}\) have only Fourier modes \(p\) with \(|p|\geqslant|\ell|/4\), in conjunction with the Sobolev embedding for each fixed \((x,y)\in D_{r_{0}}\). For example, \[\|a^{\rm high}\|_{C^{0}(Y)}\leqslant\underset{x,y}{\sup}\,\|a^{\rm high}(t)\|_ {C^{0}(S^{1})}\leqslant C\sup_{x,y}\|a^{\rm high}(t)\|_{L^{1,2}(S^{1})} \leqslant\frac{C}{|\ell|^{2}}\sup_{x,y}\|a^{\rm high}(t)\|_{L^{3,2}(S^{1})} \leqslant\frac{Cr}{|\ell|^{2}} \tag{4.18}\] and likewise for \(\Gamma^{\rm high}\). Combining the bounds (4.17) and (4.18) shows (4.16), completing the proof. Combining with the proof of the isomorphism via (4.7) completes the proof of Lemma 4.5. A slight extension of the above shows the following stronger estimates, which will be used later to deliver estimates on the higher derivatives of \(\zeta_{\ell},\xi_{\ell}\). **Corollary 4.8**.: For every \(m\) there is an alternative decomposition \[\zeta_{\ell}^{N}+\xi_{\ell}^{N}=\zeta_{\ell}^{(m)}+\xi_{\ell}^{(m)}\] where * There are constants \(C_{m}\) and \(C_{m}^{\prime}\) such that \[\|\zeta_{\ell}^{(m)}\|_{L^{2}(A_{n\ell})}\leqslant\frac{C_{m}}{|\ell|}{\rm Exp }\left(-\frac{n}{c_{m}}\right)\qquad\quad\|(r\nabla_{z})^{\alpha}(\nabla_{t}) ^{\beta}\zeta_{\ell}^{(m)}\|_{L^{2}(A_{n\ell})}\leqslant\frac{C_{m}^{\prime}| \ell|^{\beta}}{|\ell|}{\rm Exp}\left(-\frac{n}{c_{m}^{\prime}}\right).\] (4.19) where \(A_{n\ell}\) is as in Proposition 4.2 and where \(\nabla_{z}\) is the covariant derivative in the normal directions. * The latter perturbation satisfies \[\|\xi_{\ell}^{(m)}\|_{L^{2}}\leqslant\frac{C_{m}}{|\ell|^{2+m}}.\qquad\qquad \qquad\|(r\nabla_{z})^{\alpha}(\nabla_{t})^{\beta}\xi_{\ell}^{(m)}\|_{L^{2}} \leqslant\frac{C_{m}^{\prime}|\ell|^{\beta}}{|\ell|^{2+m}}\] (4.20) Moreover, \(\zeta_{\ell}\) contains only Fourier modes \(e^{ipt}\) with \(\ell-\frac{|\ell|}{2}\leq p\leq\ell+\frac{|\ell|}{2}.\) The constants \(C_{m},c_{m}\) are independent of \(\ell\), and depend on up to the \(L^{m+3,2}\)-norm of the metric, and \(C^{\prime}_{m},c^{\prime}_{m}\) on up to the \(L^{m+|\alpha|+|\beta|+3,2}\)-norm. Proof.: For \(\alpha=\beta=0\), this follows from applying Lemma 4.7 inductively. 
Instead of solving for \(\xi_{\ell}\) with \(f^{\text{high}}_{\ell}-(\not{D}_{N}\not{D}_{N}-\not{D}_{0}\not{D}_{0})u_{\ell}\) on the right hand side, set \((f^{\text{low}}_{\ell})^{1}=-(\not{D}_{N}\not{D}_{N}-\not{D}_{0}\not{D}_{0})u_{\ell}\) and apply Lemma 4.7 again to the low Fourier modes to obtain a second correction \(\zeta^{1}_{\ell}\) and set \(\zeta^{1}_{\ell}=\zeta^{0}_{\ell}+\zeta^{1}_{\ell}\). Proceeding in this fashion, each iteration yields an additional power of \(|\ell|^{-1}\) in the new remainder. To control the range of Fourier modes, define the low modes instead by truncating at \(L_{0}=|\ell|/4m\). Using higher Sobolev norms in 4.18 allows the remainder after \(m\) iterations to be controlled by \(C_{m}|\ell|^{-(m+2)}\), after which gives the bound on \(\xi^{(m)}_{\ell}\). The higher derivative estimates follow from repeating the argument applying estimates for nested sequences of commutators \([r\nabla_{z},\not{D}_{N}]\) and \([\nabla_{t},\not{D}_{N}]\). Each application of \(\nabla_{t}\) adds a power of \(|\ell|\), but the normal \(\nabla^{\text{b}}\)-derivatives only constants. ### Fredholm Properties This subsection continues the proof of Propositions 4.2 and 4.3 by showing that pasting the model basis \(\Psi^{N}_{\ell}\) onto \(Y\) gives a Fredholm map onto \(\mathbf{Ob}(\mathcal{Z}_{0})\). We begin by defining this patching map; this map is a preliminary version of the map \(\mathrm{ob}\) from Proposition 4.2. Let \(\chi_{1}\) denote a cut-off function equal to \(1\) for \(r\leq r_{0}/4\) such that \(\mathrm{supp}(\chi_{1})\subseteq\{r\leq r_{0}/2\}\). Define \[M:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\oplus \mathbb{R}^{k} \rightarrow \mathbf{Ob}(\mathcal{Z}_{0}) \tag{4.21}\] \[(\phi_{\ell},c_{\alpha}) \mapsto \chi_{1}\Psi^{N}_{\ell}-\not{D}P\not{D}(\chi_{1}\Psi^{N}_{\ell})+ c_{\alpha}\Phi_{\alpha} \tag{4.22}\] where \(\alpha=1,...,k=\dim(\ker\not{D}|_{rH^{1}})\). Here \(P\) is again the solution operator on \(Y\) defined in (2.10). Additionally, we set \(\Psi^{N}_{0}:=\Psi^{\text{Euc}}_{0}\) to span a real \(2\)-dimensional subspace of the span of \((z^{-1/2},0)\) and \((0,\overline{z}^{-1/2})\), which is specified in the subsequent Section 4.3. **Lemma 4.9**.: The map \(M\) defined by (4.22) is Fredholm. Proof.: It suffices prove this ignoring the finite-dimensional span of \(\phi_{0}\) and \(\Phi_{k}\). With these modes discarded, Lemma 4.5 shows \(M\) may be viewed as a map \[M:\ker(\not{D}_{N}|_{L^{2}}) \rightarrow \mathbf{Ob}(\mathcal{Z}_{0})\] \[\Psi \mapsto \chi_{1}\Psi-\not{D}v_{\Psi}\qquad\qquad\text{where}\qquad\qquad v _{\Psi}:=P\not{D}(\chi_{1}\Psi).\] We define a pseudo-inverse \[M^{\dagger}:\mathbf{Ob}(\mathcal{Z}_{0}) \rightarrow \ker(\not{D}_{N}|_{L^{2}})\] \[\Phi \mapsto \chi_{1}\Phi-\not{D}_{N}u_{\Psi}\qquad\qquad\text{where}\qquad \qquad\quad u_{\Phi}:=P_{N}\not{D}_{N}(\chi_{1}\Phi).\] To prove the lemma, we check that \(M,M^{\dagger}\) are indeed pseudo-inverses so that \(M^{\dagger}M=Id+A_{1}\) and \(MM^{\dagger}=Id+A_{2}\) for compact operators \(A_{1},A_{2}\) from which Fredholmness follows. First, we note that standard elliptic theory implies the following: if \(K\Subset Y-\mathcal{Z}_{0}\) is compactly contained in the complement of \(\mathcal{Z}_{0}\), then the restriction \[R:\mathbf{Ob}(\mathcal{Z}_{0})\to rH^{1}_{e}(K) \tag{4.23}\] is compact. Indeed, since \(\not{D}\) is uniformly elliptic away from \(\mathcal{Z}_{0}\), this follows from standard elliptic bootstrapping and Rellich's Lemma. 
The equivalent statement holds on \(K_{N}\Subset N\), but compactness then also _a priori_ requires that \(K_{N}\) be bounded in the non-compact \(N\). A quick computation shows \[(MM^{\dagger}-I)\Phi = (\chi_{1}^{2}-1)\Phi-\chi_{1}\not{D}_{N}u_{\Phi}-\not{D}v_{M^{ \dagger}\Phi}. \tag{4.24}\] \[(M^{\dagger}M-I)\Psi = (\chi_{1}^{2}-1)\Psi-\chi_{1}\not{D}v_{\Psi}-\not{D}_{N}u_{M\Psi}. \tag{4.25}\] and we claim the terms on the right hand side are compact. For the first expression, \(\operatorname{supp}(\chi_{1}^{2}-1)\Subset Y-\mathcal{Z}_{0}\) hence compactness follows from what was said about the restriction map (4.23). Likewise, 4.23 implies that the map \(\Phi\mapsto u_{\Phi}\) is compact since it may be written as the composition \[u=P_{N}\circ d\chi_{1}.\circ R|_{\operatorname{supp}(d\chi_{1})}.\] Similarly, \(\Psi\mapsto v_{\Psi}\) is compact. Since the remaining terms on the right hand side of factor through these, we conclude that \(MM^{\dagger}-Id\) is compact. The only difference for \(M^{\dagger}M-Id\) is that \((\chi_{1}^{2}-1)\) is not compactly supported on \(Y_{N}\). Nevertheless, a standard diagonalization using the decay properties of \(\Psi_{\ell}^{N}=\Psi_{\ell}^{\operatorname{Euc}}+\zeta_{\ell}^{N}+\zeta_{ \ell}^{N}\) shows that it is compact on elements of \(\mathbf{Ob}_{N}\) (choose subsequences on that simultaneously converge on \(r\leq n\) and on the span of \(|\ell|\leq n\)). ### The Index via Concentration In this subsection we calculate the Fredholm index of the map \(M:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\oplus\mathbb{R}^{n} \to\mathbf{Ob}(\mathcal{Z}_{0})\). This is done by introducing a family of perturbations depending on \(\mu\in\mathbb{R}\)1 Footnote 1: This approach was suggested to the author by Clifford Taubes. \[\not{D}_{\mu}:=\not{D}+\mu J\] where \(J\) is a complex anti-linear map with \(J^{2}=-Id\). As \(\mu\to\infty\) the cokernel elements become increasingly concentrated near \(\mathcal{Z}_{0}\), and for \(\mu\) sufficiently large we may conclude that the \(\mu\)-version of \(M_{\mu}\) is an isomorphism. There are two subtleties in this. 1) one must be careful to ensure the family \(M_{\mu}\) can be viewed on a fixed Banach space, as \(\ker(\not{D}_{\mu}|_{rH^{1}_{\mu}})\) may jump in dimension as \(\mu\) varies. 2) The role of the \(\ell=0\) modes for the index must be clarified. On \(S^{1}\times\mathbb{R}^{2}\), the \(\ell^{th}\) Fourier mode \(\ell\) has two solutions linearly independent over \(\mathbb{C}\), one _decaying_ exponentially like \(e^{-|\ell|r}\) (this being \(\Psi_{\ell}^{\operatorname{Euc}}\)), and another _growing_ exponentially like \(e^{+|\ell|r}\) which is not \(L^{2}\) hence not part of \(\ker(\not{D}_{0}|_{L^{2}})\). For the \(\ell=0\) mode the situation is different: there are four (real) solutions given by the complex span of \((1/\sqrt{z},0)\) and \((0,1/\sqrt{z})\)_neither_ of which are \(L^{2}\). It is not at first clear which of these should contribute to the index, though we will show that a particular 2-dimensional subspace contributes. The point is that the perturbation \(\mu\to\infty\) breaks a degeneracy between the exponentially growing and exponentially decaying modes for \(\ell=0\) and distinguishes a 2-dimensional subspace of exponentially decaying ones. **Lemma 4.10**.: The Fredholm map \[M:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\oplus\mathbb{R}^{k}\to \mathbf{Ob}(\mathcal{Z}_{0})\] from Lemma 4.9 has Index 0. 
Proof.: We can recast the Fredholm problem on fixed Banach spaces by considering the operator \[\overline{\mathcal{Q}}_{0}:=\begin{pmatrix}M&0\\ M&\overline{D}\end{pmatrix}:\begin{matrix}L^{2}(\mathcal{Z}_{0};\mathcal{S}_{ \mathcal{Z}_{0}})\oplus\mathbb{R}^{k}&\mathbf{Ob}(\mathcal{Z}_{0})\\ \oplus&\longrightarrow&\oplus\\ rH^{1}_{e}&\operatorname{Range}(\not{D})\oplus\mathbb{R}^{k}\end{matrix}=L^{2}(Y ;S_{0})\oplus\mathbb{R}^{k}\] where \(\overline{\not{D}}=(\not{D},\sum_{\alpha}\langle\smile,\Phi_{\alpha}\rangle \Phi_{\alpha})\). By construction, \(\overline{\not{D}}\) is Fredholm of Index 0, hence \(\mathcal{Q}\) is Fredholm, and it suffices to show that \(\mathcal{Q}\) has Index 0. Notice also that if we were to replace \(P\) in the definition of \(M\) with another parametrix \(P^{\prime}\) for \(\not{D}\not{D}:rH^{1}_{e}\to rH^{-1}_{e}\) then the resulting \[\mathcal{Q}_{0}:=M^{\prime}\oplus\overline{\not{D}} \tag{4.26}\] differs by compact operators, hence is Fredholm of the same index as \(\overline{\mathcal{Q}}_{0}\) (though it is no longer necessarily block diagonal). Now set \(\not{D}_{\mu}:=\not{D}+\mu J\) for \(\mu\geq 0\). Since the Weitzenbock formula becomes \[\not{D}_{\mu}^{\star}\not{D}_{\mu}=(\not{D}-\mu J)(\not{D}+\mu J)=\not{D}^{ \star}\not{D}+\mu^{2}, \tag{4.27}\] the proofs of Proposition 2.4 and Lemma 2.7 apply to show that \(\not{D}_{\mu}:rH^{1}_{e}\to L^{2}\) has finite-dimensional kernel and closed range, and \(\not{D}_{\mu}^{\star}\not{D}_{\mu}:rH^{1}_{e}\to rH^{-1}_{e}\) is Fredholm. Define a pseudo-inverse by \[\overline{P}_{\mu}(f)=u\qquad\qquad\text{where}\qquad\qquad\not{D}_{\mu}^{ \star}\not{D}_{\mu}u=f\mod\ker(\not{D}_{\mu}|_{rH^{1}_{e}})\] is the unique \(\ker(\not{D}_{\mu}|_{rH^{1}_{e}})\)-perpendicular solution. The proofs of Lemmas 4.5 and 4.9 apply equally well to show that \[\overline{Q}_{\mu}=\begin{array}{c}L^{2}(\mathcal{Z}_{0};\mathcal{S}_{ \mathcal{Z}_{0}})\oplus\mathbb{R}^{k}\\ \oplus\\ rH^{1}_{e}\end{array}\longrightarrow\ L^{2}(Y;S)\oplus\mathbb{R}^{k}\] is a Fredholm operator for each \(\mu\). It is _not_, however, necessarily continuous in \(\mu\). If there are jumps in the dimension of \(\ker(\not{D}_{\mu}^{\star}\not{D}_{\mu}|_{rH^{1}_{e}})\), then \(\overline{P}_{\mu}\) need not be a continuous family of parametrices. Instead, let \(P_{\mu}\) be a continuous family of parametrices for \(\not{D}_{\mu}^{\star}\not{D}_{\mu}\). Using this to form \(\mathcal{Q}_{\mu}\) for each \(\mu\geq 0\) we obtain a now continuous family of Fredholm operators. As noted in (4.26) the operators \(\mathcal{Q}_{\mu}\) are connected by a continuous path of Fredholm operators to \(\overline{\mathcal{Q}}_{0}\), hence have the same index. \(Q_{\mu}\) is continuous family of Fredholm operators although the summand \(M_{\mu}\) is not since \(\ker(\not{D}_{\mu}|_{rH^{1}})\) may jump in dimension as \(\mu\) varies. Given the above, it suffices to calculate \(\operatorname{Ind}(\mathcal{Q}_{\mu})\) for \(\mu>>0\). Since the Weitzenbock formula 4.27 implies that \(\ker(\not{D}_{\mu}|_{rH^{1}_{e}})=\emptyset\) for \(\mu>>0\), we can arrange by a further homotopy of parametrices that \(\mathcal{Q}_{\mu}\) is formed using \(\overline{P}^{\mu}=(\not{D}_{\mu}^{\star}\not{D}_{\mu})^{-1}\) once \(\mu\) is large. Removing the \(\mathbb{R}^{k}\) summands form both the domain and range does not disrupt Fredholmness nor alter the index, so these may be ignored. 
Furthermore, there is new splitting \(L^{2}=\ker(\not{D}_{\mu}^{\star}|_{L^{2}})\oplus\operatorname{Range}(\not{D}_ {\mu}|_{rH^{1}_{e}})\) in which one may now write \[\mathcal{Q}^{\mu}=\begin{pmatrix}M_{\mu}&0\\ 0&\not{D}_{\mu}\end{pmatrix}:\begin{array}{c}L^{2}(\mathcal{Z}_{0}; \mathcal{S}_{\mathcal{Z}_{0}})\\ \oplus\\ rH^{1}_{e}\end{array}\longrightarrow\begin{array}{c}\ker(\not{D}_{\mu}^{ \star}|_{L^{2}})\\ \oplus\\ \operatorname{Range}(\not{D}_{\mu})\end{array}\] where \(M_{\mu}(\Psi^{N}_{\ell})=\chi_{1}\Psi^{N}_{\ell}-\not{D}_{\mu}\overline{P}_ {\mu}\not{D}_{\mu}(\chi_{1}\Psi^{N}_{\ell})\) as before. Since \(\not{D}_{\mu}\) is injective, hence an isomorphism onto its range, it suffices now to show that \(M_{\mu}\) is an isomorphism for \(\mu>>0\). Finally, since \(\not{D}_{\mu}\) is injective once \(\mu\) is sufficiently large independent of small variation in the metric, we may arrange by a further homotopy through Fredholm operators that the metric is a product for \(r\leq r_{0}\). The proof is then completed by the subsequent two lemmas. The next lemma shows that the perturbation \(\mu J\) means the \(L^{2}\)-kernel enjoys an additional factor of \(e^{-\mu r}\) in the exponential decay compared to the \(\mu=0\) case, thus it is concentrated more strongly near \(\mathcal{Z}_{0}\). The proof is an elementary exercise is solving ODEs by diagonalizing matrices since the Fourier modes decouple. **Lemma 4.11**.: For \((N,g_{\mathrm{prod}})\), the perturbed Dirac operator \[\not{D}_{N,\mu}:rH^{1}_{e}\longrightarrow L^{2}\] is injective, and its adjoint's extension to \(L^{2}\) has a kernel \(\ker(\not{D}_{N,\mu}^{\star}|_{L^{2}})\) characterized by the following. * There is a real 2-dimensional subspace of \(\ker(\not{D}_{N,\mu}^{\star}|_{L^{2}})\) in the \(\ell=0\) modes. It is given by the span over \(\mathbb{R}\) of \[\psi^{+}_{0}=\begin{pmatrix}\frac{e^{-\mu r}}{\sqrt{z}}\\ 0\end{pmatrix}\qquad\qquad\psi^{-}_{0}=\begin{pmatrix}0\\ \frac{e^{-\mu r}}{\sqrt{z}}\end{pmatrix}\] * There is a real \(4\)-dimensional subspace of \(\ker(\not{D}^{\star}_{N,\mu}|_{L^{2}})\) in the \(\pm\ell\) modes spanned over \(\mathbb{R}\) by spinors \[\psi^{(j)}_{|\ell|}=\frac{e^{\pm i\theta/2}}{r^{1/2}}e^{-\sqrt{\ell^{2}+\mu^{2}} }e^{\pm i\ell t}v^{(j)}\] where \(v^{(j)}\in\mathbb{R}^{4}\) for \(j=1,\ldots,4\). Given the above we can now specify a particular definition of \(M_{\mu}\) by specifying a real \(2\)-dimensional subspace of the \(\ell=0\) modes: take the real span of \(\phi_{0}\mapsto(z^{-1/2},0)\) and \(i\phi_{0}\mapsto(0,\overline{z}^{-1/2})\). This is the two-dimensional subspace of the \(\ell=0\) modes which decays exponentially for \(\mu>0\) alluded to at the beginning of the subsection. **Lemma 4.12**.: For \(\mu>>0\), \[M_{\mu}:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\to\ker(\not{D}^ {\star}_{\mu}|_{L^{2}})\] is an isomorphism. Proof.: By the previous Lemma 4.11, one has \(L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})=\ker(\not{D}^{\star}_{N, \mu}|_{L^{2}})\). 
As before, \(M_{\mu}:\ker(\not{D}^{\star}_{N,\mu}|_{L^{2}})\to\ker(\not{D}^{\star}_{\mu}|_{ L^{2}})\) and \(M^{\dagger}_{\mu}:\ker(\not{D}^{\star}_{\mu}|_{L^{2}})\to\ker(\not{D}^{\star}_{N,\mu}|_{L^{2}})\) are given by \[M_{\mu}(\Psi) = \chi_{1}\Psi-\not{D}_{\mu}v_{\Psi}\qquad\qquad\text{where}\qquad \qquad v_{\Psi}:=P_{\mu}\not{D}^{\star}_{\mu}(\chi_{1}\Psi).\] \[M^{\dagger}_{\mu}(\Phi) = \chi_{1}\Phi-\not{D}_{N,\mu}u_{\Psi}\qquad\qquad\text{where}\qquad \qquad u_{\Phi}:=P_{N,\mu}\not{D}^{\star}_{N,\mu}(\chi_{1}\Phi).\] Here, \(P_{\mu},P_{N,\mu}\) are the true inverses. By the explicit forms in Lemma 4.11, every \(\Psi\in\ker(\not{D}^{\star}_{\mu}|_{L^{2}})\) on \(N\) satisfies \[\|\Psi\|_{L^{2}(\operatorname{supp}(d\chi_{1}))}\leq Ce^{-\mu r_{0}/c_{1}}\| \Psi\|_{L^{2}} \tag{4.28}\] on \(\operatorname{supp}(d\chi_{1})\). It then follows from the expression 4.25 that \[\|(M^{\dagger}_{\mu}M_{\mu}-I)\Psi\|_{L^{2}}\leq Ce^{-\mu r_{0}/c_{1}}\|\Psi\| _{L^{2}},\] hence for \(\mu\) sufficiently large, \(M^{\dagger}_{\mu}M_{\mu}\) is an isomorphism thus \(M_{\mu}\) is injective. Surjectivity follows by the same argument using (4.24) in place of (4.25) where (4.28) is replaced by the bound \[\|\Phi\|_{L^{2}(\operatorname{supp}(d\chi_{1}))}\leq\frac{C}{\mu}\|\Phi\|_{L^{ 2}(Y)} \tag{4.29}\] for \(\Phi\in\ker(\not{D}^{\star}_{\mu})\) on \(Y\). To prove (4.29), let \(\rho\) denote a cut-off function supported equal to \(1\) on \(Y-N_{r_{0}/8}(\mathcal{Z}_{0})\) so that \(\rho=1\) on \(\operatorname{supp}(\chi_{1})\). Integrating by parts shows \[\int_{Y\setminus\mathcal{Z}_{0}}\rho\langle J\Phi,\not{D}\Phi\rangle = \int_{Y\setminus\mathcal{Z}_{0}}\rho\langle J\not{D}\Phi,\Phi \rangle+\langle d\rho.J\Phi,\Phi\rangle\;dV\] \[= -\int_{Y\setminus\mathcal{Z}_{0}}\rho\langle\not{D}\Phi,J\Phi \rangle+\int_{Y\setminus\mathcal{Z}_{0}}\langle d\rho.J\Phi,\Phi\rangle\;dV\] since \(\not{D}J=J\not{D}\) and \(J^{\dagger}=-J\). Consequently, since \(d\rho\) is bounded by a universal constant, \[2\mathrm{Re}\langle\rho J\Phi,\not{D}\Phi\rangle_{L^{2}}\leq C\|\Phi\|_{L^{2}}. \tag{4.30}\] Then, if \(\Phi\in\ker(\not{D}^{\star}_{\mu})\), \[0=\langle\rho J\Phi,(\not{D}-\mu J)\Phi\rangle_{L^{2}} = -\mu\langle\rho\Phi,\Phi\rangle_{L^{2}}+\langle\rho J\Phi,\not{D} \Phi\rangle_{L^{2}}\quad\stackrel{{\ref{eq:2.1}}}{{\Rightarrow}} \quad\mu\|\Phi\|_{L^{2}(\rho=1)}\leq C\|\Phi\|_{L^{2}(Y)}.\] The latter gives (4.29) which implies \(M_{\mu}\) is surjective for \(\mu\) sufficiently large. This completes the lemma and thus the proof of Lemma 4.10. ### The Obstruction Map In this subsection we complete the proofs of Propositions 4.2 and 4.3 by constructing a map \(\operatorname{\mathrm{ob}}\) satisfying the asserted properties. The map \(\operatorname{\mathrm{ob}}\) is constructed from the map \(M\) defined by (4.21)-(4.22). To review, we have now shown via Lemma 4.10 that \(M\) is Fredholm of Index \(0\). It is convenient to amend \(M\) by making the images of the two summands orthogonal. Thus we revise \(M\) to be given by \[M:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\oplus \mathbb{R}^{k} \rightarrow \operatorname{\mathbf{Ob}}(\mathcal{Z}_{0}) \tag{4.31}\] \[(\phi_{\ell},c_{\alpha}) \mapsto \chi_{1}\Psi_{\ell}^{N}-\not{D}P\not{D}(\chi_{1}\Psi_{\ell}^{N}) -\pi_{1}(\chi_{1}\Psi_{\ell}^{N})+c_{\alpha}\Phi_{\alpha}. \tag{4.32}\] where \(\pi_{1}\) is the \(L^{2}\)-orthogonal projection onto \(\ker(\not{D}|_{rH^{1}})\). Since \(\pi_{1}\) has finite rank thus is compact, this does not disrupt the Fredholmness or index. 
Let \(L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})_{L_{0}}\) denote the subspace spanned by \(\phi_{\ell}\) for \(|\ell|\geq L_{0}\). **Lemma 4.13**.: For \(L_{0}\) sufficiently large, the restricted map \[M|_{L_{0}}:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})_{N}\oplus \mathbb{R}\rightarrow\operatorname{\mathbf{Ob}}(\mathcal{Z}_{0})\] is injective. Given Lemma 4.13, \(\operatorname{\mathrm{ind}}(M)=0\) means that the (complex) codimension of \(\operatorname{\mathrm{Im}}(M|_{L_{0}})\subseteq\operatorname{\mathbf{Ob}}\) is \(2L_{0}+1\), and we can make the following definition: **Definition 4.14**.: The **Obstruction Basis** is defined as \[\Psi_{\ell}:=\begin{cases}\chi_{1}\Psi_{\ell}^{N}-\not{D}P\not{D}(\chi_{1} \Psi_{\ell}^{N})-\pi_{1}(\chi_{1}\Psi_{\ell}^{N})&|\ell|>L_{0}\\ \Psi_{\ell}&|\ell|\leq L_{0}\end{cases}\] where \(\Psi_{\ell}\) for \(|\ell|\leq L_{0}\) is chosen to be an orthonormal basis of the orthogonal complement of \(\operatorname{\mathrm{Im}}(M|_{L_{0}})\subseteq\operatorname{\mathbf{Ob}}( \mathcal{Z}_{0})\). It then follows that the amended map \[M^{\prime}:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}} )\oplus\mathbb{R}^{k} \rightarrow \operatorname{\mathbf{Ob}}(\mathcal{Z}_{0})\] \[(\phi_{\ell},c_{\alpha}) \mapsto \Psi_{\ell}+c_{\alpha}\Phi_{\alpha}\] is an isomorphism. Additionally, by in the upcoming proof of Lemma 4.13, we will see that \(\Psi_{\ell}\) admits a decomposition \[\Psi_{\ell}=\chi_{1}\Psi_{\ell}^{\mathrm{Euc}}+\zeta_{\ell}+\xi_{\ell} \tag{4.33}\] satisfying the desired properties. Proof of Lemma 4.13.: Clearly, if the \(\Phi_{\alpha}\)-component is \(0\) then \(c_{\alpha}=0\) since the images of the summands are orthogonal. We show that if \(\phi\in L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})_{L_{0}}\) then \(M(\phi)=0\) implies \(\phi=0\). Since \(\phi_{\ell}\rightarrow\chi_{1}\Psi_{\ell}^{N}\) extends to a bounded linear isomorphism (with bounded inverse) \(L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\rightarrow\ker(\not{D}_ {N}|_{L^{2}})\) by Lemma 4.5, it suffices to assume that \(\phi=\sum c_{\ell}\phi_{\ell}\) is such that \(\Psi^{N}:=\sum c_{\ell}\Psi_{\ell}^{N}\) satisfies \(|\Psi^{N}|_{L^{2}}=1\). Since \(\Phi_{\alpha}\) is polyhomogenous by Proposition 3.8, the Fourier mode restrictions of \(\Psi_{\ell}^{\mathrm{Euc}}+\zeta_{\ell}^{N}\) and the bound on \(\xi_{\ell}^{N}\) in Lemma 4.5 and Corollary 4.8 \[\langle\chi_{1}\Psi^{N},\Phi_{\alpha}\rangle\leq CL_{0}^{-m} \tag{4.34}\] for any \(m>0\). We show a similar bound on the term \(\not{D}P\not{D}(\chi_{1}\Psi^{N})\); in fact, since \(\not{D},P\) are bounded, it suffices to show it for \(\not{D}(\chi_{1}\Psi^{N})\). For this, one has \[\|\not{D}(\chi_{1}\Psi^{N})\|_{rH_{\varepsilon}^{-1}} = \|d\chi_{1}.\Psi^{N}|_{\mathrm{supp}(d\chi_{1})}\|_{rH_{ \varepsilon}^{-1}}\] \[\leq \|d\chi_{1}.(\Psi^{\mathrm{Euc}}+\zeta^{N})|_{\mathrm{supp}(d\chi_ {1})}\|_{L^{2}}+\|d\chi_{1}.\xi^{N}\|_{L^{2}}\leq C\mathrm{Exp}(-\tfrac{n}{c_{1 }})+CL_{0}^{-m-2}.\] Combining this with 4.34 we conclude \[\|\chi_{1}\Psi^{N}-\not{D}P\not{D}(\chi_{1}\Psi^{N})-\pi_{1}(\chi_{1 }\Psi^{N})\|_{L^{2}} \geq \|\chi_{1}\Psi^{N}\|_{L^{2}}-CL_{0}^{-m}-CL_{0}^{-m+2}\] \[\geq \tfrac{1}{2}\big{\|}\sum_{\ell}c_{\ell}\phi_{\ell}\big{\|}_{L^{2}}\] for, say \(m=4\) and \(L_{0}\) sufficiently large. Here we have used that \(\|(1-\chi_{1})\Psi^{N}\|_{L^{2}}=O(\operatorname{Exp}(-L_{0}))\). It follows that \(M|_{L_{0}}\) is injective. 
Moreover, the bounds above show that the correction terms satisfy \(\|\not{D}P\not{D}(\chi_{1}\Psi^{N}_{\ell})-\pi_{1}(\chi_{1}\Psi^{N}_{\ell})\|_ {L^{2}}\leq C_{m}|\ell|^{-m}\) for any \(m\), hence they can be absorbed into \(\xi_{\ell}\) to yield the decomposition (4.33) without disrupting the bounds of Lemma 4.5 and Corollary 4.8. To complete the proofs of Propositions 4.2 and 4.3 we construct the map \(\operatorname{ob}\) from \(M^{\prime}\). This is necessary because \(M\) does not satisfy the property 4.3 that the projection to \(\operatorname{\mathbf{Ob}}(\mathcal{Z}_{0})\) is easily calculated from the sequence of inner products. Indeed, since the basis \(\Psi_{\ell}\) is not necessarily orthonormal, the coefficients of \(\Psi=c_{\ell}\psi_{\ell}\) are not calculated by the \(L^{2}\)-inner product, i.e. in general \[(M)^{-1}(\Pi^{\operatorname{Ob}}\psi)\neq\left(\sum_{\ell\in\mathbb{Z}} \langle\psi,\Psi_{\ell}\rangle\mathbb{C}\phi_{\ell}\,\ \sum_{\alpha}\langle\psi,\Phi_{\alpha}\rangle\Phi_{ \alpha}\right).\] Rather frustratingly, one cannot orthonormalize and retain the decay properties of Proposition 4.3 (disrupting these would lead to certain error terms being unbounded later, so the decay properties are essential). To amend this without orthonormalizing, we precompose \(M^{\prime}\) with a change of basis \(U:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\to L^{2}(\mathcal{Z}_{ 0};\mathcal{S}_{\mathcal{Z}_{0}})\). Specifically, let \(U\) be defined by the linear extension of \[U(c_{k}\phi_{k}):=\sum_{\ell\in\mathbb{Z}}\langle M^{\prime}(c_{k}\phi_{k}), \Psi_{\ell}\rangle\ \phi_{\ell}=\sum_{\ell\in\mathbb{Z}}\langle c_{k}\Psi_{k},\Psi_{\ell}\rangle \ \phi_{\ell}. \tag{4.35}\] **Lemma 4.15**.: For \(L_{0}\) sufficiently large, \(U:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\to L^{2}(\mathcal{Z}_{ 0};\mathcal{S}_{\mathcal{Z}_{0}})\) is an isomorphism, and \[\operatorname{ob}:=M^{\prime}\circ U^{-1}\] satisfies the properties (4.2) and (4.3). Proof.: (4.2) and (4.3) are immediate from the first statement. Indeed, \(\operatorname{ob}\) is clearly an isomorphism if \(U\) is since \(M^{\prime}\) is by construction. Additionally, using (4.35), one has that for a spinor \(\psi\in L^{2}\) \[\operatorname{ob}^{-1}(\Pi^{\operatorname{Ob}}\psi)=UU^{-1}( \operatorname{ob}^{-1}\Pi^{\operatorname{Ob}}\psi) = \sum_{\ell\in\mathbb{Z}}\langle M^{\prime}U^{-1}(\operatorname{ob}^ {-1}\Pi^{\operatorname{Ob}}(\psi)),\Psi_{\ell}\rangle\ \phi_{\ell}\] \[= \sum_{\ell\in\mathbb{Z}}\langle\operatorname{ob}(\operatorname{ob }^{-1}\Pi^{\operatorname{Ob}}(\psi)),\Psi_{\ell}\rangle\phi_{\ell}=\sum_{\ell \in\mathbb{Z}}\langle\psi,\Psi_{\ell}\rangle\ \phi_{\ell}\] as desired. To show \(U\) is an isomorphism we show that \[U=Id+K\] where \(\|K\|_{L^{2}\to L^{2}}\leq CL_{0}^{-1/8}\). To see this, write \(\Psi_{\ell}=\chi_{1}\Psi_{\ell}^{\operatorname{Euc}}+\Xi_{\ell}\) where \(\Xi_{\ell}=\zeta_{\ell}+\xi_{\ell}\). We claim the following four bounds hold where all inner products are the hermitian inner product on \(L^{2}\): 1. \(\langle\Psi_{k},\Psi_{\ell}\rangle=\delta_{k\ell}\) unless both \(|k|>L_{0}\) and \(|\ell|>L_{0}\). 2. \(\langle\Xi_{k},\Xi_{\ell}\rangle\leq\frac{C}{|k||\ell|}\). 3. \(\langle\Xi_{k},\chi_{1}\Psi_{\ell}^{\operatorname{Euc}}\rangle\leq\frac{C}{|k ||\ell|}\). 4. 
\(\langle\chi_{1}\Psi_{k}^{\operatorname{Euc}},\chi_{1}\Psi_{\ell}^{\operatorname {Euc}}\rangle=\delta_{k\ell}+a_{k\ell}\) where \(|a_{k\ell}|\leq\frac{C}{|k|^{1/2}|\ell|^{1/2}}\) and if \(|k-\ell|\geq|k|^{1/4}|\ell|^{1/4}\) then \(|a_{k\ell}|\leq\frac{C}{|k|^{2}|\ell|^{2}}\). (i) holds by construction by Definition 4.14. And (ii) is immediate from the bounds on \(\zeta_{\ell}+\xi_{\ell}\) and Cauchy-Schwartz in Lemma (4.13). For (iii), recall from Definition 4.14 that \(\Xi_{\ell}=\Xi_{\ell}^{\perp}+\pi_{k}\) where \(\Xi_{\ell}^{\perp}\perp\mathbf{Ob}(\mathcal{Z}_{0})\) is the (negative of the) projection of \(\chi_{1}\Psi_{\ell}^{\mathrm{Euc}}\) to the orthogonal complement, and \(\pi_{k}=\sum_{\alpha}\langle\chi_{1}\Psi_{k}^{\mathrm{Euc}},\Phi_{\alpha} \rangle\Phi_{\alpha}\). Thus since \(\langle\Xi_{k}^{\perp},\Psi_{\ell}\rangle=0\), \[\langle\Xi_{k},\chi_{1}\Psi_{\ell}^{\mathrm{Euc}}\rangle=\langle\Xi_{k}^{\perp },\chi_{1}\Psi_{\ell}^{\mathrm{Euc}}\rangle-\langle\pi_{k},\chi_{1}\Psi_{\ell }^{\mathrm{Euc}}\rangle=\langle\Xi_{k}^{\perp},\Xi_{\ell}\rangle+\langle\pi_{k },\chi_{1}\Psi_{\ell}^{\mathrm{Euc}}\rangle\] so (iii) follows from (ii) and polyhomogeneity of \(\Phi_{\alpha}\). Finally, for (iv) the integral may be written explicitly as \[(1+\mathrm{sgn}(k)\mathrm{sgn}(\ell))\int_{N_{r_{0}}(\mathcal{Z}_{0})}\chi_{1 }^{2}|k|^{\frac{1}{2}}|\ell|^{\frac{1}{2}}\frac{e^{-(|\ell|+|k|)r}}{r}e^{i(k- \ell)t}|g|^{1/2}d\mathrm{vol}\] were \(|g|^{1/2}=1+O(r)\) the the volume form in normal coordinates. For \(|g|^{1/2}=1\) the integral is (exponentially close to) \(\delta_{k\ell}\). Integrating the \(O(r)\) term results in the first bound. Since the metric is smooth, the \(e^{i(k-\ell)t}\) Fourier mode is bounded by \(|k-\ell|^{m}\) for \(m\) large. Thus if \(|k-\ell|\geq|k|^{1/4}|\ell|^{1/4}\) the stronger bound follows. With (i)-(iv) established, we calculate the \(L^{2}\)-norm of \(K\) on \(c(t)=\sum_{k}c_{k}\phi_{k}\), \[\left\|(Kc(t)\right\|_{L^{2}}^{2} = \sum_{|\ell|\geq L_{0}}\Big{|}\sum_{|k|\geq L_{0}}c_{k}a_{k\ell}+c_ {k}(\langle\chi_{1}\Psi_{k}^{\mathrm{Euc}},\Xi_{\ell}\rangle+\langle\Xi_{k}, \chi_{1}\Phi_{\ell}^{\mathrm{Euc}}\rangle+\langle\Xi_{k},\Xi_{\ell}\rangle) \Big{|}^{2} \tag{4.36}\] \[\leq C\|c(t)\|_{L^{2}}\sum_{|\ell|\geq L_{0}}\Big{(}\sum_{|k|\geq L_{0} }|a_{k\ell}|^{2}+\sum_{|k|\geq L_{0}}\frac{1}{|k|^{2}|\ell|^{2}}\Big{)} \tag{4.37}\] where we have used Cauchy-Schwartz and (i)-(iii). The second term is easily summable, with sum bounded by \(\frac{1}{L_{0}}\). Next, we split the first sum into the two cases of (iv): \[\sum_{|\ell|\geq L_{0}}\Big{(}\tfrac{1}{|\ell|}\sum_{|k-\ell|\leq|k\ell|^{1/4}} \tfrac{1}{|k|}+\sum_{|k-\ell|\geq|k\ell|^{1/4}}\tfrac{1}{|k|^{4}|\ell|^{4}} \Big{)} \tag{4.38}\] Again the second sum is summable and bounded by \(\frac{1}{L_{0}}\). For the first, observe that for each \(|\ell|\) there are at most \(|k|^{1/4}|\ell|^{1/4}\) non-zero \(|k|\), and \(|k-\ell|\leq|k|^{1/4}|\ell|^{1/4}\) implies \(|k|\geq|\ell|/2\). Hence the first sum is bounded by \[\sum_{|\ell|\geq L_{0}}\frac{1}{|\ell|}\sup_{|k-\ell|\leq|k\ell|^{1/4}}|k|^{1/4 }|\ell|^{1/4}\frac{1}{|k|}\leq\sum_{|\ell|\geq L_{0}}\frac{1}{|\ell|^{3/2}} \leq\frac{C}{L_{0}^{1/4}}. \tag{4.39}\] It follows that \(\|K\|_{L^{2}\to L^{2}}\leq CL_{0}^{-1/8}\) hence is an isomorphism after possibly increasing \(L_{0}\). This completes the proof of Lemma 4.15, thus the proofs of Propositions 4.2 and 4.3. 
To conclude this subsection, we briefly note the following higher-regularity extension of the previous lemma: **Lemma 4.16**.: The map \(U\) defined by 4.35 restricts to an isomorphism \[U:L^{m,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\to L^{m,2}(\mathcal{Z} _{0};\mathcal{S}_{\mathcal{Z}_{0}})\] for every \(m>0\). Proof.: As in the proof of the previous Lemma 4.15, write \(U=Id+K\). It suffices to show that \(K:L^{m,2}\to L^{m+1/8,2}\) is bounded, hence \(K\) is a smoothing operator of order \(\frac{1}{8}\). Knowing this, the lemma follows from the "elliptic estimate" \[\|\phi\|_{m}\leq C_{m}\ (\|U\phi\|_{m}+\|\phi\|_{m-1/8}) \tag{4.40}\] derived by writing \(Id=U-K\) and using the triangle inequality and the fact that \(U:L^{2}\to L^{2}\) is an isomorphism. To show that \(K:L^{m,2}\to L^{m+1/8,2}\) is bounded, first observe that the \(m=0\) case follows from (4.39), where we instead use the summability of \(\frac{1}{|\ell|^{5/4}}\) and add the weighting of \(|\ell|^{1/4}\) to the overall sum. The \(m>0\) case follows repeating the proof using the bounds \[\frac{|\ell|^{2m+2\delta}|a_{k\ell}|^{2}}{|k|^{2m}}\leqslant C_{m}|\ell|^{2 \delta}|a_{k\ell}|^{2}\qquad\qquad\qquad\frac{|\ell|^{2m+2\delta}|b_{k\ell}|^{ 2}}{|k|^{2m}}\leqslant C_{m}|\ell|^{2\delta}|b_{k\ell}|^{2} \tag{4.41}\] for \(a_{k\ell}\) as before and \(b_{k\ell}\) any of the latter inner products in (4.36), and using Cauchy-Schwartz with the grouping \((\frac{a_{k\ell}}{|k|^{m}})(c_{k}|k|^{m})\). The bounds (4.41) follow easily from the Fourier mode restriction on \(\zeta_{\ell}\) and integration by parts using the higher-order bounds of Corollary 4.8. ### The Higher Regularity Obstruction This subsection refines Propositions 4.2 and 4.3 to cover the cases of higher regularity. The Dirac operator \[\not{D}:H^{m,1}_{\mathrm{b},e}(Y-\mathcal{Z}_{0};S)\to H^{m}_{\mathrm{b}}(Y- \mathcal{Z}_{0};S)\] has infinite-dimensional cokernel equal to \(\mathbf{Ob}\cap H^{m}_{\mathrm{b}}\) by Corollary 2.12. It is not _a priori_ clear that this cokernel coincides with the natural restriction \(\mathbf{Ob}^{m}:=\mathrm{Im}(\mathrm{ob}|_{L^{m,2}(\mathcal{Z}_{0};\mathcal{Z }_{0})})\). The next lemma asserts that this is indeed the case. **Lemma 4.17**.: There is equality \[\mathbf{Ob}^{m}=\mathbf{Ob}\cap H^{m}_{\mathrm{b}}\] as subspaces \(H^{m}_{\mathrm{b}}(Y-\mathcal{Z}_{0};S_{0})\). In particular, \(\mathrm{ob}|_{L^{m,2}}\) restricts to an isomorphism making the following diagram commute. Proof.: Lemma 4.16 shows that there are equivalences of norms \[L^{m,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\ \stackrel{{ U}}{{\sim}}\ \ L^{m,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\] It is therefore enough to show that \[\sum_{\ell}c_{\ell}\Psi_{\ell}\in H^{m}_{\mathrm{b}}\qquad\Leftrightarrow \qquad\sum_{\ell}|c_{\ell}||\ell|^{2m}<\infty.\] The right hand side equivalent to the \(H^{m}_{\mathrm{b}}\)-norm of \(\sum_{\ell}\Psi^{\mathrm{Euc}}_{\ell}\), and the statement then follows from the fact that the projection operator \(\not{D}P\not{D}:H^{m}_{\mathrm{b}}\to H^{m}_{\mathrm{b}}\) is bounded by Corollary 2.12. ## 5 The Universal Dirac Operator This section begins the analysis of the Dirac operator allowing the singular set \(\mathcal{Z}_{0}\) to vary. This is done by introducing a "universal" Dirac operator which is the infinite-dimensional family of Dirac operators parameterized by possible singular sets. 
The main result of this section, Proposition 5.5 in Subsection 5.2 calculates the derivative of this universal Dirac operator with respect to variations in the singular set. Throughout this section, care is taken to construct explicit trivializations of the relevant Banach vector bundles. The present situation is more subtle than the case of scalar-valued functions appearing in [9], and imprecision about certain isomorphisms can lead to incorrect formulas for the derivative with respect to variations in the singular set. For the remainder of the article we assume \((\mathcal{Z}_{0},\ell_{0},\Phi_{0})\) is regular in the sense that it satisfies Assumptions 1-3. Note that Assumption 3 requires that the Dirac operator is only \(\mathbb{R}\)-linear placing us in Case (B) of Proposition 4.3 (this requires \(B_{0}\neq 0\)). We also continue to tacitly assume that \(\mathcal{Z}_{0}\) is connected (Assumption 4*). From here on, \(\langle-,-\rangle\) denotes the real inner product on spinors, and \(\langle-,-\rangle_{\mathbb{C}}\) the Hermitian one. ### Trivializations We consider variations of the singular set \(\mathcal{Z}_{0}\) as follows. Let \[\mathcal{E}_{0}\subseteq\mathrm{Emb}^{2,2}(\mathcal{Z}_{0};Y)\] denote an open neighborhood of \(\mathcal{Z}_{0}\) in the space of embedded links of Sobolev regularity \((2,2)\). For each \(\mathcal{Z}\in\mathcal{E}_{0}\), let \((S_{\mathcal{Z}},\gamma,\nabla)\) denote the Clifford module defined analogously to \(S_{0}\) in (2.1) so that \(S_{\mathcal{Z}}:=S_{s_{0}}\otimes\ell_{\mathcal{Z}}\). Here \(\ell_{\mathcal{Z}}\to Y-\mathcal{Z}\) is the real line bundle whose holonomy representation agrees with that of \(\ell_{0}\) (up to homotopy) equipped with its unique flat connection with holonomy in \(\mathbb{Z}_{2}\). The Dirac operator \(\not{D}_{\mathcal{Z}}\) is defined as in Definition 2.1, and the Hilbert spaces \(rH^{1}_{e}(Y-\mathcal{Z},S_{\mathcal{Z}}),L^{2}(Y-\mathcal{Z},S_{\mathcal{Z}})\) are defined for \(\mathcal{Z}\in\mathcal{E}_{0}\) analogously to 2.2 but using a weight \(r_{\mathcal{Z}}\approx\mathrm{dist}(-,\mathcal{Z})\). Define families of Hilbert spaces \[\mathbb{H}^{1}_{e}(\mathcal{E}_{0}) := \{(\mathcal{Z},\varphi)\ |\ \mathcal{Z}\in\mathcal{E}_{0}\,\ \varphi\in rH^{1}_{e}(Y-\mathcal{Z};S_{\mathcal{Z}})\}\] \[\mathbb{L}^{2}(\mathcal{E}_{0}) := \{(\mathcal{Z},\psi)\ |\ \mathcal{Z}\in\mathcal{E}_{0}\,\ \psi\in L^{2}(Y-\mathcal{Z};S_{\mathcal{Z}})\}\] which come equipped with projections \(p_{1}:\mathbb{H}^{1}_{e}(\mathcal{E}_{0})\to\mathcal{E}_{0}\) and \(p_{0}:\mathbb{L}^{2}(\mathcal{E}_{0})\to\mathcal{E}_{0}\) respectively. **Lemma 5.1**.: There are trivializations \[\Upsilon:\mathbb{H}^{1}_{e}(\mathcal{E}_{0}) \simeq \mathcal{E}_{0}\times rH^{1}_{e}(Y-\mathcal{Z}_{0};S_{0})\] \[\Upsilon:\mathbb{L}^{2}(\mathcal{E}_{0}) \simeq \mathcal{E}_{0}\times\ L^{2}(Y-\mathcal{Z}_{0};S_{0})\] which endow the spaces on the left with the structure of locally trivial Hilbert vector bundles. Assuming this lemma momentarily, we define **Definition 5.2**.: The **Universal Dirac Operator** is the section \(\mathbb{\dot{D}}\) defined by \[\begin{array}{l}p_{1}^{*}\mathbb{L}^{2}(\mathcal{E}_{0})\\ \left\downarrow\right\uparrow\\ \mathbb{H}^{1}_{e}(\mathcal{E}_{0})\end{array}\mathbb{\dot{D}}(\mathcal{Z}, \varphi):=\not{D}_{\mathcal{Z}}\varphi\] Before proving Lemma 5.1, we first construct a chart around \(\mathcal{Z}_{0}\) in \(\mathrm{Emb}^{2,2}(\mathcal{Z};Y)\). 
A choice of Fermi coordinates \((t,x,y)\) on \(N_{r_{0}}(\mathcal{Z}_{0})\) gives an association \(T_{\mathcal{Z}_{0}}\mathrm{Emb}^{2,2}(\mathcal{Z};Y)\simeq L^{2,2}(\mathcal{Z} _{0};N\mathcal{Z}_{0})\). Choosing a cut-off function \(\chi(r):N_{r_{0}}\to\mathbb{R}\) equal to \(1\) for \(r\leqslant r_{0}/2\) and vanishing for \(r\geqslant r_{0}\) we define an exponential map as follows: given \(\eta\in L^{2,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\) with \(\|\eta\|_{2,2}<\rho_{0}\) we set \[F_{\eta}(t,z)=(t,z+\chi(r)\eta(t)). \tag{5.1}\] Then define \[\mathrm{Exp}:L^{2,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0}) \to \mathrm{Emb}^{2,2}(\mathcal{Z}_{0};Y)\] \[\eta \mapsto \mathcal{Z}_{\eta}:=F_{\eta}[\mathcal{Z}_{0}],\] and take \(\mathcal{E}_{0}=B_{\rho_{0}}(\mathcal{Z}_{0})\subset L^{2,2}(\mathcal{Z}_{0};N \mathcal{Z}_{0})\) to be the open ball of radius \(\rho_{0}\). **Lemma 5.3**.: For \(\rho_{0}\) sufficiently small, \(F_{\eta}\) is a diffeomorphism and \(\operatorname{Exp}:\mathcal{E}_{0}\to\operatorname{Emb}^{2,2}(\mathcal{Z}_{0};Y)\) is a homeomorphism onto its image. Proof.: Since \(|\eta|_{C^{1}}\leq C|\eta|_{L^{2,2}}\leq C\rho_{0}\) by the Sobolev embedding theorem, it follows that \[\operatorname{d}\!F_{\eta}=\begin{pmatrix}1&0&0\\ \chi\eta^{\prime}_{x}&1+\partial_{x}\chi\eta_{x}&\partial_{y}\chi\eta_{x}\\ \chi\eta^{\prime}_{y}&\partial_{x}\chi\eta_{y}&1+\partial_{y}\chi\eta_{y} \end{pmatrix}\] is close to the identity for \(\rho_{0}\) sufficiently small. It then follows from the Inverse Function Theorem that it is a local diffeomorphism, and since \(F_{\eta}\) preserves normal disks \(\{t_{0}\}\times D_{\mathbf{r}_{0}}\) and is monotonically increasing in the \(\eta\) direction, it is injective hence a diffeomorphism. For the second statement, note that \(F_{\eta}(t,0,0)=(t,\eta(t))\) is distinct for distinct \(\eta\in C^{1}\), hence \(\operatorname{Exp}\) is injective. For surjectivity, since any \(L^{2,2}\)-embedding \(\mathcal{Z}\) close to \(\mathcal{Z}_{0}\) is also close in \(C^{1}\), such an embedding must be a graph over \(\mathcal{Z}_{0}\) in local coordinates. Thus \(\mathcal{Z}=\mathcal{Z}_{\eta}\) for \(\eta\) the function defining this graph. Continuity of \(\operatorname{Exp}\) and its inverse are obvious. **Remark 5.4**.: Note that \(F_{\eta}\) is not the flow of a time-independent vector field on \(Y\). Although it is morally equivalent, this choice simplifies several formulas and avoids standard issues with thinking of \(C^{\infty}(Y;TY)\) as the tangent space of the diffeomorphism group. We now prove Lemma 5.1 by constructing the trivializations \(\Upsilon\). The only slight subtlety here is the association of spinor bundles for different metrics. To highlight the metric dependence, we denote by \(S_{h}\) the spinor bundle (without tensoring with \(\ell_{0}\)) formed with the spin structure \(\mathfrak{s}_{0}\) using the metric \(h\). Proof of Lemma 5.1.:. Let \(g_{\eta}:=F_{\eta}^{*}g_{0}\) denote the pullback metric. We now define \(\Upsilon\) as the map induced on sections by a fiberwise linear diffeomorphism (denoted by the same symbol) \(\Upsilon_{\eta}:S_{g_{0}}\otimes\ell_{\mathcal{Z}_{\eta}}\simeq S_{g_{0}} \otimes\ell_{\mathcal{Z}_{0}}\) which is given as the composition of three maps \(\Upsilon:=\tau\circ\iota\circ F^{*}\). 
where \(F_{\eta}^{*}\) is the pullback by the diffeomorphism \(F_{\eta}\), \(\iota\) is a canonical association of \(S_{g_{\eta}}\simeq F_{\eta}^{*}S_{g_{0}}\) and of line bundles, and \(\tau\) is an association of spinor bundles for different metrics. Note that \(\Upsilon\) covers the diffeomorphism \(F_{\eta}\) on \(Y\) hence is not an isomorphism in the category of vector bundles on \(Y\). It is obvious that the first map \(F_{\eta}^{*}\) is a diffeomorphism linear on fibers. The remainder of the proof proceeds in four steps. _Step 1:_ First, note that it follows directly from the definitions that there are canonical isomorphisms \[F_{\eta}^{*}S_{g_{0}}\simeq S_{g_{\eta}}\qquad\qquad\qquad F_{\eta}^{*}\ell_{ \mathcal{Z}_{0}}\simeq\ell_{\mathcal{Z}_{0}}.\] In addition, these isomorphisms associate the spin connections \(F_{\eta}^{*}\nabla_{g_{0}}^{\mathrm{spin}}\) and \(\nabla_{g_{\eta}}^{\mathrm{spin}}\) and the unique flat connections on the line bundles. _Step 2:_ Next, we define the map \(\tau\) by parallel transport on cylinders. This construction follows Section 5 of [2]. Consider the natural family of interpolating metrics \[g_{s\eta}=(F_{s\eta})^{*}g_{0}\qquad\qquad\text{ for }s\in[0,1]\] Note this family does not coincide with \((1-s)g_{0}+sg_{\eta}\). Now consider the (generalized) 4-dimensional cylinder \[X_{\eta}=([0,1]\times Y,ds^{2}+g_{s\eta}).\] It is spin since \(w_{2}(X_{\eta})=w_{2}(Y)=0\), and Spin structures on \(W\) are in 1-1 correspondence with those on \(Y\). Let \(S_{X}\to X_{\eta}\) be the spinor bundle on \(X_{\eta}\) arising from the spin structure corresponding to the fixed spin structure on \(Y\). There is a natural isomorphism \(S^{+}_{X}|_{Y\times\{s\}}\simeq S_{g_{s\eta}}\) (see [30] Section 4.3 or [27] top of page 4). Let \(\nabla_{X}\) denote the spin connection on \(S^{+}_{X}\). Parallel transport along the curve \(\gamma_{y}(s)=(s,y)\) in the \(s\) direction defines a linear isometry \[\tau^{g_{\eta}}_{g_{0}}(y,s):(S_{g_{0}})_{y}\to(S_{g_{s\eta}})_{y}.\] Together, parallel transport for \(s=1\) along all such curves define an isomorphism \(\tau^{g_{\eta}}_{g_{0}}:S_{g_{0}}\to S_{g_{s\eta}}\) which is a fiberwise isometry. _Step 3:_ Note that the previous two steps depend continuously on the parameter \(\eta\). Denote by \(\mathcal{Y}\to\mathcal{E}_{0}\) the universal 3-manifold bundle whose fiber above \(\eta\in\mathcal{E}_{0}\) is \(Y_{\eta}=(Y-\mathcal{Z}_{0},g_{\eta})\), and by \(\mathfrak{S}\to\mathcal{Y}\) the vector bundle above it whose restriction to \(Y_{\eta}\) is the bundle \(S_{g_{\eta}}\otimes\ell_{\mathcal{Z}_{0}}\). There is then a trivialization of \(\mathfrak{S}\) denoted \(\tau:\mathcal{E}_{0}\times(S_{g_{0}}\otimes\ell_{\mathcal{Z}_{0}})\longrightarrow \mathfrak{S}\) defined by parallel transport along cylinders in the radial directions in \(\mathcal{E}_{0}\). That is, we set \[\tau(\eta,\psi):=(\tau^{g_{\eta}}_{g_{0}}\otimes 1)(\psi).\] Likewise, we take \(F^{*}(\eta,-):=F^{*}_{\eta}\) and \(\iota(\eta,-):=\iota_{\eta}\) and define \[\Upsilon:=\tau\circ\iota\circ F^{*}.\] _Step 4:_ Given the fiberwise linear diffeomorphism \(\Upsilon\), what remains to be seen is that the induced map \[\Upsilon:rH^{1}_{e}(Y-\mathcal{Z}_{0},S_{g_{0}}\otimes\ell_{\mathcal{Z}_{0}}) \longrightarrow rH^{1}_{e}(Y-\mathcal{Z}_{\eta},S_{g_{0}}\otimes\ell_{ \mathcal{Z}_{\eta}})\] gives an equivalence of norms. 
To see this, note that \(\eta\in L^{2,2}(\mathcal{Z})\) and \(L^{1,2}(\mathcal{Z}_{0})\hookrightarrow C^{0}(\mathcal{Z}_{0})\) by the Sobolev embedding, hence in Fermi coordinates the pullback metric has entries of the form \(h(t)g_{1}(t,x,y)\) with \(h(t)\in L^{1,2}(S^{1})\) and \(g_{1}(t,x,y)\) smooth (cf. Lemma 5.8). The Christoffel symbols of \(\nabla_{B_{0}}\) are, in turn, has entries of the form \(f(t)g_{2}(t,x,y)\) now with \(f(t)\in L^{2}(S^{1})\). The equivalence of norms is then a consequence of the following "mixed dimension" Sobolev multiplication for \(f\in L^{2}(S^{1})\) and \(\varphi\in rH^{1}_{e}\): \[\|f(t)\varphi\|_{L^{2}(S^{1}\times D^{2})}\leq C\|f\|_{L^{2}(S^{1})}\|\varphi \|_{rH^{1}_{e}},\] which is proved by integrating over slices \(\{t\}\times D^{2}\) and using the Sobolev restriction theorem. ### Universal Linearization Using the trivialization constructed in Lemma 5.1, we may now calculate the (vertical component of the) derivative of the universal Dirac operator considered as a map \[\mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\overline{\mathbb{B}}:L^{2,2}(\mathcal{ Z}_{0};N\mathcal{Z}_{0})\times rH^{1}_{e}(Y-\mathcal{Z}_{0};S_{0})\longrightarrow L ^{2}(Y-\mathcal{Z}_{0};S_{0}). \tag{5.2}\] After trivializing, differentiating with respect to a variation \(\eta\) in the singular set becomes differentiation of the Dirac operator with respect to the family of metrics \(g_{s\eta}\) for \(s\in[0,1]\). **Proposition 5.5**.: In the local trivialization provided by \(\Upsilon\), the Linearization of the universal Dirac operator on the spaces 5.2 is given by \[\mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\overline{\mathbb{B}}(\eta,\psi)= \mathcal{B}_{\Phi_{0}}(\eta)+\not{D}_{\mathcal{Z}_{0}}\psi \tag{5.3}\] where \[\mathcal{B}_{\Phi_{0}}(\eta)=\left(\frac{d}{ds}\Big{|}_{s=0}\tau^{g_{s\eta}}_{ g_{0}}\circ\not{D}^{g_{s\eta}}_{\mathcal{Z}_{0}}\circ(\tau^{g_{s\eta}}_{g_{0}}) ^{-1}\right)\Phi_{0}\] is the first variation of the Dirac operator with respect to the family of metrics \(g_{s\eta}\) acting on the spinor \(\Phi_{0}\). **Remark 5.6**.: (Cf. Section 4.1 of [9]) Since the configuration \((\mathcal{Z}_{0},\Phi_{0})\) does not lie along the zero-section in \(\mathbb{H}^{1}_{e}(\mathcal{E}_{0})\), there is no canonical association \[T_{(\mathcal{Z}_{0},\Phi_{0})}\mathbb{H}^{1}_{e}(\mathcal{E}_{0})\simeq T_{ \mathcal{Z}_{0}}\mathcal{E}_{0}\oplus rH^{1}_{e}(Y-\mathcal{Z}_{0}).\] Thus expression of the derivative (5.2) implicitly relies on a choice of connection -- the pullback of the product connection by \(\Upsilon^{-1}\). Different choices of trivialization will result in different connections and different expressions for the derivative \(\mathrm{d}\bar{\mathcal{D}}\). Concretely, this choice manifests as the dependence of the family of metrics \(g_{\eta}\) on our choice of diffeomorphisms \(F_{\eta}\). A different choice of family of diffeomorphisms differs from our choice of \(F_{\eta}\) by composing with (a family of) diffeomorphisms fixing \(\mathcal{Z}_{0}\). Although there are many possible choices (see [42] and [61]) this choice simplifies many expressions. Of course, the salient properties of the linearization are independent of these choices. Proof of Proposition 5.5.: Take a path \[\gamma:(-\epsilon,\epsilon) \rightarrow \mathbb{H}^{1}_{e}(\mathcal{E}_{0})\] \[s \mapsto (\mathcal{Z}_{\eta(s)},\Phi(s))\] such that \(\gamma(0)=(\mathcal{Z}_{0},\Phi_{0})\). 
Using the chart \(\mathrm{Exp}:L^{2,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\rightarrow\mathcal{E}_ {0}\), we may assume that \(\eta(s)=s\eta\). Let \(\mathcal{H}\) be the section of \(\mathbb{H}^{1}_{e}(\mathcal{E}_{0})\) obtained from radial parallel transport of \(\Phi_{0}\) in the connection induced by the trivialization \(\Upsilon\). That is, set \[\mathcal{H}=\Upsilon^{-1}(\mathcal{E}_{0}\times\{\Phi_{0}\}).\] We may write each \(\Phi(s)\in rH^{1}_{e}(Y-\mathcal{Z}_{s\eta})\) as the point in \(\mathcal{H}\) plus a vertical vector \(\phi(s)=\Upsilon^{-1}(\psi(s))\), i.e. \[\gamma(s)=(\mathcal{Z}_{s\eta},\Upsilon^{-1}_{s\eta}(\Phi_{0})+\phi(s)\ )=( \mathcal{Z}_{s\eta},\Upsilon^{-1}_{s\eta}(\Phi_{0}+\psi(s))\ ).\] The derivative in the trivialization given by \(\Upsilon\) is \[\frac{d}{ds}\Big{|}_{s=0}\Upsilon_{s\eta}\circ\not{\mathbb{D}}(\mathcal{Z}_{ s\eta},\Upsilon^{-1}_{s\eta}(\Phi_{0}+\psi))=\frac{d}{ds}\Big{|}_{s=0}\Upsilon_{s \eta}\circ\not{\mathbb{D}}^{g_{0}}_{\mathcal{Z}_{s\eta}}\circ\Upsilon^{-1}_{ s\eta}(\Phi_{0}+\psi) \tag{5.4}\] where \(\Upsilon\) denotes the trivialization for both \(\mathbb{H}^{1}_{e}(\mathcal{E}_{0})\) and \(\mathbb{L}^{2}(\mathcal{E}_{0})\). Recalling the definition of \(\Upsilon=\tau\circ\iota\circ F^{*}\), the following diagram commutes, where the rightmost vertical arrow is the expression (5.4) which we wish to calculate. The middle vertical arrow denotes Dirac operator on the bundle \(S_{g_{\eta}}\otimes\ell_{\mathcal{Z}_{0}}\) formed from with the pullback metric \(g_{\eta}\) and the unique flat connection on \(\ell_{\mathcal{Z}_{0}}\). By commutativity, the rightmost vertical arrow is equivalent to the conjugation of the middle arrow by \(\tau^{g_{\eta}}_{g_{0}}\) and its inverse. Consequently, using the product rule (noting as well that \(\psi(0)=0\) and \(\tau^{g_{\eta(0)}}_{g_{0}}=Id\)), \[\frac{d}{ds}\Big{|}_{s=0}\Upsilon_{s\eta}\circ\not{\mathbb{D}}^{ g_{0}}_{\mathcal{Z}_{s\eta}}\circ\Upsilon^{-1}_{s\eta}(\Phi_{0}+\psi(s)) = \frac{d}{ds}\Big{|}_{s=0}\left(\tau^{g_{s\eta}}_{g_{0}}\circ\not{ \mathbb{D}}^{g_{s\eta}}_{Z_{0}}\circ(\tau^{g_{s\eta}}_{g_{0}})^{-1}\right)(\Phi _{0}+\psi(s))\] \[= \left(\frac{d}{ds}\Big{|}_{s=0}(\tau^{g_{s\eta}}_{g_{0}}\circ \not{\mathbb{D}}^{g_{s\eta}}_{Z_{0}}\circ(\tau^{g_{s\eta}}_{g_{0}})^{-1}\right) \Phi_{0}+\not{\mathbb{D}}^{g_{0}}_{\mathcal{Z}_{0}}\dot{\psi}(0).\] as claimed. ### First Variation Formula In order to analyze the derivative of the universal Dirac operator calculated in Proposition 5.5, a more explicit formula is needed for the variation of the Dirac operator with respect to metrics \((\mathcal{B}_{\Phi_{0}}(\eta)\) in 5.3). The formula for this variation is originally due to Bourguignon and Gauduchon [3]. A concise proof (in English) was later given in [2]. See also [39]. #### 5.3.1 Metric Variations Suppose, forgetting any reference to the above situation momentarily, that \(g_{s}\) is a path of metrics on a Riemannian spin manifold \(X\). Let \(\dot{g}_{s}\) denote the derivative of this path at \(s=0\), and let \[\tau_{g_{0}}^{g_{s}}:S_{g_{s}}\to S_{g_{0}}\] be the association of spinor bundles via parallel transport on \([0,1]\times X\) as defined in the proof of Lemma 5.1. We obtain a 1-parameter family of operators \[\tau_{g_{0}}^{g_{s}}\circ\not{D}_{g_{s}}\circ(\tau_{g_{0}}^{g_{s}})^{-1}: \Gamma(S_{g_{0}})\rightarrow\Gamma(S_{g_{0}})\] as the right arrow in the commutative diagram for every \(s\). 
Letting \(\{e_{i}\}\) be an orthonormal frame for the metric \(g_{0}\) and \(\{e^{i}\}\) its dual frame, Bourguignon and Gauduchon calculate: **Theorem 5.7**.: **(Bourguignon-Gauduchon [3])** The first variation of the Dirac operator with respect to the family of metrics \(g_{s}\) is given by \[\left(\frac{d}{ds}\Big{|}_{s=0}\tau_{g_{0}}^{g_{s}}\circ\not{D}_{g_{s}}\circ(\tau_{g_{0}}^{g_{s}})^{-1}\right)\Psi=-\frac{1}{2}\sum_{ij}\dot{g}_{s}(e_{i},e_{j})e^{i}.\nabla_{j}^{g_{0}}\Psi+\frac{1}{2}d\mathrm{Tr}_{g_{0}}(\dot{g}_{s}).\Psi+\frac{1}{2}\mathrm{div}_{g_{0}}(\dot{g}_{s}).\Psi \tag{5.5}\] where \(.\) denotes Clifford multiplication in the \(g_{0}\) metric. Note that the first term is independent of the choice of frame for the same reason as the standard Dirac operator. Here, in an orthonormal frame, \(\mathrm{div}_{g_{0}}(k)\) is the 1-form \((-\nabla_{i}k_{ij})e^{j}\). To give some quick intuition for this slightly unappetizing formula, the first term comes from differentiating the symbol of the Dirac operator (Clifford multiplication), and the second two terms arise from differentiating the Christoffel symbols.

#### 5.3.2 Pullback Metric Formula

We will apply Bourguignon-Gauduchon's formula (5.5) in the case that the family of metrics is the one given by the pullbacks \[\dot{g}_{\eta}:=\frac{d}{ds}\Big{|}_{s=0}g_{s\eta}=\frac{d}{ds}\Big{|}_{s=0}F^{*}_{s\eta}g_{0}. \tag{5.6}\] As in Definition 3.4, the metric in Fermi coordinates \((t,x,y)\) on the tubular neighborhood \(N_{r_{0}}(\mathcal{Z}_{0})\) has the form \[g_{0}=dt^{2}+dx^{2}+dy^{2}+h\qquad\quad\text{where}\quad|h_{ij}|\leq Cr.\] **Lemma 5.8**.: The derivative of the family of pullback metrics (5.6) is given by \[\dot{g}_{\eta}=\begin{pmatrix}0&\eta_{x}^{\prime}\chi&\eta_{y}^{\prime}\chi\\ \eta_{x}^{\prime}\chi&2\eta_{x}\partial_{x}\chi&\eta_{x}\partial_{y}\chi+\eta_{y}\partial_{x}\chi\\ \eta_{y}^{\prime}\chi&\eta_{x}\partial_{y}\chi+\eta_{y}\partial_{x}\chi&2\eta_{y}\partial_{y}\chi\end{pmatrix}+h_{1}+h_{2} \tag{5.7}\] where * \(h_{1}\) is an \(O(1)\) term whose entries are formed from products of derivatives of \(h_{ij}\) and \(\eta\). * \(h_{2}\) is an \(O(r)\) term whose entries are formed from products of \(h_{ij}\) and products of \(\eta,\eta^{\prime}\). Here, \(\eta=\eta_{x}+i\eta_{y}\) and \(\eta^{\prime}=\frac{d}{dt}\eta\), and \(\dot{g}_{\eta}\) denotes \(\frac{d}{ds}\big{|}_{s=0}g_{s\eta}\). Proof.: Since the diffeomorphism \(F_{s\eta}\) is supported in the tubular neighborhood, it suffices to do the calculation in the local coordinates. First, consider the case that \(h=0\). Recall \[F_{s\eta}(t,x,y)=(t,x+s\chi(r)\eta_{x}(t),y+s\chi(r)\eta_{y}(t)),\] hence \[\mathrm{d}F_{s\eta}=\begin{pmatrix}1&0&0\\ s\chi\eta_{x}^{\prime}&1+s\partial_{x}\chi\eta_{x}&s\partial_{y}\chi\eta_{x}\\ s\chi\eta_{y}^{\prime}&s\partial_{x}\chi\eta_{y}&1+s\partial_{y}\chi\eta_{y}\end{pmatrix}.\] A quick calculation shows that in this case the derivative of the pullback metric is \[\dot{g}_{\eta} = \frac{d}{ds}\Big{|}_{s=0}(\mathrm{d}F_{s\eta})^{T}g_{0}(\mathrm{d}F_{s\eta}) \tag{5.8}\] \[= \begin{pmatrix}0&\eta_{x}^{\prime}\chi&\eta_{y}^{\prime}\chi\\ \eta_{x}^{\prime}\chi&2\eta_{x}\partial_{x}\chi&\eta_{x}\partial_{y}\chi+\eta_{y}\partial_{x}\chi\\ \eta_{y}^{\prime}\chi&\eta_{x}\partial_{y}\chi+\eta_{y}\partial_{x}\chi&2\eta_{y}\partial_{y}\chi\end{pmatrix}. \tag{5.10}\] Now assume \(h\neq 0\), and let \(\widetilde{h}_{ij}=h_{ij}(t,z+F_{s\eta})\).
Then the term added to the above is \[= \frac{d}{ds}\Big{|}_{s=0}(\mathrm{d}F_{s\eta})^{T}\cdot h(t,z+F_{ s\eta})\cdot(\mathrm{d}F_{s\eta})\] \[= \frac{d}{ds}\Big{|}_{s=0}(\mathrm{d}F_{s\eta})^{T}\begin{pmatrix} \widetilde{h}_{11}+s\chi(\widetilde{h}_{12}\eta_{x}^{\prime}+\widetilde{h}_{1 3}\eta_{y}^{\prime})&\widetilde{h}_{12}+s\partial_{x}\chi(\widetilde{h}_{12} \eta_{x}+\widetilde{h}_{13}\eta_{y})&\widetilde{h}_{13}+s\partial_{y}\chi( \widetilde{h}_{12}\eta_{x}+\widetilde{h}_{13}\eta_{y})\\ \widetilde{h}_{21}+s\chi(\widetilde{h}_{22}\eta_{x}^{\prime}+\widetilde{h}_{2 3}\eta_{y}^{\prime})&\widetilde{h}_{22}+s\partial_{x}\chi(\widetilde{h}_{22} \eta_{x}+\widetilde{h}_{23}\eta_{y})&\widetilde{h}_{23}+s\partial_{y}\chi( \widetilde{h}_{22}\eta_{x}+\widetilde{h}_{23}\eta_{y})\\ \widetilde{h}_{31}+s\chi(\widetilde{h}_{32}\eta_{x}^{\prime}+\widetilde{h}_{3 3}\eta_{y}^{\prime})&\widetilde{h}_{32}+s\partial_{x}\chi(\widetilde{h}_{32} \eta_{x}+\widetilde{h}_{33}\eta_{y})&\widetilde{h}_{33}+s\partial_{y}\chi( \widetilde{h}_{32}\eta_{x}+\widetilde{h}_{33}\eta_{y})\end{pmatrix}.\] Write the matrix above as \(\widetilde{h}_{ij}+sB_{ij}\), so that e.g. \(B_{11}=\chi\widetilde{h}_{12}\eta_{x}^{\prime}+\widetilde{h}_{13}\eta_{y}^{ \prime}\). Then since \[\mathrm{d}F_{s\eta}^{T}=Id+s\begin{pmatrix}0&\chi\eta_{x}^{\prime}&\chi\eta_{y }^{\prime}\\ 0&\partial_{x}\chi\eta_{x}&\partial_{x}\chi\eta_{y}\\ 0&\partial_{y}\chi\eta_{x}&\partial_{y}\chi\eta_{y}\end{pmatrix}\] and \(\left(\widetilde{h}_{ij}\right)\) is symmetric, the above becomes \[= \frac{d}{ds}\Big{|}_{s=0}\left[\left(\widetilde{h}_{ij}\right)+s \left(A_{ij}+A_{ij}^{T}\right)+O(s^{2})\right]=\underbrace{\frac{d}{ds}\Big{|}_ {s=0}\left(\widetilde{h}_{ij}\right)}_{:=h_{1}}+\underbrace{\left(A_{ij}+A_{ ij}^{T}\right)}_{:=h_{2}}. \tag{5.11}\] Call these terms \(h_{1}\) and \(h_{2}\) as indicated. Since \[\frac{d}{ds}\Big{|}_{s=0}\widetilde{h}_{ij} = \frac{d}{ds}\Big{|}_{s=0}h_{ij}(t,x+s\chi\eta_{x},y+s\chi\eta_{y})= (\partial_{x}h_{ij})\chi\eta_{x}+(\partial_{y}h_{ij})\chi\eta_{y}\] \[A_{ij}\Big{|}_{s=0} = h_{k}\ell\chi\eta_{\alpha}^{\prime}\quad\text{ or }\quad h_{k\ell} \partial_{\alpha}\chi\eta_{\beta}\] where \(\alpha,\beta\) range over \(x,y\) and (summation is implicit in the expression for \(A\)), these are of the forms claimed respectively for \(h_{1}\) and \(h_{2}\) Combining the formula for the linearization of the universal Dirac operator of Proposition 5.5 with the formula of Bourguignon-Gauduchon (Theorem 5.7) and the calculation of the pullback metric in Lemma 5.8 allows us to immediately deduce the following more concrete expression for the linearization. **Corollary 5.9**.: The linearization of the universal Dirac operator at \((\mathcal{Z}_{0},\Phi_{0})\) is given by \[\mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\bar{\mathbb{D}}(\eta,\psi) = \left(-\frac{1}{2}\sum_{ij}\dot{g}_{\eta}(e_{i},e_{j})e^{i}. \nabla_{j}^{g_{0}}+\frac{1}{2}d\mathrm{Tr}_{g_{0}}(\dot{g}_{\eta}).+\frac{1}{2 }\mathrm{div}_{g_{0}}(\dot{g}_{\eta}).+\mathcal{R}(B_{0},\chi\eta).\right)\Phi_ {0} \tag{5.12}\] \[+\not{D}\psi \tag{5.13}\] where \(\mathcal{R}(B_{0},\eta)\) is a smooth term involving first derivatives of \(B_{0}\) and linear in \(\chi\eta\), and \(.\) denotes Clifford multiplication using the metric \(g_{0}\). 
Explicitly, \(\dot{g}_{\eta}\) is given in Fermi coordinates by \[\begin{pmatrix}0&\eta_{x}^{\prime}\chi&\eta_{y}^{\prime}\chi\\ \eta_{x}^{\prime}\chi&2\eta_{x}\partial_{x}\chi&\eta_{x}\partial_{y}\chi+ \eta_{y}\partial_{x}\chi\\ \eta_{y}^{\prime}\chi&\eta_{x}\partial_{y}\chi+\eta_{y}\partial_{x}\chi&2\eta _{y}\partial_{y}\chi\end{pmatrix}+h_{1}+h_{2}\] with \(h_{1},h_{2}\) as in the above Lemma 5.8. Proof.: In the case that \(B_{0}=0\), this follows immediately from 5.7 and the above calculation of the pullback metric in Lemma 5.8. The line bundle is fixed after pulling back by \(F_{\eta}\) and plays no role. The perturbation \(B_{0}\) pulls back to \(F_{s\eta}^{*}B_{0}\), and differentiating this yields the term \(\mathcal{R}(B_{0},\chi\eta)\). A word of caution to the reader: the formula for this linearization is slightly deceptive in the following sense. The expression for \(\mathcal{B}_{\Phi_{0}}(\eta)\), which is the first line in (5.12), appears to be a first order term plus a zeroeth order term. But these are the orders in the _spinor_\(\Phi_{0}\), and we are viewing it as an equation in the _deformation_\(\eta\). The variation of the pullback metrics \(\dot{g}_{\eta}\), as above, contains first derivatives of \(\eta(t)\), and so the trace and divergence, which contain derivatives of \(\dot{g}_{\eta}\) contain second derivatives of \(\eta(t)\). Thus this equation is actually _second order_ in \(\eta\), with the second and third terms being leading order. This is part of the reason deformation \(\eta\) must be taken to be at least \(L^{2,2}\). **Remark 5.10**.: For later use, we note that the proof of Lemma 5.8 shows that the complete formula for the pullback metric can be written \[g_{s\eta}=g_{0}+s\dot{g}_{\eta}+\mathfrak{q}(s\eta,s\eta)\] where \(\mathfrak{q}(s\eta,s\eta)\) is a matrix whose entries are \(O(s^{2})\) and are formed from finite sums of terms of the following form * Products of at least two terms of the form \(\chi\eta_{\alpha}^{\prime}\), or \(\partial_{\beta}\chi\eta_{\alpha}\), or \((\widetilde{h}-h)\leq C|\chi\eta|\). * Higher order terms of the form \((\widetilde{h}-h-h_{1})\leq C|\chi\eta|^{2}\). where the bounds on the terms involving \(\widetilde{h}\) follow from Taylor's theorem. ## 6 Fredholmness of Deformations In this section we prove Theorem 1.3 by explicitly calculating the obstruction component of the linearized universal Dirac operator. Working in the trivialization of Lemma 5.1 and splitting the domain and codomain into their summands, the linearization has the following block lower-triangular matrix where \(\Pi_{0}:L^{2}\to\mathbf{Ob}(\mathcal{Z}_{0})\) denotes the orthogonal projection: \[\mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\mathbb{\bar{D}}=\begin{pmatrix}\Pi_{0} \mathcal{B}_{\Phi_{0}}&0\\ (1-\Pi_{0})\mathcal{B}_{\Phi_{0}}&\not{D}\end{pmatrix}:\begin{array}{ccc}L^{ 2,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})&\\ \oplus&\\ rH_{e}^{1}&\end{array}\longrightarrow\begin{array}{ccc}\mathbf{Ob}( \mathcal{Z}_{0})&\\ \oplus&\\ \mathrm{Range}(\not{D}|_{rH_{e}^{1}}).\end{array} \tag{6.1}\] Composing with the isomorphism \(\mathrm{ob}^{-1}:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\to \mathbf{Ob}(\mathcal{Z}_{0})\) from Proposition 4.2, the upper left entry of (6.1) can be written as \((T_{\Phi_{0}},\pi_{1})\) where \(\pi\) is the \(L^{2}\)-orthogonal projection onto \(\mathbb{R}\Phi_{0}\), and \(T_{\Phi_{0}}\) is the composition: In particular, \(T_{\Phi_{0}}\) is a map of Hilbert spaces of sections of vector bundles on \(\mathcal{Z}_{0}\). 
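The way the decomposition (6.1) is used is elementary: for a block lower-triangular operator, Fredholmness and invertibility reduce to the corresponding properties of the two diagonal blocks, and this is how Corollary 6.2 below is deduced from Theorem 6.1. The following finite-dimensional sketch is purely illustrative; the random matrices are ad hoc stand-ins for the blocks of (6.1).

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 4, 6
A  = rng.standard_normal((n1, n1))    # stand-in for the block  Pi_0 B_{Phi_0}
C  = rng.standard_normal((n2, n1))    # stand-in for (1 - Pi_0) B_{Phi_0}
Ds = rng.standard_normal((n2, n2))    # stand-in for the Dirac block

# block lower-triangular operator, shaped like (6.1)
L = np.block([[A, np.zeros((n1, n2))],
              [C, Ds]])

# invertibility of L is equivalent to invertibility of the two diagonal blocks
print(np.linalg.matrix_rank(L) == n1 + n2)
print(np.linalg.matrix_rank(A) == n1 and np.linalg.matrix_rank(Ds) == n2)
```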
The main result of the current section is the following theorem, which refines the statement of Theorem 1.3 in the introduction. Although \(T_{\Phi_{0}}\) is _a priori_ only bounded into \(L^{2}\), the theorem shows it is Fredholm onto a dense subspace of this. **Theorem 6.1**.: The composition \(T_{\Phi_{0}}\) is an elliptic pseudo-differential operator of order \(1/2\). In particular, as a map \[T_{\Phi_{0}}:L^{2,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\longrightarrow L^{3/2,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}}) \tag{6.2}\] it is Fredholm, and has index \(0\). Given the block lower-triangular decomposition (6.1), Theorem 6.1 and standard Fredholm theory and bootstrapping imply the following result on the full linearization (6.1). Here, recall that \(\mathbf{Ob}^{m}=\mathbf{Ob}(\mathcal{Z}_{0})\cap H_{\mathrm{b}}^{m}\). **Corollary 6.2**.: The following versions of the linearized universal Dirac operator are Fredholm of Index \(0\) for all \(m\geq 0\): \[(m\geq 0)\quad\mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\mathbb{\bar{D}}:L^{m+2,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\oplus rH_{\mathrm{b},e}^{m,1}\longrightarrow\mathbf{Ob}^{m+3/2}\oplus\Big{(}\mathrm{Range}(\not{D})\cap H_{\mathrm{b}}^{m}\Big{)}.\] The proof of Theorem 6.1 occupies the remainder of the section. Subsection 6.1 discusses the relation between regularity on \(Y-\mathcal{Z}_{0}\) and regularity in \(L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\) under ob, and Subsection 6.2 proves Theorem 6.1. Subsection 6.3 finishes the Index calculation by proving Lemma 6.9.

### Conormal Regularity

The loss of regularity of \(r=3/2\) appearing in Theorem 6.1 is a consequence of the fact that \(\mathbf{Ob}\) does not simply inherit the obvious notion of regularity from \(Y-\mathcal{Z}_{0}\). _Key Observation:_ The regularity of \(\Pi_{0}(\psi)\in\mathbf{Ob}(\mathcal{Z}_{0})\) depends on both the regularity of \(\psi\) and its asymptotics along \(\mathcal{Z}_{0}\). Using Proposition 4.3, the regularity of \(\Pi_{0}(\psi)\) is a question about the rate of decay in \(|\ell|\) of the sequence of inner products \[\big{\{}\,\langle\psi,\Psi_{\ell}\rangle_{\mathbb{C}}\,\,\big{\}}_{\ell\in\mathbb{Z}}. \tag{6.3}\] Because the basis elements \(\Psi_{\ell}\) concentrate exponentially around \(\mathcal{Z}_{0}\) as \(|\ell|\to\infty\), this rate of decay is intertwined with the growth of \(\psi\) along \(\mathcal{Z}_{0}\). If, for example, \(\psi\) is compactly supported away from \(\mathcal{Z}_{0}\), then Proposition 4.3 implies the sequence (6.3) decays faster than any polynomial, so \(\mathrm{ob}^{-1}\Pi_{0}(\psi)\in C^{\infty}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\) is smooth regardless of the regularity of \(\psi\) on \(Y\). By Lemma 4.17, regularity of \(\Pi_{0}\psi\in\mathbf{Ob}(\mathcal{Z}_{0})\) coincides with regularity of \(\mathrm{ob}^{-1}\Pi_{0}\psi\in L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\). More generally, suppose that a spinor \(\psi\) can be written locally in Fermi coordinates and an accompanying trivialization as \[\psi=\chi\begin{pmatrix}f^{+}(t)h^{+}(\theta)\\ f^{-}(t)h^{-}(\theta)\end{pmatrix}r^{p} \tag{6.4}\] where \(f^{\pm}\in L^{k,2}(S^{1};\mathbb{C})\) and \(h^{\pm}\) are smooth. Here, \(\chi\) is a cutoff function supported in a neighborhood \(N_{r_{0}}(\mathcal{Z}_{0})\). **Definition 6.3**.: Suppose \(\psi\) has the form (6.4). The quantity \[s=\boxed{1+k+p}\] is called the **conormal regularity** of \(\psi\). The following simple lemma, illustrated by the numerical sketch below, governs many of the regularity considerations in the upcoming sections.
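The decay rate encoded in Definition 6.3 can be checked in a toy computation. For a spinor \(\psi\sim f(t)r^{p}\) concentrated near \(\mathcal{Z}_{0}\), the radial part of the pairing with \(\Psi_{\ell}^{\mathrm{Euc}}\) is \(\int_{0}^{\infty}\sqrt{|\ell|}e^{-|\ell|r}r^{p+1/2}\,dr=\Gamma(p+\tfrac{3}{2})\,|\ell|^{-(p+1)}\), so each inner product in (6.3) is the \(\ell\)-th Fourier coefficient of \(f\) damped by a factor \(|\ell|^{-(p+1)}\). The minimal numerical sketch below is an illustration only (cutoff functions, angular factors and normalizations are ignored); it verifies this decay for \(p=1/2\), the leading asymptotic rate of \(\Phi_{0}\), where the exponent \(3/2\) responsible for the loss of regularity appears.

```python
import numpy as np
from math import gamma

# radial factor of <psi, Psi_l^Euc> for psi ~ f(t) r^p near Z_0:
#   I(l) = int_0^infty sqrt(l) e^{-l r} r^{p+1/2} dr = Gamma(p+3/2) * l^{-(p+1)}
def radial_pairing(l, p, n=200000, R=50.0):
    r = np.linspace(1e-12, R / l, n)          # e^{-l r} is negligible beyond R/l
    return np.trapz(np.sqrt(l) * np.exp(-l * r) * r**(p + 0.5), r)

p = 0.5                                       # the r^{1/2} asymptotics of Phi_0
for l in [4, 16, 64, 256]:
    exact = gamma(p + 1.5) * l**(-(p + 1))
    print(l, radial_pairing(l, p), exact)     # both decay like l^{-3/2}
```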
**Lemma 6.4**.: Suppose that \(\psi\in L^{2}\) has conormal regularity \(s\). Then \(\mathrm{ob}^{-1}\Pi_{0}(\psi)\in L^{s,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\) and \[\|\mathrm{ob}^{-1}\Pi_{0}(\psi)\|_{s}\leq C_{s}(\|f^{+}\|_{L^{k,2}}+\|f^{-}\|_{L^{k,2}}).\] Proof.: Using Proposition 4.3, \(\mathrm{ob}^{-1}\Pi_{0}(\psi)\) is calculated by the sequence of inner products \[\langle\psi,\Psi_{\ell}\rangle=\langle\psi,\Psi_{\ell}^{\mathrm{Euc}}+\zeta_{\ell}+\xi_{\ell}\rangle\qquad\quad\text{where}\qquad\quad\Psi_{\ell}^{\mathrm{Euc}}=\chi\sqrt{|\ell|}e^{i\ell t}e^{-|\ell|r}\begin{pmatrix}\frac{1}{\sqrt{z}}\\ \frac{\mathrm{sgn}(\ell)}{\sqrt{\overline{z}}}\end{pmatrix}.\] Assume first that \(g_{0}=dt^{2}+dx^{2}+dy^{2}\) on \(N_{r_{0}}(\mathcal{Z}_{0})\). Taking the inner product of \(\psi\) in (6.4) with \(\Psi_{\ell}^{\mathrm{Euc}}\) yields \[\langle\psi,\Psi_{\ell}^{\mathrm{Euc}}\rangle = \Big{\langle}\chi\begin{pmatrix}f^{+}h^{+}\\ f^{-}h^{-}\end{pmatrix}r^{p}\,,\ \sqrt{|\ell|}e^{i\ell t}\begin{pmatrix}\frac{e^{-|\ell|r}}{\sqrt{z}}\\ \mathrm{sgn}(\ell)\frac{e^{-|\ell|r}}{\sqrt{\overline{z}}}\end{pmatrix}\Big{\rangle}\] \[\leq \int_{S^{1}}\langle f^{+}+Hf^{-},e^{i\ell t}\rangle\int_{\mathbb{R}^{2}}\sqrt{|\ell|}e^{-|\ell|r}r^{p-1/2}\chi(r)\|h^{\pm}\|_{C^{0}}rdrd\theta dt\] \[\leq C\int_{S^{1}}\langle f^{+}+Hf^{-},e^{i\ell t}\rangle dt\ \int_{0}^{\infty}\sqrt{|\ell|}e^{-|\ell|r}r^{p+1/2}dr\] \[\leq C\big{\langle}\frac{1}{|\ell|^{p+1}}\left(f^{+}(t)+Hf^{-}(t)\right),e^{i\ell t}\big{\rangle}_{L^{2}(S^{1};\mathbb{C})}\] Since \(f^{\pm}\in L^{k,2}(S^{1};\mathbb{C})\), then \((f^{+}(t)+Hf^{-}(t))\in L^{k,2}(S^{1};\mathbb{C})\) as well, thus after applying the Fourier multiplier \(1/|\ell|^{p+1}\) it lies in \(L^{1+k+p,2}(S^{1};\mathbb{C})\) as desired. For the case of a general metric, the integrals differ by a factor of \(1+O(r)\) and the latter only contributes a higher regularity term bounded by a constant times \(|\ell|^{-(s+1)}\). It is easy to show that the contributions arising from \(\zeta_{\ell}+\xi_{\ell}\) satisfy the same bounds using Corollary 4.8 and integration by parts. Since these terms are dealt with explicitly in the proof of Theorem 6.1, the details are omitted here. The following additional cases are a straightforward extension of the above and the example considered preceding (6.4). **Corollary 6.5**.: Let \(\psi\in L^{2}\). * (B) Suppose that \(\mathrm{supp}(\psi)\Subset Y-\mathcal{Z}_{0}\). Then \(\mathrm{ob}^{-1}\Pi_{0}(\psi)\in L^{s,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\) for all \(s>0\), and its \(L^{s,2}\)-norm is bounded by \(C_{s}\|\psi\|_{L^{2}}\). * (C) Suppose \(\psi\) has the form \[\psi=\begin{pmatrix}f^{+}(t)\varphi^{+}(t,r,\theta)\\ f^{-}(t)\varphi^{-}(t,r,\theta)\end{pmatrix} \tag{6.5}\] where \(f^{\pm}\in L^{k,2}(S^{1};\mathbb{C})\) and \(\varphi^{\pm}\) satisfy pointwise bounds \(|\varphi^{\pm}|+|\nabla_{t}\varphi^{\pm}|+\ldots+|\nabla_{t}^{k}\varphi^{\pm}|<C(\varphi)r^{p}\). Then \(\operatorname{ob}^{-1}\!\Pi_{0}(\psi)\in L^{s,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\) for \(s=1+k+p\), and its \(L^{s,2}\)-norm is bounded by \(C_{s}C(\varphi)\|f^{\pm}\|_{L^{k,2}}\). **Remark 6.6**.: At this point, it is already apparent that there is a loss of regularity of \(3/2\) dictated by the \(r^{1/2}\)-asymptotics of \(\mathbb{Z}_{2}\)-harmonic spinors. Indeed, Corollary 5.9 shows that \(\mathcal{B}_{\Phi_{0}}(\eta)\) schematically has the form \(\eta^{\prime}.\nabla\Phi_{0}+\eta^{\prime\prime}.\Phi_{0}\).
Since \(\eta\in L^{2,2}\), and \(\Phi_{0}=O(r^{1/2})\) with \(\nabla\Phi_{0}=O(r^{-1/2})\), these terms have conormal regularity \(s=1+1-1/2\) and \(s=1+0+1/2\) respectively. Lemma 6.4 therefore already implies that (to leading order), \(\Pi_{0}\mathcal{B}_{\Phi_{0}}(\eta)\subseteq\mathbf{Ob}^{3/2}\), hence lies in a proper dense subset of \(\mathbf{Ob}(\mathcal{Z}_{0})\). Decreasing the regularity of \(\eta\) below \(L^{2,2}\), however, causes \(\Pi^{\mathrm{Rg}}(\mathcal{B}_{\Phi_{0}}(\eta))\) to be unbounded into \(L^{2}\).

### Obstruction Component of Deformations

This subsection carries out the main portion of the proof of Theorem 6.1 by proving an explicit formula for \(T_{\Phi_{0}}\). This formula is expressed in terms of standard operators and the following zeroth order operator, for which we recall from Proposition 3.8 that \(c(t)\in N\mathcal{Z}_{0}^{-1}\) and \(d(t)\in N\mathcal{Z}_{0}\) denote the leading order (i.e. \(r^{1/2}\)) coefficients of \(\Phi_{0}\). Define an operator \[\mathcal{L}_{\Phi_{0}}:L^{2}(\mathcal{Z}_{0};N\mathcal{Z}_{0}) \longrightarrow L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}}) \tag{6.6}\] \[\xi(t) \mapsto H(c(t)\xi(t))-\overline{\xi}(t)d(t). \tag{6.7}\] where \(H:L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\longrightarrow L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\) is the Hilbert Transform given by the Fourier multiplier \(\mathrm{sgn}(\ell)\) where \(\ell\) is the Fourier variable (and we take \(\mathrm{sgn}(0)=1\)). These formulas include an implicit association \(\mathcal{S}_{\mathcal{Z}_{0}}\simeq\underline{\mathbb{C}}\) induced by the arclength parameterization. **Lemma 6.7**.: For \(\eta(t)\in L^{2,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\), the operator \(T_{\Phi_{0}}\) in Theorem 6.1 is given by \[T_{\Phi_{0}}(\eta(t))=-\tfrac{3|\mathcal{Z}_{0}|}{2}(\Delta+1)^{-\tfrac{3}{4}}\,\mathcal{L}_{\Phi_{0}}(\eta^{\prime\prime}(t))\ +\ K(\eta) \tag{6.8}\] where \(|\mathcal{Z}_{0}|\) denotes the length, \(\Delta\) denotes the positive-definite Laplacian on \(\mathcal{S}_{\mathcal{Z}_{0}}\), \(\mathcal{L}_{\Phi_{0}}\) is as in (6.7) above, and \(\eta^{\prime\prime}(t)\) denotes the (covariant) second derivative on \(N\mathcal{Z}_{0}\). \(K\) is a lower-order term. The formula (6.8) is proved by calculating the sequence of inner products \[\operatorname{ob}^{-1}(\Pi_{0}\mathcal{B}_{\Phi_{0}}(\eta))=\sum_{\ell}\langle\mathcal{B}_{\Phi_{0}}(\eta),\Psi_{\ell}\rangle_{\mathbb{C}}\ \phi_{\ell} \tag{6.9}\] quite explicitly, where \(\mathcal{B}_{\Phi_{0}}(\eta)\) is as in Corollary 5.9. The proof consists of five steps: Steps 1-2 calculate (6.9) in the case that \(g_{0}\) is locally the product metric and \(\Phi_{0}\) is given by its leading order term, and Steps 3-5 show that the small parade of error terms arising from higher order terms contributes to the lower-order operator \(K\). Proof of Lemma 6.7.: Suppose, to begin, that all the structures are given locally by the Euclidean ones.
That is, assume \[g_{0}=dt^{2}+dx^{2}+dy^{2}\qquad\quad\Phi_{0}=\begin{pmatrix}c(t)\sqrt{z}\\ d(t)\sqrt{\overline{z}}\end{pmatrix}\qquad\quad\dot{g}_{\eta}=\begin{pmatrix}0&\eta_{x}^{\prime}\chi&\eta_{y}^{\prime}\chi\\ \eta_{x}^{\prime}\chi&2\eta_{x}\partial_{x}\chi&\eta_{x}\partial_{y}\chi+\eta_{y}\partial_{x}\chi\\ \eta_{y}^{\prime}\chi&\eta_{x}\partial_{y}\chi+\eta_{y}\partial_{x}\chi&2\eta_{y}\partial_{y}\chi\end{pmatrix},\] and \(B_{0}=0\); also assume that the obstruction elements of Proposition 4.2 have \(\zeta_{\ell}+\xi_{\ell}=0\) so that \[\Psi_{\ell}=\chi\sqrt{|\ell|}e^{i\ell t}e^{-|\ell|r}\begin{pmatrix}\frac{1}{\sqrt{z}}\\ \frac{\operatorname{sgn}(\ell)}{\sqrt{\overline{z}}}\end{pmatrix}.\] _Step 1: product case, divergence term._ Let \(e_{i}\) for \(i=1,2,3\) denote an orthonormal frame for \(g_{0}\) with \(e^{i}\) the dual frame. Recall that for a symmetric 2-tensor \(k\), \(\operatorname{div}_{g_{0}}k=(-\nabla_{i}k_{ij})e^{j}\). \[\tfrac{1}{2}\mathrm{div}_{g_{0}}(\dot{g}_{\eta}).\Phi_{0} = -\frac{1}{2}\left[\sigma_{2}\chi\eta_{x}^{\prime\prime}+\sigma_{3}\chi\eta_{y}^{\prime\prime}\right]\begin{pmatrix}c(t)\sqrt{z}\\ d(t)\sqrt{\overline{z}}\end{pmatrix}+(\mathbf{I})\] \[= -\frac{1}{2}\left[\chi\eta_{x}^{\prime\prime}\begin{pmatrix}-d(t)\sqrt{\overline{z}}\\ c(t)\sqrt{z}\end{pmatrix}+\chi\eta_{y}^{\prime\prime}\begin{pmatrix}id(t)\sqrt{\overline{z}}\\ ic(t)\sqrt{z}\end{pmatrix}\right]+(\mathbf{I})\] \[= -\frac{1}{2}\left[\begin{pmatrix}-\overline{\eta}^{\prime\prime}d(t)\chi\sqrt{\overline{z}}\\ \eta^{\prime\prime}c(t)\chi\sqrt{z}\end{pmatrix}\right]+(\mathbf{I})\] where we have written \(\eta(t)=\eta_{x}(t)+i\eta_{y}(t)\), and \[(\mathbf{I})=-\frac{1}{2}\Big{[}(\partial_{x}\chi\eta_{x}^{\prime}+\partial_{y}\chi\eta_{y}^{\prime})\sigma_{t}+(2\partial_{xx}\chi\eta_{x}+\partial_{xy}\chi\eta_{y}+\partial_{yy}\chi\eta_{x})\sigma_{x}+(\partial_{xy}\chi\eta_{y}+\partial_{yy}\chi\eta_{x}+2\partial_{yy}\chi\eta_{y})\sigma_{y}\Big{]}.\Phi_{0}.\] Taking the inner product of the first term with \(\Psi_{\ell}\) yields \[\langle\tfrac{1}{2}\mathrm{div}_{g_{0}}(\dot{g}_{\eta}).\Phi_{0},\Psi_{\ell}\rangle = -\frac{1}{2}\int_{S^{1}}\Big{\langle}\begin{pmatrix}-\overline{\eta}^{\prime\prime}d(t)\\ \eta^{\prime\prime}c(t)\end{pmatrix},\begin{pmatrix}e^{i\ell t}\\ \operatorname{sgn}(\ell)e^{i\ell t}\end{pmatrix}\Big{\rangle}_{\mathbb{C}}\ dt\ \ \int_{\mathbb{R}^{2}}\sqrt{|\ell|}\chi^{2}e^{-|\ell|r}rdrd\theta\] \[= -\frac{1}{2}\langle\operatorname{sgn}(\ell)\eta^{\prime\prime}c-\overline{\eta}^{\prime\prime}d\,,\ e^{i\ell t}\rangle_{\mathbb{C}}\ \int_{\mathbb{R}^{2}}\sqrt{|\ell|}\chi^{2}(r)e^{-|\ell|r}rdrd\theta\] \[= \langle-\frac{1}{2}\frac{|\mathcal{Z}_{0}|}{|\ell|^{3/2}}\mathcal{L}_{\Phi_{0}}(\eta^{\prime\prime}),e^{i\ell t}\rangle_{\mathbb{C}}+\langle K,e^{i\ell t}\rangle\] since \[\int_{0}^{\infty}\sqrt{|\ell|}e^{-|\ell|r}\,r\,dr=\frac{1}{|\ell|^{3/2}}\] and the presence of \(\chi^{2}(r)\) results in a difference from this of size \(O(e^{-|\ell|r_{0}})\) which is denoted by \(K\). Then, since \[\frac{1}{|\ell|^{3/2}}=\frac{1}{(|\ell|^{2}+1)^{3/4}}+O\left(\frac{1}{|\ell|^{3}}\right),\] we can write \[\mathrm{ob}^{-1}(\tfrac{1}{2}\mathrm{div}_{g_{0}}(\dot{g}_{\eta}).\Phi_{0})=-\tfrac{|\mathcal{Z}_{0}|}{2}(\Delta+1)^{-\tfrac{3}{4}}\mathcal{L}_{\Phi_{0}}(\eta^{\prime\prime})+K\] where \(K\) is a pseudo-differential operator of lower order (the first term has order \(1/2\)).
For the term **(I)**, note that it is a sum of term compactly supported away from \(\mathcal{Z}_{0}\), hence by Case (B) of Corollary 6.5, it contributes a smoothing operator which we may absorb into \(K\). _Step 2: product case, symbol term._ The "symbol" term from \(\mathcal{B}_{\Phi_{0}}(\eta)\) is given by \[-\frac{1}{2}\dot{g}_{\eta}(e_{i},e_{j})e^{i}.\nabla_{j}\Phi_{0} = -\frac{1}{2}\left[\chi\eta_{x}^{\prime}\sigma_{t}\nabla_{x}\Phi_{ 0}+\chi\eta_{y}^{\prime}\sigma_{t}\nabla_{y}\Phi_{0}\right]+(\mathbf{II})\] \[= -\frac{1}{4}\left[\chi\eta_{x}^{\prime}\begin{pmatrix}ic(t)/ \sqrt{z}\\ -id(t)/\sqrt{\overline{z}}\end{pmatrix}+\chi\eta_{y}^{\prime}\begin{pmatrix}-c (t)\sqrt{z}\\ d(t)\sqrt{\overline{z}}\end{pmatrix}\right]+(\mathbf{II})\] \[= -\frac{1}{4}\left[\begin{pmatrix}i\eta^{\prime}c(t)\chi/\sqrt{z} \\ -i\overline{\eta}^{\prime}d(t)\chi/\sqrt{\overline{z}}\end{pmatrix}\right]+( \mathbf{II})\] where \[(\mathbf{II}) = -\frac{1}{2}\Big{[}(\chi\eta^{\prime}_{x}\sigma_{x}+\chi\eta^{ \prime}_{y}\sigma_{y})\nabla_{t}\Phi_{0}+(2\partial_{x}\chi\eta_{x}\sigma_{x}+ \partial_{x}\chi\eta_{y}\sigma_{y}+\partial_{y}\chi\eta_{x}\sigma_{y})\nabla_{x }\Phi_{0}\] \[\qquad+\ (2\partial_{y}\chi\eta_{y}\sigma_{y}+\partial_{x}\chi \eta_{y}\sigma_{x}+\partial_{y}\chi\eta_{x}\sigma_{x})\nabla_{y}\Phi_{0}\Big{]}.\] Taking the inner product of the first term with \(\Psi_{\ell}\) yields the following. This calculation is almost identical to the previous one, but with an additional integration by parts. \[\langle\tfrac{1}{2}\dot{g}_{\eta}(e_{i},e_{j})e^{i}.\nabla_{j} \Phi_{0},\Psi_{\ell}\rangle = -\frac{1}{4}\Big{\langle}\chi\begin{pmatrix}i\eta^{\prime}c(t)/ \sqrt{z}\\ -i\overline{\eta}^{\prime}d(t)/\sqrt{z}\end{pmatrix}\,\ \sqrt{|\ell|}e^{i\ell t}\chi \begin{pmatrix}e^{-|\ell|r}/\sqrt{z}\\ \mathrm{sgn}(\ell)e^{-|\ell|r}/\sqrt{\overline{z}}\end{pmatrix}\Big{\rangle}_ {\mathbb{C}}\] \[= -\frac{1}{4}\Big{\langle}\chi\begin{pmatrix}i\eta^{\prime}c(t)/ \sqrt{z}\\ -i\overline{\eta}^{\prime}d(t)/\sqrt{\overline{z}}\end{pmatrix}\,\ \tfrac{\sqrt{|\ell|}}{i \ell|\mathrm{sgn}\ell}\partial_{t}e^{i\ell t}\chi\begin{pmatrix}e^{-|\ell|r}/ \sqrt{z}\\ \mathrm{sgn}(\ell)e^{-|\ell|r}/\sqrt{\overline{z}}\end{pmatrix}\Big{\rangle}_ {\mathbb{C}}\] \[= -\frac{1}{4}\Big{\langle}\chi\partial_{t}\begin{pmatrix}\eta^{ \prime}c(t)/\sqrt{z}\\ -\overline{\eta}^{\prime}d(t)/\sqrt{z}\end{pmatrix}\,\ \tfrac{\mathrm{sgn}(\ell)e^{i\ell t }}{e^{i\ell t}}\chi\begin{pmatrix}\mathrm{sgn}(\ell)e^{-|\ell|r}/\sqrt{z}\\ e^{-|\ell|r}/\sqrt{\overline{z}}\end{pmatrix}\Big{\rangle}_{\mathbb{C}}\] \[= -\frac{1}{4}\int_{S^{1}}\begin{pmatrix}\partial_{t}(\eta^{\prime }c(t))\\ -\partial_{t}(\overline{\eta}^{\prime}d(t))\end{pmatrix}\,\ \begin{matrix} \mathrm{sgn}(\ell)e^{i\ell t}\\ e^{i\ell t}\end{pmatrix}_{\mathbb{C}}\ dt\ \ \int_{\mathbb{R}^{2}}\frac{1}{ \sqrt{|\ell|}}\chi^{2}e^{-|\ell|r}drd\theta\] In the second line we have multiplied the second argument by \(1\) in the form \(1=\frac{i\ell}{i|\ell|\mathrm{sgn}\ell}\) and noted \(i\ell\psi_{\ell}=\partial_{t}\psi_{\ell}\), and then integrated by parts. 
Then, \[= -\frac{1}{4}\langle\mathrm{sgn}(\ell)\eta^{\prime\prime}c-\overline{\eta}^{\prime\prime}d\,,\ e^{i\ell t}\rangle_{L^{2}(S^{1})}\ \int_{\mathbb{R}^{2}}\frac{1}{\sqrt{|\ell|}}\chi^{2}(r)e^{-|\ell|r}rdrd\theta\] \[\qquad-\frac{1}{4}\langle\mathrm{sgn}(\ell)\eta^{\prime}c^{\prime}-\overline{\eta}^{\prime}d^{\prime}\,,\ e^{i\ell t}\rangle_{L^{2}(S^{1})}\ \int_{\mathbb{R}^{2}}\frac{1}{\sqrt{|\ell|}}\chi^{2}(r)e^{-|\ell|r}rdrd\theta\] \[= \langle-\frac{1}{4}\frac{|\mathcal{Z}_{0}|}{|\ell|^{3/2}}\mathcal{L}_{\Phi_{0}}(\eta^{\prime\prime}),e^{i\ell t}\rangle_{\mathbb{C}}+\langle-\frac{1}{4}\frac{|\mathcal{Z}_{0}|}{|\ell|^{3/2}}\mathcal{L}_{\nabla_{t}\Phi_{0}}(\eta^{\prime}),e^{i\ell t}\rangle_{\mathbb{C}}+\langle K,e^{i\ell t}\rangle\] where \(K\) is again an error of size \(O(e^{-|\ell|r_{0}})\) and \(\mathcal{L}_{\nabla_{t}\Phi_{0}}\) is defined exactly as \(\mathcal{L}_{\Phi_{0}}\) but with \(c^{\prime}(t),d^{\prime}(t)\) in place of \(c(t),d(t)\). Both \(\mathcal{L}_{\nabla_{t}\Phi_{0}}\) and the term **(II)** are lower order by Lemma 6.4 and Case (B) of Corollary 6.5, so they may be absorbed into \(K\). To see this, note these are comprised of terms of the form \(\eta^{\prime}\nabla_{t}\Phi_{0}=O(\eta^{\prime}r^{1/2})\), hence of conormal regularity \(s=5/2\), or have a factor of \(d\chi\) so are compactly supported away from \(\mathcal{Z}_{0}\). The same applies to the term \(\frac{1}{2}d\mathrm{Tr}_{g_{0}}(\dot{g}_{\eta}).\Phi_{0}\). **Remark 6.8**.: A coincidence has occurred here. Lemma 6.4 implies that the two leading order terms from _Step 1_ and _Step 2_ are both order \(1/2\) as they have the same conormal regularity. The calculation shows they are actually _the same_ up to a constant multiple and lower order terms. It is unclear if there is a more abstract reason for this (cf. Remark 5.6). Now we return to the general case. In general, there are deviations from the product case for \(\Phi_{0},(g_{0},B_{0})\) and \(\Psi_{\ell}\). These are accounted for in _Steps 3-5_ respectively. _Step 3:_ By Proposition 3.8 we can in general write \[\Phi_{0}=\begin{pmatrix}c(t)\sqrt{z}\\ d(t)\sqrt{\overline{z}}\end{pmatrix}+\Phi_{1}\] where the higher order terms satisfy \[|\Phi_{1}|+|\nabla_{t}^{k}\Phi_{1}|\leq C_{k}r^{3/2}\qquad\qquad\qquad|\nabla_{z}\Phi_{1}|+|\nabla_{t}^{k}(\nabla_{z}\Phi_{1})|\leq C_{k}r^{1/2} \tag{6.10}\] for any \(k\in\mathbb{N}\) and identically for \(\nabla_{\overline{z}}\).
As such, each additional term in \(\mathcal{B}_{\Phi_{0}}(\eta)\) has _either_ an additional power of \(r\)_or_ one fewer derivative of \(\eta\) compared to the terms for the product case. Using Corollary 6.5 and the bounds \[|\Phi_{0}|+|\nabla_{t}^{k}\Phi_{0}|\leq C_{k}r^{1/2} |\nabla_{z}\Phi_{0}|+|\nabla_{t}^{k}(\nabla_{z}\Phi_{0})|\leq C_{k }r^{-1/2}\] we see that all such terms have conormal regularity at least \(s=5/2\). The term \(\mathcal{R}(B_{0},\chi\eta)=O(1)\eta\) arising from the perturbation similarly has conormal regularity \(s>5/2\). They therefore factor through \(L^{5/2,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\) and can be added to the compact term \(K\). In addition, changing \(\frac{d}{dt}\) to the covariant derivative only contributes to the compact term. _Step 5:_ By Proposition 4.3 we may in general write \[\Psi_{\ell}=\Psi_{\ell}^{\mathrm{Euc}}+\zeta_{\ell}^{(m)}+\xi_{\ell}^{(m)}\] where the latter satisfy the bounds of Corollary 4.8. Set \[K_{1}(\eta):=\sum_{\ell}\langle\mathcal{B}_{\Phi_{0}}(\eta),\zeta_{\ell}^{(m)} \rangle\phi_{\ell} K_{2}(\eta):=\sum_{\ell}\langle\mathcal{B}_{\Phi_{0}}(\eta),\xi_{ \ell}^{(m)}\rangle\phi_{\ell}. \tag{6.14}\] We claim that the second factors \(K_{2}:L^{2,2}\to L^{5/2,2}\to L^{3/2}\) hence contributes a compact term. By Cauchy-Schwartz and the bound \(\|\xi_{\ell}^{(m)}\|_{L^{2}}\leq C_{m}|\ell|^{-2-m}\), \[\|K_{2}(\eta)\|_{5/2,2}^{2} = \sum_{\ell}|\langle\mathcal{B}_{\Phi_{0}}(\eta),\xi_{\ell}^{(m)} \rangle|^{2}|\ell|^{5}\] \[\leq \sum_{\ell}\|\mathcal{B}_{\Phi_{0}}(\eta)\|_{L^{2}}^{2}\ \|\xi_{\ell}^{(m)}\|_{L^{2}}^{2}\ | \ell|^{5}\] \[\leq C\|\mathcal{B}_{\Phi_{0}}(\eta)\|_{L^{2}}^{2}\sum_{\ell}\frac{| \ell|^{5}}{|\ell|^{4+2m}}\quad\ \leq\ C\|\eta\|_{L^{2,2}}^{2}\sum_{\ell}\frac{1}{|\ell|^{2m-1}}\leq C\|\eta\|_ {L^{2,2}}^{2}\] for, say, \(m=2\). In the last line we have used that \(|\mathcal{B}_{\Phi_{0}}(\eta)|\leq(|\eta|+|\eta^{\prime}|+|\eta^{\prime\prime}| )r^{-1/2}\) and the latter is integrable on normal disks. Likewise, we claim \(K_{1}\) factors through \(L^{3/2+\delta,2}\) for \(\delta<1/2\). This time, we apply Cauchy-Schwartz on each annulus \(A_{n\ell}\) (defined in 4.3). Write \(K_{1}=K_{1}^{\prime}+K_{1}^{\prime\prime}\) where \[K_{1}^{\prime}(\eta)=\langle\tfrac{1}{2}d\mathrm{Tr}_{g_{0}}(\dot{g}_{\eta}) \cdot\Phi_{0}+\tfrac{1}{2}\mathrm{div}_{g_{0}}(\dot{g}_{\eta})\cdot\Phi_{0}, \zeta_{\ell}\cdot\rangle K_{1}^{\prime\prime}(\eta)=\langle-\tfrac{1}{2}\dot{g} _{\eta}(e_{i},e_{j})e^{i}\cdot\nabla_{j}\Phi_{0},\zeta_{\ell}\rangle\] and we keep the superscript \((m)\) implicit. 
For the first of these, \[\|K_{1}^{\prime}(\eta)\|_{3/2+\delta,2}^{2} \leq C\sum_{\ell}\sum_{n}\|\eta^{\prime\prime}|\Phi_{0}|\|_{L^{2}(A_{n\ell})}^{2}\ \|\zeta_{\ell}\|_{L^{2}(A_{n\ell})}^{2}\ |\ell|^{3+2\delta}\] \[\leq C\sum_{\ell}|\ell|^{3+2\delta}\sum_{n}\|\eta^{\prime\prime}r^{1/2}\|_{L^{2}(A_{n\ell})}^{2}\ \frac{1}{|\ell|^{2}}\mathrm{Exp}\left(-\frac{n}{c_{1}}\right)\] Then, since \(r\sim\frac{(n+1)R_{0}}{|\ell|}\) on \(A_{n\ell}\), and each has area \(O(|\ell|^{-2})\), the above is bounded by \[\leq C\|\eta^{\prime\prime}\|_{L^{2}(S^{1})}^{2}\sum_{\ell}|\ell|^{3+2\delta}\sum_{n}\frac{(n+1)^{3}}{|\ell|^{5}}\mathrm{Exp}\left(-\frac{n}{c_{1}}\right) \leq C\|\eta^{\prime\prime}\|_{L^{2}(S^{1})}^{2}\sum_{\ell}\frac{1}{|\ell|^{2-2\delta}}\ \leq\ C\|\eta\|_{L^{2,2}}^{2}.\] The \(K_{1}^{\prime\prime}\) term is the same except we first use the Fourier mode restriction that \(\zeta_{\ell}\) has only Fourier modes \(p\) with \(\ell-\frac{|\ell|}{2}\leq p\leq\ell+\frac{|\ell|}{2}\) to write \(1\sim\frac{i\hat{c}_{\ell}}{|\ell|}\) and then integrate by parts as in _Step 2_.

### The Index of \(\mathcal{L}_{\Phi_{0}}\)

In this section we complete the proof of Theorem 6.1. This follows from the following lemma about \(\mathcal{L}_{\Phi_{0}}\). The key role and Fredholmness of a similar map were originally observed in [49]. Here, we present a simplified proof. **Lemma 6.9**.: When Assumption 2 holds, \[\mathcal{L}_{\Phi_{0}}:L^{2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\to L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\] is an elliptic pseudo-differential operator of Index \(0\). To begin, we have the following fact. Let \(a(t)\in C^{\infty}(\mathcal{Z}_{0};\mathbb{C})\) be a smooth function and let \[[H,a]=H\circ a(t)-a(t)\circ H\] denote the commutator. **Claim 6.9.1**.: The commutator \[[H,a]:L^{m,2}(\mathcal{Z}_{0};\mathbb{C})\to L^{m+1,2}(\mathcal{Z}_{0};\mathbb{C})\] is a smoothing operator of order \(1\). Proof.: Multiplication by \(a(t)\) and \(H\) are both elliptic pseudodifferential operators of order \(0\), hence so is the commutator. Using the composition property of principal symbols, its principal symbol of order \(0\) is \[\sigma_{0}([H,a])=\sigma_{0}(H)\sigma_{0}(a)-\sigma_{0}(a)\sigma_{0}(H)=0\] hence it is a pseudodifferential operator of order \(-1\). We now prove the lemma: Proof of Lemma 6.9.: Given \(\xi\in L^{2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\simeq L^{2}(\mathcal{Z}_{0};\mathbb{C})\) we define a pseudo-inverse. Set \[\mathcal{L}_{\Phi_{0}}^{\star}(\xi(t))=\overline{c}(t)H\xi(t)-d(t)\overline{\xi(t)}. \tag{6.15}\] Using Claim 6.9.1 to move \(H\) past combinations of the smooth functions \(c(t),d(t)\) and their conjugates (and noting \(H^{2}=Id\)), we compute \[\mathcal{L}_{\Phi_{0}}\circ\mathcal{L}_{\Phi_{0}}^{\star}(\xi(t)) = (H\circ c(t)-d(t)\circ\mathrm{conj})\big{(}(\overline{c}(t)H-d(t)\circ\mathrm{conj})(\xi(t))\big{)}\] \[= Hc\overline{c}H\xi-dc\overline{H\xi}-Hcd\overline{\xi}+d\overline{d}\xi\] \[= (|c|^{2}+|d|^{2})\xi+[H,|c|^{2}]H\xi-dc(\overline{H\xi}+H\overline{\xi})-[H,cd]\overline{\xi}\] \[= ((|c|^{2}+|d|^{2})Id+K)\xi\] for a smoothing operator \(K\). In the last line we have used \(\overline{H\xi}+H\overline{\xi}=2\overline{\xi}_{0}\) where \(\xi_{0}\) is the zeroth Fourier mode of \(\xi\), which is clearly a smoothing operator. It follows that \[\frac{1}{|c|^{2}+|d|^{2}}\mathcal{L}_{\Phi_{0}}^{\star}\] provides a right pseudo-inverse for \(\mathcal{L}_{\Phi_{0}}\) (commuting the denominator past \(H\) contributes to the compact term).
An equivalent calculation for the reverse composition shows it is also a left pseudo-inverse, thus \(\mathcal{L}_{\Phi_{0}}\) is Fredholm. Since \(\pi_{1}(\mathbb{C}^{2}-\{0\},\ast)\) is trivial, the pair \((c(t),d(t))\) is homotopic through pairs satisfying the condition \(|c(t)|^{2}+|d(t)|^{2}>0\) to the constant pair \((1,0)\). The operator \(\mathcal{L}_{\Phi_{0}}\) is therefore homotopic through Fredholm operators to the operator of the constant pair \((1,0)\), which is the Hilbert transform \(H\); since \(H^{2}=Id\), this is invertible, hence \(\mathcal{L}_{\Phi_{0}}\) has index \(0\). Theorem 6.1 is now immediate: Proof of Theorem 6.1.: Lemma 6.7 shows that the operator \[\mathrm{ob}^{-1}(\mathcal{B}_{\Phi_{0}}(\eta))=-\frac{3|\mathcal{Z}_{0}|}{2}(\Delta+1)^{-\frac{3}{4}}\mathcal{L}_{\Phi_{0}}(\eta^{\prime\prime}(t))+K\] is given as the sum of the following compositions: where the diagonal arrow is the inclusion, hence compact. All the top arrows are Fredholm of Index \(0\) using Lemma 6.9, and Theorem 6.1 therefore follows from the composition law for pseudodifferential operators. Given Theorem 6.1, we now impose one more tacit assumption that this Fredholm operator of Index zero is actually invertible. This is expected to hold generically (see [25]), though we do not prove such a result here. At the end of Section 8, this assumption can be removed by the use of standard Kuranishi models. **Assumption 5*.** The index zero map \(T_{\Phi_{0}}:L^{2,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\to L^{3/2,2}(\mathcal{Z}_{0},\mathcal{S}_{\mathcal{Z}_{0}})\) is an isomorphism.

## 7 Nash-Moser Theory

As explained in the introduction, deducing the non-linear deformation result (Theorem 1.4) from the linear one (Theorem 1.3) requires the Nash-Moser Implicit Function Theorem because of the loss of regularity in the operator \(T_{\Phi_{0}}\). This section gives a brief and practical introduction to the framework of Nash-Moser Theory and states the relevant version of the implicit function theorem. The most complete reference for the full abstract theory is [18]. Here, we more closely follow the expositions in [1, 46, 48] which are more modest in scope but suffice for our purposes.

### Tame Frechet Spaces

Let \(\mathcal{X},\mathcal{Y}\) be Frechet spaces given as the intersection of families of Banach spaces \[\mathcal{X}:=\bigcap_{m\geqslant 0}X_{m}\qquad\qquad\qquad\mathcal{Y}:=\bigcap_{m\geqslant 0}Y_{m} \tag{7.1}\] whose norms are monotonically increasing so that \[\|x\|_{0}\leqslant\|x\|_{1}\leqslant\ldots\leqslant\|x\|_{m},\] and likewise for \(\mathcal{Y}\). The topologies on \(\mathcal{X},\mathcal{Y}\) are the ones generated by the countable collection of norms, i.e. a set \(U\) is open if and only if for each point \(x\in U\) there are \(r>0\) and \(m\geqslant 0\) such that the ball \(\{y\ |\ \|y-x\|_{m}<r\}\) measured in the \(m\)-norm is contained in \(U\). **Definition 7.1**.: A Frechet space \(\mathcal{X}\) is said to be **tame** if it satisfies the following two criteria: 1. For all \(m_{1}<m<m_{2}\) the interpolation inequality \[\|x\|_{m}\leqslant C_{m,m_{1},m_{2}}\|x\|_{m_{1}}^{\alpha}\|x\|_{m_{2}}^{1-\alpha}\] holds where \(\alpha=\frac{m_{2}-m}{m_{2}-m_{1}}\). 2. \(\mathcal{X}\) is equipped with a family of smoothing operators \[S_{\varepsilon}:\mathcal{X}\to\mathcal{X}\] for all \(\varepsilon\in(0,1]\) satisfying the following conditions. 1. \(\|S_{\varepsilon}x\|_{n}\leqslant C_{mn}\varepsilon^{m-n}\|x\|_{m}\) for \(n\geqslant m\) and \(\|S_{\varepsilon}x\|_{n}\leqslant C_{mn}\|x\|_{m}\) for \(n\leqslant m\). 2. \(\|S_{\varepsilon}x-x\|_{m}\leqslant C_{mn}\varepsilon^{n-m}\|x\|_{n}\) for \(n\geqslant m\). 3.
\(\|\frac{d}{d\varepsilon}S_{\varepsilon}x\|_{n}\leqslant C_{mn}\varepsilon^{m-n-1}\|x\|_{m}\) for all \(m,n\geqslant 0\). In practice, most reasonable choices of families of norms coming from Sobolev or Hölder norms are tame. Roughly speaking, smoothing operators \(S_{\varepsilon}\) are usually constructed by truncating local Fourier transforms at radius \(\varepsilon^{-1}\). In particular, the Frechet spaces introduced in Section 8.4 are tame and possess smoothing operators constructed effectively in this fashion. Given two tame Frechet spaces \(\mathcal{X}\) and \(\mathcal{Y}\), **Definition 7.2**.: A **tame Frechet map** on an open subset \(U\subseteq\mathcal{X}\) \[\mathcal{F}:U\to\mathcal{Y}\] is a smooth map of vector spaces such that there is an \(r\) for which the estimate \[\|\mathcal{F}(x)\|_{m}\leqslant C_{m}\ (1+\|x\|_{m+r}) \tag{7.2}\] holds for all sufficiently large \(m\). The definitions of tame spaces and maps extend naturally to define a category of tame Frechet manifolds with tame Frechet maps between them (see [18] for details). The key point about tame estimates is that each norm depends only on a fixed finite number \(r\) of norms larger than it. Thus, for example, a map with an estimate of the form (7.2) where \(r=2m\) would not be tame.

### The Implicit Function Theorem

Before stating a precise version of the Nash-Moser Implicit Function Theorem, let us briefly give some intuition. Here, our exposition follows [48]. Suppose that \(\mathcal{F}:\mathcal{X}\to\mathcal{Y}\) is a map with \(\mathcal{F}(0)=0\), and we wish to solve \[\mathcal{F}(x)=f \tag{7.3}\] for \(f\in\mathcal{Y}\) small. When \(\mathcal{X}\) and \(\mathcal{Y}\) are Banach spaces, the (standard) Implicit Function Theorem is proved using Newton iteration and the Banach Fixed Point Theorem. More specifically, one begins with an initial approximation \(x_{0}=0\), and (provided that \(\mathrm{d}_{x_{0}}\mathcal{F}\) is invertible) defines \[x_{k+1}=x_{k}+(\mathrm{d}_{x_{0}}\mathcal{F})^{-1}(f-\mathcal{F}(x_{k})). \tag{7.4}\] The sequence \(x_{k}\to x_{\infty}\) then converges to a unique fixed point solving equation (7.3) for \(f\in\mathcal{Y}\) sufficiently small. Alternatively, one can modify the iteration step (7.4) by inverting \(\mathrm{d}\mathcal{F}\) at \(x_{k}\) instead of at \(x_{0}\), taking \[x_{k+1}=x_{k}+(\mathrm{d}_{x_{k}}\mathcal{F})^{-1}(f-\mathcal{F}(x_{k})). \tag{7.5}\] This iteration scheme has a much faster rate of convergence: like \(\sim 2^{-2^{k}}\). Consider now the case of \(\mathcal{X},\mathcal{Y}\) tame Frechet spaces when \(\mathrm{d}\mathcal{F}\) displays a loss of regularity of \(r\). Given an initial bound on \(f\in Y_{m}\), then \(x_{1}\) is bounded only in \(X_{m-r}\), thus \(f-\mathcal{F}(x_{1})\) in \(Y_{m-r}\) and \(x_{2}\) in \(X_{m-2r}\). In this way, the standard Newton iteration scheme will exhaust the prescribed regularity in a finite number of steps. To circumvent this loss of regularity, Nash introduced smoothing operators at each stage. More precisely, for some \(\varepsilon_{k}\in(0,1]\), we set \[x_{k+1}=x_{k}+(\mathrm{d}_{S_{\varepsilon_{k}}(x_{k})}\mathcal{F})^{-1}S_{\varepsilon_{k}}(f-\mathcal{F}(x_{k})), \tag{7.6}\] where the smoothing operators in the subscript are those on \(\mathcal{X}\) and in the argument those on \(\mathcal{Y}\). The key point is that the rate of convergence is rapid enough to overcome the disruption of the smoothing operators, but only if we use this smoothing to modify the improved iteration (7.5), rather than the original iteration (7.4).
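The scheme (7.6) is easy to experiment with in a model problem. The sketch below is a schematic illustration only: the toy map \(\mathcal{F}(u)=u+(u^{\prime})^{2}\) on trigonometric polynomials, the Fourier-truncation smoothing operators, and the schedule \(\varepsilon_{k}=2^{-(k+1)}\) are all ad hoc choices, and no claim is made that this toy problem reflects the delicate estimates required in the actual theorem. It only displays the structure of one step: smooth the current iterate, invert the linearization there, and apply the inverse to the smoothed residual.

```python
import numpy as np

N = 64
t = 2 * np.pi * np.arange(N) / N
l = np.fft.fftfreq(N, d=1.0 / N)                      # integer Fourier frequencies

# spectral differentiation matrix: D u approximates u'
E = np.fft.fft(np.eye(N), axis=0)
D = np.real(np.fft.ifft(1j * l[:, None] * E, axis=0))

def smooth(u, eps):
    """S_eps: truncate Fourier modes with |l| > 1/eps (cf. Definition 7.1)."""
    uh = np.fft.fft(u)
    uh[np.abs(l) > 1.0 / eps] = 0.0
    return np.real(np.fft.ifft(uh))

def F(u):                                             # toy nonlinear map F(u) = u + (u')^2
    return u + (D @ u) ** 2

def dF_inv(u, g):                                     # invert the linearization dF_u(v) = v + 2 u' v'
    A = np.eye(N) + 2 * np.diag(D @ u) @ D
    return np.linalg.solve(A, g)

f = 0.05 * np.cos(t)                                  # small right-hand side
u = np.zeros(N)
for k in range(8):
    eps = 2.0 ** (-(k + 1))                           # eps_k -> 0: smoothing is gradually switched off
    res = f - F(u)
    u = u + dF_inv(smooth(u, eps), smooth(res, eps))  # the smoothed Newton step (7.6)
    print(k, np.abs(f - F(u)).max())                  # residual of the toy equation
```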
Thus, unlike to the Implicit Function Theorem on Banach spaces, the Nash-Moser Implicit Function Theorem requires the linearization be invertible on a neighborhood of the initial guess, and requires bounds on the second derivatives to control the linearization over this neighborhood. Specifically, the theorem requires the following hypotheses on a tame map \(\mathcal{F}:U\to\mathcal{Y}\): **Hypothesis (I).** There exists a \(\delta_{0}>0\) and an \(m_{0}\geq 0\) such that for \(x\in U_{0}=B_{\delta_{0}}(0,m_{0})\cap\mathcal{X}\), the open ball of radius \(\delta_{0}\) measured in the \(m_{0}\) norm, then \[\mathrm{d}_{x}\mathcal{F}:\mathcal{X}\to\mathcal{Y}\] is invertible. **Hypothesis (II).** With \(x\in U_{0}\) as above, there are fixed \(s,s^{\prime}\in\mathbb{N}\) such that the unique solution \(u\) of \[\mathrm{d}_{x}\mathcal{F}(u)=g\] satisfies the tame estimate \[\|u\|_{m}\leq C_{m}\,\,\Big{(}\|g\|_{m+s}+\|g\|_{m_{0}}\cdot\|x\|_{m+s^{\prime }}\Big{)}. \tag{7.7}\] **Hypothesis (III).** With \(x\in U_{0}\) as above, there are fixed \(r,r^{\prime}\in\mathbb{N}\) such that the second derivative satisfies the tame estimate \[\|\mathrm{d}_{x}^{2}\mathcal{F}(u,\upsilon)\|_{m}\leq C_{m}\ \Big{(}\|u\|_{m+r}\|v\|_{m_{0}}\ +\ \|u\|_{m_{0}}\|v\|_{m+r}\ +\ \|u\|_{m_{0}}\|v\|_{m_{0}}\cdot(1+\|x\|_{m+r^{\prime}}) \Big{)}. \tag{7.8}\] For our purposes, we require a slight extension of the standard Nash-Moser Implicit Function Theorem that keeps track of subspaces that have some specified additional property, denoted (P). **Definition 7.3**.: A property (P) that is satisfied on linear (not necessarily closed) subspaces \(\mathbf{P}_{\mathcal{X}}\subseteq\mathcal{X}\) and \(\mathbf{P}_{\mathcal{Y}}\subseteq\mathcal{Y}\) is said to be **propagated** by the iteration scheme if \[u\in\mathbf{P}_{\mathcal{X}}\,\ g\in\mathbf{P}_{\mathcal{Y}} \Rightarrow S_{\varepsilon}(u)\in\mathbf{P}_{\mathcal{X}}\ \,\ \ S_{\varepsilon}(g)\in\mathbf{P}_{ \mathcal{Y}}\hskip 28.452756pt\forall\varepsilon\in(0,1]\] \[u\in\mathbf{P}_{\mathcal{X}} \Rightarrow \mathcal{F}(u)\in\mathbf{P}_{\mathcal{Y}}\] \[x\in\mathbf{P}_{\mathcal{X}}\,\ g\in\mathbf{P}_{\mathcal{Y}} \Rightarrow (\mathrm{d}_{x}\mathcal{F})^{-1}g\in\mathbf{P}_{\mathcal{X}}.\] In particular, in the iteration scheme (7.6), if \(f\) has property (P) then \(x_{k}\) has property (P) for all \(k\geq 0\). We will use the following version of the Nash-Moser Implicit Function Theorem. The proof is identical to that in [48], with the additional observation that Hypotheses **(I)**-**(III)** are only ever invoked at elements \(x_{k}\) occurring in the iteration, and at linear combinations of the \(x_{k}\) and their smoothings. The proof of smooth dependence on parameters is given in [18, III.1]. **Theorem 7.4**.: **(Nash-Moser Implicit Function Theorem)** Suppose that \(\mathcal{X}\) and \(\mathcal{Y}\) are tame Frechet spaces as in (7.1). Moreover, assume that a property (P) satisfied on linear subspaces \(\mathbf{P}_{\mathcal{X}}\subseteq\mathcal{X}\) and \(\mathbf{P}_{\mathcal{Y}}\subseteq\mathcal{Y}\) is propagated, and that Hypotheses **(I)**-**(III)** hold for \(x\in U_{0}\cap\mathbf{P}_{\mathcal{X}}\). 1. There exists an \(m_{1}\geq m_{0}\) depending on \(s,s^{\prime},r,r^{\prime}\) and a \(\delta_{1}\geq 0\) such that if \(f\in\mathcal{Y}\) with \[f\in\mathbf{P}_{\mathcal{Y}}\hskip 56.905512pt\text{and}\hskip 56.905512pt\|f\|_{m_{1}} \leq\delta_{1}\] then there exists a unique solution \(x\in\mathcal{X}\) of \[\mathcal{F}(x)=f.\] 2. 
Suppose, in addition, that \(\mathcal{F}\) and \(f\) are parameterized (via a smooth tame map) by another tame Frechet space \(\mathcal{P}\) with \(f_{p_{0}}=0\) at \(p_{0}\in\mathcal{P}\). If the Hypotheses **(I)**-**(III)** hold uniformly on an open neighborhood \(V_{0}\subset\mathcal{P}\) of \(p_{0}\) and \(\|f_{p}\|_{m_{1}}<\delta_{1}\) for all \(p\in V_{0}\), then the unique solution \(x_{p}\) of \[\mathcal{F}_{p}(x)=f_{p}\] also depends smoothly on \(p\) locally near \(p_{0}\). In case (B), smooth tame dependence on \(p\) means that we replace \(\|x\|_{m+s^{\prime}}\) and \(\|x\|_{m+r^{\prime}}\) on the right-hand sides of Hypothesis **(II)** and **(III)** by \(\|(p,x)\|_{m+s^{\prime}}\) and \(\|(p,x)\|_{m+r^{\prime}}\). Case (B) asserts that \[\mathcal{F}^{-1}(f_{p})\subset\mathcal{P}\times\mathcal{X}\] is locally a tame Frechet submanifold that is a graph over \(\mathcal{P}\). ## 8 Tame Estimates In this final section we complete the proofs of Theorem 1.4 and Corollary 1.5 by verifying the hypotheses of the Nash-Moser Implicit Function Theorem 7.4 for the operator \[\overline{\mathbb{D}}_{p}:\mathcal{P}\times\mathcal{X}\longrightarrow\mathcal{Y} \overline{\mathbb{D}}_{p}:=(\mathbb{D}_{p}-\Lambda\operatorname{Id }\,,\ 1-\|\Phi\|_{L^{2}}^{2})\] on tame Frechet spaces \(\mathcal{X}=\{(\eta,\Lambda,\varphi)\}\) and \(\mathcal{Y}=\{\psi,c\}\) introduced in Section 8.4. Here \(\Lambda,c\in\mathbb{R}\) and \(\mathcal{P}=\{(g,B)\}\) is the space of smooth metrics and perturbations (equipped with the standard Frechet structure arising from the \(L^{m,2}\) norms on \(Y\)). In our case, the property (P) that is propagated by the iteration scheme is polyhomogeneity of the spinor. Set: \[\mathbf{P}_{\mathcal{X}} := \{(\eta,\Lambda,\varphi)\in\mathcal{X}\ |\ \varphi\text{ is polyhomogenous with index set }\mathbb{Z}^{+}+\tfrac{1}{2}\}\] \[\mathbf{P}_{\mathcal{Y}} := \{\quad(\psi,c)\in\mathcal{Y}\ |\ \psi\text{ is polyhomogenous with index set }\mathbb{Z}^{+}-\tfrac{1}{2}\}\] Here, we use a slightly weaker notion of polyhomogeneity than is given in Definition 3.8. More specifically, we do not constrain the \(\theta\) modes, so that \(\varphi\in\mathbf{P}_{\mathcal{X}},\psi\in\mathbf{P}_{\mathcal{Y}}\) means that there are respectively asymptotic expansions \[\varphi\sim\begin{pmatrix}c(t,\theta)\\ d(t,\theta)\end{pmatrix}r^{1/2}\ \ +\ \sum_{n\geq 1}\sum_{p=0}^{n}\left( \begin{array}{c}c_{n,p}(t,\theta)\\ d_{n,p}(t,\theta)\end{array}\right)r^{n+1/2}(\log r)^{p} \tag{8.1}\] \[\psi\sim\begin{pmatrix}c(t,\theta)\\ d(t,\theta)\end{pmatrix}r^{-1/2}+\ \ \sum_{n\geq 1}\sum_{p=0}^{n}\left( \begin{array}{c}c_{n,p}(t,\theta)\\ d_{n,p}(t,\theta)\end{array}\right)r^{n-1/2}(\log r)^{p} \tag{8.2}\] where \(c_{n,p},d_{n,p}\in C^{\infty}(S^{1}\times S^{1})\) and \(\sim\) denotes convergence in the sense of Definition 3.7. This section is divided into six subsections. Subsections 8.1-8.3 cover preliminary material used to verify the hypotheses of the Nash-Moser theorem. Specifically, subsections 8.1 and 8.2 are devoted to lemmas used in the verification of the Hypothesis **(I)**. Then in subsection 8.3 the precise form of the derivative and second derivative of \(\mathbb{D}_{p}\) are derived using the non-linear version of Bourguignon-Gauduchon's Formula (5.7). Subsection 8.4 introduces the tame Frechet spaces \(\mathcal{X},\mathcal{Y}\), and Subsection 8.5 derives tame estimates verifying Hypotheses **(I)**-**(III)**. The final subsection 8.6 invokes Theorem 7.4 to complete the proofs. 
### The Obstruction Bundle

This subsection covers preliminary lemmas used in the verification of Hypothesis **(I)**, which asserts that the linearization \(\mathrm{d}\overline{\mathbb{D}}\) is invertible on a neighborhood of \(((g_{0},B_{0}),\mathcal{Z}_{0},\Phi_{0})\). Although the invertibility of the linearization, at the end of the proof, comes down to the fact that there is an open neighborhood of invertible operators around the identity in a Banach space, the proper context in which to invoke this fact is somewhat subtle. The first step is to upgrade the obstruction space \(\mathbf{Ob}(\mathcal{Z}_{0})\) to a vector bundle. This is the content of the current subsection. To motivate this construction briefly, observe that Corollary 6.2 (when Assumption 5* holds) implies the linearization at \((\mathcal{Z}_{0},\Phi_{0})\) for \(p=p_{0}\) is an invertible map after supplementing the domain and codomain with additional factors of \(\mathbb{R}\): \[\operatorname{d}_{(\mathcal{Z}_{0},\Phi_{0})}\overline{\mathbb{D}}_{p_{0}}=\begin{pmatrix}\Pi_{0}\mathcal{B}_{\Phi_{0}}-\Phi_{0}&0\\ (1-\Pi_{0})\mathcal{B}_{\Phi_{0}}&\overline{\not{D}}\end{pmatrix}\ :\ \begin{matrix}L^{2,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\oplus\mathbb{R}\\ \oplus\\ rH_{e}^{1}\end{matrix}\ \longrightarrow\ \begin{matrix}\mathbf{Ob}(\mathcal{Z}_{0})\cap H_{\text{b}}^{3/2}\\ \oplus\\ \mathrm{Range}(\not{D}|_{rH_{e}^{1}})\oplus\mathbb{R}\end{matrix} \tag{8.3}\] where \(\overline{\not{D}}=(\not{D}\,,\ \langle\,\cdot\,,\Phi_{0}\rangle)\). For a nearby parameter \(p\neq p_{0}\), however, the infinite-dimensional cokernel \(\ker(\not{D}_{p}|_{L^{2}})\) is tilted slightly with respect to that for \(p_{0}\). Because of the mixed regularity norm, the perturbation to the above linearization for \(p\neq p_{0}\) is not bounded in the top component (and therefore does not behave in a tame fashion). To avoid this and show the linearization at \(p\neq p_{0}\) is well-behaved, we must work in the analogous decomposition induced by \(p\). Thus the first step is to show the family of obstruction spaces forms a locally trivial vector bundle \(\mathbf{Ob}\) over an open ball \(V_{0}\subset\mathcal{P}\). Proving that \(\mathbf{Ob}\) forms a locally trivial vector bundle involves some rather subtle issues. Besides the standard issue that the "dimension" of the cokernel may jump (which is why the obstruction space was defined as a thickening of the cokernel in Section 4), several technical issues arise. The obvious approach to constructing a trivialization would be to argue that the construction of \(\mathbf{Ob}\) in Section 4 is continuous in the parameter \(p\). Although this appears intuitive, it is not clear such a statement has an easy proof, or is even necessarily true (see Remark 8.1). For this reason, _we do not use this natural approach to showing \(\mathbf{Ob}\) forms a Banach vector bundle over \(V_{0}\)_; instead, we construct a trivialization by projection. For this second approach, the key point is that (while an arbitrary projection operator on \(L^{2}\) has no reason to respect regularity) the projection to the cokernel is a pseudodifferential edge operator, so preserves the space \(H^{m}_{\mathrm{b}}\) by Corollary 2.12 item (C). **Remark 8.1**.: The subtle part of the constructions in Section 4 that makes continuity not obvious is the seemingly innocuous choice of indexing the spectrum of the Dirac operator on \(\mathcal{Z}_{0}\). As the metric \(g_{0}\) changes, so does the induced metric on \(\mathcal{Z}_{0}\).
In general, comparing the spectra of two different Riemannian metrics is quite a messy endeavor, and showing that the bounds of Proposition 4.3 for the basis \(\Psi_{\ell}\) are uniform and continuous is a subtle issue. While there may be ways to circumvent this since \(\mathcal{Z}_{0}=\sqcup S^{1}\), we avoid this approach, keeping an eye towards generalizing these results to higher dimensions. We begin by defining the bundle \(\mathbf{Ob}\to V_{0}\), where \(V_{0}\) is an open ball of radius \(\delta_{0}\) around \(p_{0}\) measured in the \(m_{0}\)-norm. Here, \(m_{0}\in\mathbb{N}\) is an integer to be chosen later (\(m_{0}=11\) works). Let \(p\in V_{0}\). By parallel transport on cylinders as in Section 5.1, we may think of the Dirac operator for every \(p\) as an operator on the spinor bundle \(S_{0}\), thus we tacitly write \[\not{D}_{p}:=\tau^{h}_{g_{0}}\circ\not{D}_{h,B}\circ(\tau^{h}_{g_{0}})^{-1} \tag{8.4}\] for the Dirac operator with respect to a metric \(h\) and perturbation \(B\) (and the fixed singular locus \(\mathcal{Z}_{0}\)) on the spinor bundle \(S_{0}\). By the (standard) Implicit Function Theorem with the Fredholm operator \(\not{D}_{p}^{\star}\not{D}_{p}:rH^{1}_{e}\to r^{-1}H^{-1}_{e}\), we observe **Lemma 8.2**.: For \(0<\delta_{0}\) sufficiently small, there is a unique eigenpair \((\Phi_{p},\mu_{p})\in rH^{1}_{e}\times\mathbb{R}\) such that \[\not{D}_{p}^{\star}\not{D}_{p}\Phi_{p}=\mu_{p}\Phi_{p}\] and equal to \((\Phi_{0},0)\) at \(p_{0}=(g_{0},B_{0})\). Moreover, these satisfy \[\|\Phi_{p}-\Phi_{0}\|_{H^{m_{0}-2,1}_{b,e}}\ +\ |\mu_{p}| \leq C\|p-p_{0}\|_{m_{0}}.\] and \(\Phi_{p}\) is polyhomogeneous with index set \(\mathbb{Z}^{+}+\frac{1}{2}\). Next, let \(rH^{\perp}_{e}\) denote the \(L^{2}\)-orthogonal complement of \(\Phi_{p}\) in \(rH^{1}_{e}\). A trivial extension of the arguments in Section 2 shows the following lemma. In the statement, \(\not{D}_{p}^{\star}\) denotes the adjoint of the Dirac operator with respect to the \(L^{2}\)-inner product formed using \(g_{0}\). **Lemma 8.3**.: For \(0<\delta_{0}\) sufficiently small, the following hold: * \(\not{D}_{p}:rH^{\perp}_{e}\to L^{2}\) is injective with closed range. * \(\not{D}_{p}^{\star}\not{D}_{p}:rH^{\perp}_{e}\to r^{-1}H^{-1}_{e}/\mathbb{R}\Phi_{p}\) is an isomorphism and if \(\langle u,\Phi_{p}\rangle_{L^{2}}=0\) then there is a uniform bound on the solution operator \[P_{p}(f)=u\quad\text{ s.t. }\not{D}_{p}^{\star}\not{D}_{p}u=f\mod\Phi_{p}\qquad\Rightarrow\qquad\|P_{p}\|_{r^{-1}H^{-1}_{e}\to rH^{1}_{e}}\leq C.\] As a result of the first bullet point, \(\mathfrak{R}:=\operatorname{Range}(\not{D}_{p}|_{rH^{\perp}_{e}})\subseteq L^{2}\) is a smooth Banach subbundle, and we may define **Definition 8.4**.: The **Obstruction Bundle** denoted \(\mathbf{Ob}\to V_{0}\) is defined by the \(L^{2}\)-orthogonal complement of \(\mathfrak{R}\) so that there is an orthogonal splitting \[L^{2}=\mathbf{Ob}\oplus\mathfrak{R}\] as smooth Banach vector bundles over \(V_{0}\). For \(m\leq m_{0}-3\), we denote the higher-regularity versions by \(\mathbf{Ob}^{m}:=\mathbf{Ob}\cap H^{m}_{\mathrm{b}}\) and \(\mathfrak{R}^{m}:=\mathfrak{R}\cap H^{m}_{\mathrm{b}}\).
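In concrete terms (anticipating Lemma 8.16 below), the splitting of Definition 8.4 is implemented by the projection operators that appear repeatedly in what follows: with \(P_{p}\) the solution operator of Lemma 8.3, set
\[\Pi^{\mathrm{Range}}_{p}=\not{D}_{p}P_{p}\not{D}^{\star}_{p}\qquad\qquad\qquad\Pi^{\mathrm{ker}}_{p}=1-\not{D}_{p}P_{p}\not{D}^{\star}_{p}.\]
Indeed, if \(f=\not{D}_{p}u\) with \(u\in rH^{\perp}_{e}\), then the second bullet point of Lemma 8.3 gives \(P_{p}\not{D}^{\star}_{p}f=u\), so that \(\Pi^{\mathrm{Range}}_{p}f=f\) and \(\Pi^{\mathrm{ker}}_{p}f=0\).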
**Proposition 8.5**.: Provided \(\delta_{0}\) is sufficiently small, then for every \(m\leq m_{0}\) and in particular for \(m=5/2\), the map \[\Xi_{p}:\mathbf{Ob}^{m}_{p} \longrightarrow L^{m,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\oplus\mathbb{R}\] \[\Psi \mapsto \mathrm{ob}^{-1}_{0}(\Psi-\not{D}_{0}P_{0}\not{D}^{\star}_{0}\Psi).\] provides a local trivialization of the smooth Banach vector bundle \(\mathbf{Ob}^{m}\). Proof.: We begin with the \(m=0\) case. Since \(\mathrm{ob}^{-1}_{0}\) is a bounded linear isomorphism with bounded inverse onto \(\mathbf{Ob}_{0}\), it suffices to show that the projection \[\mathbf{Ob}_{p}\rightarrow\mathbf{Ob}_{0} \Psi\mapsto\Pi_{0}\Psi=(\Psi-\not{D}_{0}P_{0}\not{D}^{\star}_{0}\Psi) \tag{8.5}\] is an isomorphism (where \(\mathbf{Ob}_{0},\not{D}_{0},P_{0}\) denote those for the original parameter \(p_{0}\)). Indeed, the reverse projection \[\mathbf{Ob}_{0}\rightarrow\mathbf{Ob}_{p} \Psi\mapsto\Pi_{p}\Psi=(\Psi-\not{D}_{p}P_{p}\not{D}^{\star}_{p}(\Psi))\] is an inverse of (8.5) up to small error of size \(O(\delta)\). To see this, write an element \(\Psi=(\psi,c\Phi_{0})\in\mathbf{Ob}_{0}\); the composition on the first component is \[\psi\mapsto \Pi_{0}(\psi-\not{D}_{p}P_{p}\not{D}^{\star}_{p}(\psi))\] \[= \Pi_{0}(\psi-\not{D}_{p}P_{p}\mathfrak{d}(\psi))\] \[= \Pi_{0}(\psi+O(\delta))\] where \(\mathfrak{d}=(\not{D}^{\star}_{p}-\not{D}^{\star}_{0})\) is a first order operator with \(\left\|\mathfrak{d}\right\|_{L^{2}\rightarrow r^{-1}H^{m-1}_{e}}\leq C\delta\). This holds since \(\not{D}_{p}P_{p}\not{D}^{\star}_{0}\psi=0\) by definition for \(\psi\in\mathbf{Ob}_{0}\). Taking the reverse projection given by (8.5) shows \[\Pi_{0}(\psi+O(\delta))=\psi+O(\delta)\] again. Similar arguments apply to the \(c\Phi_{0}\) component, and we conclude that \[\Pi_{0}\circ\Pi_{p}=Id+O(\delta):\mathbf{Ob}_{0}\rightarrow\mathbf{Ob}_{0}\] and so is an isomorphism for \(\delta\) sufficiently small. The proof for \(0<m\leq m_{0}-3\) is identical since \[\not{D}_{p}P_{p}\not{D}^{\star}_{p}:H^{m}_{\mathrm{b}}\to H^{m}_{\mathrm{b}} \tag{8.6}\] is an (edge) pseudodifferential operator of order \(0\), hence preserves regularity for \(m<m_{0}-2\), and likewise for the reverse projection. To prove (8.6), first note that \(\Phi_{p}\in H^{m}_{\mathrm{b}}\) since it is polyhomogeneous by Proposition 3.8. The result then follows easily from the parameter \(p\)-version of Corollary 2.12. ### Invertibility on a Neighborhood In this subsection we verify a preliminary version of Hypothesis **(I)** for the linearization of the universal Dirac operator. We show, in particular, that the linearization at \((p_{0},\mathcal{Z}_{0},\Phi_{0})\) is invertible as a bundle map, i.e. on \(\mathbf{Ob}_{p}\) for \(p\in V_{0}\). The full verification of Hypothesis **(I)** is completed in Section 8.5 and follows easily from this after deriving the form of the linearization at a general \((p,\mathcal{Z}_{0},\Phi_{0})\) in the following subsection. Extending the map \(T_{\Phi_{0}}\) from Section 6 by adding the \(\lambda\)-component, define \[\overline{T}_{\Phi_{0}}=\mathrm{ob}_{0}^{-1}\circ\Pi_{0}\Big{[}\mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\overline{\mathbb{D}}_{0}(\eta,0,\lambda)\Big{]}\] as the obstruction component of the linearization at the central fiber \(\mathbf{Ob}_{0}\).
Assumption 5* and elliptic bootstrapping imply that this map is an isomorphism \(\overline{T}_{\Phi_{0}}:L^{3,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\oplus\mathbb{R}\to L^{5/2,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\oplus\mathbb{R}\). **Proposition 8.6**.: Provided that \(m_{0}\geq 10\) and \(0<\delta_{0}\ll 1\) is sufficiently small, then for \(p\in V_{0}\), the \(\mathbf{Ob}_{p}\) component of the linearization at \((p_{0},\mathcal{Z}_{0},\Phi_{0})\) in the trivialization provided by \(\Xi_{p}\) \[\Xi_{p}\circ\Pi_{p}\circ(\mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\overline{\mathbb{D}}_{0}):L^{3,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\oplus\mathbb{R}\longrightarrow L^{5/2,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}})\oplus\mathbb{R} \tag{8.7}\] is an isomorphism, and the estimate \[\|\eta\|_{L^{3,2}}+|\lambda|\leq C\|\mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\overline{\mathbb{D}}_{0}(\eta,0,\lambda)\|_{\mathbf{Ob}^{5/2}\oplus L^{2}} \tag{8.8}\] holds uniformly on \(V_{0}\). Proof.: At \(p=p_{0}\), one has \(\Xi_{0}\circ\Pi_{0}=\mathrm{ob}^{-1}\circ\Pi_{0}^{2}=\mathrm{ob}^{-1}\circ\Pi_{0}\), so the map (8.7) is simply \(\overline{T}_{\Phi_{0}}\), hence an isomorphism. It therefore suffices to show that for \(p\in V_{0}\) with \(m_{0}\geq 10\) and \(0<\delta_{0}\ll 1\) sufficiently small, the following parameter \(p\)-version of the conormal regularity Lemma 6.4 from Section 6.1 holds: for \(\overline{\eta}=(\eta,\lambda)\in L^{3,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\oplus\mathbb{R}\) one has \[\Pi_{p}\overline{\mathcal{B}}_{\Phi_{0}}(\overline{\eta})\in\mathbf{Ob}_{p}\cap H_{\mathrm{b}}^{5/2}\qquad\quad\text{and}\qquad\quad\|\Pi_{p}(\overline{\mathcal{B}}_{\Phi_{0}}(\overline{\eta}))\|_{5/2}\leq C\|\overline{\eta}\|_{L^{3,2}}. \tag{8.9}\] where the latter estimate holds uniformly over \(p\in V_{0}\). Given this, Proposition 8.5 shows that (8.7) is a continuous family of bounded maps between fixed Banach spaces, hence is an isomorphism for \(\delta\) sufficiently small. To conclude the lemma, it is therefore enough to establish (8.9). This is proved by writing \(\overline{\mathcal{B}}_{\Phi_{0}}\) in the Fermi coordinates of the new parameter \(p=(g,B)\). The constructions of Section 4 apply equally well to the Dirac operator \(\not{D}_{p}\) written in the Fermi coordinates and associated trivialization of the metric \(g\) rather than \(g_{0}\). As a result, there is a basis \(\Psi_{\ell}^{p},\Pi_{p}(\Phi_{p})\) of \(\mathbf{Ob}_{p}\) satisfying the conditions of Propositions 4.2 - 4.3 with the bounds on \(\zeta_{\ell}^{p},\xi_{\ell}^{p}\) being uniform in \(p\) (the only minor caveat in this is that we use the \(L^{2}\) inner product induced by \(g_{0}\), so that \(\not{D}_{p}\) is no longer formally self-adjoint). In a similar fashion to Lemma 4.17, we obtain a bounded linear isomorphism \(\mathrm{ob}_{p}\oplus\iota_{p}:L^{m,2}(\mathcal{Z}_{0};\mathcal{S}_{\mathcal{Z}_{0}}^{p})\oplus\mathbb{R}\rightarrow\mathbf{Ob}_{p}\cap H_{\mathrm{b}}^{m}\) for \(m\leq 3\) provided, say, \(m_{0}\geq 10\). The projection to \(\mathbf{Ob}_{p}\) is calculated by the sequence of inner-products \[\mathrm{ob}_{p}\Big{(}\sum_{\ell\in\mathbb{Z}}\langle\tau_{0}^{p}\mathcal{B}_{\Phi_{0}}(\eta),\Psi_{\ell}^{p}\rangle_{\mathbb{C}}\ \phi_{\ell}^{p}\Big{)}\] (excluding the \(\Pi_{p}(\Phi_{p})\) component, which is automatically bounded in \(H_{\mathrm{b}}^{m}\) for \(m+3<m_{0}\) by Lemma 8.2).
Here, \(\tau_{0}^{p}\) is the parallel transport map \(S_{0}\to S_{p}\) as in Section 5.1, whose \(C^{m}\)-norm is bounded by the \((m+4)\)-norm of \(p\) by the smooth dependence of ODEs on parameters. Thus we may write \(\mathcal{B}_{\Phi_{0}}\) as a collection of terms \[v^{\prime}(t)\Xi_{0}(t_{p},x_{p},y_{p})\qquad\quad\text{or}\qquad\quad v^{ \prime\prime}(t)\Xi_{1}(t_{p},x_{p},y_{p}) \tag{8.10}\] where \((t_{p},x_{p},y_{p})\) are the Fermi coordinates constructed using the parameter \(p\), and apply Case (C) of Corollary 6.5. Since \(r_{p}\sim r\), the bounds \(|\nabla_{\nu}^{m}\Xi_{i}|\leq Cr^{i-1/2}\) hold for \(m\leq m_{0}\) equally well for \(r_{p}\), and it remains to write \(v^{\prime}(t)\) (resp. \(v^{\prime\prime}(t)\)) in terms of the Fermi coordinates \((t_{p},x_{p},y_{p})\). Expanding in Taylor series along \({\cal Z}_{0}\) in the norm directions, \(v^{\prime}(t)=w(t_{p})+F(t_{p},x_{p},y_{p})\) where \(w(t_{p})\in L^{2,2}(S^{1})\) and \[|F(t_{p},x_{p},y_{p})| \leqslant Cr_{p}(|\partial_{x_{p}}v^{\prime}(t)|+|\partial_{y_{p}}v^{ \prime}(t)|\] \[\leqslant Cr_{p}^{2}|v^{\prime\prime}(t)|\] where we have written \(x_{p}(t)=a_{0}(t)x+b_{0}(t)y+O(r)t+\ldots\) and likewise for \(y_{p}\). The crucial point here is that, although expanding in Taylor series seems at first to only exchange orders of growth for tangential derivatives (thus preserve conormal regularity), an extra factor of \(r_{p}\) arises since normal planes in the metrics of \(p_{0}\) and \(p\) differ to first order by a linear coordinate change of \(x,y\). Expanding the integral as in the proof of Lemma 6.4 and using Cauchy-Schwartz (along with arguments akin to those in _Step 5_ of the proof of Theorem 6.1 for \(\zeta_{\ell}^{p},\xi_{\ell}^{p}\)) shows that the type of term on the left in (8.10) has conormal regularity \(5/2\) thus satisfies (8.9). The term on the right is identical after multiplying by \(1\) in the form \((i\partial_{t}+1)(i\partial_{t}+1)^{-1}\) and integrating by parts. This establishes (8.9), completing the proof. ### Quadratic and Error Terms In this section we employ the non-linear version of Bourguignon-Gauduchon's formula [3] for the metric variation of the Dirac operator to calculate the linearization and second derivative at tuple \((p,{\cal Z},\Phi)\) near \((p_{0},{\cal Z}_{0},\Phi_{0})\) as well as the initial error term \(f_{p}\). To state Bourguignon-Gauduchon's formula, let \(p=(h,B)\) be a parameter pair of a metric and perturbation on \(Y\). Via the parallel transport map \(\tau_{g_{0}}^{h}\) we can view the Dirac operator \(\not{D}_{p}:\Gamma(S_{g_{0}})\to\Gamma(S_{g_{0}})\) on the spinor bundle associated to the metric \(g_{0}\) (here, we omit \(\tau\) from the notation as in (8.4)). Let \(a_{g_{0}}^{h},{\mathfrak{a}}\) be defined respectively by \[h(X,Y)=g_{0}(a_{g_{0}}^{h}X,Y)\hskip 56.905512pt{\mathfrak{a}}=(a_{g_{0}}^{h})^ {-1/2}\] where the latter is understood via the eigenvalues of \((a_{g_{0}}^{h})^{\star}a_{g_{0}}^{h}\), which are non-zero for \(h\) sufficiently close to \(g_{0}\). 
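As a simple sanity check of these conventions (a special case recorded only for orientation): for a constant rescaling \(h=(1+\epsilon)g_{0}\) one has \(a^{h}_{g_{0}}=(1+\epsilon)\mathrm{Id}\) and \({\mathfrak{a}}=(1+\epsilon)^{-1/2}\mathrm{Id}\). Since \({\mathfrak{a}}\) is then parallel and \(\nabla^{h}=\nabla^{g_{0}}\), the correction terms in the formula of the next theorem vanish and it reduces to
\[\not{D}_{h}=(1+\epsilon)^{-1/2}\,\not{D}_{g_{0}},\]
as expected for a rescaled metric.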
**Theorem 8.7**.: **(Bourguignon-Gauduchon, [3])** The Dirac operator \(\not{D}_{p}\) is given by \[\not{D}_{p}\Psi=\left(\sum_{i}e^{i}.\nabla^{B}_{{\mathfrak{a}}(e_{i})}\ +\ \frac{1}{4}\sum_{ij}e^{i}e^{j}.\left({\mathfrak{a}}^{-1}(\nabla^{g_{0}}_{{\mathfrak{a}}(e_{i})}{\mathfrak{a}})e^{j}+{\mathfrak{a}}^{-1}(\nabla^{h}-\nabla^{g_{0}})_{{\mathfrak{a}}(e_{i})}{\mathfrak{a}}(e^{j})\right).\right)\Psi \tag{8.11}\] where \(e^{i}\) and \(.\) are an orthonormal basis and Clifford multiplication for \(g_{0}\), and \(\nabla^{h}\) denotes the unperturbed spin connection of the metric \(h\) and likewise for \(g_{0}\). #### 8.3.1 Error Terms: We begin by applying Theorem 8.7 to calculate the initial error terms \(f_{p}\) for the application of the Nash-Moser Implicit Function Theorem (7.4). The initial error is given by \[\boxed{f_{p}:=\not{D}_{p}\Phi_{0}.} \tag{8.12}\] Let \(U_{1}\subset{\cal P}\) denote the ball around \(p_{0}\) of radius \(\delta_{1}\) measured in the \(m_{1}+3\) norm. Here, \(m_{1}\) (like \(m_{0}\)) is an integer to be chosen later. To simplify notation, we omit the reference to the spaces from the norms, so that e.g. \(\|-\|_{m}\) means the \(H^{m,1}_{{\rm b},e}\)-norm for elements of the domain and the \(H^{m}_{\rm b}\)-norm for elements of the codomain. **Lemma 8.8**.: The Dirac operator at parameter \(p\) can be written \[\not{D}_{p}=\not{D}_{0}+{\mathfrak{D}}_{p} \tag{8.13}\] where the latter satisfies \[\|{\mathfrak{D}}_{p}\varphi\|_{m_{1}}\leqslant C_{m_{1}}\|p\|_{m_{1}+3}\|\varphi\|_{m_{1}}. \tag{8.14}\] It follows that \(\|f_{p}\|_{m_{1}}\leqslant C\delta_{1}\). Proof.: Write \(p=(g_{0},B_{0})+(k,b)\) for \(\|(k,b)\|_{m_{1}+3}\leqslant\delta\). In an orthonormal frame for \(g_{0}\) we have \(a_{g_{0}}^{g_{0}+k}=\mathrm{Id}+k\), where we also use \(k\) to denote the corresponding matrix in this orthonormal frame. Then \({\mathfrak{a}}=(\mathrm{Id}+k)^{-1/2}\). Substituting this into Theorem 8.7 shows that \[\not{D}_{p}\varphi=\not{D}_{0}\varphi+\mathfrak{d}_{1}\varphi+\mathfrak{d}_{0}\varphi\] where \(\mathfrak{d}_{1},\mathfrak{d}_{0}\) are respectively a first order and zeroth order operator satisfying \(\|\mathfrak{d}_{1}\varphi\|_{m_{1}}\leqslant C\|p\|_{m_{1}+3}\|\varphi\|_{m_{1}}\) and \(\|\mathfrak{d}_{0}\varphi\|_{m_{1}}\leqslant C\|p\|_{m_{1}+3}\|\varphi\|_{m_{1}}\). To see this, note that the coefficients of \(\mathfrak{d}_{1}\) are formed from sums and products of entries of \(k\) (by expanding \((\mathrm{Id}+k)^{-1/2}\)), and these all lie in \(C^{m_{1}+1}(Y)\) by the Sobolev embedding \(L^{m_{1}+3,2}(Y)\hookrightarrow C^{m_{1}+1}(Y)\) and the fact that \(C^{m_{1}+1}\) is an algebra. Likewise, coefficients of \(\mathfrak{d}_{0}\) lie in \(C^{m_{1}}\) because they are formed from sums and products of up to first derivatives of \(k,b\). Since every term is at least linear in \(p\), and \(\|(k,b)\|_{m_{1}+3}\leqslant\delta\ll 1\), the bound (8.14) follows. Since \(f_{p}=\not{D}_{p}\Phi_{0}=\mathfrak{d}_{0}\Phi_{0}+\mathfrak{d}_{1}\Phi_{0}\) and \(\|\Phi_{0}\|_{m_{1}}\leqslant C_{m_{1}}\), the second statement is then immediate for \(p\in U_{1}\). #### 8.3.2 Quadratic Terms For the tame estimates on \(\mathrm{d}\not{D}_{p}\) and \(\mathrm{d}^{2}\not{D}_{p}\), we must first investigate the higher-order terms of \(\not{D}_{p}\). Expanding, we may write \[\not{D}_{p}((\mathcal{Z}_{0},\Phi_{0})+(\eta,\varphi))=f_{p}\ +\ \mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\not{D}_{p}(\eta,\varphi)\ +\ Q_{p}(\eta,\varphi)\] where \(Q_{p}\) consists of terms of second and higher order.
The middle term at \(p_{0}\) is given by Corollary 5.9. For a general \(p\), we can write the derivative of the pullback metric as \[\frac{d}{ds}\Big{|}_{s=0}F^{*}_{s\eta}(g_{0}+k)=\dot{g}_{\eta}+\dot{k}_{\eta}, \tag{8.15}\] where \(\dot{g}_{\eta}\) is as calculated in (5.10) and analogously for \(k\). Analogous to the formula for \(\mathcal{B}_{\Phi_{0}}(\eta)\) in Corollary 5.9, we set \[\mathfrak{B}_{\Phi_{0},p}(\eta):=\left(-\frac{1}{2}\sum_{ij}\dot{k}_{\eta}(e_{i},e_{j})e^{i}.\nabla_{j}^{g_{0}}+\frac{1}{2}d\mathrm{Tr}_{g_{0}}(\dot{k}_{\eta}).+\frac{1}{2}\mathrm{div}_{g_{0}}(\dot{k}_{\eta}).+\mathcal{R}(b,\chi\eta).\right)\Phi_{0} \tag{8.16}\] to be the term arising from the perturbation \((k,b)\) to \(p_{0}\). Here \(\mathcal{R}(b,\chi\eta)\) is a zeroth order term in \(\eta\) with coefficients depending on the perturbation \(b\) to \(B_{0}\) and its derivatives. **Proposition 8.9**.: The universal Dirac operator at the parameter \(p\in U_{1}\) at a point \((\mathcal{Z}_{0},\Phi_{0})+(\eta,\varphi)\) with \(\|(\eta,\varphi)\|_{m_{0}}\leqslant C\delta\) is given by \[\not{D}_{p}((\mathcal{Z}_{0},\Phi_{0})+(\eta,\varphi))=f_{p}\ +\ \mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\not{D}_{p}(\eta,\varphi)\ +\ Q_{p}(\eta,\varphi) \tag{8.17}\] where 1. \(f_{p}=\not{D}_{p}\Phi_{0}\) as in Lemma 8.8. 2. The derivative is given by \[\mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\not{D}_{p}(\eta,\varphi)=\Big{(}\mathcal{B}_{\Phi_{0}}(\eta)+\not{D}_{0}\varphi\Big{)}\ +\ \Big{(}\mathfrak{B}_{\Phi_{0},p}(\eta)+\mathfrak{D}_{p}(\varphi)\Big{)}\] where \(\mathcal{B}_{\Phi_{0}}(\eta)\) is as defined in (5.5) (cf. Corollary 5.9), and \(\mathfrak{D}_{p},\mathfrak{B}_{\Phi_{0},p}\) are as in (8.13) and (8.16) respectively. 3. The non-linear terms may be written \[Q_{p}(\eta,\varphi) = (\mathcal{B}_{\varphi}+\mathfrak{B}_{\varphi,p})(\eta)\ \ +\ \ M_{p}^{1}(\eta^{\prime},\eta^{\prime})\nabla(\Phi_{0}+\varphi)\ \ +\ \ M_{p}^{2}(\eta^{\prime},\eta^{\prime\prime})(\Phi_{0}+\varphi)\ \ +\ \ F_{p}(\eta,\Phi_{0}+\varphi)\] where * \(\mathcal{B}_{\varphi},\mathfrak{B}_{\varphi,p}\) are defined identically to \(\mathcal{B}_{\Phi_{0}},\mathfrak{B}_{\Phi_{0},p}\) but with \(\varphi\) replacing \(\Phi_{0}\). * \(M^{1}_{p}\) is a finite sum of terms involving quadratic combinations of \(\chi\eta^{\prime},\eta d\chi,\chi\eta\), and linearly depending on \(\nabla(\Phi_{0}+\varphi)\) and smooth endomorphisms \(m_{i}\), e.g. \[m_{i}(y)(\chi\eta^{\prime})(\chi\eta^{\prime})\nabla_{j}(\Phi_{0}+\varphi)\] where \(m_{i}(y)\) depend on \(g_{0}+k\) (and no derivatives). * \(M^{2}_{p}\) is a finite sum of terms involving quadratic combinations of \(\eta^{\prime\prime}\chi,\eta^{\prime}d\chi,\eta d^{2}\chi,\eta^{\prime}\chi,\eta d\chi,\eta\chi\), with at most one factor of \(\eta^{\prime\prime}\), and linearly depending on \(\Phi_{0}+\varphi\) and smooth endomorphisms \(m_{i}\), e.g. \[m_{i}(y)(\chi\eta^{\prime\prime})(d\chi\eta^{\prime}).(\Phi_{0}+\varphi)\] where \(m_{i}(y)\) depend on up to first derivatives of \(g_{0}+k\) and \(B_{0}+b\). * \(F_{p}\) is formed from a finite sum of similar terms but involving cubic and higher combinations of \(\eta,\eta^{\prime},\eta^{\prime\prime}\), with at most one factor of \(\eta^{\prime\prime}\). Proof.: The constant (1) and linear (2) terms are immediate from, respectively, the definition (8.12) and the proof of Corollary 5.9 but using the pullback metric (8.15) in place of \(\dot{g}_{\eta}\). For \(p=p_{0}\), the quadratic terms (3) are calculated as follows. By Remark 5.10, the pullback metric can be written \[g_{\eta}=g_{0}+\dot{g}_{\eta}+\mathfrak{q}(\eta)\hskip 28.452756pt\text{ where }\hskip 28.452756pt|\mathfrak{q}(\eta)|\leq C\big{(}\ |\chi\eta^{\prime}|+|(d\chi)\eta|+|\chi\eta|\ \big{)}^{2}.\] i.e. \(\mathfrak{q}(\eta)\) vanishes to second order at \(\eta=0\). Substituting \(g_{\eta}\) into the formula (8.11), one has that (in an orthonormal frame of \(g_{0}\)) \[\mathfrak{a}_{g_{0}}^{g_{\eta}}=\mathrm{Id}-\tfrac{1}{2}\dot{g}_{\eta}+\mathfrak{q}^{\prime}(\eta)\] where \(\mathfrak{q}^{\prime}\) obeys the same bound as \(\mathfrak{q}\). Some calculation (actually quite a lot) then yields the formula for \(\not{\mathbb{D}}_{0}\) and subtracting off the known formulas for \(\not{\mathbb{D}}_{0}\) and \(\mathrm{d}\not{\mathbb{D}}_{0}\) yields the result. The \(\mathcal{B}_{\varphi},M^{1}\), and \(M^{2}\) terms come from the quadratic terms; and \(F(\eta,\Phi_{0}+\varphi)\) from the cubic and higher order terms. The argument is identical for a general \(p\) using the pullback metric and perturbation \(p_{\eta}:=F_{\eta}^{*}p\). Straightforward differentiation now shows the following precise forms for the first and second derivatives. In these formulas, we use the notation that e.g.
\(F(p^{3},q^{2},s)\) to denote a term depending cubicly on \(p\) and its derivatives, quadratically on \(q\) and its derivatives, and linearly on \(s\) and its derivatives: **Corollary 8.10**.: The derivative at a point \((\mathcal{Z}_{0},\Phi_{0})+(\eta,\varphi)\) is given by \[\begin{array}{lll}\mathrm{d}_{(\eta,\varphi)}\not{\mathbb{D}}_{p}(v,\phi)&=& \mathrm{d}_{(\mathcal{Z}_{0},\Phi_{0})}\not{\mathbb{D}}_{p}(v,\phi)\\ &&+\ (\mathcal{B}_{\varphi}+\mathfrak{B}_{\varphi})(v)\ +\ (\mathcal{B}_{\phi}+ \mathfrak{B}_{\phi})(\eta)\\ &&+\ M^{1}(\eta^{\prime},v^{\prime})\nabla(\Phi_{0}+\varphi)\ +\ M^{1}(\eta^{ \prime},\eta^{\prime})\nabla\phi\\ &&+\ M^{2}(\eta^{\prime},v^{\prime\prime})(\Phi_{0}+\varphi)\ +\ M^{2}(v^{ \prime},\eta^{\prime\prime})(\Phi_{0}+\varphi)\ +\ M^{2}(\eta^{\prime},\eta^{\prime\prime})\phi\\ &&+\ F^{1}(\eta^{3},\phi)\ +\ F^{2}(\eta^{2},v,\Phi_{0}+\varphi)\end{array}\] where the subscript \(p\) is kept implicit on the right hand side. Alternatively, the terms linear in \(\phi\) combine to form the Dirac operator \[\not{\mathbb{D}}_{p_{\eta}}\phi=\not{\mathbb{D}}_{p}\phi+\ (\mathcal{B}_{\phi}+ \mathfrak{B}_{\phi})(\eta)+M^{1}(\eta^{\prime},\eta^{\prime})\nabla\phi+M^{2}( \eta^{\prime},\eta^{\prime\prime})\phi+F^{1}(\eta^{3},\phi) \tag{8.18}\] with respect to the pullback metric and perturbation \(p_{\eta}:=F_{\eta}^{*}(p)\). **Corollary 8.11**.: The second derivative at a point \(({\cal Z}_{0},\Phi_{0})+(\eta,\varphi)\) is given by \[{\rm d}^{2}_{(\eta,\varphi)}\bar{\mathbb{D}}_{p}\Big{(}(v,\phi),(w, \psi)\Big{)} = \ \ ({\cal B}_{\psi}+{\mathfrak{B}}_{\psi})(v)\ +\ ({\cal B}_{\phi}+{ \mathfrak{B}}_{\phi})(w)\] \[+\ M^{1}(w^{\prime},v^{\prime})\nabla(\Phi_{0}+\varphi)\ +\ M^{1}(\eta^{\prime},v^{\prime})\nabla\psi\ +\ M^{1}(\eta^{\prime},w^{\prime})\nabla\phi\] \[+\ M^{2}(w^{\prime},v^{\prime\prime})(\Phi_{0}+\varphi)\ +\ M^{2}(\eta^{\prime},v^{\prime\prime})\psi\ +\ M^{2}(v^{\prime},w^{\prime\prime})(\Phi_{0}+\varphi)\] \[+\ M^{2}(v^{\prime},\eta^{\prime\prime})\psi\ +\ M^{2}(\eta^{\prime},w^{\prime\prime})\phi\ +\ M^{2}(w^{\prime},\eta^{\prime\prime})\phi\] \[+\ F^{3}(\eta^{2},w,\varphi)\ +\ F^{4}(\eta^{2},v,\psi)\ +\ F^{5}(\eta,v,w,\Phi_{0}+\varphi)\] where the subscript \(p\) is kept implicit on the right hand side. ### Tame Frechet Spaces This section introduces the tame Frechet spaces used in the proof of Theorem 1.4 and Corollary 1.5. While there is a natural Frechet space of normal vector fields \(\eta\) (this being \(C^{\infty}({\cal Z}_{0};N{\cal Z}_{0})\) with the Frechet structure arising from the \(L^{m,2}\)-norms), there are several possible choices of Frechet spaces for the spinors, arising from different versions of the boundary and edge spaces. The relevant spinors are those lying in \({\bf P}_{\cal X}\) and \({\bf P}_{\cal Y}\) for the domain and codomain respectively, i.e. those spinors with polyhomogeneous expansions (8.1-8.2). While the spaces \({\bf P}_{\cal X}\) and \({\bf P}_{\cal Y}\) are themselves tame Frechet spaces, these Frechet structures are rather unwieldy and it is advantageous to enlarge the domain and codomain to spaces where it is easier to obtain estimates and then invoke Theorem 7.4 with the property (P) of polyhomogeneity which holds on \({\bf P}_{\cal X}\) and \({\bf P}_{\cal Y}\). The mixed boundary and edge spaces \(rH^{m,1}_{{\rm b},e}\) and \(H^{m}_{\rm b}\) defined in Section 2 enlarge the domain and codomain and their norms facilitate much easier estimates using the material of Sections 2-4. 
Unfortunately, these spaces are slightly too large and it is impossible to control the higher order terms of the expansions (8.1-8.2) simply in terms of these norms. To balance these conflicting advantages of \(rH^{m,1}_{{\rm b},e}\) and \({\bf P}_{\cal X}\), we opt for intermediate spaces which supplement the \(rH^{m,1}_{{\rm b},e}\) and \(H^{m}_{\rm b}\)-norms with the norm of the higher order terms in (8.1-8.2) using a stronger weight. Analogously to \(rH^{m,1}_{{\rm b},e}\) and \(H^{m}_{\rm b}\), denote by \(r^{1+\nu}H^{m,1}_{{\rm b},e}\) and \(r^{\nu}H^{m}_{\rm b}\) the spaces formed by adding an overall weight of \(r^{-2\nu}\) to the norm (2.12). Equivalently, \[\varphi\in rH^{m,1}_{{\rm b},e}\ \ \Leftrightarrow\ \ r^{\nu}\varphi\in r^{1+\nu}H^{m,1}_{{\rm b},e}\] so that the multiplication map \(r^{\nu}\) is a bounded linear isomorphism, and similarly for \(r^{\nu}H^{m}_{\rm b}\). Fix \(\nu=0.9\) and define Banach spaces \[r{\cal H}^{m,1} := \left\{\varphi\ \ \Big{|}\ \ \|\varphi\|_{r{\cal H}^{m,1}}:=\left(\|\varphi\|^{2}_{rH^{m,1}_{{\rm b},e}}\ +\ \|(r\partial_{r}-\tfrac{1}{2})\varphi\|^{2}_{r^{1+\nu}H^{m-1,1}_{{\rm b}}}\ \right)^{1/2}<\infty\right\}\] \[{\cal H}^{m,0} := \left\{\psi\ \ \Big{|}\ \ \|\psi\|_{{\cal H}^{m,0}}:=\left(\ \|\psi\|^{2}_{H^{m}_{\rm b}}\ +\ \|(r\partial_{r}+\tfrac{1}{2})\psi\|^{2}_{r^{\nu}H^{m-1}_{{\rm b}}}\ \right)^{1/2}<\infty\right\}\] \[r^{-1}{\cal H}^{m,-1} := \left\{\psi\ \ \Big{|}\ \ \|\psi\|_{r^{-1}{\cal H}^{m,-1}}:=\left(\ \|\psi\|^{2}_{r^{-1}H^{m,-1}_{{\rm b},e}}\ +\ \|(r\partial_{r}+\tfrac{3}{2})\psi\|^{2}_{r^{-1+\nu}H^{m-1,-1}_{{\rm b},e}}\ \right)^{1/2}<\infty\right\}.\] with the indicated norms. These spaces are defined using the Fermi coordinates and norms of the base parameter \(p_{0}\) and do not depend on \(p\in{\cal P}\). Using these, we now define the spaces used in the proofs of Theorem 1.4 and Corollary 1.5. **Lemma 8.12**.: The spaces \[{\cal X}:=\bigcap_{m\geqslant 0}X^{\prime}_{m}\oplus{\mathbb{R}}\oplus X^{\prime\prime}_{m} {\cal Y}:=\bigcap_{m\geqslant 0}Y^{\prime}_{m}\oplus Y^{\prime\prime}_{m}\oplus{\mathbb{R}}\] where \[X^{\prime}_{m} := L^{m,2}({\cal Z}_{0};N{\cal Z}_{0}) Y^{\prime}_{m} :={\bf Ob}\cap H^{m}_{\rm b}(Y-{\cal Z}_{0};S_{0}) \tag{8.19}\] \[X^{\prime\prime}_{m} := r{\cal H}^{m,1}(Y-{\cal Z}_{0};S_{0}) Y^{\prime\prime}_{m} :={\mathfrak{R}}\ \cap\ {\cal H}^{m,0}(Y-{\cal Z}_{0};S_{0}) \tag{8.20}\] and \(\mathbb{R}\) has the standard norm, are tame Frechet spaces as in Definition 7.1, and on an open neighborhood \(U\subset\mathcal{X}\), \[\overline{\mathbb{D}}_{p}:\mathcal{P}\times U\to\mathcal{Y}\] is a tame Frechet map. Proof.: The interpolation inequalities in item (I) of Definition 7.1 are immediate from those on the standard spaces \(L^{m,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\) and those from Lemma 2.10, which apply equally well for different weights. The smoothing operators whose existence is the content of item (II) of Definition 7.1 are constructed in Appendix B. That \(\overline{\mathbb{D}}_{p}\) is a tame Frechet map is obvious for the \(H^{m}_{\mathrm{b}}\)-norms, and for the \((r\partial_{r}-\frac{1}{2})\) terms follows from the commutation relations in the upcoming Lemma 8.15.
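To illustrate the interpolation inequalities of item (I) in the simplest case (a standard computation, recorded only for orientation): on \(L^{m,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\) with \(\mathcal{Z}_{0}=\sqcup S^{1}\), writing \(\widehat{\eta}(\ell)\) for the Fourier coefficients in a global trivialization of \(N\mathcal{Z}_{0}\), they follow from Hölder's inequality: for \(k=(1-t)m_{1}+tm_{2}\) with \(t\in[0,1]\),
\[\|\eta\|^{2}_{L^{k,2}}=\sum_{\ell}(1+|\ell|^{2})^{k}|\widehat{\eta}(\ell)|^{2}\ \leqslant\ \Big{(}\sum_{\ell}(1+|\ell|^{2})^{m_{1}}|\widehat{\eta}(\ell)|^{2}\Big{)}^{1-t}\Big{(}\sum_{\ell}(1+|\ell|^{2})^{m_{2}}|\widehat{\eta}(\ell)|^{2}\Big{)}^{t}=\|\eta\|^{2(1-t)}_{L^{m_{1},2}}\,\|\eta\|^{2t}_{L^{m_{2},2}}.\]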
**Remark 8.13**.: Since \(\mathbf{Ob}\) consists of solutions of the elliptic edge operator (this being \(\not{D}\) or \(\not{D}-\Lambda_{p}\mathrm{Id}\)) which have expansions with index set \(\mathbb{Z}^{+}-\frac{1}{2}\), edge bootstrapping (see [45] Equation (7.7) and the accompanying discussion) implies that \(\psi\in\mathbf{Ob}\) obeys \[\|(r\partial_{r}+\tfrac{1}{2})\psi\|_{rH^{m-1}_{\mathrm{b}}}\leqslant C\|\psi\|_{H^{m}_{\mathrm{b}}}.\] Since \(\nu\leqslant 1\) it follows that the norm on \(Y^{\prime}_{m}\) is equivalent to using the \(\mathcal{H}^{m,0}\) norm. As explained at the beginning of the subsection, the point is that the additional terms allow control of the higher order terms of the expansions (8.1-8.2) in \(\mathbf{P}_{\mathcal{X}}\cap r\mathcal{H}^{m,1}\). The following key lemma, proved in Appendix B, makes this precise: **Lemma 8.14**.: Suppose that \(\varphi\in r\mathcal{H}^{m,1}\) is a spinor. Then the following bound holds pointwise on \(Y-\mathcal{Z}_{0}\): \[|(\nabla^{\mathrm{b}})^{m}\varphi|\leqslant C_{m}r^{1/2}\|\varphi\|_{r\mathcal{H}^{m+4,1}}.\] The final two lemmas needed before the verification of Hypotheses **(I)-(III)** are effectively bookkeeping that show the Dirac operator \(\not{D}:r\mathcal{H}^{1,1}\to\mathcal{H}^{1,0}\) behaves in a similar fashion as on the spaces from Section 2. Fixing a parameter \(p\), let \(r\mathcal{H}^{\perp}\) be the \(L^{2}\)-orthogonal complement of \(\Phi_{p}\) in \(r\mathcal{H}^{1,1}\subset rH^{1}_{e}\). Denote the extended Dirac operator with the \(\mathbb{R}\)-factor included at a parameter \(p\) by \[\overline{\not{D}}_{p}=\begin{pmatrix}\not{D}_{p}&0\\ \langle-,\Phi_{0}+\varphi\rangle&\langle-,\Phi_{0}+\varphi\rangle\end{pmatrix}\ :\ r\mathcal{H}^{\perp}\oplus\mathbb{R}\Phi_{p}\quad\longrightarrow\quad\mathfrak{R}_{p}\oplus\mathbb{R}.\] **Lemma 8.15**.: Provided that \(m_{0}\geqslant 10\) and \(0<\delta_{0}\ll 1\) is sufficiently small, then for \(p\in V_{0}\) the extended Dirac operator \[\overline{\not{D}}_{p}:r\mathcal{H}^{1,1}\oplus\mathbb{R}\to(\mathfrak{R}_{p}\cap\mathcal{H}^{1,0})\oplus\mathbb{R}\] is an isomorphism and the estimate \[\|\varphi\|_{r\mathcal{H}^{1,1}}\leqslant C\|\overline{\not{D}}_{p}\varphi\|_{\mathcal{H}^{1,0}\oplus\mathbb{R}}\] holds uniformly for \(p\in V_{0}\). Proof.: We begin by showing that the (unextended) Dirac operator \(\not{D}\) satisfies the following estimate: if \(\not{D}\varphi=f\) then \[\|\varphi\|_{rH^{1,1}_{\mathrm{b},e}}+\|(r\partial_{r}-\tfrac{1}{2})\varphi\|_{r^{1+\nu}H^{1}_{\mathrm{b}}}\leqslant C\Big{(}\|f\|_{H^{1}_{\mathrm{b}}}+\|(r\partial_{r}+\tfrac{1}{2})f\|_{r^{\nu}L^{2}}+\|\varphi\|_{r^{\nu}H^{1}_{\mathrm{b}}}\Big{)}. \tag{8.21}\] Since \(\nu<1\) by choice, the inclusion \(rH^{1,1}_{\mathrm{b},e}\hookrightarrow r^{\nu}H^{1}_{\mathrm{b}}\) constituting the last term is compact. We first prove (8.21) for \(p=p_{0}\). That the first term is bounded by the right-hand side is immediate from the estimate for \(\not{D}:rH^{1,1}_{\mathrm{b},e}\to H^{1}_{\mathrm{b}}\) (Corollary 2.11). For the second term, we apply the elliptic estimate \[\|\varphi\|_{r^{1+\nu}H^{1}_{\mathrm{b}}}\leqslant C\Big{(}\|\not{D}\varphi\|_{r^{\nu}L^{2}}+\|\varphi\|_{r^{\nu}L^{2}}\Big{)} \tag{8.22}\] for \(\not{D}:r^{1+\nu}H^{1}_{e}\to r^{\nu}L^{2}\) to the term \((r\partial_{r}-\frac{1}{2})\varphi\).
This estimate cannot be derived by integration by parts as in Section 2 and instead follows from parametrix methods (see Theorem 6.1 of [45] or [70]). Then, since the commutation relations \[(r\partial_{r}+\tfrac{1}{2})\partial_{r} = \partial_{r}(r\partial_{r}-\tfrac{1}{2}) \tag{8.23}\] \[(r\partial_{r}+\tfrac{1}{2})\tfrac{1}{r}\partial_{\theta} = \tfrac{1}{r}\partial_{\theta}(r\partial_{r}-\tfrac{1}{2}). \tag{8.24}\] hold, writing \(\not{D}=\not{D}_{0}+\mathfrak{d}\) as in Lemma 3.6 shows that \[\not{D}(r\partial_{r}-\tfrac{1}{2})\varphi=(r\partial_{r}+\tfrac{1}{2})\not{D}\varphi+B\varphi\] where \(B\) is a lower order term such that \(B:r^{\nu}H^{1}_{b}\to r^{\nu}L^{2}\) is bounded. Applying (8.22) and substituting this expression yields (8.21) for \(p=p_{0}\). The fact that \(\overline{\not{D}}_{p_{0}}\) is an isomorphism then follows from a standard proof by contradiction (e.g. [40] Lemma 10.4.9). For \(p\neq p_{0}\) it is straightforward to show that writing \(\not{D}_{p}=\not{D}_{p_{0}}+\mathfrak{d}_{p}\) and using the commutations (8.23-8.24) yields \[\|\mathfrak{d}_{p}\varphi\|_{\mathcal{H}^{1,0}}\leqslant C\delta_{0}\|\varphi\|_{r\mathcal{H}^{1,1}}\] completing the lemma. Finally, the projection operators to \(\mathbf{Ob}\) and \(\mathfrak{R}\) are well-behaved on the new spaces analogously to Corollary 2.12 item (C). **Lemma 8.16**.: The projection operators \[\Pi^{\mathrm{Range}}=\not{D}_{p}P_{p}\not{D}_{p}^{\star}:\mathcal{H}^{m}\to\mathfrak{R}_{p}\cap\mathcal{H}^{m}\qquad\qquad\qquad\Pi^{\mathrm{ker}}=1-\not{D}_{p}P_{p}\not{D}_{p}^{\star}:\mathcal{H}^{m}\to\mathbf{Ob}_{p}\cap\mathcal{H}^{m}\] are bounded. Proof.: For the \(H^{m}_{\mathrm{b}}\)-term of the \(\mathcal{H}^{m}\)-norm this follows directly from Corollary 2.12. For the second term, notice that by (8.22) and the analogous estimate for \(\not{D}_{p}^{\star}\not{D}_{p}:r\mathcal{H}^{m,1}\to r^{-1}\mathcal{H}^{m,-1}\), one has \[\not{D}_{p}P_{p}\not{D}_{p}^{\star}:r^{\nu}H^{m}_{\mathrm{b}}\to r^{\nu}H^{m}_{\mathrm{b}} \tag{8.25}\] is bounded. Writing \[(r\partial_{r}+\tfrac{1}{2})\not{D}_{p}P_{p}\not{D}_{p}^{\star}=\not{D}_{p}P_{p}\not{D}_{p}^{\star}(r\partial_{r}+\tfrac{1}{2})+[(r\partial_{r}+\tfrac{1}{2}),\not{D}_{p}P_{p}\not{D}_{p}^{\star}]\] and applying (8.25) to the first term, then using that \([(r\partial_{r}+\tfrac{1}{2}),\not{D}_{p}P_{p}\not{D}_{p}^{\star}]:H^{m-1}_{\mathrm{b}}\to rH^{m}_{\mathrm{b}}\) is bounded for the second term yields the result. To prove that the commutator \([(r\partial_{r}+\tfrac{1}{2}),\not{D}_{p}P_{p}\not{D}_{p}^{\star}]:H^{m-1}_{\mathrm{b}}\to rH^{m}_{\mathrm{b}}\) is bounded, it suffices to consider the case of the product metric on the model space \(S^{1}\times\mathbb{R}^{2}\) as in Example 3.2. The result for \(p=p_{0}\) follows easily from the same argument after writing \(\not{D}_{p_{0}}=\not{D}_{0}+\mathfrak{d}\) where \(|\mathfrak{d}\psi|\leqslant C(r|\nabla\psi|+|\psi|)\) as in Lemma 3.6. For a general \(p\), the same argument applies in the Fermi coordinates formed using \(p\) for the boundary Sobolev spaces defined using \(p\), whose norms are uniformly (tamely) equivalent. In the product case, the commutation relations (8.23-8.24) imply \[\not{D}_{0}(r\partial_{r}-\tfrac{1}{2})\varphi = (r\partial_{r}+\tfrac{1}{2})\not{D}_{0}\varphi-\gamma_{t}\nabla_{t}\varphi\] \[P_{0}(r\partial_{r}+\tfrac{3}{2})f = (r\partial_{r}-\tfrac{1}{2})P_{0}f+P_{0}(\gamma_{t}\nabla_{t}\not{D}_{0}+\not{D}_{0}\gamma_{t}\nabla_{t})P_{0}f\] where \(P_{0}f=u\) denotes the solution operator of Lemma 8.3. This is effectively identical to the proof of (8.9) in Proposition 8.6, and follows by considering the conormal regularity and Case (C) of Corollary 6.5. Indeed, by Remark 6.6 and Proposition 8.9, it is easy to see that each term of \(\mathfrak{t}_{p_{\eta}}\) is of the form either \(v^{\prime}(t)\Xi_{0}(t_{p},x_{p},y_{p})\) or \(v^{\prime\prime}(t)\Xi_{1}(t_{p},x_{p},y_{p})\) just as in (8.9). Here, both \(\Xi_{0},\Xi_{1}\) can be written respectively as the sum of terms \(m_{p_{\eta}}(y)\nabla(\Phi_{0}+\varphi)\) and \(m_{0}(y)\nabla\varphi\) or \(m_{p_{\eta}}(y)(\Phi_{0}+\varphi)\) and \(m_{0}(y)\varphi\), where \(m_{p_{\eta}},m_{0}\) are smooth endomorphisms bounded in terms of the norms of \(p_{\eta},p_{0}\) respectively. In particular, by Lemma 8.14 each such term has bounds \(|\nabla_{t}^{m}\Xi_{i}|\leqslant Cr^{i-1/2}\|(p,\eta,\varphi)\|_{m+m_{0}}\) for \(m\leqslant 3\). (8.31) then follows exactly as (8.9) by writing \(v(t)\) in the Fermi coordinates of \(p_{\eta}\). We conclude, after possibly reducing \(\delta_{0}\), that \(\overline{T}_{p_{\eta}}:L^{3,2}\oplus\mathbb{R}\to L^{5/2,2}\oplus\mathbb{R}\) is an isomorphism. The full linearization acting on \((v,\lambda,\phi)\) now has the block-diagonal form \[\mathrm{d}_{(\eta,\varphi)}\overline{\mathbb{D}}_{p}=\begin{pmatrix}\overline{T}_{p_{\eta}}&*\\ \mathcal{B}^{\mathrm{Rg}}&\overline{\not{D}}_{p_{\eta}}\end{pmatrix}\ :\ \big(L^{3,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\oplus\mathbb{R}\big)\oplus r\mathcal{H}^{1,1}\ \longrightarrow\ \mathbf{Ob}_{p_{\eta}}^{5/2}\oplus(\mathfrak{R}_{p_{\eta}}\cap\mathcal{H}^{1,0})\oplus\mathbb{R}. \tag{8.32}\] where the top left entry is as above and \(\mathcal{B}^{\mathrm{Rg}}\) is the \(\mathfrak{R}_{p_{\eta}}\)-component of the terms (8.30). By what was said above in conjunction with Lemma 8.15, the diagonal entries are both isomorphisms. We claim that the bottom left entry \(\mathcal{B}^{\mathrm{Rg}}\) is bounded. Since \(\mathcal{B}^{\mathrm{Rg}}=(\not{D}_{p}P_{p}\not{D}_{p}^{\star})\mathcal{B}\) by definition, Lemma 8.16 shows that it suffices to prove that \[\mathcal{B}:L^{3,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\oplus\mathbb{R}\to r\mathcal{H}^{1,1}\] is bounded.
The boundedness into \(H^{1}_{\mathrm{b}}\) is obvious using the bounds on \(\Phi_{0}\) and \(\|(\eta,\varphi)\|_{m_{0}}\leqslant C\delta_{0}\) (with \(m_{0}=11\)). For the boundedness into \(r^{\nu}L^{2}\), note that since \(\varphi\in r\mathcal{H}^{1,1}\cap\mathbf{P}_{\mathcal{X}}\) is polyhomogeneous with index set \(\mathbb{Z}^{+}+\frac{1}{2}\), the operator \((r\partial_{r}+\frac{1}{2})\) annihilates the order \(r^{-1/2}\) term of \(\nabla(\Phi_{0}+\varphi)\) and all other terms are \(O(r^{1/2})\) so are integrable with the stronger weight. This shows that (8.32) is invertible in the case that \(*=0\). In general if \(*\neq 0\), the alternative decomposition (8.18) implies that the top right entry \(*\) has image in the 1-dimensional subspace spanned by \(\not{D}_{p_{\eta}}\Phi_{p_{\eta}}\neq 0\). By the polyhomogeneity of \(\Phi_{p_{\eta}}\) from Lemma 8.2, this automatically lands in \(\mathbf{Ob}_{p_{\eta}}^{5/2}\) where it has size \(O(\delta_{0})\) by Lemma 8.2. It therefore follows that for \(\delta_{0}\) sufficiently small, (8.32) is an isomorphism, and the estimate \[\|(v,\lambda,\phi)\|_{L^{3,2}\oplus\mathbb{R}\oplus r\mathcal{H}^{1,1}}\leq C\|\mathrm{d}_{(\eta,\varphi)}\overline{\mathbb{D}}_{p}(v,\lambda,\phi)\|_{\mathbf{Ob}^{5/2}\oplus\mathcal{H}^{1,0}\oplus\mathbb{R}}.\] holds. Consequently, the linearization is injective, and surjective onto \(\mathbf{Ob}_{p_{\eta}}^{5/2}\oplus\mathfrak{R}_{p_{\eta}}\oplus\mathbb{R}\). It remains to show that if the right-hand side is smooth, then \((v,\lambda,\phi)\in\mathcal{X}\) is also smooth, which follows from bootstrapping using the tame estimate of the next lemma. **Remark 8.18**.: The above lemma is, in some sense, the crux of the entire proof of Theorem 1.4. A key point in the proof is that one _must_ use the splitting of \(L^{2}\) determined by \(p_{\eta}\), which justifies in hindsight the mildly laborious preparation of Sections 8.1-8.2. Trying to prove these estimates using a different splitting leads the top right entry in the analogue of (8.32) to have infinite-dimensional image and be unbounded in the \(H^{5/2}\)-norm (in contrast to \(*\)). In such a splitting, the proof of Lemma 8.17 fails, and the inverse of \(\mathrm{d}_{(\eta,\varphi)}\overline{\mathbb{D}}_{p}\) appears non-tame. **Lemma 8.19**.: Hypothesis **(II)** of Theorem 7.4 holds for \(\overline{\mathbb{D}}_{p}\), i.e. there are \(s,s^{\prime}\in\mathbb{N}\) such that the following holds provided \(\delta_{0}\) is sufficiently small: for \(p\in V_{0}\) and \((\varphi,\eta)\in U_{0}\) the unique solution \(u=(v,\phi,\dot{\lambda})\) of \[\mathrm{d}_{(\varphi,\eta)}\overline{\mathbb{D}}_{p}u=g\] obeys the tame estimate \[\|u\|_{m}\leqslant C_{m}\,\,\Big{(}\|g\|_{m+s}+\|(p,\eta,\varphi)\|_{m+s^{\prime}}\|g\|_{m_{0}}\Big{)}. \tag{8.33}\] uniformly over \(V_{0}\times(U_{0}\cap\mathbf{P}_{\mathcal{X}})\) for all \(m\geqslant m_{0}\). Proof.: This follows essentially from differentiating the previous proof.
We will show that there are tame elliptic estimates of the following form for \(\overline{T}_{p_{\eta}}\) and \(\overline{\not{D}}_{p_{\eta}}\) individually: if \(\overline{T}_{p_{\eta}}v=g_{0}\) and \(\overline{\not{D}}_{p_{\eta}}\phi=g_{1}\) then \[\|(v,\lambda)\|_{m} \leqslant C_{m}\Big{(}\|g_{0}\|_{m+3/2}\ +\ \|(p,\varphi,\eta)\|_{m+s^{\prime}}\|g_{0}\|_{m_{0}}\Big{)} \tag{8.34}\] \[\|\phi\|_{m} \leqslant C_{m}\Big{(}\|g_{1}\|_{m}\ +\ \|(p,\varphi,\eta)\|_{m+s^{\prime}}\|g_{1}\|_{m_{0}}\Big{)} \tag{8.35}\] for \(m_{0}=11\). Indeed, given these, one concludes the lemma as follows: write \(g=(g_{0},g_{1})\), so that by the decomposition (8.32) one has \(\overline{T}_{p_{\eta}}(v)=g_{0}+*\) and \(\overline{\not{D}}_{p_{\eta}}=g_{1}-{\cal B}^{\rm Rg}\) where \(*=c\not{D}_{p_{\eta}}\Phi_{p_{\eta}}\) with \(c\) the \(\Phi_{p_{\eta}}\) component of \(u\in X\). Applying (8.34) shows \[\|(v,\lambda)\|_{m} \leqslant C_{m}\Big{(}\|g_{0}+*\|_{m+3/2}\ +\ \|(p,\varphi,\eta)\|_{m+s^{\prime}}\|g_{0}+*\|_{m_{0}}\Big{)} \tag{8.36}\] \[\leqslant C_{m}\Big{(}\|g_{0}\|_{m+3/2}\ +\ \|(p,\varphi,\eta)\|_{m+s^{ \prime}}\|g_{0}\|_{m_{0}}\ +\ |c|\cdot\|(p,\eta,\varphi)\|_{m+4}\] (8.37) \[\ \ +\ |c|\cdot\|(p,\varphi,\eta)\|_{m+s^{\prime}}\|(p,\eta, \varphi)\|_{m_{0}}\Big{)}\] (8.38) \[\leqslant C_{m}\Big{(}\|g_{0}\|_{m+3/2}\ +\ \|(p,\varphi,\eta)\|_{m+s^{ \prime}}\|g\|_{m_{0}}\Big{)} \tag{8.39}\] Where we have used Lemma 8.2 to bound \(\|\not{D}_{p_{\eta}}\Phi_{p_{\eta}}\|_{m+3/2}\) by the \(L^{2}\)-norm of \(c\Phi_{p_{\eta}}\) and the \(m+4\) norm of \(p_{\eta}\). By Lemma 8.17, the \(L^{2}\)-norm is bounded by \(|c|\leqslant\|u\|_{0}\leqslant\|g\|_{m_{0}}\)and \(\|(p,\eta,\varphi)\|_{m_{0}}\leqslant C\). Similarly, for the second component, (8.35) shows \[\|\phi\|_{m} = C_{m}\Big{(}\|g_{1}-{\cal B}^{\rm Rg}(v,\lambda)\|_{m}\ +\ \|(p,\varphi,\eta)\|_{m+s^{\prime}}\|g_{1}-{\cal B}^{\rm Rg}(v,\lambda)\|_{0} \Big{)}\] \[= C_{m}\Big{(}\|g_{1}\|_{m}\ +\ \|(v,\lambda)\|_{m+s}+\|(p,\varphi, \eta)\|_{m+s^{\prime}}\|g\|_{m_{0}}+\|(p,\varphi,\eta)\|_{m+s^{\prime}}\|(v, \lambda)\|_{4}\Big{)}.\] In this, we have used that there is a (tame) boundedness estimate \[\|{\cal B}^{\rm Rg}(v,\lambda)\|_{m}\leqslant C_{m}\Big{(}\|(v,\lambda)\|_{m +s}+\|(p,\eta,\varphi)\|_{m+4}\|(v,\lambda)\|_{m_{0}}\Big{)}.\] Such an estimate from first applying Lemma 8.16, after which it suffices to show the estimate for \({\cal B}(v,\lambda)\) rather than just the range components (in this, we implicitly use that fact that the \({\cal H}^{m,0}\) norm is tamely equivalent for different metrics from Lemma 8.16). For \({\cal B}(v,\lambda)\), the estimate follows from interpolation and Young's inequality (see the subsequent Lemma 8.20). Substituting the previous estimate (8.39) on \(\|(v,\lambda)\|_{m}\) and using that \(\|(v,\lambda)\|_{4}\leqslant\|g\|_{m_{0}}\) by Lemma 8.17 then shows that \[\|(v,\phi,\lambda)\|_{m} \leqslant C_{m}\Big{(}\|g_{1}\|_{m+3/2}\ +\ \|(p,\varphi,\eta)\|_{m+s^{\prime}}\|g\|_{m_{0}}\Big{)}\] as desired. Thus to prove the lemma, we verify (8.34) and (8.35). The latter follows from differentiating elliptic estimates in the standard way. To elaborate briefly, we begin with the estimate for \(\not{D}_{p_{\eta}}\) and the \(H^{m}_{\rm b}\) term in the norms. 
One shows by iterating commutators that there is an elliptic estimate of the form \[\|\phi\|_{rH^{m,1}_{{\rm b},\varepsilon}}\leqslant C_{m}\Big{(}\|\overline{ \not{D}}_{p_{\eta}}\phi\|_{H^{m}_{\rm b}}\ +\ \|(p,\eta,\varphi)\|_{s^{\prime}}\|\phi\|_{rH^{m-1,1}_{{\rm b}, \varepsilon}}\ +\ \ldots\ +\ \|(p,\eta,\varphi)\|_{m+s^{\prime}}\|\phi\|_{rH^{1}_{{\rm b}}}\Big{)} \tag{8.40}\] for each \(m\) and \(s^{\prime}<m_{0}\). Given such an estimate, each middle term \(\|(p,\eta,\varphi)\|_{k+s^{\prime}}\|\phi\|_{rH^{m-k-1,1}_{{\rm b},\varepsilon}}\) can be absorbed into the \(k=0,m\) ones by Young's inequality and interpolation with \(m_{2}=m+s^{\prime}\) and \(m_{1}=s^{\prime}\) on the first factor and \(m_{2}=m-1\) and \(m_{2}=0\) on the second factor. The tame estimates are then a consequence of induction by substituting the tame estimate on \(\|\phi\|_{rH^{m-1,1}_{{\rm b},\varepsilon}}\) beginning with the base case provided by Lemma 8.17, and using that \(\|(p,\eta,\varphi)\|_{s^{\prime}}\leqslant 1\). The same exact argument applies for the spaces \(r\mathcal{H}^{m,1}\) and \(\mathcal{H}^{m,0}\) using the elliptic estimate and commutation relations from Lemma 8.15, and the tame estimate (8.35) follows. Similarly, for (8.34) it suffices to show \[\|v\|_{L^{m+2,2}}\leqslant C_{m}\Big{(}|\overline{T}_{p_{q}}v\|_{L^{m+3/2,2}}\ +\ \|(p,\eta,\varphi)\|_{s^{\prime}}\|v\|_{L^{m-1+3/2,2}}\ +\ \ldots\ +\ \|(p,\eta,\varphi)\|_{m+s^{\prime}}\|v\|_{3/2}\Big{)}, \tag{8.41}\] which is again proved by iterating commutators, but taking care to ensure that the conormal regularity is preserved. First apply the \(m=0\) estimate to \(\nabla_{t}^{m}v\). To use the term \(\mathcal{B}_{\varphi}(v)\) as an example, one has \[\mathcal{B}_{\varphi}(\nabla_{t}^{m}v)=\nabla_{t}^{m}\mathcal{B}_{\varphi}(v) +\nabla_{t}\mathcal{B}_{\varphi}(\nabla_{t}^{m-1}v)+\mathcal{B}_{\nabla_{t} \varphi}(\nabla_{t}^{m-1}v)+\ldots+\mathcal{B}_{\nabla_{t}^{m}\varphi}(v).\] By the same argument as in Proposition 8.6 and Lemma 8.17, all except the first terms have conormal regularity \(3/2\), since \(\varphi\) is polyhomogenous. This leads to \[\|v\|_{L^{m+2,2}} \leqslant C_{m}\Big{(}\|\mathrm{ob}^{-1}\Pi_{p_{q}}(\nabla_{t}^{m} \mathcal{B}_{\varphi}(v)+\ldots+\nabla_{t}^{m}F^{2}(\eta^{2},v,\varphi))\|_{L^ {m+3/2,2}}\] \[+\ \|(p,\eta,\varphi)\|_{s^{\prime}}\|v\|_{L^{m-1+3/2,2}}\ +\ \ldots\ +\ \|(p,\eta,\varphi)\|_{m+s^{\prime}}\|v\|_{3/2}\Big{)}\] \[\leqslant C_{m}\Big{(}\|\nabla_{t}^{m}\Pi_{p_{q}}(\mathcal{B}_{\varphi}(v )+....+F^{2}(\eta^{2},v,\varphi))\|_{H^{3/2}_{\mathrm{b}}}\] \[+\ \|(\nabla\not{D}P\not{D}^{\star})\nabla^{m-1}\mathcal{B}_{ \varphi}(v)\|_{H^{3/2}_{\mathrm{b}}}+\ldots+\|(\nabla^{m}\not{D}P\not{D}^{ \star})\mathcal{B}_{\varphi}(v)\|_{H^{3/2}_{\mathrm{b}}}+...\] \[+\ \|(p,\eta,\varphi)\|_{s^{\prime}}\|v\|_{L^{m-1+3/2,2}}\ +\ \ldots\ +\ \|(p,\eta,\varphi)\|_{m+s^{\prime}}\|v\|_{3/2}\Big{)}\] where we have used Corollary 4.17 on the first term, and then expanded the commutators with the projection operator. Since \(\not{D}_{p_{q}}P_{p_{q}}\not{D}^{\star}_{p_{q}}\) is zeroth order with coefficients depending on \((p,\eta,\varphi)\), it is routine to check that the second line can be absorbed into the terms of the last line. Another application of Corollary 4.17 to the first term (noting the equivalence of norms arising from \(U\) is also tame) leads to 8.41, completing the lemma. **Lemma 8.20**.: Hypothesis **(III)** of Theorem 7.4 holds for \(\overline{\mathbb{B}}_{p}\), i.e. 
there are \(r,r^{\prime}\in\mathbb{N}\) such that the following holds provided \(\delta_{0}\) is sufficiently small: for \(p\in V_{0}\) and \((\varphi,\eta)\in U_{0}\), the second derivative obeys the tame estimate \[\|\mathrm{d}^{2}_{(\eta,\varphi)}\overline{\mathbb{B}}_{p}(u,v)\|_{m} \leqslant C_{m}\ \Big{(}\|u\|_{m+r}\|v\|_{m_{0}}\ +\ \|u\|_{m_{0}}\|v\|_{m+r}\ +\ \|u\|_{m_{0}}\|v\|_{m_{0}}\cdot(1+\|(p,\eta, \varphi)\|_{m+r^{\prime}})\Big{)}. \tag{8.42}\] for \(u,v\in X\) uniformly over \(V_{0}\times(U_{0}\cap\mathbf{P}_{\mathcal{X}})\) for all \(m\geqslant m_{0}\). Proof.: This tame estimate follows directly from using the boundedness of the terms comprising \(\mathrm{d}^{2}_{(\eta,\varphi)}\overline{\mathbb{B}}_{p}\) in conjunction with the interpolation inequalities. As in Corollary 8.11, the second derivative is given by \[\begin{array}{lll}\mathrm{d}^{2}_{(\eta,\varphi)}\overline{\mathbb{B}}_{p} \Big{(}(v,\phi),(w,\psi)\Big{)}&=&(\mathcal{B}_{\psi}+\mathfrak{B}_{\psi})(v) \ +\ (\mathcal{B}_{\phi}+\mathfrak{B}_{\phi})(w)\\ &&+\ M^{1}(w^{\prime},v^{\prime})\nabla(\Phi_{0}+\varphi)\ +\ M^{1}(\eta^{ \prime},v^{\prime})\nabla\psi\ +\ M^{1}(\eta^{\prime},w^{\prime})\nabla\phi\\ &&+\ M^{2}(w^{\prime},v^{\prime\prime})(\Phi_{0}+\varphi)\ +\ M^{2}(\eta^{ \prime},v^{\prime\prime})\psi\ +\ M^{2}(v^{\prime},w^{\prime\prime})(\Phi_{0}+\varphi)\\ &&+\ M^{2}(v^{\prime},\eta^{\prime\prime})\psi\ +\ M^{2}(\eta^{\prime},w^{ \prime\prime})\phi\ +\ M^{2}(w^{\prime},\eta^{\prime\prime})\phi\\ &&+\ F^{3}(\eta^{2},w,\varphi)\ +\ F^{4}(\eta^{2},v,\psi)\ +\ F^{5}(\eta,v,w,\Phi_{0}+\varphi) \end{array}\] For the sake of the proverbial deceased horse, we will prove the lemma for the term \(M^{2}(w^{\prime},v^{\prime\prime})(\Phi_{0}+\varphi)\); it is straightforward to verify that the same argument applies equally well to the remaining terms. To begin, we bound that \(H_{\mathrm{b}}^{m}\)-term in the norm. By Proposition 8.9 Item (3) part (iii), this term is itself a sum of terms of the form \(m_{p}(y)w^{\prime}v^{\prime\prime}(\Phi_{0}+\varphi)\). Differentiating the part involving \(\varphi\) of such a term, \[\|\nabla_{\mathrm{b}}^{m}(m_{p}(y)w^{\prime}v^{\prime\prime}\varphi )\|_{L^{2}} \leqslant C_{m}\sum_{0\leqslant k\leqslant m}\|\nabla_{\mathrm{b}}^{k}(v^{ \prime}w^{\prime\prime})\nabla_{\mathrm{b}}^{m-k}(m_{p}\varphi))\|_{L^{2}}\] \[\leqslant C_{m}\sum_{0\leqslant k\leqslant m}\|\nabla_{\mathrm{b}}^{k}(v^{ \prime}w^{\prime\prime})\|_{L^{2,2}(S^{1})}^{2}\|\nabla_{\mathrm{b}}^{m-k}(m_ {p}\varphi)\|_{H_{\mathrm{b}}^{2}}\] \[\leqslant C_{m}\sum_{0\leqslant k\leqslant m}\|v^{\prime}w^{\prime\prime} \|_{L^{2,2}(S^{1})}^{1-\frac{k}{m}}\ \|v^{\prime}w^{\prime\prime}\|_{L^{m+2,2}(S^{1})}^{2}\ \|m_{p}\varphi\|_{H_{\mathrm{b}}^{2}}^{1-\frac{k}{m}}\ \|m_{p} \varphi\|_{H_{\mathrm{b}}^{m+2}}^{\frac{k}{m}}\] where we have used the Sobolev embedding \(C^{0}\hookrightarrow H^{2}(S^{1})\) and then the interpolation inequalities with \(m_{2}=m+2,m_{1}=2\). 
By Young's inequality with exponents \(p=\frac{m}{k}\) and \(q=\frac{m}{m-k}\), one finds the above is bounded by \[\leqslant C_{m}\Big{(}\ \|v^{\prime}w^{\prime\prime}\|_{L^{m+2,2}(S^{1})}\|m_{p }\varphi\|_{L^{2,2}(S^{1})}\ +\ \|v^{\prime}w^{\prime\prime}\|_{L^{2,2}(S^{1})}\|m_{p}\varphi\|_{H_{ \mathrm{b}}^{m+2}}\Big{)}\] \[\leqslant C_{m}\Big{(}\ \|v^{\prime}\|_{L^{m+4,2}(S^{1})}\|w^{\prime\prime}\|_{L^{4,2 }(S^{1})}\ +\ \|v^{\prime}\|_{L^{4,2}(S^{1})}\|w^{\prime\prime}\|_{L^{m+4,2}(S^{1})}\] \[+\ \|v^{\prime}\|_{L^{4,2}(S^{1})}\|w^{\prime\prime}\|_{L^{4,2}(S ^{1})}\left(\|m_{p}\|_{H_{\mathrm{b}}^{m+4}}\|\varphi\|_{H_{\mathrm{b}}^{4}} +\|m_{p}\|_{H_{\mathrm{b}}^{1}}\|\varphi\|_{H_{\mathrm{b}}^{m+4}}\right)\Big{)}\] \[\leqslant C_{m}\Big{(}\ \|v\|_{L^{m+5,2}}\|w\|_{L^{6,2}}\ +\ \|v\|_{L^{5,2}}\|w\|_{L^{m+6,2}}\ +\ \|v\|_{L^{5,2}}\|w^{\prime\prime}\|_{L^{6,2}}\ \cdot\ \|(p,\eta,\varphi)\|_{m+6}\Big{)}\] where we have repeated the interpolation and Young's steps from above with \(m_{2}=m+4\) and \(m_{1}=4\) on both products, and then used the fact that \(6\leqslant m_{0}\) so that \(\|m_{p}\|_{H_{\mathrm{b}}^{4}}+\|\varphi\|_{H_{\mathrm{b}}^{4}}\leqslant C\). This shows the desired estimate for \(r,r^{\prime}=6\). The same steps apply to the \(r^{\nu}H_{\mathrm{b}}^{m-1}\) term in the norm using the commutation relations from Lemma 8.15. The other terms are similar, with the constant term in \((1+\|(p,\eta,\varphi)\|_{m+r^{\prime}})\) on the right hand side arising from the terms not involving \((p,\varphi,\eta)\) such as \(\mathcal{B}_{\psi}(v)\). ### Proofs of Theorem 1.4 and Corollary 1.5 In this subsection, we invoke the Nash-Moser Implicit Function Theorem 7.4 to conclude the proofs of Theorem 1.4 and Corollary 1.5, beginning with the latter. Proof of Corollary 1.5.: Lemmas 8.17, 8.19, and 8.20 verify hypotheses **(I)**, **(II)**, and **(III)** from Section 7.2 respectively on \(V_{0}\times(U_{0}\cap\mathbf{P}_{\mathcal{X}})\). The formula (8.7) and Lemma 8.8 (which extends easily to the spaces \(\mathcal{H}^{m,0}\)) show that \(f_{p}\in\mathbf{P}_{\mathcal{Y}}\) and \(\|f_{p}\|_{m}\leqslant C\|p\|_{m+s}\). It remains to show that the property (P) of being polyhomogeneous is propagated by the iteration in the sense of Definition 7.3. Lemma 8.12 and its proof in Appendix B show that the smoothing operators \(S_{\varepsilon},S_{\varepsilon}^{\mathrm{b}}\) preserve polyhomogeneity. It is evident from the definition of the Dirac operator \(\not{D}_{p}\) that for any metric (including pullback metrics \(p_{\eta}\)) \[\varphi\in\mathbf{P}_{\mathcal{X}}\ \ \Rightarrow\ \ \overline{\overline{\mathbb{D}}}_{p}(\eta, \varphi)\in\mathbf{P}_{\mathcal{Y}}\] preserves polyhomogeneity. To show polyhomogeneity (P) is propagated, we therefore verify that \[g\in\mathbf{P}_{\mathcal{Y}}\ \ \Rightarrow\ \ (\mathrm{d}_{(\eta,\varphi)} \overline{\overline{\mathbb{D}}}_{p})^{-1}g\in\mathbf{P}_{\mathcal{X}}.\] Let \(u=(\mathrm{d}_{(\eta,\varphi)}\overline{\overline{\mathbb{D}}}_{p})^{-1}g\) be the solution, and suppose that \(g\) is polyhomogeneous with index set \(\mathbb{Z}^{+}-\frac{1}{2}\). By the block diagonal decomposition (8.32) from Lemma 8.17, one has \(\overline{T}_{p_{\eta}}(v)=\Pi_{p_{\eta}}(g)\) in the case that the upper right entry \(*\) vanishes. In this case, since \(g\in Y\) is smooth, then \(\Pi_{0}(g)\in\mathbf{Ob}\cap H_{\mathrm{b}}^{m}\) for every \(m\geqslant 0\) and admits a weak expansion as in Remark 3.10. 
Lemma 8.14 applies identically in the case of index set \(\mathbb{Z}^{+}-\frac{1}{2}\) to show that the coefficients are smooth, hence \(\Pi_{0}(g)\) is polyhomogeneous with index set \(\mathbb{Z}^{+}-\frac{1}{2}\). Since the latter is a vector space, the right-hand side of the second equation \[\not{D}_{p_{\eta}}\phi=(1-\Pi_{0})(g)\] is polyhomogeneous with index set \(\mathbb{Z}^{+}-\frac{1}{2}\). Since the coefficients of \(\not{D}_{p_{\eta}}\) are smooth, this left-hand side must also be polyhomogeneous with index set \(\mathbb{Z}^{+}+\frac{1}{2}\) (see [45], Proposition 7.17) with the caveat of possibly having logarithm terms appear with the \(r^{1/2}\) coefficient. The case where \(*\) is non-zero is the same as \(\Phi_{p_{\eta}}\) is polyhomogeneous. The final point is to rule out the appearance of logarithm terms \(r^{1/2}\log(r)\). This is a consequence of the restrictions on the \(\theta_{p}\)-Fourier modes in Fermi coordinates that appear with the \(r^{1/2}\) coefficient and follows from formally solving the first term at each stage. Logarithm terms \(e^{ik\theta}r^{1/2}\log(r)\) would arise from the right-hand side having terms \(r^{-1/2}e^{\pm 3i\theta/2}\), but since the right-hand side is always a combination of \(\Pi_{0}(g)\) which has only modes \(r^{-1/2}e^{\pm i\theta/2}\) and the derivatives \(\nabla_{z},\nabla_{\overline{z}}\) of a polyhomogeneous spinor from the previous stage such terms never arise. This property is preserved by the smoothing operators \(S^{\mathrm{b}}_{\varepsilon}\) by construction (see Appendix B). We conclude that the property (P) of being polyhomogeneous in the sense of having expansions of the form (8.1)-(8.2) is propagated. By the Nash-Moser Implicit Function Theorem 7.4, there is an open neighborhood \(U\subset\mathcal{P}\) of smooth parameters such that for \(p\in U\) there exists a unique solution \((\mathcal{Z}_{p},\Phi_{p},\Lambda_{p})\) to the equation \[\not{D}_{p}(\mathcal{Z}_{p},\Phi_{p})=\Lambda_{p}\Phi_{p} \tag{8.43}\] and the triples \((\mathcal{Z}_{p},\Phi_{p},\Lambda_{p})\) define a smooth tame graph of \(U\subset U\times\mathcal{X}\). This completes the proof of Corollary 1.5 in the presence of Assumptions 4* and 5*. To eliminate the first is trivial as the deformations of \(\mathcal{Z}_{0}\) are local along each component. In the absence of Assumption 5*, the standard Kuranishi picture (see, e.g. Section 3.3 of [7]) applies to show that the set of parameters for which (8.43) holds is described by the zero set of a smooth tame map \[\kappa_{p}:U\times\mathbb{R}^{n}\to\mathbb{R}^{n}\] where \(n=\dim(\ker(T_{\Phi_{0}}))\) is the dimension of the kernel of the index \(0\) map from Section 6.3. Proof of Theorem 1.4.: The projection \(\pi(\not{M}_{\mathbb{Z}_{2}})\subseteq U\cap\mathcal{P}\) of the universal moduli space of \(\mathbb{Z}_{2}\)-harmonic spinors to the parameter space is defined by the zero-set \[\pi(\not{M}_{\mathbb{Z}_{2}})=\Lambda^{-1}(0)\cap U\] of the eigenvalue \(\Lambda:U\to\mathbb{R}\) of Corollary 1.5, and there is locally a unique \(\mathbb{Z}_{2}\)-harmonic spinor \((\mathcal{Z}_{p},\Phi_{p})\) up to normalization and sign for each \(p\in\Lambda^{-1}(0)\), hence the projection \(\pi\) is a local homeomorphism. In the presence of Assumption 5*, the map \(\Lambda:U\to\mathbb{R}\) is transverse to \(0\). To see this, let \(p(s)\) be a path of parameters which we will choose momentarily. 
By Corollary 1.5, such a path implicitly defines triples \((\mathcal{Z}_{s},\Phi_{s},\Lambda_{s})\) for \(s\) sufficiently small. Differentiating (8.43) at \(s=0\) yields the relation \[\not{D}_{\dot{\mathcal{Z}}}\Phi_{0}+\not{D}_{\dot{p}}\Phi_{0}+\not{D}_{0}\dot{\Phi}=\dot{\Lambda}\Phi_{0} \tag{8.44}\] where \(\cdot\) denotes the \(s\)-derivative. We now choose \(p(s)=(g(s),B(s))\) so that the derivative \((\dot{g},\dot{B})\) has the following properties. Let \(\dot{B}\) be a smooth perturbation supported on a neighborhood disjoint from \(N_{r_{0}}(\mathcal{Z}_{0})\) such that \(\langle\gamma(\dot{B})\Phi_{0},\Phi_{0}\rangle\neq 0\). Given this, we define \(\dot{g}\) in terms of \(\dot{B}\) as follows. By Assumption 4*, we know that \(T_{\Phi_{0}}:L^{2,2}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\to\ker(\not{D}_{0}|_{L^{2}})\) is injective with closed range and \(1\)-dimensional cokernel. Let \(\Phi_{1}\subseteq\ker(\not{D}|_{L^{2}})\) denote the orthogonal complement of its range. Since \(\overline{T}_{\Phi_{0}}\) is an isomorphism, we must have \(\langle\Phi_{1},\Phi_{0}\rangle\neq 0\). Decompose \(\Pi_{0}(\gamma(\dot{B})\Phi_{0})=(c\Phi_{1},\xi)\), and set \(\dot{g}=\dot{g}_{\eta}\) where \(T_{\Phi_{0}}(\eta)=-\xi\). Taking the inner product of (8.44) with \(\Phi_{1}\) then yields \[\langle\not{D}_{\dot{\mathcal{Z}}}\Phi_{0},\Phi_{1}\rangle+\langle\not{D}_{\dot{p}}\Phi_{0},\Phi_{1}\rangle+\langle\not{D}_{0}\dot{\Phi},\Phi_{1}\rangle=\dot{\Lambda}\langle\Phi_{0},\Phi_{1}\rangle.\] \[\|\chi_{n}^{2}u_{\ell}\|_{rH_{x}^{1}(B_{n\ell})}^{2}\leqslant C\|\not{D}_{0}(\chi_{n}^{2}u_{\ell})\|_{L^{2}(Y_{0})}^{2}\ \leqslant C\left(\ \|\chi_{n}^{2}\not{D}_{0}u_{\ell}\|_{L^{2}(B_{n\ell})}^{2}+\|d\chi_{n}.u_{\ell}\|_{L^{2}(B_{n\ell})}^{2}\right).\] (A.5) By choosing \(c_{1}\) sufficiently small (and using that \(|\chi_{n}^{2}|\leqslant|\chi_{n}|\)), the \(\|\chi_{n}^{2}\not{D}_{0}u_{\ell}\|^{2}\) term can be absorbed on the left hand side of A.2, and the \(\|d\chi_{n}.u_{\ell}\|^{2}\) term can be absorbed into A.3 by increasing \(c^{2}\). 
Substituting A.5 again and increasing the constants yields \[\|u_{\ell}\|_{rH_{x}^{1}(A_{n\ell})}^{2}\leqslant\|\chi_{n}^{2}u _{\ell}\|_{rH_{x}^{1}(A_{n\ell})}^{2} \leqslant C_{1}\frac{|\ell|^{2}}{R_{0}^{2}}\|u_{\ell}\|_{L^{2}(B_{n\ell})}+ \frac{1}{2c_{1}}\|g_{\ell}\|_{rH_{x}^{-1}(B_{n\ell})}\] which shows, invoking the assumption on \(g_{\ell}\), that \[\|u_{\ell}\|_{rH_{x}^{1}(A_{n\ell})}^{2} \leqslant C_{1}\frac{|\ell|^{2}}{R_{0}^{2}}\|u_{\ell}\|_{rH_{x}^{1}(B_{n \ell})}+\frac{C_{m}^{\prime}}{|\ell|^{2+2m}}e^{-2n/c_{m}}.\] (A.6) Then, because of the restriction of Fourier modes on \(\eta_{\ell}\), \[C_{1}\frac{|\ell|^{2}}{R_{0}^{2}}\int_{B_{n\ell}}|u_{\ell}|^{2}\ dV \leqslant\frac{4C_{1}}{R_{0}^{2}}\left(\int_{A_{(n-1)\ell}}|\nabla u_{\ell}|^{ 2}\ dV+\int_{A_{n\ell}}|\nabla u_{\ell}|^{2}\ dV+\int_{A_{(n+1)\ell}}|\nabla u_{ \ell}|^{2}\ dV\right).\] Now set \(\mathfrak{a}_{n}=\|u_{\ell}\|_{L^{1,2}(A_{n\ell})}^{2}\), and choose \(R_{0}\) so that \(\frac{4C_{1}}{R_{0}}^{2}<\frac{1}{200}\). Equation A.6 implies the discrete differential inequality \[\mathfrak{a}_{n}-\tfrac{1}{100}(\mathfrak{a}_{n-1}+\mathfrak{a}_{n+1}) \leqslant\frac{C_{m}^{\prime}}{|\ell|^{2+2m}}e^{-2n/c_{m}}.\] To conclude, we apply a discrete version of the maximum principle: let \(\mathfrak{s}_{n}=\frac{2C_{m}}{|\ell|^{2+2m}}e^{-2n/c_{m}}\). Possibly by increasing \(c_{m}\) to \(c_{m}^{\prime}\), this rather trivially satisfies \[\mathfrak{s}_{n}-\tfrac{1}{100}(\mathfrak{s}_{n-1}+\mathfrak{s}_{n+1}) \leqslant\frac{C_{m}^{\prime}}{|\ell|^{2+2m}}e^{-2n/c_{m}^{\prime}}.\] Hence the difference \(\mathfrak{r}_{n}=\mathfrak{a}_{n}-\mathfrak{s}_{n}\) satisfies \[\mathfrak{r}_{n}-\tfrac{1}{100}(\mathfrak{r}_{n-1}+\mathfrak{r}_{n+1})\leqslant 0\] (A.7) and, possibly increasing \(C_{m}^{\prime}\), \(\mathfrak{r}_{0}\leqslant 0\) and \(r_{n}\to 0\) as \(n\to\infty\) (the latter requirement is simply from integrability). The "maximum principle" then implies \(\mathfrak{r}_{n}\leqslant 0\) for all \(n\), since an interior maximum with \(\mathfrak{r}_{n}\geqslant\mathfrak{r}_{n-1},\mathfrak{r}_{n+1}\) would violate A.7. We conclude that \(u_{\ell}\) satisfies \[\|u_{\ell}\|_{L^{1,2}(A_{n\ell})}^{2}\leqslant\frac{C_{m}^{\prime}}{|\ell|^{2+ 2m}}\mathrm{Exp}\left(-\frac{2n}{c_{m}^{\prime}}\right)\] (A.8) which completes the lemma. ## Appendix B Appendix II: Boundary and Edge Regularity This appendix gives proofs of two facts about regularity in the boundary and edge Sobolev spaces, namely Lemma 8.12 and Lemma 8.14. Recall the Frechet spaces defined above in Lemma 8.12. To restate the assertion of Lemma 8.12 succinctly: **Lemma B.1**.: For \(0<\varepsilon\leqslant 1\) there exist smoothing operators \[S_{\varepsilon}:C^{\infty}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\to C^{\infty}( \mathcal{Z}_{0};N\mathcal{Z}_{0})\qquad\qquad\qquad S^{\mathrm{b}}_{ \varepsilon}:\bigcap_{m\geqslant 0}X^{\prime\prime}_{m}\to\bigcap_{m \geqslant 0}X^{\prime\prime}_{m}\qquad\qquad\qquad S^{\mathrm{b}}_{ \varepsilon}:\mathcal{Y}\to\mathcal{Y}\] satisfying properties (i)-(iii) of Definition 7.1 and preserving the property (P) of polyhomogeneity as in defined by (8.1)-(8.2). Additionally, in Fermi coordinates around \(\mathcal{Z}_{0}\), \(S^{\mathrm{b}}_{\varepsilon}\) does not introduce new Fourier modes in \(\theta\). Proof.: On \(\mathcal{X}_{0}=C^{\infty}(\mathcal{Z}_{0};N\mathcal{Z}_{0})\) the operator \(S_{\varepsilon}\) may be defined straightforwardly as the truncation of the Fourier series at \(|\ell|\sim\frac{1}{\varepsilon}\) in a global trivialization. 
It is instructive for what follows, however, to construct \(S_{\varepsilon}\) using Schwartz kernels. Alternatively, \(S_{\varepsilon}\) may be defined as a convolution operator using a Schwartz kernel that smoothly approximates the \(\delta\)-distribution along the diagonal in \((\mathcal{Z}_{0})^{2}=\mathcal{Z}_{0}\times\mathcal{Z}_{0}\). More precisely, let \(\chi(r)\) be a cut-off function equal to \(1\) near \(r=0\) and vanishing for \(r>1\). Fix a collection \(U_{j}\times\mathbb{C}\) for \(j=1,..,n\) of trivializations of \(N\mathcal{Z}_{0}\) on contractible open sets, and for each \(j\), choose nested cut-off functions \(\xi_{j},\beta_{j}\) such that \(\mathrm{supp}(\beta_{j})\Subset\{\xi_{j}=1\}\). Then define \[S_{\varepsilon}(\eta)(t):=\frac{1}{\varepsilon}\sum_{j=1}^{n}\xi_{j}(t)\int_{ \mathcal{Z}_{0}}\chi\left(\tfrac{|t-t^{\prime}|}{\varepsilon}\right)\beta_{j} (t^{\prime})\eta(t^{\prime})dt^{\prime}.\] (B.1) where the constant \(\frac{1}{\varepsilon}\) serves to normalize \(\chi\) in \(L^{2}\). Properties (i)-(iii) now follow easily. The construction of \(S^{\mathrm{b}}_{\varepsilon}\) is analogous, but now we de-singularize the \(\delta\)-distribution on the diagonal in the blown-up product defined as follows. Let \(B=\mathcal{Z}_{0}\times\mathcal{Z}_{0}\subset Y\times Y\), and let \(D(B)\) denote a disk bundle of finite radius in the normal bundle. Define \[Y^{2}_{\mathrm{b}}:=(Y-(N_{r_{0}}\mathcal{Z}_{0}))^{2}\cup D(B).\] This blow-up is a compact \(6\)-manifold with corners, having three boundary strata of codimension \(1\) consisting of the interiors of \(\partial(N_{r_{0}}\mathcal{Z}_{0}\times Y),\partial(Y\times N_{r_{0}} \mathcal{Z}_{0}),\partial(D(B))\) which intersect along codimension \(2\) corners. This space can be given local coordinates \((s,\rho,\theta,\theta^{\prime},t,t^{\prime})\) in a neighborhood of the diagonal, where \(s=[r,r^{\prime}]\) is a projective coordinate along the blow-up boundary, and \(\rho=r^{\prime}\). Away from these strata, \(S^{\mathrm{b}}_{\varepsilon}\) can be defined analogously to (B.1); near the boundary strata it is defined as a product \[S^{\mathrm{b}}_{\varepsilon}:=S^{\theta}_{\varepsilon}\circ S^{\prime}_{\varepsilon}\] where \(S^{\theta}_{\varepsilon}\) is defined by truncation of the \(\theta\)-Fourier modes in a local trivialization, and \(S^{\prime}_{\varepsilon}\) is given in Fermi coordinates \((s,\rho,t,t^{\prime})\) by \[S^{\prime}_{\varepsilon}(\psi)(r,t,\theta):=\frac{1}{\varepsilon^{2}}\chi \int_{Y-\mathcal{Z}_{0}}\chi\left(\tfrac{|s-1|}{\varepsilon}\right)\chi \left(\tfrac{|t-t^{\prime}|}{\varepsilon}\right)\frac{1}{r^{\prime}}(\beta \psi)dt^{\prime}dr^{\prime}\] (B.2) where the factor of \(1/r^{\prime}\) appears because \(|r-r^{\prime}|\sim r^{\prime}s\) and the \(\delta\)-distribution is homogeneous of order \(-1\). The properties (i)-(iii) for the spaces \(H^{m}_{\mathrm{b}}\) follow analogously to the compact case. That \(S^{\mathrm{b}}_{\varepsilon}\) introduces no new Fourier modes in \(\theta\) is manifest from the definition, and the fact that polyhomogeneity is preserved is a consequence of the pushforward theorem or of direct inspection of the integral (B.2) (see [17] Section 3.1). 
Since the ratio \(r/r^{\prime}\) is uniformly bounded where \(\chi\neq 0\), the commutators \([\nabla^{\varepsilon},S^{\mathrm{b}}_{\varepsilon}]\) and \([r^{\alpha},S^{\mathrm{b}}_{\varepsilon}]\) are uniformly bounded, properties (i)-(iii) for the space \(\bigcap_{m\geqslant 0}H^{m,1}_{\mathrm{b},e}\) follows from the equivalent description of the norm (2.13). The same applies for the terms \((r\partial_{r}\pm\frac{1}{2})\psi\) and therefore for the spaces \(r\mathcal{H}^{m,1}\) and \(\mathcal{H}^{m,0}\). What remains is to prove Lemma 8.14, which requires several steps. **Lemma B.2**.: If \(\varphi\in r^{\alpha}H_{\rm b}^{3}\) for \(\alpha>1\) then the \(\varphi\) satisfies the pointwise bound \[|\varphi(x)|\leqslant C\|\varphi\|_{rH_{\rm b}^{3}}.\] (B.3) Proof.: We first prove the lemma in the \(1\)-dimensional case. Consider \(\mathbb{R}^{+}=(0,\infty)\) with the measure \(rdr\) and suppose that \(\varphi\in rH_{b}^{1}(rdr)\) with \({\rm supp}(\varphi)\subseteq(0,1]\). Then we claim that there is a constant \(C\) so that \[|\varphi(x)|\leqslant C\|\varphi\|_{rH_{\rm b}^{1}(rdr)}=C\left(\int_{\mathbb{ R}^{+}}\frac{|\varphi|^{2}}{r^{2}}+|\nabla\varphi|^{2}\ rdr\right)^{1/2}.\] (B.4) This follows from a dyadic decomposition. Since \(r\) is uniformly bounded on \([1/2,2]\) and \(L^{1,2}[1/2,2]\hookrightarrow C^{0}[1/2,2]\) by the standard Sobolev embedding, we have \(|\varphi(1)|^{2}\leqslant c\int|\varphi|^{2}+|\nabla\varphi|^{2}dr\leqslant c \|\varphi\|_{rH_{\rm b}^{1}}^{2}\). Then, by the Fundamental Theorem of Calculus, \[|\varphi(1/2)| \leqslant |\varphi(1)|+\int_{1/2}^{1}|\varphi^{\prime}(\rho)|d\rho\leqslant |\varphi(1)|+\left(\int_{1/2}^{1}\rho|\varphi^{\prime}(\rho)|^{2}d\rho \right)^{1/2}\left(\int_{1/2}^{2}\frac{1}{\rho}d\rho\right)^{1/2}\] \[\leqslant |\varphi(1)|+(\log 2)^{1/2}\|\varphi\|_{rH_{\rm b}^{1}([1/2,1],rdr)}\] Similarly, \(|\varphi(1/4)|\leqslant|\varphi(1/2)|+(\log 2)^{1/2}\|\varphi\|_{rH_{\rm b}^{1}([ 1/4,1/2],rdr)}\leqslant|\varphi(1)|+(\log 2)^{1/2}\|\varphi\|_{rH_{\rm b}^{1}([1/4,1], rdr)}\) where the second inequality follows from substituting the above. In general, using the estimate on \(|\varphi(1)|\) we conclude that \[|\varphi(2^{-k})|\leqslant C\|\varphi\|_{rH_{\rm b}^{1}(rdr)}.\] (B.4) then follows from applying the Fundamental Theorem of calculus again for \(x\in[2^{-k},2^{-k+1}]\). In general, for \(\varphi\in H_{\rm b}^{m}(Y-\mathcal{Z}_{0};S_{0})\), the lemma follows from the above by applying (B.4) to rays of constant \((t,\theta)\) and after using the Sobolev restriction theorem (which increases the number of derivatives needed). Next, we have the following fundamental fact about ODEs. 
For it, we use the \(1\)-dimensional b-spaces \(r^{\alpha}L_{\rm b}^{1,2}([0,1],rdr)\) and \(r^{\alpha}L^{2}([0,1],rdr)\) defined by the norms \[\|u\|_{r^{\alpha}L_{\rm b}^{1,2}}=\left(\int_{0}^{1}(|r\partial_{r}u|^{2}+|u|^{2})r^{-2\alpha}rdr\right)^{1/2}\qquad\qquad\|u\|_{r^{\alpha}L_{\rm b}^{2}}=\left(\int_{0}^{1}|u|^{2}r^{-2\alpha}rdr\right)^{1/2}.\] **Lemma B.3**.: Provided \(\alpha>3/2\), the map \[(r\partial_{r}-\tfrac{1}{2}):r^{\alpha}L_{\rm b}^{1,2}(0,1]\to r^{\alpha}L_{\rm b}^{2}(0,1]\] is an isomorphism and there holds \[\|u\|_{r^{\alpha}L_{\rm b}^{1,2}}\leqslant\|(r\partial_{r}-\tfrac{1}{2})u\|_{r^{\alpha}L_{\rm b}^{2}}\] (B.5) Proof.: Setting \(r=e^{s}\) for \(s\in(-\infty,0]\), the problem is equivalent to the analogous statement for \[\partial_{s}-\tfrac{1}{2}:e^{(1-\alpha)s}L^{1,2}((-\infty,0],ds)\longrightarrow e^{(1-\alpha)s}L^{2}((-\infty,0],ds)\] which is conjugate to \[\tfrac{1}{e^{(\alpha-1)s}}(\partial_{s}-\tfrac{1}{2})e^{(\alpha-1)s}=(\partial_{s}+\alpha-\tfrac{3}{2}):L^{1,2}((-\infty,0],ds)\to L^{2}((-\infty,0],ds).\] The claim then follows directly from integrating by parts since the boundary term \((\alpha-\tfrac{3}{2})|u(0)|^{2}>0\) is strictly positive. We now conclude the proof of Lemma 8.14. Proof of Lemma 8.14.: If \(\varphi\) is compactly supported in the region \(Y-N_{r_{0}/2}(\mathcal{Z}_{0})\) where \(r\) is uniformly bounded below, the lemma is immediate from the standard Sobolev Embedding Theorem. We may therefore assume that \(\varphi\) is supported in a tubular neighborhood of \(\mathcal{Z}_{0}\). Since \(\varphi\in r\mathcal{H}^{m,1}\cap\mathbf{P}_{\mathcal{X}}\) by assumption, we may write \[\varphi=A(t,\theta)r^{1/2}+B(t,\theta,r)\] in local coordinates, after which it suffices to show the bound for each term individually. Applying (B.5) to derivatives and integrating over the \(t,\theta\) variables leads to \[\|r\partial_{r}u\|_{r^{\alpha}H^{m}_{\mathrm{b}}}+\|u\|_{r^{\alpha}H^{m}_{\mathrm{b}}}\leq C\|(r\partial_{r}-\tfrac{1}{2})u\|_{r^{\alpha}H^{m}_{\mathrm{b}}}\] for \(\alpha>3/2\) and in particular for \(\alpha=1+\nu\). Applying this to \(B(t,\theta,r)\) and discarding the first term shows that \[\|B(r,t,\theta)\|_{r^{1+\nu}H^{m}_{\mathrm{b}}}\leq C\|(r\partial_{r}-\tfrac{1}{2})B\|_{r^{1+\nu}H^{m}_{\mathrm{b}}}=C\|(r\partial_{r}-\tfrac{1}{2})\varphi\|_{r^{1+\nu}H^{m}_{\mathrm{b}}}\leq C\|\varphi\|_{r\mathcal{H}^{m+1,1}}\] (B.6) since \((r\partial_{r}-\tfrac{1}{2})\) annihilates \(A(t,\theta)r^{1/2}\). Then, applying Lemma B.2 to \(B(t,\theta,r)r^{-1/2}\) shows that \[|B(t,\theta,r)r^{-1/2}|\leq\|B(r,t,\theta)\|_{r^{3/2}H^{m}_{\mathrm{b}}}\leq\|B(r,t,\theta)\|_{r^{1+\nu}H^{m}_{\mathrm{b}}}\leq C\|\varphi\|_{r\mathcal{H}^{m+1,1}}\] and the result for \(B(t,\theta,r)\) follows after multiplying by \(r^{1/2}\). For the first term, the triangle inequality and (B.6) show that \[\|A(t,\theta)r^{1/2}\|_{rH^{m}_{\mathrm{b}}}\leq\|A(t,\theta)r^{1/2}+B\|_{rH^{m}_{\mathrm{b}}}+\|B(r,t,\theta)\|_{rH^{m}_{\mathrm{b}}}\leq C\|\varphi\|_{r\mathcal{H}^{m+2,1}}.\] Finally, since \(\|A(t,\theta)r^{1/2}\|_{rH^{m}_{\mathrm{b}}}\sim\|A(t,\theta)\|_{L^{m,2}(T^{2})}\), the bound for the first term follows from the Sobolev embedding on \(T^{2}\) after increasing \(m+2\) to \(m+4\).
2310.10366
Ewald's Conjecture and integer points in algebraic and symplectic toric geometry
We solve several open problems concerning integer points of polytopes arising in symplectic and algebraic geometry. In this direction we give the first proof of a broad case of Ewald's Conjecture (1988) concerning symmetric integral points of monotone lattice polytopes in arbitrary dimension. We also include an asymptotic quantitative study of the set of points appearing in Ewald's Conjecture. Then we relate this work to the problem of displaceability of orbits in symplectic toric geometry. We conclude with a proof for the $2$-dimensional case, and for a number of cases in higher dimensions, of Nill's Conjecture (2009), which is a generalization of Ewald's conjecture to smooth lattice polytopes. Along the way the paper introduces two new classes of polytopes which arise naturally in the study of Ewald's Conjecture and symplectic displaceability: neat polytopes, which are related to Oda's Conjecture, and deeply monotone polytopes.
Luis Crespo, Álvaro Pelayo, Francisco Santos
2023-10-16T13:04:49Z
http://arxiv.org/abs/2310.10366v3
# Ewald's conjecture and integer points in algebraic and symplectic toric geometry ###### Abstract. We solve several open problems concerning integer points of polytopes arising in symplectic and algebraic geometry. Ewald's Conjecture from 1988 states that if \(P\) is a monotone \(n\)-polytope in \(\mathbb{R}^{n}\) then the set \(\mathbb{Z}^{n}\cap P\cap-P\) contains a unimodular basis of the lattice \(\mathbb{Z}^{n}\). We prove this conjecture for \(n\)-polytopes which do not recursively contain unimodular triangles. Then we study the combinatorial and asymptotic properties of the function \(P\mapsto\mathbb{Z}^{n}\cap P\cap-P\). Ewald's Conjecture is closely related to problems in toric geometry, and we state the implications of our results in this context. In 2009 Nill proposed a generalization of Ewald's Conjecture which says that if \(P\) is an \(n\)-dimensional lattice smooth polytope in \(\mathbb{R}^{n}\) then \(\mathbb{Z}^{n}\cap P\cap-P\) contains a unimodular basis of \(\mathbb{Z}^{n}\). We prove this conjecture for \(n=2\). In the last part of the paper we provide algorithms for the proofs we gave earlier of these conjectures, hence strengthening the interplay between theory and computation in convex geometry/combinatorics, and by extension, in symplectic and algebraic geometry. Key words and phrases: Ewald Conjecture, monotone symplectic manifold, toric geometry, momentum map, smooth reflexive polytope, Delzant polytope, monotone polytope, mirror symmetry. Monotone polytopes are also of interest in algebraic geometry and in physics because of their association to Gorenstein Fano varieties, see Batyrev [3], Cox-Little-Schenck [9, Theorem 8.3.4], Franco-Seong [14], Haase-Melnikov [18] and Nill [26]. In the present paper we are interested in understanding, both theoretically as well as computationally, the properties of the following function, which we call the _Ewald symmetry function_; the function has as input a monotone polytope, and as output a set of lattice points. This function appears implicitly in the influential 1988 paper by Günter Ewald [13]. **Definition 1.1** (Ewald symmetry function).: The _Ewald symmetry function_ is the map \[\mathcal{E} : \mathfrak{M}\longrightarrow\bigcup_{n\in\mathbb{N}}\mathcal{P}(\mathbb{Z}^{n}),\] \[P\mapsto\mathcal{E}(P):=\mathbb{Z}^{\dim P}\cap P\cap-P.\] That is, if \(P\) is an \(n\)-dimensional monotone polytope then the set \(\mathcal{E}(P)\subset\mathbb{Z}^{n}\) consists precisely of the _symmetric integral points of \(P\)_, meaning integral points \(x\in\mathbb{Z}^{n}\) for which both \(x\in P\) and \(-x\in P\). The following is a long-standing conjecture concerning the function \(\mathcal{E}\). **Conjecture 1.2** (Ewald's Conjecture 1988 [13, Conjecture 2]).: _Let \(n\in\mathbb{N}\). If \(P\) is an \(n\)-dimensional monotone polytope in \(\mathbb{R}^{n}\) then the set \(\mathcal{E}(P)\) contains a unimodular basis of \(\mathbb{Z}^{n}\)._ The original formulation of Conjecture 1.2 states that, if \(P\) is a monotone polytope, the dual polytope of \(P\) (a _Fano polytope_) is contained in \([-1,1]^{n}\). As pointed out by Øbro [28], this is equivalent to saying that \(P\) contains, via a unimodular transformation, the polytope \(\operatorname{conv}\{\mathrm{e}_{1},-\mathrm{e}_{1},\ldots,\mathrm{e}_{n},-\mathrm{e}_{n}\}\), which gives the above formulation (here \(\mathrm{e}_{1},\ldots,\mathrm{e}_{n}\) are the canonical vectors of \(\mathbb{R}^{n}\)). McDuff [23, Section 3.1] and Payne [29, Remark 4.6] use the formulation above. 
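For small examples the set \(\mathcal{E}(P)\) can be computed directly from an inequality description \(Ax\leqslant\mathbf{1}\) of a monotone polytope. The following Python fragment is only a minimal brute-force sketch of Definition 1.1 (it is not the software used in [28]); the function names and the bound \(R\) for the search box are our own choices for illustration.

```python
from itertools import product

def in_polytope(A, x):
    """Check whether x lies in P = {y : A y <= 1} (one inequality per row of A)."""
    return all(sum(a * xi for a, xi in zip(row, x)) <= 1 for row in A)

def ewald_set(A, R):
    """Brute-force E(P) = Z^n ∩ P ∩ (-P) for P = {x : A x <= 1},
    scanning the lattice points of the box [-R, R]^n (R must be chosen
    large enough for the box to contain P)."""
    n = len(A[0])
    return [x for x in product(range(-R, R + 1), repeat=n)
            if in_polytope(A, x) and in_polytope(A, tuple(-c for c in x))]

# Example: the monotone square [-1, 1]^2.  Here E(P) consists of all nine
# lattice points of P, and in particular contains the unimodular basis e1, e2.
A_square = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(ewald_set(A_square, 2))
```

Such an exhaustive search is of course only practical in very small examples; in this language, Conjecture 1.2 asks whether the list returned above always contains a unimodular basis of \(\mathbb{Z}^{n}\).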
The conjecture has been shown in some dimensions using software: indeed, Øbro proved [28, page 67] that Ewald's Conjecture holds for any \(n\leqslant 7\). Besides this result, little is known about Conjecture 1.2. In her paper [23] McDuff indicates that it is not even known whether \(\mathcal{E}(P)\neq\{0\}\) for every monotone polytope \(P\), and the same remark is made by Payne in [29] (McDuff and Payne remove the origin from \(\mathcal{E}(P)\) in their definition, while we do not). McDuff's article concerns symplectic geometry and in fact Conjecture 1.2 is closely related to certain problems in symplectic geometry. Conjecture 1.2 was generalized by Nill [38] to smooth polytopes, reflexive or not. Let \(\mathfrak{S}\) be the set of all smooth polytopes, of any dimension. Let \[\tilde{\mathcal{E}}:\mathfrak{S}\longrightarrow\bigcup_{n\in\mathbb{N}}\mathcal{P}(\mathbb{Z}^{n})\] be the function defined by the same expression as the Ewald symmetry function; the expression is in fact valid provided only that \(P\) is a subset of \(\mathbb{R}^{n}\). Since Ewald's original conjecture concerns monotone polytopes, we have preferred to define the Ewald symmetry function on the domain \(\mathfrak{M}\) instead of a more general domain. **Conjecture 1.3** (General Ewald's Conjecture, Nill 2009 [38]).: _Let \(n\in\mathbb{N}\). If \(P\) is an \(n\)-dimensional lattice smooth polytope in \(\mathbb{R}^{n}\) then the set \(\tilde{\mathcal{E}}(P)\) contains a unimodular basis of \(\mathbb{Z}^{n}\)._ The paper has three main goals. The first one is to understand the properties of the Ewald symmetry function; in particular we prove Ewald's Conjecture for an important class of polytopes, those which recursively do not contain unimodular triangles (Theorem 2.6), and study the connections of the conjecture to symplectic toric geometry (Corollaries 2.12, 9.6, 9.7, 9.8). The second main goal is to prove Nill's Conjecture in dimension \(2\) (Theorem 2.8). The third main goal (accomplished in Section 10) is to establish a strong connection between "theory" and "computation" in this context, by giving algorithms which implement the proofs of our main results, hence making them amenable to computational techniques. As a conclusion, the paper helps to deepen the links between "continuous" problems in symplectic and algebraic geometry, and "discrete" problems in combinatorics/convex geometry. ### Structure of the paper The paper is organized as follows. _The first and most extensive part of the paper is the "theory part"_, where we give statements and proofs of the results we just announced and several others; this part corresponds to the following sections: * In Section 2 we state the main results of the paper and outline the connections with symplectic toric geometry. * In Section 3 we review the meaning of the so called _weak, strong and star Ewald conditions_; the weak condition corresponds to the conclusion of Ewald's Conjecture. The other conditions are closely related to each other, although it is not clear how in all cases. We will discuss which of the conditions implies the other(s) (in some cases it is not known yet). * In Section 4 we prove Ewald's Conjecture (Conjecture 1.2) for the class of so called _recursively_ UT-_free polytopes_, meaning polytopes which do not recursively have unimodular triangles. * In Section 5 we show that many interesting recursively UT-free polytopes arise by applying the fiber bundle construction for polytopes. 
* In Section 6 we study the properties of the Ewald symmetry function, in particular how it behaves with respect to the Cartesian product and fiber bundle operations. * In Section 7 we study the so called \((\pm)\)-symmetrical polytopes and explain how they arise in the study of Ewald's Conjecture. * In Section 8 we study smooth polytopes in general, not necessarily reflexive, in dimension \(2\) (i.e. smooth polygons), and prove Conjecture 1.3 for \(n=2\). * In Section 9 we discuss the implications of Ewald's Conjecture in symplectic toric geometry, in particular recalling how monotone symplectic toric manifolds are classified by monotone polytopes. In fact, there is a strong connection between the problem of when a fiber of the momentum map of symplectic toric geometry is displaceable, and the problem of when a point in the monotone polytope is displaceable by a probe. _The second part of the paper is the "algorithmic part"_, consisting only of one section, but which relies heavily on the previous sections. This part helps strengthening the connection between theory and computation in this context, via the construction of algorithms; more concretely: * In Section 10, which is divided into three parts, we give algorithms which implement the theoretical proofs of the main results of the paper. * Section 10.1 gives the algorithm we use to detect recursively UT-free polytopes. * Section 10.2 gives the algorithm corresponding to the proof of Theorem 2.6. * Section 10.3 gives the algorithm corresponding to the proof of Theorem 2.8. We conclude the paper with Section 11, in which we state some open questions. ### Acknowledgements We thank Francisco Santos (Universidad de Cantabria) for helpful discussions and for the statement and proof of Lemma 4.1, and Monica Blanco (Universidad de Cantabria) for comments which have improved the paper. We thank Joe Brendel for pointing out to us a connection of his paper [5] with our work, which has resulted in the inclusion of Corollary 2.12 in the present paper. The first author is funded by grants PID2019-106188GB-I00 and PID2022-137283NB-C21 of MCIN/AEI/10.13039/501100011033, by FPU19/04163 of the Spanish Government, and by project CLaPTo (21.SI03.64658) of Universidad de Cantabria and Banco Santander. The second author is funded by a BBVA (Bank Bilbao Vizcaya Argentaria) Foundation Grant for Scientific Research Projects with title _From Integrability to Randomness in Symplectic and Quantum Geometry_. He thanks the Dean of the School of Mathematics Antonio Bru and the Chair of the Department of Algebra, Geometry and Topology at the Complutense University of Madrid, Rutwig Campoamor, for their support and excellent resources he is being provided with to carry out the BBVA project. He also thanks the Department of Mathematics, Statistics and Computation at the University of Cantabria for inviting him in July and August 2023 for a visit during which part of this paper was written, and the Universidad Internacional Menendez Pelayo (UIMP) for the hospitality during his visit. ## 2. Main results In this section we state our main results and explain some of the implications in symplectic toric geometry. These results concern Ewald's Conjecture from 1988 (Conjecture 1.2) and its generalization to smooth lattice polytopes by Nill in 2009 (Conjecture 1.3). Ewald's Conjecture was shown in complete generality up to dimension \(7\) by \(\O\)bro [28] with the help of computational software. 
As far as we know the present paper gives the first proof of a broad case of Ewald's Conjecture in arbitrary dimension \(n\in\mathbb{N}\) done without the help of software (the case of "being recursively free of unimodular triangles"). In low dimensions, e.g. \(2,3,4,5\) and \(6\), many monotone polytopes fall within this broad case, as shown in Table 4.1. ### Main results concerning Ewald's Conjecture (1988) We use the following notion introduced by McDuff in her paper on probes, motivated by her study of combinatorial problems in symplectic toric geometry. **Definition 2.1** (Strong Ewald Condition [23, Definition 3.5]).: Let \(P\) be an \(n\)-dimensional smooth polytope in \(\mathbb{R}^{n}\) with the origin in its interior. We say that \(P\) satisfies the _strong Ewald condition_ if the following condition holds: for each facet \(F\) of \(P\) the set \(\mathcal{E}(P)\cap F\) contains a unimodular basis of \(\mathbb{Z}^{n}\). It follows from Definition 2.1 that if \(P\) satisfies the strong Ewald condition then Conjecture 1.2 holds for \(P\). Using computational software Øbro was able to prove the following pioneering result concerning Ewald's Conjecture: **Theorem 2.2** (Øbro [28, page 67]).: _Let \(n\in\mathbb{N}\). If \(n\leqslant 7\) then every \(n\)-dimensional monotone polytope satisfies the strong Ewald condition._ The definitions we give below are valid for any convex polytope. **Definition 2.3** (Unimodular triangle).: A lattice triangle in \(\mathbb{R}^{n}\) is _unimodular_ if it is \(\operatorname{AGL}(n,\mathbb{Z})\)-equivalent to the triangle \(\operatorname{conv}(\mathrm{e}_{1},\mathrm{e}_{2},\mathrm{e}_{3})\), where \(\mathrm{e}_{1},\mathrm{e}_{2},\mathrm{e}_{3}\) are the first three canonical vectors of \(\mathbb{R}^{n}\). **Definition 2.4** (UT-free polytope).: We say that a convex polytope \(P\) is UT_-free_ (short for _unimodular triangle free_) if no face of \(P\) is a unimodular triangle. **Definition 2.5** (Recursively UT-free polytope).: Let \(n\in\mathbb{N}\). An \(n\)-dimensional polytope \(P\) is _recursively_ UT_-free_ (short for _recursively unimodular triangle free_) if either \(n=2\) or the following two conditions hold: * \(P\) is UT-free, and * for any facet \(F\) of \(P\), the intersection of \(P\) with the linear hyperplane parallel to \(F\) is recursively UT-free. Now we can state our first main result. **Theorem 2.6**.: _Let \(n\in\mathbb{N}\). Every \(n\)-dimensional monotone recursively UT-free polytope satisfies the strong Ewald condition._ Theorem 2.6 has the following immediate consequence. **Corollary 2.7**.: _Ewald's Conjecture (Conjecture 1.2) holds for all monotone recursively UT-free polytopes._ ### Main result concerning the General Ewald's Conjecture (2009) The following result states that Conjecture 1.3 is valid for \(n=2\) under no additional assumption. **Theorem 2.8**.: _Nill's Conjecture (Conjecture 1.3) holds for \(n=2\), that is, if \(P\) is a \(2\)-dimensional lattice smooth polytope in \(\mathbb{R}^{2}\) then \(\tilde{\mathcal{E}}(P)\) contains a unimodular basis of \(\mathbb{Z}^{2}\)._ The techniques we use to prove Theorem 2.8, as we will see, use in a crucial way the fact that we are in dimension \(2\) (this proof is carried out in Section 8). It seems quite challenging to make an argument of the type we give work in dimensions \(3\) or higher, and at this time we are unsure as to whether there would be a counterexample to the conjecture, even in dimension \(3\). 
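Definitions 2.3 and 2.4 are easy to test on explicit examples. A lattice triangle \(\operatorname{conv}(v_{0},v_{1},v_{2})\subset\mathbb{R}^{n}\) is unimodular exactly when its edge vectors \(v_{1}-v_{0},v_{2}-v_{0}\) extend to a basis of \(\mathbb{Z}^{n}\), which happens if and only if the \(2\times 2\) minors of the matrix they form have greatest common divisor \(1\). The following Python fragment is a small illustrative sketch of this elementary test (with names of our own choosing); it is not the full procedure of Section 10.1.

```python
from itertools import combinations
from functools import reduce
from math import gcd

def is_unimodular_triangle(v0, v1, v2):
    """Definition 2.3: conv(v0, v1, v2) in Z^n is unimodular iff the edge
    vectors v1 - v0 and v2 - v0 extend to a basis of Z^n, i.e. iff the
    2x2 minors of the 2 x n matrix of edge vectors have gcd equal to 1."""
    a = [x - y for x, y in zip(v1, v0)]
    b = [x - y for x, y in zip(v2, v0)]
    minors = [a[i] * b[j] - a[j] * b[i]
              for i, j in combinations(range(len(a)), 2)]
    return reduce(gcd, (abs(m) for m in minors), 0) == 1

# conv(e1, e2, e3) is unimodular; a triangle of normalized area 2 is not.
print(is_unimodular_triangle((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # True
print(is_unimodular_triangle((0, 0), (2, 0), (0, 1)))           # False
```

A polytope is then UT-free precisely when this test fails for every triangular \(2\)-face.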
### Implications of Ewald's Conjecture and the class of \((\pm)\)-symmetrical polytopes Now we introduce the concept of \((\pm)\)-symmetrical polytope and discuss its relation with Conjecture 1.2. **Definition 2.9** (\((\pm)\)-symmetrical polytope).: Let \(m,n\in\mathbb{N}\). Let \(P\) be a monotone \(n\)-dimensional polytope in \(\mathbb{R}^{n}\) defined by the inequalities \(Ax\leqslant\mathbf{1}\), where \(A\in\mathbb{Z}^{m\times n}\) and \(\mathbf{1}\) is the vector with all entries \(1\), and let \(b\in\mathbb{Z}^{m}\). We define \[P_{+}:=\{x\in\mathbb{R}^{n}:Ax\leqslant\mathbf{1}+b\}\] and \[P_{-}:=\{x\in\mathbb{R}^{n}:Ax\leqslant\mathbf{1}-b\}.\] We say that \(P\) is _\((\pm)\)-symmetrical respect to \(b\)_ if there is an integer point \(x\in P_{+}\) such that \(-x\in P_{-}\), and that \(P\) is _\((\pm)\)-symmetrical_ when it is \((\pm)\)-symmetrical respect to \(b\) for every integer vector \(b\in\mathbb{Z}^{m}\) such that \(P_{+}\) and \(P_{-}\) are combinatorially equivalent to \(P\). It is quite possible that most (if not all) monotone polytopes are \((\pm)\)-symmetrical; in fact, finding one which is _not_ would give a counterexample to Conjecture 1.2: **Theorem 2.10**.: _Let \(n\in\mathbb{N}\). If Conjecture 1.2 holds for dimension \(n\), then every monotone \((n-1)\)-polytope is \((\pm)\)-symmetrical. Furthermore, if \(P\) is a bundle with base \(B\) and fiber \(Q\), where \(B\) and \(Q\) satisfy the conjecture and \(Q\) is \((\pm)\)-symmetrical, then \(P\) also satisfies it._ **Corollary 2.11**.: _If Conjecture 1.2 holds, every monotone polytope is \((\pm)\)-symmetrical._ ### Connection with symplectic/algebraic geometry and physics Ewald's Conjecture is closely related to important problems in symplectic geometry, in particular to the problem of when the fibers of the toric momentum map are displaceable. Monotone polytopes completely characterize the so called _monotone symplectic toric manifolds_, and are of great interest in symplectic toric geometry. The polytopes are assigned to the manifolds via the Delzant correspondence (which sends such a manifold \(M\) to the image \(\mu(M)=P\) under the toric momentum map \(\mu\), as explained in in Section 9.1). Conjecture 1.2 is related to the problem of when a fiber of the momentum map of the symplectic toric manifold is displaceable. The results in this paper imply (Corollaries 9.6 and 9.8) that if \(M\) is a monotone symplectic toric manifold whose momentum polytope \(\mu(M)\) is not \((\pm)\)-symmetrical then there exists another symplectic toric manifold \(M^{\prime}\) and an interior point of \(M^{\prime}\) distinct from the origin which is not displaceable by a probe, and give a criterion to decide whether this happens for \(M\) itself (these concepts are reviewed in Section 9). Finally, we would like to thank Joe Brendel for pointing out to us his paper [5], and that our results, in view of his results [5, Theorems 1.2 and 1.4], have as an immediate consequence the following corollary. We refer to the aforementioned paper by Brendel and the references therein for more details on the concepts (eg. Chekanov torus) which appear in the following result (see also Pelayo [30] and Schlenk [37] for surveys on various aspects of symplectic geometry and topology). **Corollary 2.12**.: _Let \((M,\omega)\) be a compact connected monotone symplectic toric manifold. Let us assume that \(\mu(M)\) is recursively UT-free. Then the following statements hold:_ 1. 
_If the central fiber (that is, the fiber over the unique integral point of_ \(\mu(M)\)_) is real, then_ \(\mu(M)\) _is centrally symmetric, that is,_ \(\mu(M)=-\mu(M)\)_._ 2. _The Chekanov torus can be embedded into_ \(M\) _to yield an exotic Lagrangian which is not real._ Figure 1. The only five \(2\)-dimensional monotone polygons. Proof.: The results are a consequence of [5, Theorems 1.2 and 1.4] and Theorem 2.6 above, in view of the fact that the so called FS property for a polytope, which is the assumption in [5, Theorems 1.2 and 1.4], is implied if the polytope is recursively UT-free. Indeed, the FS property for a polytope \(P\) says that, for any facet \(F\) of \(P\), there is an integer point \(x\) in \(F\) such that \(-x\in P\) (that is, \(x\in F\cap\mathcal{E}(P)\)). This property is immediately implied if \(P\) is recursively UT-free, because in that case, by Theorem 2.6, there is a unimodular basis contained in \(F\cap\mathcal{E}(P)\). ## 3. Preliminaries In this section we introduce, following McDuff, the so called _weak Ewald property_ and the _star Ewald property_. The weak Ewald property corresponds to the conclusion of Ewald's Conjecture, while as we saw (Definition 2.1) the strong Ewald property is a stronger statement. The star Ewald property is a condition which is related to the strong condition, although this relation is not yet completely clear, as we see next. ### Monotone, i.e. smooth reflexive, polytopes We start by recalling the notions of smooth, reflexive and monotone polytope. **Definition 3.1** (Delzant polytope).: Let \(n\in\mathbb{N}\). An \(n\)-dimensional polytope in \(\mathbb{R}^{n}\) is called a _Delzant polytope_, or a _smooth polytope_, if it satisfies the following three properties: * it is _simple_: there are precisely \(n\) edges meeting at each vertex; * it is _rational_: it has rational edge directions; * the primitive edge-direction vectors at each vertex form a basis of the lattice \(\mathbb{Z}^{n}\). Equivalently, a Delzant polytope is a polytope with a simplicial and unimodular normal fan. Let \(P\) be an \(n\)-dimensional rational polytope and let \(F\) be a facet of \(P\). We write \(u_{F}\) for the primitive exterior normal vector to \(F\). With this notation in mind, there are constants \(b_{F}\in\mathbb{R}\) such that the irredundant inequality description of \(P\) is \[P=\{x\in\mathbb{R}^{n}\,|\,u_{F}\cdot x\leqslant b_{F},\text{ where }F\text{ is a facet of }P\}.\] For the following definition recall that a _lattice polytope_ is a polytope with integer vertices. **Definition 3.2** (Reflexive polytope).: A _reflexive polytope_ is a lattice polytope whose dual polytope also has integer vertices. Equivalently, a lattice polytope is reflexive if and only if every facet-supporting hyperplane is of the form \(u_{F}\cdot x=1\), where \(u_{F}\) is the primitive exterior normal vector to the facet. In the present paper, as in [32] (where we were interested in blow-ups of polytopes) we always consider polytopes up to \(\operatorname{AGL}(n,\mathbb{Z})\)-equivalence. 
In other words, two polytopes \(P\) and \(P^{\prime}\) are equivalent if and only if there is an \(n\)-dimensional integer matrix \(A\) with determinant \(\pm 1\) and \(t\in\mathbb{Z}^{n}\) such that \(P\) is sent to \(P^{\prime}\) by the mapping \(x\mapsto Ax+t.\) If \(P\) is a reflexive polytope then \(P\) has the origin as the unique interior lattice point, and for the class of reflexive polytopes \(\operatorname{AGL}(n,\mathbb{Z})\)-equivalence is the same as \(\operatorname{GL}(n,\mathbb{Z})\)-equivalence. **Definition 3.3** (Monotone polytope).: A polytope is _monotone_ if it is smooth and reflexive. The number of reflexive polytopes, and hence the number of monotone polytopes, is finite in every dimension [19]. Table 1 shows the number of smooth reflexive polytopes up to dimension \(9\) (see also Figure 1 for figures in dimension \(2\)). The enumeration for \(n\leqslant 8\) is due to Obro [27] and for \(n=9\) is due to Lorenz and Paffenholz [20]. **Remark 3.4**.: As mentioned in [32, Section 3.1] sometimes smooth polytopes are required to have integer vertices, and in the sense a smooth polytope would be a _lattice Delzant polytope_. ### The weak, strong and star Ewald conditions In [23, Section 3.1] McDuff introduces several notions closely related to the condition in the conclusion of Conjecture 1.2, which we present next. Prior to giving these notions we establish some notation. Let \(P\) be any polytope with facets \(F_{i},i\in I\), and let \(f=\bigcap_{i\in I}F_{i}\) be a face of \(P\). Following McDuff [23] we use the notation: \[\operatorname{Star}(f)=\bigcup_{i\in I}F_{i};\ \ \operatorname{star}(f)= \bigcup_{\begin{subarray}{c}i,j\in I\\ i\neq j\\ f\in F_{i}\cap F_{j}\end{subarray}}F_{i}\cap F_{j};\ \ \operatorname{Star}^{*}(f)= \operatorname{Star}(f)\setminus\operatorname{star}(f).\] **Remark 3.5**.: For any facet \(F\) of a polytope we have the equalities \[\operatorname{Star}(F)=\operatorname{Star}^{*}(F)=F.\] Recall the meaning of the _strong Ewald condition_ from Definition 2.1. **Definition 3.6** (Weak and star Ewald conditions [23, Definition 3.5]).: Let \(n\in\mathbb{N}\). Let \(P\) be a smooth \(n\)-dimensional polytope with the origin in its interior. 1. We say that \(P\) satisfies the _weak Ewald condition_ if \(\mathcal{E}(P)\) contains a unimodular basis of \(\mathbb{Z}^{n}\). 2. We say that a face \(f\) of \(P\) satisfies the _star Ewald condition_ if there exists \(\lambda\in\mathcal{E}(P)\) such that \(\lambda\in\operatorname{Star}^{*}(f)\) and \(-\lambda\not\in\operatorname{Star}(f)\). 3. We say that \(P\) satisfies the _star Ewald condition_, or is _star Ewald_, if every face of \(P\) satisfies the star Ewald condition. **Remark 3.7**.: In her paper McDuff mentions [23, page 14] that the relation between the strong Ewald and the star Ewald conditions is not completely clear, and proves that [23, Lemma 3.7] if \(P\) is monotone and \(v\) is a vertex of \(P\) such that every face of \(P\) containing \(v\) is star Ewald then \(P\) satisfies the weak Ewald condition. She also proves [23, Lemma 3.8] that if a facet of \(P\) contains a unimodular basis of points in \(\mathcal{E}(P)\) then every codimension \(2\) face of \(P\) satisfies the star Ewald condition. **Proposition 3.8**.: _Let \(P\) be a monotone polytope. Then the following statements hold._ 1. _If_ \(P\) _satisfies the strong Ewald condition, then_ \(P\) _satisfies the weak Ewald condition._ 2. 
_(McDuff) If_ \(P\) _satisfies the star Ewald condition, then_ \(P\) _satisfies the weak Ewald condition._ \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline dimension & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline monotone polytopes & 1 & 5 & 18 & 124 & 866 & 7622 & 72256 & 749892 & 8229721 \\ \hline \end{tabular} \end{table} Table 1. Number of monotone polytopes in each dimension from \(1\) to \(9\). 3. _(Obro and Paffenholz, for_ \(n=6\)_) For every_ \(n\geqslant 6\)_, there exists an_ \(n\)_-dimensional monotone polytope_ \(P\) _which satisfies the strong Ewald condition, but it does not satisfy the star Ewald condition._ Proof.: 1. This is immediate from the definitions. 2. This is proved by McDuff in [23, Lemma 3.7]. 3. This follows from Theorem 2.2 and the following example. First we give a \(6\)-dimensional polytope \(P\) which does not satisfy the star Ewald condition. It is given by the inequalities \(Ax\leqslant\mathbf{1}\), where \[A=\begin{pmatrix}-I\\ A_{1}\end{pmatrix},\] \(I\) is the \(6\) by \(6\) matrix, and \[A_{1}=\begin{pmatrix}-1&0&0&1&0&0\\ -1&0&1&2&0&0\\ -1&1&1&3&1&0\\ 1&0&0&-2&0&1\end{pmatrix}.\] The face \(F\), given by the intersection of the facets in the positions \(2\), \(3\), \(5\), \(6\), \(9\) and \(10\), does not have any integer point in the conditions of Definition 3.6(ii). We thank Paffenholz for providing us this example. Now we generalize Paffenholz's example to higher dimensions by proving that, for any other monotone polytope \(Q\), \(P\times Q\) is not star Ewald. Indeed, the face \(F\times Q\) does not satisfy the star Ewald condition: if there was an integer point \[p\in\operatorname{Star}^{*}(F\times Q)\setminus(-\operatorname{Star}(F\times Q )),\] we have that \[\operatorname{Star}(F\times Q)=\operatorname{Star}(F)\times Q\] and the projection of \(p\) to \(P\) gives an integer point in \(\operatorname{Star}^{*}(F)\setminus(-\operatorname{Star}(F))\), that we know does not exist. **Corollary 3.9**.: _The weak Ewald condition does not imply the star Ewald condition._ ## 4. Ewald's Conjecture for recursively UT-free polytopes In this section we prove the particular case of Ewald's Conjecture (Conjecture 1.2) announced previously: Theorem 2.6. ### An inductive technique The following lemma, for which we are grateful to F. Santos (Universidad de Cantabria), will help us prove Theorem 2.6. **Lemma 4.1** (Santos [36]).: _Let \(n\in\mathbb{N}\) such that \(n\neq 1\). Let \(P\) be an \(n\)-dimensional monotone polytope and let \(F\) be a facet of \(P\). Let \(F_{0}\) be the intersection of \(P\) with the linear hyperplane parallel to \(F\). Then the following statements hold._ 1. \(F_{0}\) _is a reflexive_ \((n-1)\)_-dimensional polytope._ 2. \(F_{0}\) _is smooth, hence monotone, and combinatorially isomorphic to_ \(F\)_, except perhaps if_ \(P\) _has a_ \(2\)_-face that is a unimodular triangle with an edge in_ \(F\) _and third vertex in_ \(F_{0}\) Proof.: 1. Let \(H\) be the linear hyperplane parallel to \(F\). Since \(P\) is reflexive, \(H\) and \(F\) are at lattice distance one. For each edge \(e\) incident in \(F\), let \(u_{e}\) be the endpoint of \(e\) in \(F\) and \(v_{e}\) the intersection of \(e\) with \(H\). Since \(P\) is simple, there is only one edge that leaves \(F\) from each vertex, and since \(P\) is smooth, the facets at \(u_{e}\) form a unimodular basis, so there has to be a lattice point at distance one from \(F\) and the point \(v_{e}\) must have integer coordinates. 
Hence \(F_{0}\) must have all the \(v_{e}\) as vertices, and there are no more vertices not coming from \(F\). In particular, all the vertices of \(F_{0}\) are integer. Assume without loss of generality that the facet \(F\) is defined by the equation \(x_{n}\leqslant 1\). Then, the inequality description of \(F_{0}\) is the same as that of \(P\), restricted to \(x_{n}=0\); hence, \(F_{0}\) is reflexive (some facet inequalities of \(P\) may become redundant in \(F_{0}\), but that does not affect the statement). 2. If \(v_{e}\neq v_{f}\) for every \(e\) then \(F_{0}\) is combinatorially isomorphic to \(F\), hence simple. It is also smooth since the facet normals of \(F_{0}\) at each vertex \(v_{e}\) are the same as those of \(F\) at the corresponding vertex \(u_{e}\). So, suppose that \(v_{e}=v_{f}\) for two edges \(e,f\). Then, \(u_{e}\) and \(u_{f}\) must be connected with an edge, and the three points form a triangle, with the edge \(u_{e}u_{f}\) in \(F\) and its third vertex \(v_{e}=v_{f}\) in \(F_{0}\). The triangle has width one with respect to its edge in \(F\), and the only smooth triangle of width one is the unimodular one (remember that \(P\) is smooth and that every face of a smooth polytope is also smooth). **Example 4.2**.: The second part of this lemma may not hold if \(P\) has a unimodular triangle as a face. For example, consider the following polytope, for any \(n\geqslant 3\): \[\left\{\begin{array}{rcl}x_{i}&\geqslant-1&\forall i,1\leqslant i\leqslant n ;\\ x_{1}&\leqslant 1;\\ (n-1)x_{1}+x_{2}+\ldots+x_{n}&\leqslant 1.\end{array}\right.\] This is in fact a particular example of a class of polytopes which we introduce later in Definition 5.7. Its intersection with the hyperplane \(x_{n}=0\), parallel to \(x_{n}=-1\), gives a simplex: \[\left\{\begin{array}{rcl}x_{i}&\geqslant-1&\forall i,1\leqslant i\leqslant n -1;\\ (n-1)x_{1}+x_{2}+\ldots+x_{n-1}&\leqslant 1.\end{array}\right.\] This simplex is not smooth: the normals to the facets in the vertex \((1,-1,\ldots,-1,0)\) are not a unimodular basis. But even more is true: being just reflexive (and not smooth) is not enough to ensure that the intersection is reflexive. By taking a second intersection, this time with \(x_{n-1}=0\), we obtain another simplex \[\left\{\begin{array}{rcl}x_{i}&\geqslant-1&\forall i,1\leqslant i\leqslant n -2;\\ (n-1)x_{1}+x_{2}+\ldots+x_{n-2}&\leqslant 1,\end{array}\right.\] which has the non-lattice point \(((n-2)/(n-1),-1,\ldots,-1,0,0)\) as a vertex, so it is not reflexive. **Remark 4.3**.: In dimension \(3\), all monotone polytopes except two (shown in Figure 2) are UT-free (which for this dimension is equivalent to being recursively UT-free). In general, only a small proportion of the monotone polytopes are recursively UT-free (see Table 2). For the following result recall the notions introduced in Definition 3.6. **Theorem 4.4**.: _Let \(n\geqslant 1\) be any integer. Suppose that all monotone polytopes in dimension \(n\) satisfy the weak (respectively strong) Ewald condition. Then, all \((n+1)\)-dimensional monotone \(\operatorname{UT}\)-free polytopes satisfy the weak (respectively strong) Ewald condition._ Proof.: We first prove the case concerning the weak Ewald condition. Let \(P\) be a \((n+1)\)-dimensional \(\operatorname{UT}\)-free polytope, \(F_{1}\) a facet of \(P\) and \(H_{1}\) the linear hyperplane parallel to \(F_{1}\). By Lemma 4.1, \(P\cap H_{1}\) is a monotone polytope, and by hypothesis, \(\mathcal{E}(P\cap H_{1})\) contains a unimodular basis for \(\mathbb{Z}^{n}\). 
Let \(v\) be an element of this basis. Then, \(v\in\mathcal{E}(P)\). Now let \(F_{2}\) be a facet containing \(v\) and \(H_{2}\) the parallel linear hyperplane. Again by Lemma 4.1, \(P\cap H_{2}\) is monotone and \(\mathcal{E}(P\cap H_{2})\) contains a unimodular basis \(\mathcal{B}\) of \(\mathbb{Z}^{n}\). Since \(P\) is reflexive, \(v\) is at distance \(1\) from \(H_{2}\), so \(\mathcal{B}\cup\{v\}\) is a unimodular basis for \(\mathbb{Z}^{n+1}\), and we are done. Now we consider the part of the statement concerning the strong Ewald condition. Let \(P\) be a monotone \(\operatorname{UT}\)-free polytope in dimension \(n+1\) and \(F\) a facet of \(P\). Take two facets \(F_{1}\) and \(F_{2}\) adjacent to \(F\) that are not parallel, \(H_{1}\) and \(H_{2}\) their two parallel hyperplanes, \[P_{i}=P\cap H_{i}\] and \[F^{\prime}_{i}=F\cap H_{i}.\] By Lemma 4.1, \(P_{1}\) and \(P_{2}\) are monotone polytopes and \(P_{i}\) is combinatorially isomorphic to \(F_{i}\) for \(i=1,2\). As \(F\cap F_{i}\) is a facet of \(F_{i}\), \(F^{\prime}_{i}\) is a facet of \(P_{i}\). By our hypothesis, \(P_{1}\) and \begin{table} \begin{tabular}{|c|c|c|c|} \hline dimension & monotone & monotone \(\operatorname{UT}\)-free & monotone recursively \(\operatorname{UT}\)-free \\ \hline 3 & 18 & 16 & 16 \\ 4 & 124 & 74 & 72 \\ 5 & 866 & 336 & 300 \\ 6 & 7622 & 1699 & 1352 \\ \hline \end{tabular} \end{table} Table 2. The table shows the number of polytopes of each class for each dimension. Theorem 2.6 says that Conjecture 1.2 is true for recursively \(\operatorname{UT}\)-free polytopes (see Section 10.1 for the procedure to find the table). Figure 2. The figure shows two \(3\)-dimensional monotone polytopes. Each of them has a face which is a unimodular triangle. Hence these polytopes are not \(\operatorname{UT}\)-free, as in Definition 2.4. satisfy the strong condition, and there is a unimodular basis \(\mathcal{B}_{i}\) contained in \(F_{i}^{\prime}\). (See Figure 3 for an illustration of this process.) Since \(F_{1}\) and \(F_{2}\) are not parallel, \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) do not span the same subspace, so there must be a vector \(v\in\mathcal{B}_{2}\) that is not in \(H_{1}\). As \(P\) is reflexive and \(v\) is not in \(H_{1}\), it must be at distance \(1\) from \(H_{1}\), so \(\mathcal{B}_{1}\cup\{v\}\) is a unimodular basis contained in \(F\), and we are done. **Corollary 4.5**.: _If \(P\) is an \(8\)-dimensional UT-free monotone polytope (not necessarily recursively UT-free) then \(P\) satisfies the strong Ewald Condition._ Proof.: This follows from Theorem 2.2 together with Theorem 4.4(ii). **Remark 4.6**.: On [23, Page 134] it is mentioned that Obro has verified that the strong Ewald condition holds up to dimension \(8\). We could only find this result proved in Obro's thesis [28] up to dimension \(7\), hence why we stated the corollary above for dimension \(8\). ### A characterization of recursively UT-free polytopes The following result gives a characterization of the UT-free condition. **Proposition 4.7**.: _A polytope \(P\) is recursively UT-free if and only if the intersection of \(P\) with the linear subspace parallel to any face of \(P\) is a UT-free polytope._ Figure 3. The top figures show unimodular bases \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) (yellow points) of two hyperplanes (blue), as obtained in the proof of Theorem 4.4 for the case of the \(3\)-cube, where \(F\) is the facet pointing forward. The bottom figure shows the resulting unimodular basis of \(\mathbb{Z}^{3}\). 
Proof.: For any face \(F\) of \(P\), let \(L_{F}\) be the linear subspace parallel to \(F\). We can write \(F\) as an intersection of facets, \[F=F_{1}\cap\ldots\cap F_{m},\] and \[L_{F}=H_{1}\cap\ldots\cap H_{m},\] where \(H_{i}\) is the hyperplane parallel to \(F_{i}\). Now we have that \(P\) is recursively UT-free if and only if \[P\cap H_{1}\cap\ldots\cap H_{m}\] is UT-free for any \(H_{1},\ldots,H_{m}\) parallel to facets, that is, if \(P\cap L_{F}\) is UT-free for any face \(F\). ### Proof of Theorem 2.6 The proof proceeds by induction on the dimension. For dimension \(2\) it is clear that this holds. Let us assume that the statement is true for dimension \(n\) and take a polytope \(P\) of dimension \(n+1\). We apply again the same line of argument as in the proof of Theorem 4.4(ii), which we reproduce here again for clarity. Indeed, given a facet \(F\), take two facets \(F_{1}\) and \(F_{2}\) adjacent to \(F\) and not parallel, with parallel hyperplanes \(H_{1}\) and \(H_{2}\). Let \[P_{i}=P\cap H_{i}\] and \[F^{\prime}_{i}=F\cap H_{i}.\] As \(P\) is recursively UT-free, \(P_{1}\) and \(P_{2}\) are also recursively UT-free monotone polytopes. By inductive hypothesis, \(P_{1}\) and \(P_{2}\) satisfy the strong Ewald condition. For \(i=1,2\), by Lemma 4.1, \(P_{i}\) is combinatorially equivalent to \(F_{i}\), and \(F\cap F_{i}\) is a facet of \(F_{i}\), so \(F^{\prime}_{i}=F\cap P_{i}\) is a facet of \(P_{i}\), and it contains a unimodular basis \(\mathcal{B}_{i}\) of \(H_{i}\). As \(H_{1}\neq H_{2}\), there must be a vector \(v\in\mathcal{B}_{2}\) with \(v\notin H_{1}\), \(v\) is at distance \(1\) from \(H_{1}\), and \(\mathcal{B}_{1}\cup\{v\}\) is a unimodular basis contained in \(F\). ### Alternative proof of Theorem 2.6 An alternative proof of this result (_later implemented as an algorithm_ in Section 10) is as follows. The two proofs mostly coincide for dimension \(3\), but are different for higher dimensions. Let \(v\) be a vertex in \(F\) and \[\{e_{1},\ldots,e_{n}\}\] the set of edges starting at this vertex, where \(e_{1}\) is the only edge not contained in \(F\). For \(1\leqslant i\leqslant n\), let \(l_{i}\) be the line parallel to \(e_{i}\) through the origin. We can write \[e_{i}=F_{1}\cap\ldots\cap F_{i-1}\cap F_{i+1}\cap\ldots\cap F_{n}\] where \(F_{i}\) is the facet through \(v\) opposite to \(e_{i}\) (so \(F_{1}=F\)), and \[l_{i}=H_{1}\cap\ldots\cap H_{i-1}\cap H_{i+1}\cap\ldots\cap H_{n}\] where \(H_{i}\) is the linear hyperplane parallel to \(F_{i}\). By Lemma 4.1, for all \(i\neq 1\), \(P\cap H_{i}\) is a recursively UT-free monotone polytope that has \(F\cap H_{i}\) as a facet. Applying it again, we have that \(P\cap H_{i}\cap H_{j}\) is a recursively UT-free monotone polytope with \(F\cap H_{i}\cap H_{j}\) as a facet, for all \(i\neq j\neq 1\). Continuing in this way, we have that \[P\cap l_{1}=P\cap H_{2}\cap\ldots\cap H_{n}\] is a monotone segment that ends in \(F\). The only monotone segment is \([-1,1]\), so \(P\cap l_{1}\) is unimodularly equivalent to \([-1,1]\). Defining \(p_{1}\) as the endpoint of this intersection that is in \(F\), we have that \(p_{1}\in P\) and \(-p_{1}\in P\), or \(p_{1}\in\mathcal{E}(P)\). Moreover, \(-p_{1}\) coincides with the primitive vector in the direction of \(e_{1}\). 
Now fix \(i\neq 1\) and let \[P_{i}=P\cap(l_{1}+l_{i})=P\cap H_{2}\cap\ldots\cap H_{i-1}\cap H_{i+1}\cap \ldots\cap H_{n}.\] By the previous discussion, \(P_{i}\) is a monotone polygon that intersects \(F\) at a segment \(s_{i}\), which goes on the direction of \(l_{i}\). As we know, all monotone polygons satisfy the strong Ewald condition. This implies that there are (at least) two consecutive symmetric integer points in \(s_{i}\), that is, there is a symmetric point \(p_{i}\) consecutive to \(p_{1}\) in \(s_{i}\) (meaning that \(p_{i}-p_{1}\) is the primitive vector in the direction of \(l_{i}\)). As \(P\) is smooth, \[p_{1},p_{2}-p_{1},\ldots,p_{n}-p_{1},\] which are the primitive vectors in the directions of the edges at \(v\) or their opposites, form a unimodular basis. This implies that \(\{p_{1},\ldots,p_{n}\}\) is a unimodular basis, and we are done. **Remark 4.8**.: In dimensions \(3,4,5\) and \(6\) many but not all monotone polytopes are recursively UT-free (for example, in dimension \(5\), of the \(866\) monotone polytopes that exist, \(300\) of them are recursively UT-free); see Table 2. Hence Theorem 2.6 covers fewer cases than Theorem 2.2, but both proofs we have presented are theoretical and valid for arbitrary dimension. ## 5. Constructing UT-free polytopes and simplex-segment bundles (SSB) In this section we see many examples of UT-free polytopes which arise as fiber bundles over polytopes. The following definition corresponds to McDuff-Tolman [25, Definition 3.10], after a choice of basis. **Definition 5.1** (Bundle of a polytope).: Let \(n,k\in\mathbb{N}\). Given three polytopes \(P\subset\mathbb{R}^{n+k}\), \(B\subset\mathbb{R}^{k}\) and \(Q\subset\mathbb{R}^{n}\), we say that \(P\) is a _bundle with base \(B\) and fiber \(Q\)_ if the following conditions hold: 1. \(P\) is combinatorially equivalent to \(B\times Q\). 2. There is a short exact sequence \[0\to\mathbb{R}^{n}\stackrel{{ i}}{{\to}}\mathbb{R}^{n+k}\stackrel{{ \pi}}{{\to}}\mathbb{R}^{k}\to 0\] such that \(i(Q)=P\cap i(\mathbb{R}^{n})\), \(\pi(P)=B\) and for every \(x\in B\) we have that \[\pi^{-1}(x)\cap P\cong Q,\] where the symbol "\(\cong\)" denotes combinatorial isomorphism. **Example 5.2**.: Any Cartesian product of finitely many polytopes is a bundle as in Definition 5.1, where any one of the two factors is the base and the other the fiber. See Figure 4 for two more examples. **Proposition 5.3** ([23, Lemma 5.2]).: _Let \(P\) be a bundle with base \(B\) and fiber \(Q\). If \(P\) is monotone, then \(B\) and \(Q\) are monotone._ In what follows, we will call the bundle in Proposition 5.3 a _monotone bundle_. **Proposition 5.4**.: _Let \(s\in\mathbb{N}\) and let \(P_{1},\ldots,P_{s}\) be convex polytopes. Then the Cartesian product \(P_{1}\times\ldots\times P_{s}\) is monotone if and only if \(P_{i}\) is monotone for every \(i\in\{1,\ldots,s\}\)._ Proof.: The implication to the right is a particular case of the previous proposition. For the implication to the left, we just need to see that the vertices of \(P_{1}\times\ldots\times P_{s}\) are given by the Cartesian product of the vertices of the factors, and the facet normals of the product are just the facet normals of each one of the factors. **Remark 5.5**.: In [23, Definition 5.1] a different definition of bundle is used, where the concepts of base and fiber are interchanged. This is because they are defined in the dual space, and the short exact sequence is inverted by taking the duals of the involved spaces and maps. 
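Before turning to the particular case below, it may be useful to record how the simplest bundles of Example 5.2, namely Cartesian products, look in terms of inequality descriptions. The following Python sketch (with illustrative names, and assuming each factor is presented as \(\{x:Ax\leqslant\mathbf{1}\}\)) simply assembles the block system underlying Proposition 5.4.

```python
def product_inequalities(A1, A2):
    """H-description of P1 x P2 for Pi = {x : Ai x <= 1}: the block matrix
    diag(A1, A2) against the all-ones right-hand side, so a product of
    monotone polytopes is again monotone (Proposition 5.4)."""
    n1, n2 = len(A1[0]), len(A2[0])
    rows = [tuple(r) + (0,) * n2 for r in A1]
    rows += [(0,) * n1 + tuple(r) for r in A2]
    return rows

# Example 5.2: the product of the monotone square [-1,1]^2 with the
# monotone segment [-1,1] is the 3-cube [-1,1]^3, a (trivial) bundle.
A_square = [(1, 0), (-1, 0), (0, 1), (0, -1)]
A_segment = [(1,), (-1,)]
for row in product_inequalities(A_square, A_segment):
    print(row)
```

In the non-product bundles considered next, some of the fiber inequalities acquire additional terms in the base coordinate, as in the description (1) below.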
We are especially interested in the following particular case. Recall that _the_ monotone simplex is defined as follows. **Definition 5.6** (Smooth unimodular simplex).: Let \(P\) be a smooth simplex in \(\mathbb{R}^{n}\). Without loss of generality we assume that the origin is a vertex and that \(n\) of its \(n+1\) facets have normal vectors \(-\mathrm{e}_{1},\ldots,-\mathrm{e}_{n}\). The remaining facet must have normal vector \((1,\ldots,1)\). Therefore \[P_{b}=\{x\in\mathbb{R}^{n}:x_{i}\geqslant 0\ \forall i\}\cap\Big{\{}x\in \mathbb{R}^{n}:\sum_{i=1}^{n}x_{i}\leqslant b\Big{\}},\] that is, \(P_{b}=\mathrm{conv}\{0,b\mathrm{e}_{1},\ldots,b\mathrm{e}_{n}\}\) for some constant \(b>0\) (which is the length of every edge of \(P_{b}\)). The polytope \(P_{b}\) defined in this way is the _smooth simplex of size \(b\)_. In the special case of \(b=1\) we call \(P_{b}\) the _smooth unimodular simplex_ and denote it \(\delta_{n}\), that is, \[\delta_{n}=P_{1}=\mathrm{conv}\{0,\mathrm{e}_{1},\ldots,\mathrm{e}_{n}\}.\] Recall that reflexive polytopes have the origin as their unique interior lattice point and that for them \(\mathrm{AGL}(n,\mathbb{Z})\)-equivalence coincides with \(\mathrm{GL}(n,\mathbb{Z})\)-equivalence. Therefore we conclude that, up to \(\mathrm{GL}(n,\mathbb{Z})\)-equivalence, the only monotone simplex is \[\Big{\{}x\in\mathbb{R}^{n}:x_{i}\geqslant-1\ \forall i\text{ and }\sum_{i=1}^{n}x_{i} \leqslant 1\Big{\}}=-\mathbf{1}+(n+1)\delta_{n}\cong(n+1)\delta_{n},\] where \(\mathbf{1}:=(1,\ldots,1)\in\mathbb{R}^{n}\), and the symbol "\(\cong\)" denotes combinatorial isomorphism. We call \((n+1)\delta_{n}\) _the_ monotone simplex and denote it \(\Delta_{n}\). Figure 4. Two monotone bundles. The first has a segment as base and a square as fiber. The second has a hexagon as base and a segment as fiber. **Definition 5.7** (Simplex-segment bundle, SSB).: Let \(P\) be a bundle with base \(B\) and fiber \(Q\). We say that \(P\) is SSB (short for _simplex-segment bundle_) if \(B=[-1,1]\) is a monotone segment and \(Q\) is a monotone simplex. **Proposition 5.8**.: _An SSB of dimension \(n\geqslant 2\), possibly after a unimodular transformation, has the following expression in terms of inequalities:_ \[\left\{\begin{array}{rcl}x_{i}&\geqslant-1&\forall i\in\{1,2,\ldots,n\};\\ x_{1}&\leqslant 1;\\ kx_{1}+x_{2}+\ldots+x_{n}&\leqslant 1.\end{array}\right. \tag{1}\] _for some integer \(k\) with \(0\leqslant k\leqslant n-1\). Its vertices are_ \[(-1,-1,\ldots,-1),(-1,n-1+k,\ldots,-1),\ldots,(-1,\ldots,-1,n-1+k),\] \[(1,-1,\ldots,-1),(1,n-1-k,-1,\ldots,-1),\ldots,(1,-1,\ldots,-1,n-1-k).\] Proof.: We may assume, without loss of generality, that \(\pi\) is the projection that sends the polytope to \(x_{1}\). This implies that \(x_{1}\geqslant-1\) and \(x_{1}\leqslant 1\) are facets. Moreover, \(\pi^{-1}(0)\) is the fiber, which in this case is a monotone simplex: \[\left\{\begin{array}{rcl}x_{i}&\geqslant-1&\forall i\in\{2,\ldots,n\};\\ x_{2}+\ldots+x_{n}&\leqslant 1.\end{array}\right.\] so our bundle has the form \[\left\{\begin{array}{rcl}-1\leqslant x_{1}&\leqslant 1;\\ a_{i}x_{1}+x_{i}&\geqslant-1&\forall i\in\{2,\ldots,n\};\\ kx_{1}+x_{2}+\ldots+x_{n}&\leqslant 1.\end{array}\right.\] where \(a_{i}\) and \(k\) are integers. By making the coordinate change \[x_{i}\mapsto x_{i}-a_{i}x_{1},i\geqslant 2\] (which is unimodular), we obtain the bundle in the form (1). For this to be actually a bundle, we need \(|k|\leqslant n-1\).
If \(k<0\), we can change \(k\) to \(-k\) by making the coordinate change \[x_{1}\mapsto-x_{1},\] so we may assume that \(0\leqslant k\leqslant n-1\). **Definition 5.9** (\(\operatorname{SSB}(n,k)\)).: Let \(n,k\in\mathbb{N}\) with \(n\geqslant 2\) and \(k\leqslant n-1\). The polytope \(\operatorname{SSB}(n,k)\) is the bundle defined in the statement of Proposition 5.8. This gives \(n\) different bundles. For \(k=0\) we recover the Cartesian product. See Figure 5 for the SSB in dimension \(3\). **Example 5.10**.: Figure 2 shows examples of polytopes in dimension \(3\) that are not UT-free. The first picture in Figure 2 is \(\operatorname{SSB}(3,2)\), and the second one is the result of adding a facet. In dimension \(3\) there are exactly \(18\) monotone polytopes, and the ones shown in the figure are precisely the only ones which are not UT-free. In dimension \(3\) the definition of UT-free polytope and recursively UT-free polytope as in Definition 2.5 coincide, but this is not the case in dimensions strictly greater than \(3\), as shown in Table 2. For the case of SSBs in higher dimensions, we have the following: **Proposition 5.11**.: \(\operatorname{SSB}(n,k)\)_, where \(n\geqslant 2\) and \(0\leqslant k\leqslant n-1\), is \(\operatorname{UT}\)-free if and only if \(n=2\) or \(k\leqslant n-2\). It is recursively \(\operatorname{UT}\)-free if and only if \(k\leqslant 1\)._ Proof.: Note that an SSB has, as the facet with \(x_{1}=-1\), a simplex with edge length \(n+k\), and as the facet with \(x_{1}=1\), another one with edge length \(n-k\), which will contain a unimodular triangle if and only if \(n\geqslant 3\) and \(n-k=1\). The rest of facets will not contribute triangles, and the first part follows. For the second, the intersection of \(\operatorname{SSB}(n,k)\) with the hyperplane of a facet which is a simplex is the monotone \((n-1)\)-simplex. With a different facet, the intersection is: * a monotone segment, if \(n=2\). * \(\operatorname{SSB}(n-1,k)\), if \(n\geqslant 3\) and \(k<n-1\). * a non-smooth simplex with edge lengths \(2\) and \(2n-2\), if \(n\geqslant 3\) and \(k=n-1\). The result follows by induction. Note how the situations in Lemma 4.1 are reproduced in this case: if \(k=0\) or \(k=1\), the intersections with facets give \(\operatorname{SSB}(n,k)\) with decreasing \(n\)'s, until \(n\) becomes \(2\) and the next intersection gives a monotone segment. If \(k>1\), the intersections are SSBs until \(n\) becomes \(k+1\), at which point we have a non-UT-free polytope, and the next intersection may be non-smooth (see Example 4.2). **Corollary 5.12**.: _For every \(n\in\mathbb{N}\) with \(n\geqslant 4\), there are polytopes that are \(\operatorname{UT}\)-free but not recursively \(\operatorname{UT}\)-free._ We can also check the Ewald properties for these polytopes: **Lemma 5.13**.: _The monotone simplex satisfies the weak, strong and star Ewald conditions._ Proof.: We start with the strong condition. It is easy to see that all facets of the monotone simplex are equivalent via unimodular transformation, so we focus on \(x_{1}=-1\). A unimodular basis contained in this facet is \[(-1,0,\ldots,0),(-1,1,\ldots,0),\ldots,(-1,0,\ldots,1)\] All the opposites of these points are in the simplex, so this proves the strong condition and also the weak condition. For the star condition, we have analogously that all the faces of a given dimension are equivalent, and we can take the face \(F\) given by \(x_{1}=\ldots=x_{i}=-1\). 
Now the point \((-1,0,\ldots,0)\) is in the interior of a facet that contains \(F\), and its opposite point is not in any facet containing \(F\). Figure 5. From left to right: \(\operatorname{SSB}(3,0)\), \(\operatorname{SSB}(3,1)\), \(\operatorname{SSB}(3,2)\). **Proposition 5.14**.: _For \(n\in\mathbb{N},n\geqslant 2,\) and \(k\in\mathbb{N},k\leqslant n-1,\)\(\operatorname{SSB}(n,k)\) satisfies the weak, strong and star Ewald conditions._ Proof.: There are three types of facets in an SSB: those equivalent to \(x_{2}=-1,\) the facet \(x_{1}=1\) and the facet \(x_{1}=-1\). The points \[(1,-1,\ldots,-1),(1,0,-1,\ldots,-1),\ldots,(1,-1,\ldots,-1,0)\] give the unimodular basis we need for \(x_{1}=1\), and their opposites give that for \(x_{1}=-1\). For any other facet, for example \(x_{2}=-1\), its intersection with \(x_{1}=0\) gives a facet of the unimodular simplex, and by Lemma 5.13, there is a unimodular basis for \(x_{1}=0\) contained in this facet. Adding the point \((1,-1,\ldots,-1)\), we have a basis of the total space. This proves the strong and weak conditions. For the star condition, we have four cases: * A facet. This case is automatic, by the strong condition. * A face strictly contained in \(x_{1}=-1\). Without loss of generality, we can take it to be \[x_{1}=x_{2}=\ldots=x_{i}=-1.\] * A face strictly contained in \(x_{1}=1\). We can take it as \[x_{1}=1,x_{2}=\ldots=x_{i}=-1.\] * A face intersecting both \(x_{1}=-1\) and \(x_{1}=1\). We can take it as \[x_{2}=\ldots=x_{i}=-1.\] In all the last three cases, the point we are looking for is \((0,-1,0,\ldots,0)\). ## 6. The Ewald symmetry function and symmetric monotone polytopes In this section we prove several combinatorial and asymptotic properties of the Ewald symmetry function. For example, we will prove that \(\mathcal{E}\) is multiplicative on products and compute the value of \(|\mathcal{E}(\operatorname{SSB}(n,k))|\) explicitly, where \(\operatorname{SSB}(n,k)\) are the polytopes introduced earlier (Definition 5.9). ### The Ewald symmetry function \(\mathcal{E}\) for products and bundles We start this section by describing how the Ewald symmetry map behaves with respect to the Cartesian product and fiber bundle operator. **Proposition 6.1**.: _Let_ \[\mathcal{E}:\mathfrak{M}\longrightarrow\bigcup_{n\in\mathbb{N}}\mathcal{P}( \mathbb{Z}^{n})\] _be the Ewald symmetry function. Then the following statements hold._ * _The map_ \(\mathcal{E}\) _is multiplicative, that is, if_ \(P_{1},\ldots,P_{s}\) _are polytopes then_ \[\mathcal{E}(P_{1}\times\ldots\times P_{s})=\mathcal{E}(P_{1})\times\ldots \times\mathcal{E}(P_{s}).\] * _Let_ \(P\) _be a monotone bundle with fiber_ \(Q\) _and base_ \(B\)_. Recall from Definition_ 5.1 _that we have maps_ \(i:Q\to P\) _and_ \(\pi:P\to B\)_, with_ \(i\) _injective and_ \(\pi\) _surjective. Then_ \[i(\mathcal{E}(Q))\subset\mathcal{E}(P).\] * _Let_ \(P\) _be a monotone bundle with fiber_ \(Q\) _and base_ \(B\)_. If_ \(\mathcal{E}(Q)\neq\{0\}\)_, then_ \[\mathcal{E}(P)\neq\{0\}.\] Proof.: We prove each item separately. 1. We write the proof for two polytopes \(P\) and \(Q\). For any lattice point \((x,y)\), \((x,y)\in\mathcal{E}(P\times Q)\) means that \((x,y),-(x,y)\in P\times Q\), which in turn means that \(x,-x\in P,y,-y\in Q\). On the other hand, \((x,y)\in\mathcal{E}(P)\times\mathcal{E}(Q)\) is equivalent to \(x\in\mathcal{E}(P),y\in\mathcal{E}(Q)\), which is equivalent to \(x,-x\in P,y,-y\in Q\). 2. 
If \(x\in i(\mathcal{E}(Q))\), then \(x\in i(Q)\subset P\) and \(-x\in i(Q)\subset P\), and \(x\in\mathcal{E}(P)\). 3. This follows from part (ii). The following result discusses how the weak Ewald condition behaves under Cartesian products and fiber bundles. **Proposition 6.2**.: _The following statements hold._ 1. _If_ \(P_{1},\ldots,P_{s}\) _are polytopes, then their Cartesian product_ \[P_{1}\times\ldots\times P_{s}\] _satisfies the weak Ewald condition if and only if every_ \(P_{i},i\in\{1,\ldots,s\}\)_, satisfies the weak Ewald condition. The same is true for the strong and the star conditions._ 2. _Let_ \(P\) _be a monotone bundle with fiber_ \(Q\) _and base_ \(B\)_. If_ \(P\) _satisfies weak Ewald condition,_ \(B\) _also satisfies it. The same holds for the strong and star conditions._ Proof.: 1. We start with the weak condition. If each \(\mathcal{E}(P_{i})\) contains a unimodular basis, their union gives a unimodular basis contained in \(\mathcal{E}(P_{1})\times\ldots\times\mathcal{E}(P_{s})\), which is equal to \(\mathcal{E}(P_{1}\times\ldots\times P_{s})\). For the converse, if \(\mathcal{E}(P_{1}\times\ldots\times P_{s})\) contains a unimodular basis, its projection to each one of the \(\mathcal{E}(P_{i})\)'s is a spanning set that contains a unimodular basis. For the strong condition, the reasoning is essentially the same, because a facet of \(P_{1}\times\ldots\times P_{s}\) is the product of a facet of a \(P_{i}\) by all the other factors. For the star condition, the reasoning is also analogous. This particular result is [23, Corollary 5.5]. 2. If there is a unimodular basis in \(\mathcal{E}(P)\), its projection to \(B\) gives a set containing a unimodular basis in \(\mathcal{E}(B)\). As pointed out by McDuff [23, Text above Proposition 1.4] it seems likely that the total space of a fiber bundle is star Ewald when the base and the fiber are star Ewald, and she proves the case [23, Proposition 1.4] when the fiber (which is called base there) is a simplex. We will later see how this generalizes to the case where the fiber is \((\pm)\)-symmetrical. ### Direct and inverse images of Ewald symmetry function Next we study the images and preimages under the Ewald symmetry function. To start, we have the following property of the image. **Proposition 6.3**.: _For any \(P\in\mathfrak{M}\) of dimension \(n\),_ \[\mathcal{E}(P)\subset\mathcal{E}(\mathrm{C}_{n}),\] _where \(\mathrm{C}_{n}\) is the monotone cube \([-1,1]^{n}\)._ Proof.: Without loss of generality, we may assume that we have a vertex at \[(-1,\ldots,-1)\] with the facet normals in the coordinate directions. In that situation, all points of \(P\) have all coordinates greater than \(-1\), and \[\mathcal{E}(P)\subset P\cap-P\subset\mathrm{C}_{n}\] while \(\mathcal{E}(\mathrm{C}_{n})\) consists in all integer points of \(\mathrm{C}_{n}\), and we are done. **Corollary 6.4**.: _For any \(P\in\mathfrak{M}\) of dimension \(n\),_ \[|\mathcal{E}(P)|\leqslant 3^{n},\] _and this bound is attained exactly for the monotone cube._ In general, if \(I\) is the interval \([-1,1]\), then \[|\mathcal{E}(P\times I)|=3|\mathcal{E}(P)|.\] We can also compute this size for the simplex and for the SSB. In what follows, we will use the notation \[[x^{k}]f(x)\] to mean the coefficient of \(x^{k}\) in the polynomial \(f(x)\). 
**Proposition 6.5**.: _If \(\Delta_{n}\) is the monotone \(n\)-simplex,_ \[|\mathcal{E}(\Delta_{n})|=[x^{n+1}](1+x+x^{2})^{n+1}.\] _For \(n=1\) to \(9\), this gives \(3,7,19,51,141,393,1107,3139,8953\).1_ Footnote 1: This sequence is registered as A002426 in the OEIS (Online Encyclopedia of Integer Sequences). Proof.: Let \(x\) be a lattice point in \(\Delta_{n}\) such that \(-x\in\Delta_{n}\). This implies that \[\left\{\begin{array}{rl}-1\leqslant x_{i}&\leqslant 1\quad\forall i\in[n]; \\ -1\leqslant x_{1}+\ldots+x_{n}&\leqslant 1,\end{array}\right.\] where \([n]\) stands for \(\{1,\ldots,n\}\). Taking \(x_{n+1}=-x_{1}-\ldots-x_{n}\), this becomes \[\left\{\begin{array}{rl}-1\leqslant x_{i}&\leqslant 1\quad\forall i\in[n+1]; \\ x_{1}+\ldots+x_{n+1}&=0,\end{array}\right.\] and defining \(y_{i}=x_{i}+1\), \[\left\{\begin{array}{rl}0\leqslant y_{i}&\leqslant 2\quad\forall i\in[n+1]; \\ y_{1}+\ldots+y_{n+1}&=n+1.\end{array}\right.\] The number of integer solutions to this system is equal to the aforementioned coefficient. **Theorem 6.6**.: _For \(n\geqslant 2\), \(0\leqslant k\leqslant n-1\), we have that_ \[|\mathcal{E}(\mathrm{SSB}(n,k))|=[x^{n}](1+x+x^{2})^{n}+2[x^{n-k}](1+x+x^{2}) ^{n}.\] Proof.: Taking into account the equations of SSB\((n,k)\), we have that all the symmetric points must have the coordinates between \(-1\) and \(1\). The symmetric points with \(x_{1}=0\) are exactly those in the monotone \((n-1)\)-simplex, that are counted by \[[x^{n}](1+x+x^{2})^{n}.\] For those with \(x_{1}=1\), we have that \[\left\{\begin{array}{rl}-1\leqslant x_{i}&\leqslant 1\quad\forall i\in[2,n]; \\ -1-k\leqslant x_{2}+\ldots+x_{n}&\leqslant 1-k,\end{array}\right.\] where \([2,n]\) stands for \(\{2,\ldots,n\}\). The lower bound of the first line and the upper bound in the second come from the fact that the point is in \(\operatorname{SSB}(n,k)\), and the other bounds come from the symmetric point being also in \(\operatorname{SSB}(n,k)\). We now take \(x_{n+1}\) so that the sum is \(-k\): \[\left\{\begin{array}{rl}-1\leqslant x_{i}&\leqslant 1\quad\forall i\in[2,n+1]; \\ x_{2}+\ldots+x_{n+1}&=-k,\end{array}\right.\] and making \(y_{i}=x_{i}+1\), \[\left\{\begin{array}{rl}0\leqslant y_{i}&\leqslant 2\quad\forall i\in[2,n+1]; \\ y_{2}+\ldots+y_{n+1}&=n-k.\end{array}\right.\] The number of solutions is counted by \[[x^{n-k}](1+x+x^{2})^{n},\] and the result follows. **Remark 6.7**.: For \(n=2\), there are five monotone polytopes (see Figure 1). The four different from the square give the same value for the Ewald symmetry function \(\mathcal{E}\), which consists of the set of points \[0,\mathrm{e}_{1},\mathrm{e}_{2},\mathrm{e}_{1}+\mathrm{e}_{2}\] and their symmetric points, for a total of 7 points. The square has, of course, 9 points. In dimension 3, the two polytopes with smallest \(\mathcal{E}\) are those in Figure 2, which only have 13 points. The simplex and \(\operatorname{SSB}(3,1)\), which is contained in it, have 19 points, the Cartesian products of 2-monotone polytopes with segments have 21, except the cube, that has 27, and the rest all have 17. We can see that, already in these low dimensions, the function \(\mathcal{E}\) is neither injective nor surjective. The polytopes in dimension 2 and 3 with the same value of \[|\mathcal{E}(P)|\] have also the same \(\mathcal{E}(P)\), and there are only two and five elements in the image, respectively. (However, Theorem 6.6 implies that \(\mathcal{E}\) is injective when restricted to SSBs.) 
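Both counting formulas are easy to check by computer. The following minimal Python sketch (the function names are ours) computes the coefficients \([x^{k}](1+x+x^{2})^{m}\) by repeated multiplication and compares Proposition 6.5 and Theorem 6.6 with a brute-force enumeration of symmetric lattice points, using the inequalities (1) of Proposition 5.8 to describe \(\operatorname{SSB}(n,k)\).

```python
from itertools import product

def trinomial(m, k):
    """[x^k](1 + x + x^2)^m, computed by repeated polynomial multiplication."""
    coeffs = [1]
    for _ in range(m):
        new = [0] * (len(coeffs) + 2)
        for i, c in enumerate(coeffs):
            new[i] += c
            new[i + 1] += c
            new[i + 2] += c
        coeffs = new
    return coeffs[k] if 0 <= k < len(coeffs) else 0

def ewald_simplex(n):
    """Brute-force |E(Delta_n)|: x with x and -x in {x_i >= -1, sum x_i <= 1}."""
    return sum(1 for x in product((-1, 0, 1), repeat=n) if -1 <= sum(x) <= 1)

def ewald_ssb(n, k):
    """Brute-force |E(SSB(n,k))| from the inequalities (1) of Proposition 5.8."""
    def inside(x):
        return (all(t >= -1 for t in x) and x[0] <= 1
                and k * x[0] + sum(x[1:]) <= 1)
    return sum(1 for x in product((-1, 0, 1), repeat=n)
               if inside(x) and inside(tuple(-t for t in x)))

if __name__ == "__main__":
    # Proposition 6.5: 3, 7, 19, 51, 141, ...
    print([trinomial(n + 1, n + 1) for n in range(1, 6)])
    print([ewald_simplex(n) for n in range(1, 6)])
    # Theorem 6.6, row n = 3 of the table below: 21, 19, 13
    print([trinomial(3, 3) + 2 * trinomial(3, 3 - k) for k in range(3)])
    print([ewald_ssb(3, k) for k in range(3)])
```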
### Applications of Theorem 6.6 By Theorem 6.6 we have the following table of values: \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \(|\mathcal{E}(\operatorname{SSB}(n,k))|\) & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline 2 & 9 & 7 & & & & & & & & \\ \hline 3 & 21 & 19 & 13 & & & & & & & \\ \hline 4 & 57 & 51 & 39 & 27 & & & & & & \\ \hline 5 & 153 & 141 & 111 & 81 & 61 & & & & \\ \hline 6 & 423 & 393 & 321 & 241 & 183 & 153 & & & \\ \hline 7 & 1179 & 1107 & 925 & 715 & 547 & 449 & 407 & & \\ \hline 8 & 3321 & 3139 & 2675 & 2115 & 1639 & 1331 & 1179 & 1123 & \\ \hline 9 & 9417 & 8953 & 7747 & 6247 & 4903 & 3967 & 3451 & 3229 & 3157 \\ \hline \end{tabular} We can notice several patterns in the table: **Proposition 6.8**.: _For \(n\geqslant 2\), the Ewald symmetry function for \(\operatorname{SSB}\)s satisfies:_ 1. \(|\mathcal{E}(\operatorname{SSB}(n,0))|=3|\mathcal{E}(\Delta_{n-1})|\)_._ 2. \(|\mathcal{E}(\operatorname{SSB}(n,1))|=|\mathcal{E}(\Delta_{n})|\)_._ 3. \(|\mathcal{E}(\operatorname{SSB}(n,n-1))|=|\mathcal{E}(\Delta_{n-1})|+2n\)_._ _._ 4. _For a fixed_ \(n\) _and varying_ \(k\) _from_ \(0\) _to_ \(n-1\)_, the quantity_ \[|\mathcal{E}(\operatorname{SSB}(n,k))|\] _decreases with_ \(k\)_._ 5. _In the same range of_ \(k\)_,_ \[1<\frac{|\mathcal{E}(\operatorname{SSB}(n,k))|}{|\mathcal{E}(\Delta_{n-1})|} \leqslant 3.\] 6. _In that range, the volume of_ \(\operatorname{SSB}(n,k)\) _increases with_ \(k\)_._ Proof.: 1. This follows from \[\operatorname{SSB}(n,0)=\Delta_{n-1}\times[-1,1].\] 2. This is because \(\operatorname{SSB}(n,1)\) is a subset of \(\Delta_{n}\) that contains all its symmetric points. 3. This is a consequence of the general formula. 4. This follows from the unimodality of the coefficients of \[(1+x+x^{2})^{n}.\] 5. This follows from (i), (iii) and (iv). 6. The intersection with \(x_{1}=t\) is a simplex with edge length \(n-kt\), whose volume is \[\frac{(n-kt)^{n-1}}{(n-1)!},\] and integrating in the direction of \(x_{1}\), \[\operatorname{vol}(\operatorname{SSB}(n,k))=\int_{-1}^{1}\frac{(n-kt)^{n-1}}{ (n-1)!}\mathrm{d}t=\frac{(n+k)^{n}-(n-k)^{n}}{k\cdot n!}.\] After expanding the binomials, \[\operatorname{vol}(\operatorname{SSB}(n,k)) =\frac{1}{k\cdot n!}\left[\sum_{i=0}^{n}\binom{n}{i}n^{n-i}k^{i} -\sum_{i=0}^{n}\binom{n}{i}n^{n-i}(-k)^{i}\right]\] \[=\frac{1}{k\cdot n!}\sum_{i=0}^{n}\binom{n}{i}[1-(-1)^{i}]n^{n-i} k^{i}\] \[=\frac{2}{n!}\sum_{\begin{subarray}{c}i=1\\ i\text{ odd}\end{subarray}}^{n}\binom{n}{i}n^{n-i}k^{i-1}\] which is increasing in \(k\). **Remark 6.9**.: In [https://oeis.org/A002426](https://oeis.org/A002426), section "Formula", B. Cloitre and A. Mihailov's state without proof that \[\lim_{n\to\infty}\frac{\sqrt{n}}{3^{n}}|\mathcal{E}(\Delta_{n})|=\sqrt{\frac{3 }{4\pi}}.\] By (v) above, this implies that, for any \(k\in\mathbb{N}\), \[\sqrt{\frac{3}{4\pi}}\leqslant\lim_{n\to\infty}\frac{\sqrt{n}}{3^{n}}| \mathcal{E}(\operatorname{SSB}(n,k))|\leqslant 3\sqrt{\frac{3}{4\pi}}.\] ### Computation of the values of the Ewald symmetry function We have computed, using software, the values of \[|\mathcal{E}(P)|\] for dimension \(n\leqslant 5\), leading to the following tables. In each of these tables, the numbers in boldface in the first column correspond to the sizes that are triple of a size in the previous dimension, hence containing the products of these polytopes with segments. If the two counts coincide, i.e. all polytopes with the indicated size are products, the number in the second column is also in boldface. 
[Tables for \(n=3\), \(n=4\) and \(n=5\) listing each value of \(|\mathcal{E}(P)|\) together with the number of monotone polytopes attaining it; only the entry \(\mathbf{99}\) with count \(87\) is preserved here.] We can see, in all dimensions, that products by segments tend to occupy the highest ranks, and among those that are not, the simplex is one of the highest. We conclude that the number of symmetric points does not appear to be related to the volume of the polytope (the two cases of 13 in dimension 3 have greater volume than the cube). Instead, it is more related to a "degree of symmetry", understood, for example, as the volume of \(P\cap-P\). An interesting question is: **Question 6.10**.: Which polytope attains the minimum of \(|\mathcal{E}(P)|\)? Concerning this question for dimensions 3 and 4 this is SSB\((n,n-1)\), but for \(n=5\) this pattern breaks: this polytope will have a size of 61, but there are three polytopes that achieve a size of 59. Remember that it is not even known whether \(\mathcal{E}(P)\neq\{0\}\) for all polytopes. However, taking into account Remark 6.9, we think that this minimal size will grow exponentially with base 3. **Conjecture 6.11**.: _The following holds:_ \[\lim_{n\to\infty}\frac{\min\{|\mathcal{E}(P)|:P\text{ monotone in dimension }n+1\}}{\min\{|\mathcal{E}(P)|:P\text{ monotone in dimension }n\}}=3.\] ## 7. \((\pm)\)-symmetrical polytopes and Ewald's Conjecture As we mentioned earlier \((\pm)\)-symmetrical polytopes arise naturally in the study of Conjecture 1.2: in fact, finding a monotone polytope which is _not_\((\pm)\)-symmetrical would give a counterexample to Conjecture 1.2. Hence, in view of Theorem 2.2, if such an example exists, it has at least dimension \(8\). Theorem 2.10, stated in the introduction, follows immediately from the results we prove in this section. In what follows recall the concept of \((\pm)\)-symmetrical polytope given in Definition 2.9. First we give an example to show that not every polytope is of the form given in the definition. **Example 7.1**.: Not every (non-monotone) polytope is \((\pm)\)-symmetrical. To start, applying the condition to \(b=0\), we have that there must be an integer point in \[P\cap-P.\] But that is not the only requirement. For example, the polytope given by \[P=\{x\in\mathbb{R}^{n}:-4\leqslant 3x_{i}\leqslant 4,i=1,\dots,n\},\] which is a dilation of the monotone cube, becomes \[P_{+}=\{x\in\mathbb{R}^{n}:-9\leqslant 3x_{i}\leqslant 6,i=1,\dots,n\}\] \[P_{-}=\{x\in\mathbb{R}^{n}:1\leqslant 3x_{i}\leqslant 2,i=1,\dots,n\}\] Both are combinatorially equivalent to \(P\), but \(P_{-}\) has no integer points. Hence this \(P\) is not \((\pm)\)-symmetrical. The following result establishes relations between the concept of \((\pm)\)-symmetrical polytope and the bundle operation on polytopes. **Proposition 7.2**.: _Let \(Q\) be a monotone polytope. If in all possible bundles with base \([-1,1]\) and fiber \(Q\), the point \(1\) in the base can be lifted to a point in \(\mathcal{E}(P)\), \(Q\) is \((\pm)\)-symmetrical. (The converse of this implication is a particular case of part (i) of the next theorem.) Moreover, if all those bundles satisfy the weak Ewald condition, \(Q\) is \((\pm)\)-symmetrical; in particular, if Conjecture 1.2 is true, then every monotone polytope is \((\pm)\)-symmetrical._ Proof.: Take a fixed \(b\) and consider the following \((n+1)\)-polytope: \[P=\operatorname{conv}((Q_{+}\times\{1\})\cup(Q_{-}\times\{-1\})),\] which is a fiber product with fiber \(Q\) and base \([-1,1]\).
By hypothesis, there is \(x\in\mathcal{E}(P)\) that is contained in \(Q_{+}\times\{1\}\), which implies \(-x\in Q_{-}\times\{-1\}.\) Forgetting the last coordinate we obtain the desired point. This proves the first claim of the proposition. Now we prove the second claim. If the bundle \(P\) above satisfies the weak Ewald condition, there is a point that is not contained in the hyperplane of \(Q\times\{0\}\). We may assume that \[x\in Q_{+}\times\{1\}\] \[-x\in Q_{-}\times\{-1\}.\] Again, forgetting the last coordinate gives the desired point. The following result states, in particular, that when the base and fiber of a polytope bundle satisfy Conjecture 1.2, then so does the total space of the bundle provided that the fiber is \((\pm)\)-symmetrical. **Theorem 7.3**.: _Let \(P\) be a monotone bundle with base \(B\) and fiber \(Q\). Then the following statements hold._ 1. _If_ \(Q\) _is_ \((\pm)\)_-symmetrical, every point of_ \(\mathcal{E}(B)\) _can be lifted to a point in_ \(\mathcal{E}(P)\)_._ 2. _If_ \(B\) _and_ \(Q\) _are both_ \((\pm)\)_-symmetrical, then_ \(P\) _is_ \((\pm)\)_-symmetrical._ 3. _If_ \(P\) _is_ \((\pm)\)_-symmetrical, then_ \(B\) _is_ \((\pm)\)_-symmetrical._ 4. _If both_ \(B\) _and_ \(Q\) _satisfy the weak Ewald condition and_ \(Q\) _is_ \((\pm)\)_-symmetrical, then_ \(P\) _satisfies the weak Ewald condition._ 5. _The same holds for the star condition._ Proof.: We prove each item above separately. First we prove (i). Let \[P=\{x\in\mathbb{R}^{k+n}:Ax\leqslant\mathbf{1}\},\] where \(A\in\mathbb{Z}^{N\times(k+n)}\). Without loss of generality, we may assume that the projection \(\pi\) sends a point to the first \(k\) coordinates. In this setting, we can arrange the facets of \(P\) so that the ones projecting to facets of \(B\) appear first: \[A=\begin{pmatrix}A_{11}&0\\ A_{21}&A_{22}\end{pmatrix}\] where \[A_{11}\in\mathbb{Z}^{N_{1}\times k},A_{21}\in\mathbb{Z}^{N_{2}\times k},A_{22 }\in\mathbb{Z}^{N_{2}\times n},\] and \(A_{11}x\leqslant\mathbf{1}\) are the equations of \(B\). Now \(\pi^{-1}(y)\), for any \(y\in B\), is given by \[\begin{pmatrix}0\\ A_{22}\end{pmatrix}x\leqslant\mathbf{1}-\begin{pmatrix}A_{11}\\ A_{21}\end{pmatrix}y\] The first equations give \[0\leqslant\mathbf{1}-A_{11}y,\] which is true because \(y\in B\), and the last equations are those of \(\pi^{-1}(y)\): \[A_{22}x\leqslant\mathbf{1}-A_{21}y.\] Concretely, \(Q=\pi^{-1}(0)\) is given by \(\{x\in\mathbb{R}^{n}:A_{22}x\leqslant\mathbf{1}\}\). Take now a point \(v\in\mathcal{E}(B)\). As \(Q\) is \((\pm)\)-symmetrical, there is \[w\in\{x\in\mathbb{R}^{n}:A_{22}x\leqslant\mathbf{1}-A_{21}v\}\] with \[-w\in\{x\in\mathbb{R}^{n}:A_{22}x\leqslant\mathbf{1}+A_{21}v\}\] that is, exactly \(w\in\pi^{-1}(v)\) with \(-w\in\pi^{-1}(-v)\), and we are done. Now we prove (ii). Let \[P=\{x\in\mathbb{R}^{k+n}:Ax\leqslant\mathbf{1}\}\] \[P_{+}=\{x\in\mathbb{R}^{k+n}:Ax\leqslant\mathbf{1}+b\}.\] We may assume, as in the previous part, that \(\pi\) is the projection that sends a point to the first \(k\) coordinates and \[A=\begin{pmatrix}A_{11}&0\\ A_{21}&A_{22}\end{pmatrix},b=\begin{pmatrix}b_{1}\\ b_{2}\end{pmatrix}\] where \[A_{11}\in\mathbb{Z}^{N_{1}\times k},A_{21}\in\mathbb{Z}^{N_{2}\times k},A_{22 }\in\mathbb{Z}^{N_{2}\times n},b_{1}\in\mathbb{Z}^{N_{1}},b_{2}\in\mathbb{Z}^ {N_{2}},\] and \(A_{11}x\leqslant\mathbf{1}\) are the equations of \(B\). 
After displacing the facets by \(b\), the fiber \(\pi^{-1}(y)\), for any \(y\in\mathbb{Z}^{k}\), gives \[\begin{pmatrix}0\\ A_{22}\end{pmatrix}x\leqslant\mathbf{1}+\begin{pmatrix}b_{1}\\ b_{2}\end{pmatrix}-\begin{pmatrix}A_{11}\\ A_{21}\end{pmatrix}y\] so now the first equations give \[0\leqslant\mathbf{1}+b_{1}-A_{11}y\] and the last equations give \[A_{22}x\leqslant\mathbf{1}+b_{2}-A_{21}y.\] This corresponds to a bundle with base \[B_{+}=\pi(P_{+})=\{x\in\mathbb{R}^{k}:A_{11}x\leqslant\mathbf{1}+b_{1}\}\] and fiber \[Q_{+}=\pi^{-1}(0)=\{x\in\mathbb{R}^{n}:A_{22}x\leqslant\mathbf{1}+b_{2}\}.\] Note that the new base and fiber are combinatorially equivalent to the original ones. By displacing by \(-b\), we obtain analogously \(P_{-}\), \(B_{-}\) and \(Q_{-}\), with the opposite displacement to the previous ones. By the hypothesis applied to \(B\), we obtain a point \(v\in B_{+}\) with \(-v\in B_{-}\). Now, the fibers \[\pi^{-1}(v)\cap P_{+}=\{x\in\mathbb{R}^{n}:A_{22}x\leqslant\mathbf{1}+b_{2}-A _{21}v\}\] and \[\pi^{-1}(-v)\cap P_{-}=\{x\in\mathbb{R}^{n}:A_{22}x\leqslant\mathbf{1}-b_{2}+ A_{21}v\}\] are combinatorially equivalent to \(Q\). Applying now the hypothesis to \(Q\), we get \[w\in\pi^{-1}(v)\cap P_{+}\] with \[-w\in\pi^{-1}(-v)\cap P_{-},\] so \(w\in P_{+}\) with \(-w\in P_{-}\), and we are done. Next we prove (iii). For every displacement of \(B\), given by an integer vector \(b\in\mathbb{Z}^{N_{1}}\), the same displacement can be carried out on \(P\) completing the vector with \(N_{2}\) zeros. Applying the hypothesis to \(P\), we obtain a point \(x\in P_{+}\) with \(-x\in P_{-}\). Then the first equations of \(P_{+}\) applied to \(x\) give \[A_{11}\pi(x)\leqslant\mathbf{1}+b_{1}\] and those for \(P_{-}\) give \[-A_{11}\pi(x)\leqslant\mathbf{1}-b_{1},\] and hence \(B\) is \((\pm)\)-symmetrical. Now we prove (iv). By hypothesis, we have a unimodular basis \(\mathcal{B}\) of \(\mathbb{Z}^{k}\) contained in \(\mathcal{E}(B)\). By part (i), every point in \(\mathcal{E}(B)\) can be lifted to a point in \(\mathcal{E}(P)\). Applying this to every point in \(\mathcal{B}\), we get a basis of the quotient \(\mathbb{Z}^{n+k}/\pi\). Also by hypothesis, there is a unimodular basis of \(\ker\pi\cong\mathbb{Z}^{n}\) contained in \(\mathcal{E}(Q)\). The union of these two bases gives the desired unimodular basis of \(\mathbb{Z}^{n+k}\). Finally we prove (v). This is essentially [23, Proposition 5.3]. A face \(F\) of \(P\) is the product of a face \(F_{B}\) of \(B\) and a face \(F_{Q}\) of \(Q\). If \(F_{Q}=Q\), \[\operatorname{Star}(F_{B}\times Q)=\operatorname{Star}(F_{B})\times Q\] so applying the star condition to \(F_{B}\) to get a point in \(\mathcal{E}(B)\), and then using part (i) to lift this point to \(\mathcal{E}(P)\), we obtain the desired point. If \[F_{Q}\subsetneq Q,\] applying the star condition to it we obtain a point \(p\) in \(\mathcal{E}(Q)\). The inclusion of \(p\) in \(P\) gives a point which is contained exactly in the facets \(B\times F_{Q}^{\prime}\), where \(F_{Q}^{\prime}\) is a facet containing \(p\), which is what we want. ### Proof of Theorem 2.10 It follows from Proposition 7.2 and Theorem 7.3. ## 8. Nill's Conjecture: the generalized Ewald Conjecture for \(n=2\) In this section we prove Theorem 2.8. First we need to define some transformations in the plane. **Definition 8.1** (Sliding).: A _sliding_ is the transformation \(\mathbb{R}^{2}\to\mathbb{R}^{2}\) that sends the point \((x,y)\) to \((x+ky,y)\), for some parameter \(k\in\mathbb{R}\). 
This transformation keeps the \(x\)-axis fixed, while the lines \(y=1\) and \(y=-1\) map to themselves but shifted \(k\) units in opposite directions. If \(k\in\mathbb{Z}\), this is a unimodular transformation. **Lemma 8.2**.: _If a lattice polygon \(P\) has at least two integer points with \(y=0\), of which one is interior, it has at least one integer point with \(y=1\)._ Proof.: Without loss of generality, we may assume that the points are \((0,0)\), which is interior, and \((1,0)\). Suppose that there is no integer point in \(P\) with \(y=1\). The intersection with \(y=1\) must be nonempty, because \((0,0)\) is an interior point, so this intersection is contained in an open segment \[((x_{1},1),(x_{1}+1,1)).\] Now we slide this segment to \(((0,1),(1,1))\). The two sides of \(P\) that enclose the interval cross in between these two points, and they leave between them \((0,0)\) and \((1,0)\), so their endpoint with greater \(y\) must have an \(x\) coordinate strictly between \(0\) and \(1\). This contradicts that \(P\) is a lattice polygon. **Proof of Theorem 2.8.** For clarity we have divided the proof in two steps. _First step._ The goal of this step is to find an integer point in \(\tilde{\mathcal{E}}(P)\). To start, any vertex of \(P\) is an integer point, so its primitive point must also be in \(P\). By a unimodular transformation, we can move this point to \((1,0)\). If \((-1,0)\in P\), this step is done, so suppose now \((-1,0)\notin P\). By Lemma 8.2, there is a lattice point with \(y=1\) in \(P\), so we can slide it to \((0,1)\). If \((0,-1)\in P\), we are done, so suppose that \((0,-1)\notin P\). If the points \((1,-1)\) and \((-1,1)\) are both in \(P\), we are done. Without loss of generality, we take \((1,-1)\notin P\). Let \(a\) be the side intersecting \(((0,-1),(0,0))\). Then \(a\) must have no vertex with \(x=0\), so its right endpoint has \(x\geqslant 1\) and it also intersects \(((1,-1),(1,0)]\). Let \(u=(x_{u},y_{u})\) be the right endpoint of \(a\) and \(v=(x_{v},y_{v})\) the left endpoint. On the other hand, there is a side \(b\) intersecting \(((-1,0),(0,0))\). We will prove now that \(a=b\). Suppose, for a contradiction, that \(a\neq b\). Let \(w=(x_{w},y_{w})\) be the lower endpoint of \(b\). \(w\) cannot be the right endpoint, because it would be equal to \(v\) and they would not be integer, so \(w\) must be the left endpoint of \(b\). Moreover, as \((0,1)\) is inside \(b\) and \((-1,0)\) is outside \(b\), we have \[x_{w}-y_{w}>-1.\] At the same time, \((1,0)\) is inside \(a\) and \((0,-1)\) is outside \(a\), so \[x_{v}-y_{v}<1.\] But, as \(v\) is before \(w\) in clockwise order, \[x_{v}-y_{v}\geqslant x_{w}-y_{w},\] so both must be \(0\) and \(v=w\). Now, the sides \(a\) and \(b\), that start in \(v=w\), have as primitive direction vectors \((a_{1},a_{2})\) and \((b_{1},b_{2})\), with \(a_{1}>a_{2}>0\) and \(b_{2}>b_{1}>0\), all integer. This implies \[a_{1}b_{2}-a_{2}b_{1} = a_{1}(b_{2}-b_{1})+b_{1}(a_{1}-a_{2})\] \[\geqslant a_{1}+b_{1}\geqslant 2,\] contradicting the smoothness of \(P\). So we have now that \(a=b\), and it intersects the open segment \(((-1,0),(0,0))\). See Figure 6 (left) for an illustration. Let \(t\) be the intersection of \(a\) with \(y=-1\) (it is possible that \(t=u\)). This \(t\) has a positive \(x\) coordinate. We will now slide \(t\) to fall in the segment \(((-1,-1),(0,-1)]\), as in Figure 6 (center). 
By Lemma 8.2, there is an integer point with \(y=-1\), and concretely \((0,-1)\in P\), because it is the first integer point after \(t\). Now we have two cases: * \(a\) intersects \(y=1\) at \((0,1)\) or at its left. The point which was \((0,1)\) before the sliding is now \((x,1)\) for some positive \(x\), and it is in \(P\), so by convexity \((0,1)\in P\), and we are done. * \(a\) intersects \(y=1\) at the right of \((0,1)\). Then \((0,1)\notin P\), and \(a\) intersects \(((-1,-1),(0,-1))\), \(((-1,0),(0,0))\) and \(((0,0),(0,1))\). Moreover, as \((0,0)\) is inside \(P\) and \((-1,-1)\) is outside, at this moment \[y_{u}<x_{u}<0.\] By making the rotation \((x,y)\mapsto(-y,x)\), as in Figure 6 (right), now \((1,0),(0,1)\in P\), \[(-1,0),(0,-1),(1,-1)\notin P,\] and they are separated by the same side \(a\) as in the initial setup. The new value of \(y_{u}\) is the former \(x_{u}\), which is closer to \(0\) than the former \(y_{u}\). By iterating the process, we will end up with the desired integer point. Figure 6. A possible situation in the proof, before sliding (left), after sliding (center) and after rotation (right). The point \(u\) is shown as a black dot, with the side \(a\) starting at it. In each stage, the lattice points inside \(P\) are shown as circles and those outside \(P\) as crosses. At the end of the process, the side \(a\) is again separating the three points \((-1,0),(0,-1),(1,-1)\) from \(P\), while the \(y\) coordinate of \(u\) has increased. _Second step._ Now we have an integer symmetric point. Without loss of generality, we can take this point to be \((1,0)\). That is, \((1,0),(-1,0)\in P\). Let \((x_{1},-1)\) be the rightmost point in \(P\) with \(y=-1\), \((x_{2},0)\) the rightmost point with \(y=0\) and \((x_{3},1)\) the rightmost point with \(y=1\). We will now prove that \(x_{1}+x_{3}\geqslant 1\). If \((x_{2},0)\) is not a vertex, \[x_{1}+x_{3}=2x_{2}\geqslant 2.\] If it is a vertex, let \((a_{1},a_{2})\) and \((b_{1},b_{2})\) be the primitive direction vectors in the directions of the sides starting at this vertex, with \(a_{2}>0\) and \(b_{2}<0\). Now \[x_{1}+x_{3} = x_{2}-\frac{b_{1}}{b_{2}}+x_{2}+\frac{a_{1}}{a_{2}}\] \[= 2x_{2}+\frac{a_{1}b_{2}-a_{2}b_{1}}{a_{2}b_{2}}\] \[= 2x_{2}+\frac{1}{a_{2}b_{2}}\geqslant 2-1=1\] as we wanted. Defining analogously \(x_{1}^{\prime}\), \(x_{2}^{\prime}\) and \(x_{3}^{\prime}\) for the leftmost points, we have that \(x_{1}^{\prime}+x_{3}^{\prime}\leqslant-1\) by symmetry. If there is an integer \(n\) in \[I:=[x_{3}^{\prime},x_{3}]\cap[-x_{1},-x_{1}^{\prime}],\] the points \((n,1)\) and \((-n,-1)\) are both in \(P\), and \[\{(1,0),(n,1)\}\] is the basis we need. So suppose there is no such integer. Then the length of \(I\) is less than one. We can have four cases: * \(I=[-x_{1},x_{3}]\). This contradicts \(x_{3}+x_{1}\geqslant 1\). * \(I=[x_{3}^{\prime},-x_{1}^{\prime}]\). This contradicts \(x_{1}^{\prime}+x_{3}^{\prime}\leqslant-1\). * \(I=[x_{3}^{\prime},x_{3}]\). In this case, \[I=P\cap\{y=1\}.\] If this has no integer point, this contradicts Lemma 8.2. * \(I=[-x_{1},-x_{1}^{\prime}]\). If this has no integer point, neither does \(-I\), which coincides with \(P\cap\{y=-1\}\), again contradicting Lemma 8.2. **Remark 8.3**.: The original statement of Conjecture 1.3 does not include the word "lattice". However we think that the term "smooth" really refers to "lattice smooth", which is why we stated the conjecture in this way.
Without the assumption of having integer vertices, the conclusion is false for every \(n\), as the smooth cube \[[-1/2,1/2]^{n}\] shows. ## 9. Some consequences of our results in symplectic toric geometry The monotone polytopes we have studied in this paper are precisely the images under the momentum map of the so called monotone symplectic toric manifolds. In this section we briefly recall this connection and how the problem of being displaceable for the fibers of the momentum map can be studied using the polytope. ### Monotone polytopes and symplectic toric manifolds A _symplectic toric manifold_ is a quadruple \[(M,\omega,\mathbb{T}^{n},\mu:M\to\mathbb{R}^{n})\] where \((M,\omega)\) is a compact connected symplectic manifold of dimension \(2n\), \(\mathbb{T}^{n}\) is the standard \(n\)-dimensional torus which acts effectively and Hamiltonianly on \((M,\omega)\), and \[\mu:M\to\mathbb{R}^{n}\] is the \(\mathbb{T}^{n}\)-action momentum map (which is uniquely defined up to translations and \(\operatorname{GL}(n,\mathbb{Z})\) transformations). By a theorem of Atiyah [2] and Guillemin-Sternberg [17] the image \(\mu(M)\subset\mathbb{R}^{n}\) is a convex polytope, precisely given by the convex hull of the images under \(\mu\) of the fixed points of the \(\mathbb{T}^{n}\)-action on \((M,\omega)\). Later, Delzant [11] proved that \(\mu(M)\) is a _smooth polytope_, and that the application \[(M,\omega,\mathbb{T}^{n},\mu)\mapsto\mu(M) \tag{2}\] induces a bijection from the set of \(2n\)-dimensional symplectic toric manifolds, modulo isomorphism, to the set of smooth polytopes in \(\mathbb{R}^{n}\) modulo \(\operatorname{AGL}(n,\mathbb{Z})\)-equivalence (see for example [31, section 4] for precise definitions). That is, Delzant gave a classification of symplectic toric manifolds in terms of polytopes. The convex polytope \(\mu(M)\) is often called the _momentum polytope_ of the symplectic toric manifold. If in addition (after normalization) the first Chern class \(c_{1}(M)\) of \(M\) is equal to \([\omega]\), the symplectic toric manifold is called _monotone_. _Monotone polytopes_ are the polytopes associated to monotone symplectic toric manifolds via the bijection induced by (2). That is, a smooth polytope is _monotone_2 if it is the image under the momentum map of a monotone symplectic toric manifold. **Example 9.1**.: Figure 1 shows all monotone polygons (i.e. all monotone polytopes in dimension \(2\)). From the point of view of symplectic toric geometry, the first one of them on the left corresponds to the complex projective space \(\mathbb{C}P^{2}\) endowed with the Fubini-Study form, while the third one corresponds to the product of two copies of the complex projective line \(\mathbb{C}P^{1}\times\mathbb{C}P^{1}\). Figure 7 shows the fibers of the momentum map of symplectic geometry in these two cases. We refer to Cannas da Silva [6] and McDuff-Salamon [24] for an introduction to Hamiltonian group actions and symplectic toric manifolds, and to McDuff's paper [22] for an in depth study of the properties of _monotone_ symplectic toric manifolds and _monotone_ polytopes. Symplectic geometry has its origin in classical mechanics, and we recommend Abraham-Marsden [1], De Leon-Rodrigues [10] and Marsden-Ratiu [21] for references which emphasize the connection with classical mechanics. 
**Remark 9.2**.: Delzant's classification was generalized by the second author and Vu Ngoc [33, 34], in dimension \(4\), to _semitoric symplectic manifolds_ (or _semitoric integrable systems_), in which case the \(\mathbb{T}^{2}\)-action is essentially replaced by an \(S^{1}\times\mathbb{R}\)-action. In this case the classification also involves a polytope, but the polytope is not enough to "classify" the manifold (i.e., establish a bijection between manifolds and polytopes) and more invariants are needed to establish a classification. We refer to [35, 30] for an introduction to symplectic toric/semitoric manifolds. ### Displaceable fibers in symplectic toric geometry and Conjecture 1.2 In general, the top-dimensional fibers of the momentum map of a symplectic toric manifold \[\mu:M\to\mathbb{R}^{n}\] (that is, the regular \(\mathbb{T}^{n}\)-orbits) are _Lagrangian_ submanifolds of \((M,\omega)\) in the sense that \(\omega\) vanishes along them (see Figure 7). These orbits correspond to the preimages \(\mu^{-1}(u),u\in\operatorname{Int}(P)\), where \(\operatorname{Int}(P)\) is the interior of the polytope \(P\). An important problem in symplectic topology is deciding whether a fiber \[L_{u}:=\mu^{-1}(u),u\in\operatorname{Int}(P),\] is _displaceable_ by a Hamiltonian isotopy, meaning that there exists a smooth family of functions \(H_{t}:M\to\mathbb{R},t\in[0,1]\), with associated flow \(\phi_{t}\), and such that \[\phi_{1}(L_{u})\cap L_{u}=\varnothing.\] This question goes back to Biran-Entov-Polterovich [4] and Cho [8]; see also Entov-Polterovich [12] where they prove a number of results about (non-)displaceable fibers, and the references therein, as well as the paper of McDuff [23] where she proves several results about displaceable fibers of the manifolds in which we have been interested in the present paper: _monotone symplectic toric manifolds_. As we mentioned earlier in this paper, smooth reflexive polytopes are often called _monotone polytopes_ in symplectic geometry, because they correspond to the momentum polytopes of monotone symplectic toric manifolds. In her paper McDuff exploits the one-to-one correspondence between monotone symplectic toric manifolds and smooth reflexive polytopes, and is able to produce several results about the manifolds, by proving results about the polytopes. Figure 7. The complex projective space \(\mathbb{C}P^{2}\) with the Fubini-Study form and the standard \(\mathbb{T}^{2}\)-action, with momentum map \[\mu([z_{0}:z_{1}:z_{2}])=\left(\frac{|z_{1}|^{2}}{|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}},\frac{|z_{2}|^{2}}{|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}}\right),\] is a \(4\)-dimensional monotone symplectic toric manifold. It has the standard triangle (the convex hull of \(\{(0,0),(1,0),(0,1)\}\)) as momentum polytope, and as mentioned earlier, this momentum polytope completely characterizes the symplectic toric manifold up to isomorphisms (by Delzant's result). The figure shows the fibers of the momentum map \(\mu:\mathbb{C}P^{2}\to\mathbb{R}^{2}\) of the symplectic toric manifold \(\mathbb{C}P^{2}\) and also of the momentum map \(\mathbb{C}P^{1}\times\mathbb{C}P^{1}\to\mathbb{R}^{2}\) of the symplectic toric manifold \(\mathbb{C}P^{1}\times\mathbb{C}P^{1}\) endowed with the product symplectic form. A fiber \(L_{u}\) is a _stem_ if all other fibers are displaceable. McDuff proves [23, Theorem 1.1] that if \(\dim M\leqslant 6\), then \(L_{u_{0}}\) is a stem, where \(u_{0}\in P\) is the so-called _unique special point_. More precisely, \(u_{0}\) is the unique interior integral point of the polytope \(P\) (often taken to be the origin).
McDuff's approach to the problem of which fibers can be displaced is closely related to Conjecture 1.2. In order to illustrate this we introduce two definitions. **Definition 9.3** (Probe [23, Definition 2.3]).: Let \(P\) be a rational polytope. Let \(F\) be a facet of \(P\). Let \(w\in F\). Let \(\lambda\in\mathbb{Z}^{n}\) be integrally transverse to \(F\) (in the sense that \(\lambda\) can be completed to a unimodular basis by vectors parallel to \(F\)). The _probe with direction \(\lambda\) and initial point_ \(w\in F\), denoted by \(p_{F,\lambda}(w)\), is the half open line segment consisting of \(w\) together with the points in \(\operatorname{Int}(P)\) that are in the ray starting at \(w\) with direction \(\lambda\). **Definition 9.4** (Being displaceable by probes, [23, Definition 2.5]).: Let \(P\) be a rational polytope and let \(u\in\operatorname{Int}(P)\). If there exists a facet \(F\) of \(P\), \(w\in\operatorname{Int}(F)\) and \(\lambda\in\mathbb{Z}^{n}\) such that \(u\in p_{F,\lambda}(w)\) and \(u\) is less than halfway along \(p_{F,\lambda}(w)\), we say that \(u\) is _displaceable by the probe_ \(p_{F,\lambda}(w)\). In [23, Lemma 2.4], McDuff shows that if \((M,\omega,\mathbb{T}^{n},\mu:M\to\mathbb{R}^{n})\) is a symplectic toric manifold with image the smooth polytope \(P:=\mu(M)\) in \(\mathbb{R}^{n}\) and if \(u\in\operatorname{Int}(P)\) is displaceable by the probe \(p_{F,\lambda}(w)\), then \(L_{u}\) is displaceable by a Hamiltonian isotopy. It is stated in [23, Corollary 1.3] that: _if an \(n\)-dimensional convex polytope \(P\subset\mathbb{R}^{n}\) is monotone and all points except for \(u_{0}\) are displaceable by probes then \(\mathcal{E}(P)\) spans \(\mathbb{R}^{n}\)._ From [23, Theorem 1.2 and Lemma 3.7] it follows that "spans \(\mathbb{R}^{n}\)" can be replaced by "contains a unimodular basis of \(\mathbb{Z}^{n}\)". In dimension \(3\) the star Ewald condition is always satisfied for monotone polytopes (see [23, Proposition 4.7]). For a rational polytope \(P\) one can construct the so-called _central point \(v_{0}\) of \(P\)_, by a procedure of Fukaya-Oh-Ohta-Ono [15], and prove [23, Lemma 2.7] that \(v_{0}\) is not displaceable by a probe. If \(P\) is in addition a monotone polytope, then \(v_{0}\) coincides with the unique interior integral point of \(P\) (this follows from the discussion in [23, Section 2.2] since \[P_{1}=\{0=v_{0}=u_{0}\}\] with the notation therein). **Question 9.5**.: What are all \(n\)-dimensional monotone polytopes for which all points except for \(u_{0}=0\) are displaceable by a probe? From the discussion above, it follows that the polytopes which give the answer to Question 9.5 satisfy the weak Ewald condition (that is, Conjecture 1.2 holds for them). In view of the discussion above we have the following conclusions. **Corollary 9.6**.: _If \(M\) is a monotone symplectic toric manifold of dimension \(2n\) whose momentum polytope \(\mu(M)\) is not \((\pm)\)-symmetrical then there exists a symplectic toric manifold \(M^{\prime}\) of dimension \(2n+2\) and an interior point of \(\mu(M^{\prime})\) distinct from the origin which is not displaceable by a probe._ Proof.: It follows from Theorem 2.10 applied to dimension \(n+1\) that, if all monotone polytopes in dimension \(n+1\) satisfied the weak Ewald condition, \(\mu(M)\) would be \((\pm)\)-symmetrical.
Since \(\mu(M)\) is not \((\pm)\)-symmetrical by hypothesis, this must not hold, and for another monotone polytope \(P^{\prime}\) of dimension \(n+1\), we have that \(\mathcal{E}(P^{\prime})\) does not contain a unimodular basis of \(\mathbb{Z}^{n}\). By definition \(P^{\prime}=\mu(M^{\prime})\) for some symplectic toric manifold of dimension \(2n+2\). Hence it is not true that all points of \(\mu(M^{\prime})\) except for the origin are displaceable by probes, and the conclusion follows. (Recall that the origin \((0,\dots,0)\) is the unique interior integral point of \(\mu(M^{\prime})\) and it is never displaceable by a probe according to [23, Lemma 2.7]). **Lemma 9.7**.: _Suppose that \(P\) is a monotone polytope, \(F\) a facet of \(P\), \(\pi\) the projection that sends \(F\) to \(1\), i.e. the quotient by the linear hyperplane parallel to \(F\), and \(Q=\pi^{-1}(0)\). Every facet of \(Q\) is the intersection of a facet of \(P\) with the hyperplane; hence, there is a vector \(b\) that contains the displacement of the facets of \(F=\pi^{-1}(1)\) with respect to those of \(Q=\pi^{-1}(0)\). Then, if \(Q\) is not \((\pm)\)-symmetrical with respect to \(b\), \(\mathcal{E}(P)\) does not contain a unimodular basis of \(\mathbb{R}^{n}\). In particular, this covers the case where \(P\) is a bundle with base a segment and fiber \(Q\)._ Proof.: Suppose that \(\mathcal{E}(P)\) contains a unimodular basis. Then at least one point \(p\) of the unimodular basis should be in the facet \(F\), because those are the points projecting to \(1\). By constructing \(P_{+}\) and \(P_{-}\) with respect to \(b\), we have that \(P_{+}\) contains \(F\) and \(P_{-}\) contains \(\pi^{-1}(-1)\), which implies \(p\in P_{+}\) and \(-p\in P_{-}\), so \(P\) is \((\pm)\)-symmetrical with respect to \(b\), which is a contradiction. **Corollary 9.8**.: _If \(M\) is a monotone symplectic toric manifold of dimension \(2n\) with momentum polytope \(\mu(M)\) which satisfies the condition of Lemma 9.7 then there exists an interior point of \(\mu(M)\) distinct from the origin which is not displaceable by a probe._ Proof.: It follows from the fact that the hypothesis implies that \(\mathcal{E}(\mu(M))\) does not contain a unimodular basis of \(\mathbb{R}^{n}\). ## 10. Algorithms for our main theorems In this section we provide algorithms which implement the proofs of our main results: Theorem 2.6 and Theorem 2.8. We start by giving an algorithm to detect recursively UT-free polytopes. ### Algorithm to detect recursively UT-free polytopes In this subsection we explain the algorithm to detect when a polytope is recursively UT-free, which gives Table 2. Deciding whether a polytope is UT-free amounts to checking the \(2\)-faces to see whether one of them is a unimodular triangle: 1. input: a monotone polytope \(P\) 2. construct list of \(2\)-faces of \(P\) 3. for each face \(F\) in the list: if \(F\) is a unimodular triangle, output FALSE 4. output TRUE Deciding whether a polytope is recursively UT-free consists of checking (Proposition 4.7) the intersections with linear subspaces parallel to faces (including the whole polytope) to see whether they are UT-free: 1. input: a monotone polytope \(P\) 2. construct list of faces of \(P\) with dimension at least \(3\) 3. for each face \(F\) in the list: 1. calculate the intersection \(F_{0}\) of \(P\) with the linear hyperplane parallel to \(F\) 2. if \(F_{0}\) is not UT-free, output FALSE 4. output TRUE Both algorithms may be implemented in Sage, and the table is the result of this implementation.
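The elementary step in the first algorithm is the unimodular-triangle test: a lattice triangle \(\operatorname{conv}\{p,q,r\}\) is unimodular exactly when the edge vectors \(q-p\) and \(r-p\) form a basis of the two-dimensional lattice they span, that is, when the \(2\times 2\) minors of the pair have greatest common divisor \(1\). A minimal Python sketch of this test is given below; the function names are ours, and producing the list of \(2\)-faces of a higher-dimensional polytope would be delegated to a package such as Sage, as mentioned above.

```python
from itertools import combinations
from math import gcd

def is_unimodular_triangle(p, q, r):
    """conv{p,q,r} is unimodular iff the gcd of the 2x2 minors of (q-p, r-p) is 1."""
    n = len(p)
    u = [q[i] - p[i] for i in range(n)]
    v = [r[i] - p[i] for i in range(n)]
    g = 0
    for i, j in combinations(range(n), 2):
        g = gcd(g, abs(u[i] * v[j] - u[j] * v[i]))
    return g == 1

def is_ut_free(two_faces):
    """two_faces: iterable of vertex tuples of the 2-faces of P.
    P is UT-free when no triangular 2-face is unimodular."""
    return all(not (len(f) == 3 and is_unimodular_triangle(*f)) for f in two_faces)

if __name__ == "__main__":
    # The facet x1 = 1 of SSB(3,2) is a unimodular triangle (cf. Proposition 5.11):
    print(is_unimodular_triangle((1, -1, -1), (1, 0, -1), (1, -1, 0)))    # True
    # The facet x1 = -1 is a triangle of edge length 5, hence not unimodular:
    print(is_unimodular_triangle((-1, -1, -1), (-1, 4, -1), (-1, -1, 4)))  # False
```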
### Algorithm to prove Theorem 2.6 Any of the two proofs we gave can be implemented as an algorithm, but the first one would require two recursive calls to dimension \(n-1\) for an \(n\)-dimensional polytope, which would result in an exponential time in \(n\). Moreover, the second one gives an easy algorithm. 1. input: a monotone recursively UT-free polytope \(P\) and a facet \(F\) 2. \(v:=\) an arbitrary vertex of \(F\) 3. calculate the direction vectors \(u_{1},\ldots,u_{n}\) of the edges starting at \(v\), with \(u_{1}\) not contained in the hyperplane of \(F\) 4. \(p_{1}:=-u_{1}\) 5. for \(i\) from 2 to \(n\): 1. if \(p_{1}+u_{i}\in F\), \(p_{i}:=p_{1}+u_{i}\) 2. else \(p_{i}:=p_{1}-u_{i}\) 6. output \(p_{1},\ldots,p_{n}\) ### Algorithm to prove Theorem 2.8 In what follows, lattice polygons will be given by the list of coordinates of their vertices. We work with the inverse of the transform matrix because we need to send variable points to fixed locations such as \((1,0)\), instead of sending fixed points to variable locations. Subroutine TRANS: transform polygon, updating the transform matrix 1. input: \(P\) lattice polygon, \(A\) inverse of current transform matrix, \(B\) inverse of transform to be applied 2. calculate \(P^{\prime}\) with vertices \(v^{\prime}:=B^{-1}v\), for each \(v\) vertex of \(P\) 3. output \((P^{\prime},AB)\) Subroutine SYM: transform \(P\) such that \((1,0),(-1,0)\in P\). Returns transformed polygon and matrix of the inverse transform. 1. input: \(P\) lattice smooth polygon 2. \((v_{1},v_{2}):=\) vertex of \(P\) 3. calculate \(d:=\gcd(v_{1},v_{2}),a_{1},a_{2}\in\mathbb{Z}\) with \(a_{1}v_{1}+a_{2}v_{2}=d\) 4. \[(P,A):=\text{TRANS}(P,\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\begin{pmatrix}v_{1}/d&-a_{2}\\ v_{2}/d&a_{1}\end{pmatrix})\] (at this point, \((1,0)\in P\)) 5. if \((-1,0)\in P\), return \((P,A)\) 6. \((x_{1},1):=\) leftmost integer point in \(P\) with \(y=1\) 7. \[(P,A):=\text{TRANS}(P,A,\begin{pmatrix}1&x_{1}\\ 0&1\end{pmatrix})\] (at this point, \((1,0),(0,1)\in P\) and \((-1,0)\notin P\)) 8. if \((0,-1)\in P\): 1. \[(P,A):=\text{TRANS}(P,A,\begin{pmatrix}0&-1\\ 1&0\end{pmatrix})\] 2. return \((P,A)\) 9. if \((1,-1),(-1,1)\in P\): 1. \[(P,A):=\text{TRANS}(P,A,\begin{pmatrix}1&0\\ -1&1\end{pmatrix})\] 2. return \((P,A)\) 10. if \((1,-1)\in P\), \[(P,A):=\operatorname{TRANS}(P,A,\begin{pmatrix}0&1\\ 1&0\end{pmatrix})\] (at this point, \((1,0),(0,1)\in P\) and \((-1,0),(0,-1),(1,-1)\notin P\)) 11. while \((-1,0)\notin P\): 1. \((x_{1},-1):=\) leftmost integer point in \(P\) with \(y=-1\) 2. \[(P,A):=\operatorname{TRANS}(P,A,\begin{pmatrix}x_{1}&1\\ -1&0\end{pmatrix})\] (see Section 8, first part, for the proof that this loop always terminates: essentially we are moving a vertex towards the origin) 12. return \((P,A)\) Main algorithm: 1. input: a lattice smooth polygon \(P\) 2. \((P,A):=\operatorname{SYM}(P)\) 3. \((x_{1},-1):=\) rightmost integer point in \(P\) with \(y=-1\) 4. \((x_{3}^{\prime},1):=\) leftmost integer point in \(P\) with \(y=1\) 5. \(x_{0}:=\max\{-x_{1},x_{3}^{\prime}\}\) 6. return \[A\begin{pmatrix}1&x_{0}\\ 0&1\end{pmatrix}\] (see Section 8, second part, for the proof that this works) ## 11. Further questions ### Question about recursively UT-free polytopes It follows from Table 2 that the condition of being recursively UT-free for a monotone polytope is quite strong, in the sense that as the dimension increases the proportion of monotone polytopes which are recursively UT-free appears to decrease. 
In view of this rigidity, it may be possible to give a classification of such polytopes for each fixed dimension, which is what we propose next. **Question 11.1**.: Let \(n\geqslant 1\) be any fixed integer. * What are the shapes of \(n\)-dimensional recursively UT-free polytopes? * More concretely, is there a list of minimal models for \(n\)-dimensional recursively UT-free polytopes and a finite list of operations to obtain all of them from these models? ### Questions about the star and strong Ewald conditions It was already indicated my McDuff in her paper [23, p. 144] that "the relationships between the strong Ewald and star Ewald conditions are not completely clear" (see also Remark 3.7). The answer to the following question, which as far as we know remains open, will depend on the dimension (see Proposition 3.8). **Question 11.2**.: Let \(P\subset\mathbb{R}^{n}\) an \(n\)-dimensional monotone polytope. For each of the following, is the implication true? 1. If \(P\) satisfies the weak Ewald condition, then \(P\) satisfies the strong Ewald condition. 2. If \(P\) satisfies the star Ewald condition, then \(P\) satisfies the strong Ewald condition. Of course, the answer to both questions would be positive if the strong condition is true for all monotone polytopes, which, to the point we know, is also an open question. **Question 11.3**.: Could the proof of Theorem 4.4 be generalized to deal with the star Ewald condition? One could attempt a similar reasoning to the one given in the proof Theorem 4.4 with the star Ewald condition. However, in order to achieve the condition for \(P\) having it for \(P_{1}\), we need that the hyperplane parallel to \(F_{1}\) intersects the initial face \(F\) (that is not in general a facet), and also that the hyperplane does not contain a codimension 2 face of \(P\) (otherwise, it would become a facet of \(P_{1}\) and the argument would not work).
2301.09176
The probability of non-isomorphic group structures of isogenous elliptic curves in finite field extensions, I
Let $\ell$ be a prime number and let $E$ and $E'$ be $\ell$-isogenous elliptic curves defined over a finite field $k$ of characteristic $p \ne \ell$. Suppose the groups $E(k)$ and $E'(k)$ are isomorphic, but $E(K) \not \simeq E'(K)$, where $K$ is an $\ell$-power extension of $k$. In a previous work we have shown that, under mild rationality hypotheses, the case of interest is when $\ell=2$ and $K$ is the unique quadratic extension of $k$. In this paper we study the likelihood of such an occurrence by fixing a pair of 2-isogenous elliptic curves $E$, $E'$ over ${\mathbf{Q}}$ and asking for the proportion of primes $p$ for which $E(\mathbf{F}_p) \simeq E'(\mathbf{F}_p)$ and $E(\mathbf{F}_{p^2}) \not \simeq E'(\mathbf{F}_{p^2})$.
John Cullinan, Nathan Kaplan
2023-01-22T18:24:17Z
http://arxiv.org/abs/2301.09176v1
The probability of non-isomorphic group structures of isogenous elliptic curves in finite field extensions, I ###### Abstract. Let \(\ell\) be a prime number and let \(E\) and \(E^{\prime}\) be \(\ell\)-isogenous elliptic curves defined over a finite field \(k\) of characteristic \(p\neq\ell\). Suppose the groups \(E(k)\) and \(E^{\prime}(k)\) are isomorphic, but \(E(K)\not\simeq E^{\prime}(K)\), where \(K\) is an \(\ell\)-power extension of \(k\). In a previous work we have shown that, under mild rationality hypotheses, the case of interest is when \(\ell=2\) and \(K\) is the unique quadratic extension of \(k\). In this paper we study the likelihood of such an occurrence by fixing a pair of \(2\)-isogenous elliptic curves \(E\), \(E^{\prime}\) over \(\mathbf{Q}\) and asking for the proportion of primes \(p\) for which \(E(\mathbf{F}_{p})\simeq E^{\prime}(\mathbf{F}_{p})\) and \(E(\mathbf{F}_{p^{2}})\not\simeq E^{\prime}(\mathbf{F}_{p^{2}})\). ## 1. Introduction ### Overview Let \(E\) and \(E^{\prime}\) be elliptic curves defined over a finite field \(k\). If \(E\) and \(E^{\prime}\) are isogenous, then the groups \(E(k)\) and \(E^{\prime}(k)\) have the same order, but might not be isomorphic. For example, if \(k\) is the field of \(5\) elements and \(E\) and \(E^{\prime}\) are defined by \[E:y^{2} =x^{3}+x\] \[E^{\prime}:y^{2} =x^{3}+x+2,\] then \(E(k)\simeq\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\) and \(E^{\prime}(k)\simeq\mathbf{Z}/4\mathbf{Z}\). However, if \(E(k)\simeq E^{\prime}(k)\), does that imply that \(E(K)\simeq E^{\prime}(K)\), as \(K\) ranges over finite extensions of \(k\)? It is similarly easy to see that the answer is no: _Example 1.1.1_.: Let \(k\) be the field of \(17\) elements and \(K\) the field of \(17^{2}\) elements. Let \[E:y^{2} =x(x+6)(x+12)\] \[E^{\prime}:y^{2} =(x+1)(x+4)(x-4).\] Observe that \(E^{\prime}=E/\langle(0,0)\rangle\), so \(E\) and \(E^{\prime}\) are \(2\)-isogenous. One can check that \[E(k)\simeq E^{\prime}(k)\simeq\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/10 \mathbf{Z},\] but \(E(K)=\mathbf{Z}/8\mathbf{Z}\times\mathbf{Z}/40\mathbf{Z}\) and \(E^{\prime}(K)=\mathbf{Z}/4\mathbf{Z}\times\mathbf{Z}/80\mathbf{Z}\). It is even possible to come up with examples where \(E(K)\simeq E^{\prime}(K)\) for all finite extensions \(K/k\), but \(E\) and \(E^{\prime}\) are not isomorphic as curves. However, extreme examples such as these, and routine ones like 1.1.1, only happen under very specific circumstances. In fact, Example 1.1.1 can be viewed as a "worst case scenario" in the sense that, under mild rationality conditions, it is only in the context of rational \(2\)-isogenies and quadratic extensions that we can have \(E(k)\simeq E^{\prime}(k)\) and \(E(K)\not\simeq E^{\prime}(K)\). (We will explain these rationality assumptions in detail below.) In this paper we aim to understand _how often_ examples such as 1.1.1 occur. ### Reduction to 2-isogenies Throughout this paper we consider the simplest case where \(E\) and \(E^{\prime}\) are related by an isogeny of prime degree \(\ell\), coprime to the characteristic of \(k\). If we additionally assume that \(E(k)\) and \(E^{\prime}(k)\) have order divisible by \(\ell\) and the kernel of the isogeny \(E\to E^{\prime}\) is generated by a rational point of order \(\ell\), then we recall the main theorem of [6]: **Theorem 1.2.1** (Theorem 1 of [6]).: _Let \(k\) be a finite field of odd characteristic \(p\), \(\ell\neq p\) an odd prime, and \(E\) and \(E^{\prime}\) ordinary elliptic curves over \(k\). 
Suppose \(E\) and \(E^{\prime}\)_ * _each have a point of order_ \(\ell\) _defined over_ \(k\)_, and_ * _are_ \(\ell\)_-isogenous with kernel generated by a point defined over_ \(k\)_._ _Then \(E(k)\simeq E^{\prime}(k)\) if and only if \(E(K)\simeq E^{\prime}(K)\) for all finite extensions \(K/k\)._ Concretely, _when \(\ell\) is odd_, if \(E\) and \(E^{\prime}\) are \(\ell\)-isogenous and each has a non-trivial point of order \(\ell\) defined over \(k\), then \(E(k)\simeq E^{\prime}(k)\) implies \(E(K)\simeq E^{\prime}(K)\) in all finite extensions \(K/k\). Because it is known how the \(\ell\)-part of the groups of rational points grow in towers [11], we can take from this theorem that the group structure of \(E(k)\) completely determines the group structure of all \(\ell\)-isogenous curves to \(E\), in all finite extensions \(K/k\), under the hypothesis that the isogeny is generated by a \(k\)-rational point of order \(\ell\). (In case the isogeny has degree \(\ell\), but the groups \(E(k)\) and \(E^{\prime}(k)\) have order coprime to \(\ell\), then one must perform an initial base-field extension to (say) \(L\) to obtain a point of order \(\ell\). Then we can apply Theorem 1.2.1 taking \(L\) as the base field.) Example 1.1.1 shows that Theorem 1.2.1 cannot be true when \(\ell=2\). But it also exemplifies the only way for Theorem 1.2.1 to fail. More precisely, in [7] the first author proved the following theorem, showing exactly under which circumstances the groups of rational points of 2-isogenous curves fail to be isomorphic in towers: **Theorem 1.2.2** (Theorem 2 of [7]).: _Let \(E\) and \(E^{\prime}\) be ordinary, 2-isogenous elliptic curves defined over a finite field \(k\) such that the isogeny is also defined over \(k\). Suppose \(E(k)\simeq E^{\prime}(k)\). Let the endomorphism ring of each curve be an order in the quadratic imaginary ring \(\mathbf{Z}[\omega]\) and write \(\pi=a+b\omega\in\mathbf{Z}[\omega]\), where \(a\) is odd and \(b\) is even, for the Frobenius endomorphism. Then \(E(K)\simeq E^{\prime}(K)\) for all finite extensions \(K/k\) unless the following holds:_ \[v_{2}(a-1)=1\text{ and }v_{2}(a+1)>v_{2}(b)-s_{2}.\] _In that case, \(E(K)\simeq E^{\prime}(K)\) for odd-degree extensions \(K/k\) only._ _Remark 1.2.3_.: In the published version of [7, Theorem 2] there is a typographical error in the statement of the theorem. There is an extra "\(+1\)" in the inequality for \(v_{2}(a+1)\). The corrected statement is listed above. In the theorem, \(s_{2}\) is a positive integer related to the conductors of the endomorphism rings of \(E\) and \(E^{\prime}\). The upshot of this result is that there are precisely two possibilities. Either 1. \(E(K)\simeq E^{\prime}(K)\) for all finite extensions \(K/k\), or 2. we can detect that \(E(K)\not\simeq E^{\prime}(K)\) in the unique quadratic extension \(K/k\). Moreover, we can detect this failure by performing computations _exclusively over \(k\)_. We will review all of this background in detail in later sections of the paper. ### Setup and Statement of the Main Results Granting this background, we now set our notation and aims for the paper. Let \(E\) and \(E^{\prime}\) be \(2\)-isogenous elliptic curves defined over a field \(k\) such that the isogeny is also defined over \(k\). We call such a pair \((E,E^{\prime})\)**rationally 2-isogenous**. In this paper we focus exclusively on the cases \(k=\mathbf{F}_{p}\) and \(k=\mathbf{Q}\). Fix an odd prime \(p\). 
If \(E\) and \(E^{\prime}\) are rationally \(2\)-isogenous over \(\mathbf{F}_{p}\), then \(E(\mathbf{F}_{p})\) has a point \(P\) of order \(2\) and \[E^{\prime}=E/\langle P\rangle.\] We say that the pair \((E,E^{\prime})\) is an **anomalous pair** if \(E\) and \(E^{\prime}\) are rationally \(2\)-isogenous over \(\mathbf{F}_{p}\), \(E(\mathbf{F}_{p})\simeq E^{\prime}(\mathbf{F}_{p})\), and \(E(\mathbf{F}_{p^{2}})\not\simeq E^{\prime}(\mathbf{F}_{p^{2}})\). As explained above, this is precisely the obstruction for rationally \(2\)-isogenous curves having isomorphic group structures in towers over \(\mathbf{F}_{p}\). Here is the point of view we take for the paper. Fix a pair of rationally \(2\)-isogenous curves \((E,E^{\prime})\) over \(\mathbf{Q}\). We assume henceforth that \(E\) and \(E^{\prime}\) do not have CM. However, we will address the CM case in a forthcoming paper [9]; see Section 7 for further details. To streamline notation, we will also use \(E\) and \(E^{\prime}\) to denote the reductions modulo \(p\) of the curves over \(\mathbf{Q}\). We call a prime \(p\) of good reduction **anomalous** for \((E,E^{\prime})\) if \(E(\mathbf{F}_{p})\simeq E^{\prime}(\mathbf{F}_{p})\) and \(E(\mathbf{F}_{p^{2}})\not\simeq E^{\prime}(\mathbf{F}_{p^{2}})\). Therefore, at an anomalous prime for the pair \((E,E^{\prime})\) defined over \(\mathbf{Q}\), we have that \((E,E^{\prime})\) is an anomalous pair. Depending on whether \(p\) or \(E\) is fixed, the two usages of "anomalous" should not be in conflict. Given this setup, we seek to understand the ratio \[\mathcal{P}(X)=\frac{\#\{\text{anomalous }p\leq X\}}{\pi(X)}, \tag{1.3.1}\] where \(\pi(X)\) is the prime counting function, and also the limit \(\mathcal{P}=\lim_{X\to\infty}\mathcal{P}(X)\), if it exists. We note that \(\mathcal{P}(X)\) and \(\mathcal{P}\) depend on both \(E\) and \(E^{\prime}\) (more specifically, they depend on the images of the \(2\)-adic representations over \(\mathbf{Q}\) for each curve). In this paper we only make one computation explicit: the case where the \(2\)-adic images are isomorphic and as large as possible given the constraints of the setup. The following examples show that there exist pairs \((E,E^{\prime})\) for which anomalous primes exist, and there exist pairs for which they do not. Throughout this paper when we refer to a proportion of primes with some property, or the probability that a prime has some property, we mean it is in this sense of counting primes up to \(X\) and taking a limit. _Example 1.3.2_.: Let \(E\) be the elliptic curve 210e5 and \(E^{\prime}\) the curve 210e4 of the LMFDB[13]. Then \(E\) and \(E^{\prime}\) are \(2\)-isogenous, with \(\mathbf{Q}\)-torsion subgroups \(\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/8\mathbf{Z}\) and \(\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/4\mathbf{Z}\), respectively. There are no anomalous primes for these curves, a consequence (as we will see below) of the sizes of \(E(\mathbf{Q})_{\mathsf{tors}}\) and \(E^{\prime}(\mathbf{Q})_{\mathsf{tors}}\). _Example 1.3.3_.: The isogeny class 10608y consists of two elliptic curves, \(E\) and \(E^{\prime}\), such that \[E(\mathbf{Q})\simeq E^{\prime}(\mathbf{Q})\simeq\mathbf{Z}/2\mathbf{Z};\] these are the smallest Mordell-Weil groups possible given that \(E\) and \(E^{\prime}\) are rationally \(2\)-isogenous. 
Moreover, the mod \(2\) representation of each curve has order \(2\), and the \(2\)-adic representation of each has index \(3\) in \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\), _i.e._, is as large as possible given the hypotheses on each curve. We consider all primes up to some bound and count those that are anomalous: \begin{tabular}{|c|c|c|c|} \hline total \(\#p\) & \(\#p\mid E({\bf F}_{p})\not\simeq E^{\prime}({\bf F}_{p})\) & \(\#p\mid E({\bf F}_{p})\simeq E^{\prime}({\bf F}_{p})\) & \# anomalous \\ \hline 1000 & 539 & 457 & 30 \\ \hline 10000 & 5324 & 4672 & 335 \\ \hline \end{tabular} Converting the number of good primes to a value of \(X\), we see that \[\mathcal{P}(7919) =\frac{30}{1000}\sim 0.03,\text{ and }\] \[\mathcal{P}(104741) =\frac{335}{10000}\sim 0.0335,\] suggesting that the limit might exist. The main result of this paper is that the limit does exist and can be computed using the image of the 2-adic representations attached to \(E\) and \(E^{\prime}\). In general, a pair of rationally 2-isogenous elliptic curves define adjacent vertices on an isogeny-torsion graph over \({\bf Q}\). In [5] and [4], the authors give a classification of all isogeny-torsion graphs over \({\bf Q}\). Moreover, the classification of Rouse and Zureick-Brown [22] of the possible images of the 2-adic representation of elliptic curves over \({\bf Q}\) presents us with a finite list of graphs and images to consider for \(E\) and \(E^{\prime}\). In a forthcoming paper [9] we work out, among other things, the possible values that can occur for elliptic curves over \({\bf Q}\), including the CM case. In this paper, we consider only one case and prove the following theorem. **Theorem 1.3.4**.: _Let \(E\) and \(E^{\prime}\) be rationally 2-isogenous elliptic curves over \({\bf Q}\) such that \([\operatorname{GL}_{2}({\bf Z}_{2}):\operatorname{im}\rho_{E,2}]=[ \operatorname{GL}_{2}({\bf Z}_{2}):\operatorname{im}\rho_{E^{\prime},2}]=3\), i.e., both curves have maximal 2-adic image given that each has a rational 2-torsion point. Then \(\mathcal{P}=1/30\)._ _Remark 1.3.5_.: The elliptic curves of Theorem 1.3.4 are parameterized by the curve \(\mathtt{X}_{\mathtt{6}}\) of the RZB database. _Remark 1.3.6_.: See Section 7 for a discussion of the non-maximal cases and setup to be addressed in [9]. In order to get the result that \(\mathcal{P}=1/30\), we make full use of the structure of the **2-isogeny volcano**\(V_{p}\) of \(E\) at \(p\). The 2-isogeny volcano is a graph, the connected components of which consist of vertices (elliptic curves over \({\bf F}_{p}\)) and edges (\({\bf F}_{p}\)-rational 2-isogenies), that organizes the curves into levels (we reserve the term _height_ for the entire volcano and review our conventions in later sections). All curves at the same level have isomorphic endomorphism rings, which implies that all curves at the same level have isomorphic group structures over \({\bf F}_{p}\). A 2-isogeny \(E\to E^{\prime}\) defined over \({\bf F}_{p}\) can be **vertical** (\([\operatorname{End}(E):\operatorname{End}(E^{\prime})]=2\) or \(1/2\)) or **horizontal** (\(\operatorname{End}(E)\simeq\operatorname{End}(E^{\prime})\)); horizontal isogenies necessarily preserve the group structure over \({\bf F}_{p}\), while vertical isogenies may or may not. 
At an anomalous prime \(p\), we have the following confluence of events: * the \({\bf Q}\)-isogeny \(E\to E^{\prime}\) reduces to a vertical isogeny over \({\bf F}_{p}\), and * \(E({\bf F}_{p})[2^{\infty}]\simeq E^{\prime}({\bf F}_{p})[2^{\infty}]\simeq{\bf Z}/2{\bf Z}\times{\bf Z}/2{\bf Z}\), so that on the volcano \(V_{p}\) either \(E\) or \(E^{\prime}\) lies at least two levels above the floor, and * \(E({\bf F}_{p^{2}})[2^{\infty}]\not\simeq E^{\prime}({\bf F}_{p^{2}})[2^{\infty}]\) and \(E\) and \(E^{\prime}\) are situated on \(V_{p^{2}}\) as follows \[\begin{array}{c}\vdots\\ \vdots\\ \mathbf{Z}/2^{m+1}\mathbf{Z}\times\mathbf{Z}/2^{u-1}\mathbf{Z}\\ \mathbf{Z}/2^{m}\mathbf{Z}\times\mathbf{Z}/2^{u}\mathbf{Z}\\ \mathbf{Z}/2^{m+u}\mathbf{Z}\end{array}\] We interpret the value \(\mathcal{P}=1/30\) as the sum of a geometric series, where the summands reflect the group theory of \(\operatorname{im}\rho_{E,2}\) and \(\operatorname{im}\rho_{E^{\prime},2}\). In particular, we filter the anomalous primes by **defect** (which we explain in detail in the sections below). Briefly, an anomalous prime has defect \((a,b)\) if \(E({\bf F}_{p^{2}})\) has full \(2^{a}\)-torsion but not full \(2^{a+1}\)-torsion and \(E^{\prime}({\bf F}_{p^{2}})\) has full \(2^{b}\)-torsion, but not full \(2^{b+1}\)-torsion. It is a fact about adjacent vertices on an isogeny volcano that a prime can only have defect \((m+1,m)\) or \((m,m+1)\). (This is exemplified in the figure above.) Filtering by defect, and weighting each defect by the size of the kernels of the homomorphisms \(\operatorname{im}\overline{\rho}_{E,2^{m+1}}\to\operatorname{im}\overline{\rho}_{E,2^{m}}\) and \(\operatorname{im}\overline{\rho}_{E^{\prime},2^{m+1}}\to\operatorname{im}\overline{\rho}_{E^{\prime},2^{m}}\), we obtain the summands in the geometric series. To ease the cumbersome notation, we let \(G=\operatorname{im}\rho_{E,2}\) and \(G^{\prime}=\operatorname{im}\rho_{E^{\prime},2}\). If \(p\) is a good prime, let \(F\in G\) and \(F^{\prime}\in G^{\prime}\) denote representatives of the class of Frobenius. Note that even though as a quadratic irrational number \(\pi=a+b\omega\) (the Frobenius endomorphism) is represented in \(\operatorname{End}(E)\) and \(\operatorname{End}(E^{\prime})\) by the same integral expression, the interpretation in each ring is different when those rings are not isomorphic. Given all of this, we prove the following finer version of Theorem 1.3.4. **Theorem 1.3.7**.: _Let \(E\) and \(E^{\prime}\) be rationally 2-isogenous elliptic curves over \(\mathbf{Q}\) such that \([\operatorname{GL}_{2}(\mathbf{Z}_{2}):G]=[\operatorname{GL}_{2}(\mathbf{Z}_{2}):G^{\prime}]=3\). Let \(p\) be a prime such that \(F\equiv-I\pmod{2^{m}}\) but \(F\not\equiv-I\pmod{2^{m+1}}\). Then with probability 1/2, \(F^{\prime}\equiv-I\pmod{2^{m}}\) and \(p\) is not anomalous, and with probability 1/2, \(F^{\prime}\equiv-I\pmod{2^{m-1}}\) and \(F^{\prime}\not\equiv-I\pmod{2^{m}}\) and \(p\) is anomalous of defect \((m+1,m)\). Furthermore, this characterizes all anomalous primes of defect \((m+1,m)\)._ _Remark 1.3.8_.: A similar result holds for primes of defect \((m,m+1)\). This brings us to the final portion of the paper where we re-interpret our results on anomalous primes and their defects in terms of a probabilistic model of the distribution of heights of volcanoes and the discriminants of the endomorphism rings at each level.
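For readers who want to experiment, counts like those in Example 1.3.3 are straightforward to reproduce computationally by testing the definition of an anomalous prime directly. The sketch below is in Python syntax and is intended to be run inside SageMath, whose built-ins `EllipticCurve`, `GF`, `primes`, and `abelian_group` are assumed. The particular pair \(E:y^{2}=x^{3}+x^{2}+2x\) and \(E^{\prime}:y^{2}=x^{3}-2x^{2}-7x\) (related by the classical \(2\)-isogeny with kernel \(\langle(0,0)\rangle\)) is our own illustrative choice; it is not one of the paper's examples, and we do not claim it satisfies the maximal-image hypothesis of Theorem 1.3.4.

```python
# Python-syntax sketch, meant to be run inside SageMath (EllipticCurve, GF,
# primes are Sage built-ins).  The curve pair below is an illustrative choice,
# not one of the paper's examples.
E  = EllipticCurve(QQ, [0, 1, 0, 2, 0])    # y^2 = x^3 + x^2 + 2x
Ep = EllipticCurve(QQ, [0, -2, 0, -7, 0])  # y^2 = x^3 - 2x^2 - 7x = E/<(0,0)>

def invariants(curve, field):
    """Invariant factors of the group of rational points over the given field."""
    return curve.base_extend(field).abelian_group().invariants()

def anomalous_primes(bound):
    bad = 2 * E.discriminant() * Ep.discriminant()
    found = []
    for p in primes(3, bound):
        if bad % p == 0:
            continue                      # skip primes dividing the discriminants
        Er, Epr = E.reduction(p), Ep.reduction(p)
        if Er.abelian_group().invariants() != Epr.abelian_group().invariants():
            continue                      # groups already differ over F_p
        k2 = GF(p**2, 't')
        if invariants(Er, k2) != invariants(Epr, k2):
            found.append(p)               # isomorphic over F_p, not over F_{p^2}
    return found

print(anomalous_primes(1000))
```

Comparing the full group invariants, rather than only the group orders, is exactly what separates an anomalous prime from an ordinary prime of good reduction.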
### Organization of the Paper In the next section we review background on elliptic curves over finite fields, in particular the relationship between the endomorphism ring and rational points. We also recall the relevant history of this problem as well as the results in [6] and [7] that are applicable in this context. As a rough guide to the results, the main point of Section 3 is to determine the structure of the 2-Sylow subgroups of \(E(\mathbf{F}_{p})\) and \(E^{\prime}(\mathbf{F}_{p})\) at anomalous primes. This leads to the notion of the defect of an anomalous prime. In Section 5 we prove Theorem 1.3.4 by filtering the anomalous primes by defect, determining the exact proportion of each defect, and then summing over all defects. We determine the exact proportion of each defect by re-interpreting the criteria of Section 3 for a prime to be anomalous in terms of matrix conditions in the 2-adic representations attached to \(E\) and \(E^{\prime}\). Following this, we interpret the defect of an anomalous prime as determining where on the isogeny volcano the pair \((E,E^{\prime})\) lies, and give numerical data suggesting a finer relationship between anomalous primes and endomorphism rings. Section 7 is dedicated to future work. In particular, we contextualize the results of the present paper within the goals of a follow-up paper in which we explore the range of values of \(\mathcal{P}\) that can occur for elliptic curves over \(\mathbf{Q}\), including CM curves. ### Databases We use two online databases in this work: the \(L\)-Functions and Modular Forms Database (LMFDB), and the classification of 2-adic images of Galois representations attached to elliptic curves over \(\mathbf{Q}\), due to Rouse and Zureick-Brown (RZB), based on the paper [22]. Whenever we use an entry in the database, such as an isogeny class or elliptic curve in the LMFDB, or a modular curve in the RZB database, we link to that entry in the database. ### Notation We will explain any specialized notation in the main body of the paper, but we remind the reader of some standard conventions. If \(k\) is a field and \(k^{s}\) a separable closure of \(k\), then we write \(\operatorname{Gal}_{k}\) for the Galois group of \(k^{s}/k\). If \(E\) is an elliptic curve over \(k\) and \(\ell\) is a prime number, then we write \(T_{\ell}E\) for the \(\ell\)-adic Tate module of \(E\) and \[\rho_{E,\ell}:\operatorname{Gal}_{k}\to\operatorname{Aut}(T_{\ell}E),\text{ and }\] \[\overline{\rho}_{E,\ell^{n}}:\operatorname{Gal}_{k}\to\operatorname{Aut}(T_{\ell}E\otimes\mathbf{Z}/\ell^{n}\mathbf{Z})\] for the \(\ell\)-adic and mod \(\ell^{n}\) representations of \(E\), respectively. If \(G\subseteq\operatorname{GL}_{2}(\mathbf{Z}_{\ell})\) is the image of the \(\ell\)-adic representation, then we write \(G(\ell^{n})\subseteq\operatorname{GL}_{2}(\mathbf{Z}/\ell^{n}\mathbf{Z})\) for its reduction modulo \(\ell^{n}\). If \(R\) is a ring, then we write \(M_{n}(R)\) for the ring of \(n\times n\) matrices with entries in \(R\). Finally, if \(p\) is a prime number, then we write \(v_{p}:\mathbf{Q}^{\times}\to\mathbf{Z}\) for the \(p\)-adic valuation. ### Acknowledgments We would like to thank Andrew Sutherland for supplying us with the initial computations that suggested the correct value of \(\mathcal{P}\). The second author was supported by NSF Grants DMS 1802281 and DMS 2154223. ## 2.
Elliptic Curves over Finite Fields ### Endomorphism Rings and Rational Points Let \(q\) be a power of an odd prime \(p\) and let \(E\) be an ordinary elliptic curve defined over \(\mathbf{F}_{q}\); we will address supersingular curves in Section 2.3. Since \(E\) is ordinary, its endomorphism ring \(\operatorname{End}(E)\) is isomorphic to an order \(\mathcal{O}\) in an imaginary quadratic field \(K=\mathbf{Q}(\sqrt{D})\) for a squarefree negative integer \(D\), and all endomorphisms of \(E\) are defined over \(\mathbf{F}_{q}\). Let \(\mathcal{O}_{K}\) denote the ring of integers of \(K\). Write \(d_{K}\) for the discriminant of \(\mathcal{O}_{K}\), the maximal order of \(K\). Then \[d_{K}=\begin{cases}4D&\text{ if }D\equiv 2,3\pmod{4}\\ D&\text{ if }D\equiv 1\pmod{4}.\end{cases}\] Recall that if \(g\) is a positive integer, then we denote by \(\mathcal{O}_{g}:=\mathbf{Z}\oplus\mathbf{Z}g\omega\) the order of conductor \(g\) in \(\mathcal{O}_{K}\), where \[\omega=\begin{cases}\sqrt{D}&\text{ if }D\equiv 2,3\pmod{4}\\ (1+\sqrt{D})/2&\text{ if }D\equiv 1\pmod{4}.\end{cases}\] We may therefore write \(\mathbf{Z}[\pi]=\mathcal{O}_{f}\) and \(\mathcal{O}_{K}=\mathcal{O}_{1}\). Since \(\operatorname{End}(E)=\mathcal{O}\) contains \(\mathbf{Z}[\pi]\), we may write \(\mathcal{O}=\mathcal{O}_{g}\) for some \(g\mid f\) with \[\mathcal{O}_{f}\subseteq\mathcal{O}_{g}\subseteq\mathcal{O}_{1}.\] If \(\Delta_{g}\) denotes the discriminant of \(\mathcal{O}_{g}\), then \(\Delta_{g}=g^{2}d_{K}\). Identifying \(\operatorname{End}(E)\) with an order in \(\mathcal{O}_{K}\), we may write the Frobenius endomorphism \(\pi\in\operatorname{End}(E)\) explicitly as an element of \(\mathcal{O}_{K}\). We now review how to do this. Recall the well-known formulas relating the cardinality of \(E(\mathbf{F}_{q})\), the fundamental discriminant of \(K\), and the trace \(t\) of \(\pi\): \[\#E(\mathbf{F}_{q}) =1+q-t \tag{2.1.2}\] \[4q =t^{2}-\beta^{2}\Delta_{g}, \tag{2.1.1}\] where \(t\) is the trace of Frobenius, \(\beta\) is a positive integer, and \(\Delta_{g}=g^{2}d_{K}\), as above. Then \(\pi\) has a unique integral representation \(\pi=a+b\omega\in\mathbf{Z}[\omega]\) given by \[a =\begin{cases}t/2&\text{ if }D\equiv 2,3\pmod{4}\\ (t-\beta g)/2&\text{ if }D\equiv 1\pmod{4},\end{cases}\] \[b =\beta g.\] We also recall a fundamental result of Lenstra [17], which gives the structure of \(E(\mathbf{F}_{q^{m}})\) for all positive integers \(m\): \[E(\mathbf{F}_{q^{m}})\simeq\frac{\mathcal{O}}{(\pi^{m}-1)}. \tag{2.1.3}\] ### Isogenies Keeping with the notation above, suppose that \(E\) and \(E^{\prime}\) are isogenous (ordinary) elliptic curves defined over \(\mathbf{F}_{q}\). Then the groups \(E(\mathbf{F}_{q})\) and \(E^{\prime}(\mathbf{F}_{q})\) have the same cardinality, as do the groups \(E(\mathbf{F}_{q^{m}})\) and \(E^{\prime}(\mathbf{F}_{q^{m}})\), for all positive integers \(m\). Let \(\ell\neq p\) be a prime number. If \(E\) and \(E^{\prime}\) have endomorphism rings \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\), respectively, and are \(\ell\)-isogenous, then by a result of Kohel [15, Prop. 21] we have \[[\mathcal{O}:\mathcal{O}^{\prime}]=\ell,\ell^{-1},\text{ or }1.\] In the first two cases, the isogeny is called _vertical_ (ascending/descending, depending on the inclusion) and in the latter it is _horizontal_. Isogenous elliptic curves have the same trace of Frobenius. In the case of a vertical isogeny, \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\) are orders in \(\mathcal{O}_{K}\) of relative index \(\ell\).
We explain what happens when \(\mathcal{O}^{\prime}\subseteq\mathcal{O}\). (There is a completely analogous setup when \(\mathcal{O}\subseteq\mathcal{O}^{\prime}\).) There exist divisors \(g\) and \(g^{\prime}\) of \(f\) such that \(g^{\prime}/g=\ell\) and \(\mathcal{O}=\mathcal{O}_{g}\), \(\mathcal{O}^{\prime}=\mathcal{O}_{g^{\prime}}\) with \[\mathbf{Z}[\pi]=\mathcal{O}_{f}\subseteq\mathcal{O}_{g^{\prime}}\subseteq\mathcal{O}_{g}\subseteq\mathcal{O}_{1}=\mathcal{O}_{K}.\] Turning to the group structures of isogenous curves, we recall that the main results of [14] and [27] give criteria for any pair of isogenous elliptic curves to have isomorphic groups of \(\mathbf{F}_{q^{m}}\)-rational points in terms of the prime divisors of the integral components of \(\pi^{m}\). We now recall some of the special notation introduced in [14] that we will adopt throughout the rest of this paper. Define a finite set of prime numbers \(\mathbf{P}\) as follows, incorporating the notation above: \[\mathbf{P}=\{p\text{ prime}\mid v_{p}(g)\neq v_{p}(g^{\prime})\}.\] For each \(p\in\mathbf{P}\) we set \[s_{p}=\max\{v_{p}(g),v_{p}(g^{\prime})\},\] whence \(s_{p}\geq 1\). With this notation in place, write \[\pi^{m}=a_{m}+b_{m}\omega,\] for integers \(a_{m},b_{m}\). Finally, we recall the criterion of [14, Thm. 2.4] for \(E(\mathbf{F}_{q^{m}})\) and \(E^{\prime}(\mathbf{F}_{q^{m}})\) to be isomorphic: \[E(\mathbf{F}_{q^{m}})\simeq E^{\prime}(\mathbf{F}_{q^{m}})\Longleftrightarrow v_{p}(a_{m}-1)\leq v_{p}(b_{m})-s_{p}, \tag{2.2.1}\] for all \(p\in\mathbf{P}\). Now we specialize to the situation that is the primary focus of this paper. When the degree of the vertical isogeny \(E\to E^{\prime}\) is a prime number \(\ell\), then \(g^{\prime}/g=\ell^{\pm 1}\) and so \(\mathbf{P}=\{\ell\}\). For descending isogenies we have \(v_{\ell}(g^{\prime})=1+v_{\ell}(g)\) and for ascending isogenies we have \(v_{\ell}(g)=1+v_{\ell}(g^{\prime})\). Specializing further, we set \(\ell=2\) for the remainder of the paper. In [7, Thm. 2] the first author proved that if \(E(\mathbf{F}_{q})\simeq E^{\prime}(\mathbf{F}_{q})\) and \(E(\mathbf{F}_{q^{2}})\simeq E^{\prime}(\mathbf{F}_{q^{2}})\), then \(E(\mathbf{F}_{q^{m}})\simeq E^{\prime}(\mathbf{F}_{q^{m}})\) for all positive integers \(m\). Theorem 1.2.2 gives the precise conditions under which the second isomorphism fails, given the first. ### Supersingular Curves In the case where \(E\) and \(E^{\prime}\) are supersingular curves over \(\mathbf{F}_{p}\) the situation is (perhaps surprisingly) much simpler. We recall the following result of Wittmann. **Theorem 2.3.1** (Theorem 4.1 of [27]).: _Let \(E/\mathbf{F}_{p}\) be a supersingular elliptic curve.
Then_ \[E(\mathbf{F}_{p^{2k}})\simeq\mathbf{Z}/((-p)^{k}-1)\mathbf{Z}\times\mathbf{Z}/((-p)^{k}-1)\mathbf{Z}.\] _Further:_ * _If_ \(p\not\equiv 3\pmod{4}\) _or_ \(p\equiv 3\pmod{4}\) _and_ \(E[2]\not\subseteq E(\mathbf{F}_{p})\) _we have_ \[E(\mathbf{F}_{p^{2k+1}})\simeq\mathbf{Z}/(p^{2k+1}+1)\mathbf{Z}\text{ and }\operatorname{End}_{\mathbf{F}_{p}}(E)\simeq\mathbf{Z}[\sqrt{-p}].\] * _If_ \(p\equiv 3\pmod{4}\) _and_ \(E[2]\subseteq E(\mathbf{F}_{p})\) _we have_ \[E(\mathbf{F}_{p^{2k+1}})\simeq\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/\left(\frac{p^{2k+1}+1}{2}\right)\mathbf{Z}\text{ and }\operatorname{End}_{\mathbf{F}_{p}}(E)\simeq \mathbf{Z}[(1+\sqrt{-p})/2].\] In [7] we observed that this immediately implies that when \(E\) and \(E^{\prime}\) are supersingular, then the group structure over \(\mathbf{F}_{p}\) completely determines the group structure over any finite extension: **Corollary 2.3.2** (Corollary 1 of [7]).: _Let \(p\) be a prime. Let \(E_{1}\) and \(E_{2}\) be supersingular, isogenous elliptic curves defined over \(\mathbf{F}_{p}\). Suppose \(E_{1}(\mathbf{F}_{p})\simeq E_{2}(\mathbf{F}_{p})\). Then \(E_{1}(K)\simeq E_{2}(K)\) for every finite extension \(K/\mathbf{F}_{p}\)._ ## 3. General Properties of Anomalous Primes and Curves We retain the notation and setup of the previous sections, in particular we assume \(E\) and \(E^{\prime}\) are ordinary. We start with a general property of anomalous pairs. **Proposition 3.0.1**.: _Let \((E,E^{\prime})\) be an anomalous pair of elliptic curves defined over the finite field \(\mathbf{F}_{p}\). Then \(p\equiv 1\pmod{4}\)._ Proof.: Suppose \(p\equiv 3\pmod{4}\). We distinguish between the cases where \(|E(\mathbf{F}_{p})|\equiv 2\pmod{4}\) versus \(|E(\mathbf{F}_{p})|\equiv 0\pmod{4}\). Recall that if \((E,E^{\prime})\) is an anomalous pair then in the representation \(\pi=a+b\omega\) of Frobenius as an element of \(\mathcal{O}_{K}\), we have that \(b\) is even; write \(b=2b^{\prime}\). If \(|E(\mathbf{F}_{p})|\equiv 0\pmod{4}\), then \(t\equiv 0\pmod{4}\); write \(t=4t^{\prime}\). Since \[4p=t^{2}-b^{2}d_{K}=16(t^{\prime})^{2}-4(b^{\prime})^{2}d_{K},\] we must have \(p=4(t^{\prime})^{2}-(b^{\prime})^{2}d_{K}\). Thus \(b^{\prime}\) and \(d_{K}\) are odd. In particular, \(v_{2}(b)=1\). But since \((E,E^{\prime})\) is an anomalous pair, we have \[v_{2}(a-1)=1\leq v_{2}(b)-s_{2}=1-s_{2},\] whence \(s_{2}=0\). But this means \(\operatorname{End}(E)\simeq\operatorname{End}(E^{\prime})\), contradicting the fact that \((E,E^{\prime})\) is an anomalous pair. If \(|E(\mathbf{F}_{p})|\equiv 2\pmod{4}\), then \(t\equiv 2\pmod{4}\), so write \(t=2t^{\prime}\) with \(t^{\prime}\) odd. But then \[p=(t^{\prime})^{2}-(b^{\prime})^{2}d_{K}.\] Since \(p\equiv 3\pmod{4}\) and \((t^{\prime})^{2}\equiv 1\pmod{4}\), we must have \((b^{\prime})^{2}d_{K}\equiv 2\pmod{4}\). But this is impossible: if \(b^{\prime}\) is even then \((b^{\prime})^{2}d_{K}\equiv 0\pmod{4}\), while if \(b^{\prime}\) is odd then \((b^{\prime})^{2}d_{K}\equiv d_{K}\equiv 0\) or \(1\pmod{4}\). We conclude that if \(p\equiv 3\pmod{4}\) then \(p\) cannot be anomalous. **Lemma 3.0.2**.: _If \(|E(\mathbf{F}_{p})|\equiv 2\pmod{4}\) then \(E(\mathbf{F}_{p})\simeq E^{\prime}(\mathbf{F}_{p})\)._ Proof.: Since \(E\) and \(E^{\prime}\) are \(2\)-isogenous, the prime-to-\(2\) parts of \(E(\mathbf{F}_{p})\) and \(E^{\prime}(\mathbf{F}_{p})\) are isomorphic [6, Cor. 3]. Since each has a single point of order \(2\), the result follows by the structure theorem for finite abelian groups.
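Since the valuation conditions of Theorem 1.2.2 are used repeatedly in this section, a small utility that evaluates them may be helpful. The sketch below is plain Python; the sample input \(\pi=3+4\sqrt{-5}\) (so \(p=89\)) with \(s_{2}=1\) is our own illustrative choice of Frobenius data, not an example taken from the paper.

```python
def v2(n):
    """2-adic valuation of an integer, with v2(0) treated as +infinity."""
    if n == 0:
        return float("inf")
    n, v = abs(n), 0
    while n % 2 == 0:
        n, v = n // 2, v + 1
    return v

def quadratic_obstruction(a, b, s2):
    """Exceptional case of Theorem 1.2.2.

    For ordinary, rationally 2-isogenous curves with E(k) isomorphic to E'(k)
    and Frobenius pi = a + b*omega (a odd, b even), return True exactly when
    E(K) and E'(K) fail to be isomorphic over the quadratic extension K of k.
    """
    return v2(a - 1) == 1 and v2(a + 1) > v2(b) - s2

# Illustrative Frobenius data (our own choice): pi = 3 + 4*sqrt(-5), p = 89,
# with adjacent endomorphism orders of conductor 1 and 2, so s2 = 1.
print(quadratic_obstruction(3, 4, 1))   # True  -> the groups differ over F_{p^2}
print(quadratic_obstruction(3, 8, 1))   # False -> v2(a+1) = 2 is not > v2(b) - s2 = 2
```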
**Theorem 3.0.3**.: _If \(|E(\mathbf{F}_{p})|\equiv 2\pmod{4}\) then \(E(\mathbf{F}_{p^{2}})\simeq E^{\prime}(\mathbf{F}_{p^{2}})\)._ Proof.: If \(|E(\mathbf{F}_{p})|\equiv 2\pmod{4}\) then by Lemma 3.0.2 we have \(E(\mathbf{F}_{p})\simeq E^{\prime}(\mathbf{F}_{p})\). If, in addition, \(E(\mathbf{F}_{p^{2}})\not\simeq E^{\prime}(\mathbf{F}_{p^{2}})\), then \((E,E^{\prime})\) is anomalous whence \(p\equiv 1\pmod{4}\). Writing \(\pi=a+b\omega\) in the notation of Section 2, we have 1. \(v_{2}(a-1)=1\leq v_{2}(b)-s_{2},\ \ \text{and}\) 2. \(v_{2}(a+1)>v_{2}(b)-s_{2}.\) Since \(v_{2}(a-1)=1\), we have \(a\equiv 3\pmod{4}\). We also have \[|E(\mathbf{F}_{p})|=1+p-t\equiv 2\pmod{4}, \tag{3.0.4}\] hence \(t\equiv 0\pmod{4}\). Now we divide the argument into two cases based on \(D\pmod{4}\), where \(D\) is the squarefree integer for which \(K=\mathbf{Q}(\sqrt{D})\) is the endomorphism algebra of \(E\) (and \(E^{\prime}\)). If \(D\equiv 2,3\pmod{4}\), then \(a=t/2\) and so \(t\equiv 6\pmod{8}\), a contradiction. If \(D\equiv 1\pmod{4}\), then we first recall the inequality (1). Since \((E,E^{\prime})\) is an anomalous pair, we must have \(s_{2}\geq 1\) (otherwise, \(\operatorname{End}(E)\simeq\operatorname{End}(E^{\prime})\)), and so we conclude that \(v_{2}(b)\geq 2\). But when \(D\equiv 1\pmod{4}\), we have \(a=(t-b)/2\). Since both \(t\) and \(b\) must be divisible by \(4\), we get that \(a\) is even. This contradicts \(a\equiv 3\pmod{4}\), established above. **Corollary 3.0.5**.: _If \(|E(\mathbf{F}_{p})|\equiv 2\pmod{4}\) then \(E(\mathbf{F}_{p^{m}})\simeq E^{\prime}(\mathbf{F}_{p^{m}})\) for all positive integers \(m\)._ Proof.: This follows from [7, Thm. 2]: if \(E(\mathbf{F}_{p^{m}})\simeq E^{\prime}(\mathbf{F}_{p^{m}})\) for \(m\in\{1,2\}\), then \(E(\mathbf{F}_{p^{m}})\simeq E^{\prime}(\mathbf{F}_{p^{m}})\) for all positive integers \(m\). Therefore, _every_ pair of curves \(E,E^{\prime}\) over \(\mathbf{F}_{p}\) with \(|E(\mathbf{F}_{p})|\equiv 2\pmod{4}\) and that are rationally 2-isogenous have isomorphic Mordell-Weil groups in all finite extensions. Therefore, any anomalous pair must have \(|E(\mathbf{F}_{p})|\equiv 0\pmod{4}\) and \(p\equiv 1\pmod{4}\). Next, we define a finer notion of \((E,E^{\prime})\) being an anomalous pair. This will carry over to a refined notion of \(p\) being an anomalous prime, which will be an important topic in the following sections. Because the 2-Sylow subgroups of \(E(\mathbf{F}_{p^{2}})\) and \(E^{\prime}(\mathbf{F}_{p^{2}})\) have the same size, but are not isomorphic, we can ask how they differ. We describe this difference using the notion of defect. **Definition 3.0.6**.: _Let \(E\to E^{\prime}\) be rationally 2-isogenous elliptic curves over \(\mathbf{Q}\) and let \(p\) be an anomalous prime. 
If_ \[a =\max\{i\in\mathbf{N}\ |\ E(\mathbf{F}_{p^{2}})[2^{\infty}]\supseteq\mathbf{Z}/2^{i}\mathbf{Z}\times\mathbf{Z}/2^{i}\mathbf{Z}\},\text{ and }\] \[a^{\prime} =\max\{i\in\mathbf{N}\ |\ E^{\prime}(\mathbf{F}_{p^{2}})[2^{\infty}]\supseteq\mathbf{Z}/2^{i}\mathbf{Z}\times\mathbf{Z}/2^{i}\mathbf{Z}\},\] _then we say that \(p\) has **defect**\((a,a^{\prime})\)._ _Remark 3.0.7_.: It is a well-known property of the \(\ell\)**-isogeny volcano** (which we will recall in Section 4) that if \(E\) and \(E^{\prime}\) are \(\ell\)-isogenous elliptic curves over a finite field \(k\) and the \(\ell\)-Sylow subgroups of \(E(k)\) and \(E^{\prime}(k)\) are not isomorphic, then \(E(k)[\ell^{\infty}]\simeq\mathbf{Z}/\ell^{u}\mathbf{Z}\times\mathbf{Z}/\ell^{v}\mathbf{Z}\) and \(E^{\prime}(k)[\ell^{\infty}]\simeq\mathbf{Z}/\ell^{u-1}\mathbf{Z}\times\mathbf{Z}/\ell^{v+1}\mathbf{Z}\) or \(E^{\prime}(k)[\ell^{\infty}]\simeq\mathbf{Z}/\ell^{u+1}\mathbf{Z}\times\mathbf{Z}/\ell^{v-1}\mathbf{Z}\) for some positive integer \(u\) and nonnegative integer \(v\). Theorem 3.0.10 establishes a similar result and relates the defect of an anomalous prime to the 2-valuation of the Frobenius endomorphism. We now make an observation concerning the 2-Sylow subgroups of anomalous pairs. **Lemma 3.0.8**.: _Suppose \((E,E^{\prime})\) is an anomalous pair. Then \(E(\mathbf{F}_{p})[2^{\infty}]\simeq E^{\prime}(\mathbf{F}_{p})[2^{\infty}]\simeq\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\)._ Proof.: If \((E,E^{\prime})\) is an anomalous pair, then we must have \(p\equiv 1\pmod{4}\) and \(|E(\mathbf{F}_{p})|\equiv 0\pmod{4}\), as previously established. If neither curve has full 2-torsion defined over \(\mathbf{F}_{p}\), then the 2-Sylow subgroups of \(E(\mathbf{F}_{p})\) and \(E^{\prime}(\mathbf{F}_{p})\) are cyclic and the curves are rationally 2-isogenous. By [1, Thm. 1.2], this is not possible. This establishes that \(E(\mathbf{F}_{p})[2]\simeq E^{\prime}(\mathbf{F}_{p})[2]\simeq\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\). To see that \(E(\mathbf{F}_{p})[2^{\infty}]\simeq E^{\prime}(\mathbf{F}_{p})[2^{\infty}]\simeq\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\) as well, recall from [21, p. 742] that if \(E\) and \(E^{\prime}\) are 2-isogenous and have isomorphic group structures over \(\mathbf{F}_{p}\), then it must be the case that \(E(\mathbf{F}_{p})[2^{\infty}]\simeq E^{\prime}(\mathbf{F}_{p})[2^{\infty}]\simeq\mathbf{Z}/2^{k}\mathbf{Z}\times\mathbf{Z}/2^{k}\mathbf{Z}\) for some \(k\), hence \(|E(\mathbf{F}_{p})|=p+1-t\equiv 0\pmod{2^{k}}\). Suppose \(k>1\). Then both curves will have at least full \(2^{k+1}\)-torsion over \(\mathbf{F}_{p^{2}}\), and at least one will have full \(2^{k+2}\)-torsion (since \(E(\mathbf{F}_{p^{2}})\not\simeq E^{\prime}(\mathbf{F}_{p^{2}})\)). Therefore, \[|E(\mathbf{F}_{p^{2}})|=(p+1-t)(p+1+t)\equiv 0\pmod{2^{k+4}},\] and so \(p+1+t\equiv 0\pmod{16}\). Since \(k>1\), we have \(p+1-t\equiv 0\pmod{16}\) as well, which implies that \(t\equiv 0\pmod{8}\). But this contradicts the fact that for an anomalous pair we must have \(t\equiv 2\pmod{4}\). This completes the proof. We will apply the following result in the proof of Theorem 3.0.10. **Lemma 3.0.9**.: _Let \(E\) be an ordinary elliptic curve defined over a finite field \(\mathbf{F}_{q}\) of odd characteristic. Let \(\pi\in\operatorname{End}(E)\) be the Frobenius endomorphism.
If \(v\) is the largest integer such that \(\pi^{m}-1\) factors as \(2^{v}\alpha\) in \(\operatorname{End}(E)\), then \(E(\mathbf{F}_{q^{m}})\) has full \(2^{v}\)-torsion but not full \(2^{v+1}\)-torsion._ Proof.: By Lenstra's theorem (2.1.3) [17, Thm. 1(a)], we have \(E(\mathbf{F}_{q^{m}})\simeq\operatorname{End}(E)/(\pi^{m}-1)\). If \(\pi^{m}-1\) factors as \(2^{v}\alpha\), then clearly \(E(\mathbf{F}_{q^{m}})\) has full \(2^{v}\)-torsion. By factoring isogenies via [12, Thm. 25.1.2], any \(\mathbf{F}_{q}\)-rational endomorphism of \(E\) whose kernel contains the \(2^{v+1}\)-torsion points would have to factor as \(2^{v+1}\beta\) in \(\operatorname{End}(E)\). Thus \(E(\mathbf{F}_{q^{m}})\) has full \(2^{v}\)-torsion but not full \(2^{v+1}\)-torsion. **Theorem 3.0.10**.: _Let \(E\to E^{\prime}\) be 2-isogenous elliptic curves over \(\mathbf{Q}\) and let \(p\) be an anomalous prime. Suppose \(\operatorname{End}(E)=\mathcal{O}_{g}\) and \(\operatorname{End}(E^{\prime})=\mathcal{O}_{g^{\prime}}\) are orders of conductor \(g\) and \(g^{\prime}\), respectively, in the imaginary quadratic ring \(\mathcal{O}_{K}=\mathbf{Z}+\mathbf{Z}\omega\); write \(\pi=a+b\omega\) with \(b=\beta g=\beta^{\prime}g^{\prime}\). Then \(p\) has defect \((m+1,m)\) or \((m,m+1)\) for some integer \(m\geq 2\), where \(m=v_{2}(\beta)\)._ Proof.: The isogeny \(E\to E^{\prime}\), initially defined over \(\mathbf{Q}\), reduces modulo \(p\) to a vertical isogeny (if the reduction were horizontal then \(\mathcal{O}_{g}=\mathcal{O}_{g^{\prime}}\) and \(p\) would not be anomalous). For the remainder of the proof we assume the isogeny is descending and will conclude that \(p\) has defect \((m+1,m)\); an identical argument for ascending isogenies would show that \(p\) has defect \((m,m+1)\). Write \(\operatorname{End}(E)=\mathcal{O}_{g}=\mathbf{Z}+g\mathbf{Z}\omega\) and \(\operatorname{End}(E^{\prime})=\mathcal{O}_{g^{\prime}}=\mathbf{Z}+g^{\prime}\mathbf{Z}\omega\). We have \(g^{\prime}=2g\) and also write \(\pi=a+b\omega\) with \(b=\beta g\) as established in Section 2.1. Since \(p\) is anomalous and since \(v_{2}(g^{\prime})=v_{2}(g)+1\), we have \[v_{2}(a-1)=1\leq v_{2}(b)-s_{2}=v_{2}(\beta)-1<v_{2}(a+1).\] Observe that \(v_{2}(\beta)\geq 2\). Now we compute \[\pi^{2}-1=\begin{cases}(a^{2}-1+b^{2}D)+2ab\omega&\text{if $d_{K}=4D$ and $D\equiv 2,3\pmod{4}$, and}\\ (a^{2}-1+b^{2}\left(\frac{D-1}{4}\right))+(2ab+b^{2})\omega&\text{if $d_{K}=D$ with $D\equiv 1\pmod{4}$.}\end{cases}\] In \(\mathcal{O}_{g}\) we can factor \[\pi^{2}-1=\begin{cases}&(a^{2}-1+\beta^{2}g^{2}D)+(2\beta)ag\omega,\text{ or}\\ &(a^{2}-1+\beta^{2}g^{2}\left(\frac{D-1}{4}\right))+2\beta(a+(\beta/2))g\omega,\end{cases}\] depending on \(d_{K}\pmod{4}\). In the first case (since \(a\) is odd) and in the second case (since \(a\) is odd and \(\beta/2\) is even), \(\pi^{2}-1\) is divisible in \(\mathcal{O}_{g}\) by \(2^{v_{2}(\beta)+1}\) and no higher power of \(2\). Similarly, in \(\mathcal{O}_{g^{\prime}}=\mathcal{O}_{2g}\), \(\pi^{2}-1\) is divisible by \(2^{v_{2}(\beta)}\) and no higher power of \(2\). By Lemma 3.0.9, \(E(\mathbf{F}_{p^{2}})\) has full \(2^{v_{2}(\beta)+1}\)-torsion (and no higher) and \(E^{\prime}(\mathbf{F}_{p^{2}})\) has full \(2^{v_{2}(\beta)}\)-torsion (and no higher). Thus \(p\) has defect \((m+1,m)\) with \(m=v_{2}(\beta)\geq 2\). In the next section we interpret anomalous primes and their defects in relation to isogeny volcanoes.
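The divisibility computation in the proof of Theorem 3.0.10 is easy to carry out numerically. The plain-Python sketch below does so in the case \(d_{K}=4D\) (so \(\omega=\sqrt{D}\)); the input \(\pi=3+4\sqrt{-5}\) over \(p=89\), with adjacent endomorphism orders of conductor \(1\) and \(2\), is an illustrative choice of ours rather than an example from the paper.

```python
def v2(n):
    """2-adic valuation of an integer, with v2(0) treated as +infinity."""
    if n == 0:
        return float("inf")
    n, v = abs(n), 0
    while n % 2 == 0:
        n, v = n // 2, v + 1
    return v

def two_divisibility_in_order(u, w, g):
    """Largest v such that 2^v divides u + w*omega in O_g = Z + Z*g*omega.

    Requires g | w: the element u + w*omega equals u + (w/g)*(g*omega), so it is
    divisible by 2^v in O_g exactly when 2^v | u and 2^v | (w/g).
    """
    assert w % g == 0
    return min(v2(u), v2(w) - v2(g))

# Illustrative data (our choice): pi = 3 + 4*sqrt(-5), so p = 3^2 + 4^2*5 = 89,
# trace t = 6, D = -5 (d_K = -20).  Take the adjacent orders O_1 and O_2.
a, b, D = 3, 4, -5
u = a * a + b * b * D - 1     # "integer" part of pi^2 - 1 when omega = sqrt(D)
w = 2 * a * b                 # omega-coefficient of pi^2 - 1

for g in (1, 2):
    v = two_divisibility_in_order(u, w, g)
    print(f"conductor {g}: 2-divisibility of pi^2 - 1 is {v}")
# conductor 1 gives 3 and conductor 2 gives 2.
```

By Lemma 3.0.9 these exponents are exactly the levels of full \(2\)-power torsion of the two curves over \(\mathbf{F}_{p^{2}}\), so a pair with these endomorphism rings would have defect \((3,2)\), matching Theorem 3.0.10 with \(m=v_{2}(\beta)=2\).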
## 4. Isogeny Volcanoes of Elliptic Curves Following a brief recap of the theory of isogeny volcanoes of ordinary elliptic curves, our purpose in this section is to prove a key proposition in service of Theorems 1.3.4 and 1.3.7. We do not intend for this to be a complete treatment of the background material; we refer the reader to [25] for further details and proofs. Let \(q\) be a power of a prime \(p\) and \(E\) an ordinary elliptic curve over \(\mathbf{F}_{q}\). Let \(V_{q}\) be the connected component of the \(2\)-isogeny graph (volcano) containing \(E\). Then \(V_{q}\) is a graph whose vertices correspond to elliptic curves defined over \(\mathbf{F}_{q}\) that are \(2\)-power \(\mathbf{F}_{q}\)-rationally isogenous to \(E\) and edges are \(\mathbf{F}_{q}\)-rational \(2\)-isogenies. Thus, in our setup, \(E\) and \(E^{\prime}\) represent adjacent vertices on the graph \(V_{p}\); note that \(V_{p}\) is a subgraph of \(V_{p^{2}}\). Let \(T\) denote the trace of Frobenius over \(\mathbf{F}_{q}\), let \(\mathsf{sqf}(m)\) denote the squarefree part of an integer \(m\), and let \(\mathcal{O}_{0}\) denote the endomorphism ring of an elliptic curve lying on the crater of \(V_{q}\). Let \(K=\mathbf{Q}(\sqrt{T^{2}-4q})=\mathbf{Q}(\sqrt{D})\) where \(D=\mathsf{sqf}(T^{2}-4q)\). Then \[\operatorname{disc}\mathcal{O}_{K}=\begin{cases}D&\text{ if $D\equiv 1\pmod{4}$, and}\\ 4D&\text{ if $D\equiv 2,3\pmod{4}$.}\end{cases}\] A theorem of Kohel [25, Theorem 7(5)] shows that for a \(2\)-isogeny volcano \(2\nmid[\mathcal{O}_{K}\colon\mathcal{O}_{0}]\). The _height_ of the volcano \(V_{q}\) is given by [25, Thm. 7] \[h(V_{q})=\frac{1}{2}v_{2}\left(\frac{T^{2}-4q}{\operatorname{disc}\mathcal{O}_{0}}\right)=\frac{1}{2}v_{2}\left(\frac{T^{2}-4q}{\operatorname{disc}\mathcal{O}_{K}}\right). \tag{4.0.1}\] We choose the opposite labeling of the height as defined in [25] (there it is called the **depth**) and declare the floor of the volcano to have height \(0\). In the case that \(V_{q}\) consists of an isolated vertex, we set \(h(V_{q})=0\). The subgraph of vertices at level \(h(V_{q})\) is called the **crater** of the volcano. This labeling is more convenient for interpreting the defect of an anomalous prime in terms of the location of \(E\) and \(E^{\prime}\). The endomorphism rings of the elliptic curves at the same level of the volcano are isomorphic, hence the \(2\)-Sylow subgroups at the same level are isomorphic. Elliptic curves on the floor of a volcano have cyclic 2-Sylow subgroups [25, §3], say of order \(2^{\nu}\). Then, for each \(0<m\leq h_{\rm stab}\), the 2-Sylow subgroup at height \(m\) is \({\bf Z}/2^{m}{\bf Z}\times{\bf Z}/2^{\nu-m}{\bf Z}\). If \(h_{\rm stab}<h(V_{p})\) then the volcano is called **irregular** and \(h_{\rm stab}\) is called the **stability level** [21, p. 742]. By [21, §4], all curves between the stability level and the crater have isomorphic 2-Sylow subgroups. We refer to the levels of the volcano between the stability level and the crater as the **stability zone**. **Lemma 4.0.2**.: _Let \(E\) and \(E^{\prime}\) be 2-isogenous elliptic curves defined over \({\bf F}_{p}\). Let \(V_{p}\) be the isogeny volcano which contains \(E\) and \(E^{\prime}\) as adjacent vertices; let \(V_{p^{2}}\) be the isogeny volcano over \({\bf F}_{p^{2}}\). Suppose \(t\equiv 2\pmod{4}\). Then \(h(V_{p^{2}})=h(V_{p})+1\)._ Proof.: Let \(t\) be the trace of \(\pi_{E}\) and \(T\) the trace of \(\pi_{E}^{2}\). By assumption \(v_{2}(t)=1\).
We have \(T=t^{2}-2p\) since \(|E({\bf F}_{p^{2}})|=(p+1-t)(p+1+t)\). Then \[h(V_{p^{2}})=\frac{1}{2}v_{2}\left(\frac{T^{2}-4p^{2}}{\operatorname{disc}\mathcal{O}_{0}}\right)=\frac{1}{2}v_{2}\left(\frac{(T-2p)(T+2p)}{\operatorname{disc}\mathcal{O}_{0}}\right)=\frac{1}{2}v_{2}\left(\frac{t^{2}-4p}{\operatorname{disc}\mathcal{O}_{0}}\,t^{2}\right)=h(V_{p})+1.\] _Remark 4.0.3_.: The hypothesis that \(t\equiv 2\pmod{4}\) means that this lemma will be applicable to the case of anomalous pairs of elliptic curves. **Proposition 4.0.4**.: _Let \(E\) and \(E^{\prime}\) be 2-isogenous elliptic curves defined over a finite field \({\bf F}_{p}\) and suppose \((E,E^{\prime})\) is an anomalous pair. Then:_ * \(V_{p}\) _is irregular, and_ * \(E\) _and_ \(E^{\prime}\) _represent adjacent vertices on_ \(V_{p}\) _in the stability zone, and_ * \(E\) _and_ \(E^{\prime}\) _do not both lie in the stability zone on_ \(V_{p^{2}}\)_._ Proof.: This is just a matter of terminology. Since \((E,E^{\prime})\) is an anomalous pair, they are vertically isogenous. By Lemma 3.0.8, we have \(E({\bf F}_{p})[2^{\infty}]\simeq E^{\prime}({\bf F}_{p})[2^{\infty}]\simeq{\bf Z}/2{\bf Z}\times{\bf Z}/2{\bf Z}\), hence neither curve lies on the floor of the volcano \(V_{p}\). Since the 2-Sylow subgroups are isomorphic, \(V_{p}\) is an irregular volcano and the curves must lie in the stability zone. However, over \({\bf F}_{p^{2}}\) the 2-Sylow subgroups are not isomorphic, hence at least one curve lies outside the stability zone. Note that since \(\operatorname{disc}\mathcal{O}_{0}=\operatorname{disc}\mathcal{O}_{K}[\mathcal{O}_{K}\colon\mathcal{O}_{0}]^{2}\) and \([\mathcal{O}_{K}\colon\mathcal{O}_{0}]\) is odd, we have that \(\operatorname{disc}\mathcal{O}_{0}\equiv\operatorname{disc}\mathcal{O}_{K}\pmod{8}\). Turning now to the endomorphism rings, we distinguish between the congruence classes \(\operatorname{disc}\mathcal{O}_{0}\equiv 0,1,4,5\pmod{8}\). In these cases, the shape of the crater corresponds to the discriminant in the following way, as established by [25, Thm. 7].
When \(\operatorname{disc}\mathcal{O}_{0}\equiv 0\pmod{4}\) or \(\operatorname{disc}\mathcal{O}_{0}\equiv 5\pmod{8}\) then the volcanoes have the corresponding shapes. [Figures of the possible volcano shapes omitted.] _Remark 4.0.5_.: Observe that when \(\operatorname{disc}\mathcal{O}_{0}\equiv 5\pmod{8}\) and \(E\) is on the crater, then all \(2\)-isogenies from \(E\) are descending. We now discuss some aspects of the volcano \(V_{q}\) in terms of a matrix representation of Frobenius. We will use this material in the proof of Theorem 5.2.4. We continue with the notation from earlier in this section. If \(p\) is a prime number, then the Frobenius endomorphism at \(p\) has a representative conjugacy class in \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\) via the \(2\)-adic representation. Let \(F\in\operatorname{GL}_{2}(\mathbf{Z}_{2})\) be a matrix in this conjugacy class. For any positive integer \(k\), we have \(\det(F)\equiv q\pmod{2^{k}}\). We note that a unit in \(\mathbf{Z}_{2}\) is a square in \(\mathbf{Z}_{2}\) if and only if it is \(1\) modulo \(8\).
Therefore, it still makes sense to take \(\mathsf{sqf}(\alpha)\pmod{8}\) for \(\alpha\in\mathbf{Z}_{2}\). Suppose \(F=-I+2^{m}M\) where \(m\geq 2\) and \(M=(\begin{smallmatrix}x&y\\ z&w\end{smallmatrix})\in\operatorname{M}_{2}(\mathbf{Z}_{2})\). This implies \[q\equiv(-1+2^{m}x)(-1+2^{m}w)-2^{2m}yz\equiv 1-2^{m}(x+w)-2^{2m}(yz-xw)\pmod{2^{k}},\] and \[t^{2}-4q \equiv (-2+2^{m}(x+w))^{2}-4\left((-1+2^{m}x)(-1+2^{m}w)-2^{2m}yz\right)\pmod{2^{k}}\] \[\equiv 2^{2m}\left((x-w)^{2}+4yz\right)\pmod{2^{k}}.\] Moreover, \[\mathsf{sqf}(t^{2}-4q)\equiv\mathsf{sqf}\left((x-w)^{2}+4yz\right)\pmod{8}.\] Therefore, we have the following: 1. \(v_{2}(\operatorname{disc}\mathcal{O}_{0})\) is determined by \(\mathsf{sqf}\left((x-w)^{2}+4yz\right)\pmod{8}\), and 2. \(h(V_{q})\) is determined by * \(v_{2}\left((x-w)^{2}+4yz\right)\), and * \(\mathsf{sqf}\left((x-w)^{2}+4yz\right)\pmod{8}\). ## 5. Elliptic Curves over \(\mathbf{Q}\) We now turn to the proof of Theorem 1.3.4. Let \(E,E^{\prime}\) be rationally \(2\)-isogenous elliptic curves defined over \(\mathbf{Q}\). Because the \(2\)-isogeny is defined over \(\mathbf{Q}\), each curve has at least a rational \(2\)-torsion point. The exact proportion of anomalous primes is determined by the images of the \(2\)-adic representations of \(E\) and \(E^{\prime}\), as we will see below. For the remainder of this section we will assume that both \(G:=\operatorname{im}\rho_{E,2}\) and \(G^{\prime}:=\operatorname{im}\rho_{E^{\prime},2}\) have index \(3\) in \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\). Up to isomorphism, \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\) has a unique subgroup of index \(3\). ### Frobenius at Anomalous Primes In this section we will describe the conjugacy class in \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\) associated to Frobenius at an anomalous prime \(p\). If \(p\) is anomalous then both \(E\) and \(E^{\prime}\) have \(E(\mathbf{F}_{p})[2^{\infty}]\simeq E^{\prime}(\mathbf{F}_{p})[2^{\infty}]\simeq\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\) by Lemma 3.0.8. Write \(F\) and \(F^{\prime}\) for matrix representatives of the Frobenius classes of \(E\) and \(E^{\prime}\), respectively, as elements of \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\). It follows that \[F\equiv F^{\prime}\equiv I\pmod{2}\] and that neither \(F\pmod{4}\) nor \(F^{\prime}\pmod{4}\) fixes a cyclic subgroup of \(\mathbf{Z}/4\mathbf{Z}\times\mathbf{Z}/4\mathbf{Z}\) of order \(4\). Since anomalous primes can be partitioned by defect as in Theorem 3.0.10, let us fix \(m\geq 2\) and suppose that \(p\) has defect \((m+1,m)\). In particular, we assume that the isogeny \(E\to E^{\prime}\) is descending. Then we have \[E(\mathbf{F}_{p^{2}})[2^{\infty}]\simeq\mathbf{Z}/2^{a}\mathbf{Z}\times\mathbf{Z}/2^{m+1}\mathbf{Z}\qquad\text{ and }\qquad E^{\prime}(\mathbf{F}_{p^{2}})[2^{\infty}]\simeq\mathbf{Z}/2^{a+1}\mathbf{Z}\times\mathbf{Z}/2^{m}\mathbf{Z},\] where \(a\geq m+1\). Therefore \[F^{2} \equiv I\pmod{2^{m+1}}\text{ but }F^{2}\not\equiv I\pmod{2^{m+2}},\text{ and }\] \[(F^{\prime})^{2} \equiv I\pmod{2^{m}}\text{ but }(F^{\prime})^{2}\not\equiv I\pmod{2^{m+1}}.\] We are thus led to the problem of determining, for fixed \(m\geq 2\), matrices \(A\in\operatorname{GL}_{2}(\mathbf{Z}_{2})\) such that the following are simultaneously satisfied * \(A\equiv I\pmod{2}\), and * \(A\pmod{4}\) does not fix any cyclic subgroup of \(\mathbf{Z}/4\mathbf{Z}\times\mathbf{Z}/4\mathbf{Z}\) of order \(4\), and * \(A^{2}\equiv I\pmod{2^{m+1}}\) but \(A^{2}\not\equiv I\pmod{2^{m+2}}\). (A brute-force check of the smallest case is sketched below.)
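The matrix exercise invoked in the next sentence can be verified by brute force for the smallest case \(m=2\). The plain-Python sketch below enumerates matrices modulo \(2^{m+2}=16\) satisfying the three conditions, reading "fixes a cyclic subgroup of order \(4\)" as fixing a point of order \(4\) pointwise (the condition corresponding to a rational point of order \(4\)), and confirms that every solution has the shape \(-I+2^{m}M\) with \(M\not\equiv 0\pmod{2}\).

```python
from itertools import product

m = 2
MOD = 2 ** (m + 2)          # work modulo 16

def matmul(A, B, mod):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g) % mod, (a*f + b*h) % mod), ((c*e + d*g) % mod, (c*f + d*h) % mod)

def congruent(A, B, mod):
    return all((A[i][j] - B[i][j]) % mod == 0 for i in range(2) for j in range(2))

def fixes_point_of_order_four(A):
    """Does A fix some vector of order 4 in (Z/4Z)^2 pointwise?"""
    (a, b), (c, d) = A
    for v in product(range(4), repeat=2):
        if v[0] % 2 == 0 and v[1] % 2 == 0:
            continue    # such v has order at most 2
        if ((a*v[0] + b*v[1]) % 4, (c*v[0] + d*v[1]) % 4) == v:
            return True
    return False

I2 = ((1, 0), (0, 1))
negI = ((MOD - 1, 0), (0, MOD - 1))

solutions = []
for x in product(range(MOD), repeat=4):
    A = ((x[0], x[1]), (x[2], x[3]))
    if not congruent(A, I2, 2):
        continue                                 # condition: A = I mod 2
    if fixes_point_of_order_four(A):
        continue                                 # condition: no fixed point of order 4 mod 4
    A2 = matmul(A, A, MOD)
    if not congruent(A2, I2, 2 ** (m + 1)) or congruent(A2, I2, 2 ** (m + 2)):
        continue                                 # condition: A^2 = I mod 8 but not mod 16
    solutions.append(A)

assert all(congruent(A, negI, 2 ** m) and not congruent(A, negI, 2 ** (m + 1))
           for A in solutions)
print(len(solutions), "matrices modulo", MOD, "satisfy all three conditions")
```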
It is now an exercise in squaring matrices (which we omit) to conclude that there exist matrices \(M,M^{\prime}\in\operatorname{M}_{2}(\mathbf{Z}_{2})\) such that neither \(M\) nor \(M^{\prime}\) is \(\equiv 0\pmod{2}\) and that \(F\) and \(F^{\prime}\) are, up to conjugation, given by \[F =-I+2^{m}M\] \[F^{\prime} =-I+2^{m-1}M^{\prime}.\] We finish this subsection by collecting some known results on the Galois theory of torsion point fields and their consequences for anomalous primes. The important point is that if \(k\) is a number field and \(E/k\) is an elliptic curve for which \(k(E[\ell^{n}])/k\) has Galois group \(\operatorname{GL}_{2}(\mathbf{Z}/\ell^{n}\mathbf{Z})\), then the normal subgroup \(\{\pm I\}\) of \(\operatorname{GL}_{2}(\mathbf{Z}/\ell^{n}\mathbf{Z})\) is the Galois group of \(k(E[\ell^{n}])/k(x(E[\ell^{n}]))\), with clear implications for the Frobenius at anomalous primes. **Proposition 5.1.1**.: _Let \(k\) be a number field and \(E/k\) an elliptic curve. Let \(\ell\) be a prime number and \(n\geq 1\) an integer. Let \(k(E[\ell^{n}])\) be the \(\ell^{n}\)-torsion field of \(E\) and \(k(x(E[\ell^{n}]))\) the subfield generated by the \(x\)-coordinates of the points of \(E[\ell^{n}]\). Let \(G(\ell^{n})=\operatorname{im}\overline{\rho}_{E,\ell^{n}}\subseteq\operatorname {GL}_{2}(\mathbf{Z}/\ell^{n}\mathbf{Z})\) be the image of the mod \(\ell^{n}\) representation. Then \([k(E[\ell^{n}]):k(x(E[\ell^{n}]))]\leq 2\) with \(\operatorname{Gal}(k(E[\ell^{n}])/k(x(E[\ell^{n}])))\simeq G(\ell^{n})\cap\{ \pm I\}\)._ Proof.: This is contained in [2, Ch. 5]; see especially Figs. 5.4, 5.5, 5.7. **Lemma 5.1.2**.: _Let \(E\) be an elliptic curve over \(\mathbf{Q}\) and suppose \(p\neq 2\) is a good prime for \(E\). Let \(K_{2^{n}}=\mathbf{Q}(E[2^{n}])\) with Galois group \(\operatorname{Gal}(K_{2^{n}}/\mathbf{Q})\simeq G(2^{n})\subseteq\operatorname{ GL}_{2}(\mathbf{Z}/2^{n}\mathbf{Z})\). Suppose \(\operatorname{Frob}_{p}\in\operatorname{Gal}(K_{2^{n}}/\mathbf{Q})\) is a lift of the Frobenius automorphism at \(p\) (so that the decomposition group of \(K_{2^{n}}\) is generated by \(\operatorname{Frob}_{p}\)) and suppose that \(\overline{\rho}_{E,2^{n}}(\operatorname{Frob}_{p})=F=-I\in G(2^{n})\). Then \(\mathbf{F}_{p}(x(E[2^{n}]))=\mathbf{F}_{p}\) and \(\mathbf{F}_{p}\) contains no \(y\)-coordinate of any \(2^{n}\)-torsion point of \(E\)._ Proof.: This is a matter of translating the arithmetic of elliptic curves into the Galois theory of torsion point fields and the behavior of Frobenius at unramified primes. In particular, it is the "reduction modulo \(p\)" of Proposition 5.1.1. Since \(p\) is an odd prime of good reduction for \(E\) it is unramified in \(K_{2^{n}}\), hence we can appeal to the explicit polynomial descriptions in [2, Table 5.1]. Let \(K_{2^{n}}\) be the splitting field of the polynomial \(T_{2^{n}}(x)\) and \(\mathbf{Q}(x(E[2^{n}]))\) the splitting field of \(\Lambda_{2^{n}}(x)\). In general, the field extension \(K_{2^{n}}/\mathbf{Q}(x(E[2^{n}]))\) has degree \(1\) or \(2\), depending on whether \(\mathbf{Q}(x(E[2^{n}]))\) contains any \(y\)-coordinates of any \(2^{n}\)-torsion points (note that if \(G(2^{n})=\operatorname{GL}_{2}(\mathbf{Z}/2^{n}\mathbf{Z})\) then the extension has degree \(2\)). 
We have \(\operatorname{Gal}(K_{2^{n}}/\mathbf{Q}(x(E[2^{n}])))\simeq\{\pm I\}\cap G(2^{n})\) by Proposition 5.1.1, that \(\Lambda_{2^{n}}(x)\) splits completely in \(\mathbf{Q}(x(E[2^{n}]))\), and that \(K_{2^{n}}\) is generated over \(\mathbf{Q}(x(E[2^{n}]))\) by a single \(y\)-coordinate of a single \(2^{n}\)-torsion point (see [2, p. 74]). The Galois theory of number fields then says that either \(T_{2^{n}}(x)\) splits completely over \(\mathbf{Q}(x(E[2^{n}]))\) or factors as a product of irreducible quadratic polynomials, each of them Galois-conjugate. In either case, the Galois group \(\operatorname{Gal}(K_{2^{n}}/\mathbf{Q}(x(E[2^{n}])))\) is the decomposition group at \(p\), which is isomorphic to \(\langle\operatorname{Frob}_{p}\rangle\). The hypothesis that \(F\equiv-I\pmod{2^{n}}\) means that the polynomial \(\Lambda_{2^{n}}(x)\) splits completely modulo \(p\), hence \(\mathbf{F}_{p}(x(E[2^{n}]))=\mathbf{F}_{p}\). The fact that Frobenius is non-trivial implies that \(\mathbf{F}_{p}(E[2^{n}])\) is a quadratic extension of \(\mathbf{F}_{p}\), hence contains no \(y\)-coordinate of any \(2^{n}\)-torsion point of \(E\). Next, we recall a basic fact about towers of torsion fields. **Theorem 5.1.3**.: _Let \(k\) be a field, \(\ell\) a prime number, and \(E/k\) an elliptic curve. Then we have the following inclusions of fields for all \(n\geq 1\):_ \[k\subseteq k(x(E[\ell^{n}]))\subseteq k(x(E[\ell^{n+1}]))\qquad\text{and}\qquad k(x(E[\ell^{n}]))\subseteq k(E[\ell^{n}])\subseteq k(E[\ell^{n+1}]).\] **Corollary 5.1.4**.: _With all notation as above, suppose \(E/\mathbf{F}_{p}\) is an elliptic curve such that \(\mathbf{F}_{p}(x(E[2^{n}]))=\mathbf{F}_{p}\). Then \(\mathbf{F}_{p}(x(E[2^{k}]))=\mathbf{F}_{p}\) for all \(k\leq n\)._ Proof.: This follows immediately. _Remark 5.1.5_.: One can see this from a representation theory point of view too: if \(F\equiv-I\pmod{2^{n}}\), then \(F\equiv-I\pmod{2^{k}}\) for all \(k\leq n\) as well. The next proposition shows that over a finite field \(\mathbf{F}_{p}\), if \(E\to E^{\prime}\) is descending and \(F\equiv-I\pmod{2^{m}}\) then we automatically get that \(F^{\prime}\equiv-I\pmod{2^{m-1}}\). This does not immediately imply that \(p\) is anomalous because it could further be the case that \(F^{\prime}\equiv-I\pmod{2^{m}}\). This will be used in the proof of Theorem 5.2.1 below where we argue that \(F^{\prime}\equiv-I\pmod{2^{m}}\) for half of the primes for which \(F\equiv-I\pmod{2^{m}}\) and \(F^{\prime}\equiv-I\pmod{2^{m-1}}\) for the other half. **Proposition 5.1.6**.: _Let \(E\) and \(E^{\prime}\) be ordinary 2-isogenous elliptic curves defined over \(\mathbf{F}_{p}\) and suppose that the isogeny \(E\to E^{\prime}\) is descending. Suppose \(E(\mathbf{F}_{p})[2^{\infty}]\simeq E^{\prime}(\mathbf{F}_{p})[2^{\infty}]\simeq\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\) and that \(F\equiv-I\pmod{2^{m}}\). Then \(F^{\prime}\equiv-I\pmod{2^{m-1}}\)._ Proof.: Since \(F\equiv-I\pmod{2^{m}}\), we have \(E(\mathbf{F}_{p})[2^{\infty}]\simeq\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\) and \(E(\mathbf{F}_{p^{2}})[2^{m+1}]\simeq\mathbf{Z}/2^{m+1}\mathbf{Z}\times\mathbf{Z}/2^{m+1}\mathbf{Z}\). Since \(E\) and \(E^{\prime}\) are isogenous, the groups \(E(\mathbf{F}_{p^{2}})\) and \(E^{\prime}(\mathbf{F}_{p^{2}})\) have the same size, hence their 2-Sylow subgroups have the same size. If the 2-Sylow subgroups over \(\mathbf{F}_{p^{2}}\) are isomorphic, then \((F^{\prime})^{2}\equiv I\pmod{2^{m+1}}\). It is also the case that \(F^{\prime}\equiv I\pmod{2}\) and \(F^{\prime}\) does not fix a cyclic subgroup of \(\mathbf{Z}/4\mathbf{Z}\times\mathbf{Z}/4\mathbf{Z}\) of order 4.
A calculation with matrices shows that \(F^{\prime}=-I\in\operatorname{GL}_{2}(\mathbf{Z}/2^{m}\mathbf{Z})\) is the unique matrix satisfying these conditions simultaneously. Thus, \(F^{\prime}\equiv-I\pmod{2^{m}}\). Hence it is also true that \(F^{\prime}\equiv-I\pmod{2^{m-1}}\). If the 2-Sylow subgroups over \(\mathbf{F}_{p^{2}}\) are not isomorphic, then because the isogeny is descending we have \(E(\mathbf{F}_{p^{2}})[2^{m}]\simeq\mathbf{Z}/2^{m}\mathbf{Z}\times\mathbf{Z}/2^{m}\mathbf{Z}\). Hence \(F^{\prime}\) is a matrix such that \(F^{\prime}\equiv I\pmod{2}\), does not stabilize a cyclic subgroup of \(\mathbf{Z}/4\mathbf{Z}\times\mathbf{Z}/4\mathbf{Z}\) of order 4, and satisfies \((F^{\prime})^{2}\equiv I\pmod{2^{m}}\). Therefore \(F^{\prime}\equiv-I\pmod{2^{m-1}}\) by the same reasoning. _Remark 5.1.7_.: This proposition tells us that if \(\mathbf{F}_{p}(x(E[2^{n}]))=\mathbf{F}_{p}\) then \(\mathbf{F}_{p}(x(E^{\prime}[2^{n-1}]))=\mathbf{F}_{p}\). To finish off this section we will record a technical lemma that we will need in the proof of Theorem 5.2.1 below. **Lemma 5.1.8**.: _Let \(E\) be an elliptic curve over a field \(k\) of characteristic \(p>3\) and write_ \[E:\ y^{2}=x^{3}+ax+b.\] _Suppose that \(k\) contains \(x(E[2^{n}])\). Let \(P\) be a point of order \(2^{n+1}\), write \(2P=(\xi,\eta)\), and let \(\langle P\rangle\) denote the cyclic subgroup of \(E[2^{n+1}]\) generated by \(P\). Then the set of \(x\)-coordinates of the points in \(\langle P\rangle\) is contained in \(k\) if and only if the polynomial_ \[x^{4}-4\xi x^{3}-2ax^{2}+(-4\xi a-8b)x+(a^{2}-4\xi b)\] _splits in \(k\)._ Proof.: The difference between any two points in \(\langle P\rangle\) is a point of order dividing \(2^{n}\). By Theorem 5.1.3, since \(k\) contains \(x(E[2^{n}])\), it contains the \(x\)-coordinates of all points of order dividing \(2^{n}\). Thus, one point of \(\langle P\rangle\) of exact order \(2^{n+1}\) will have rational \(x\)-coordinate if and only if they all do. Therefore, all the points of \(\langle P\rangle\) will have rational \(x\)-coordinates if and only if the points of exact order \(2^{n+1}\) do. Such a point \(P\) is a preimage under the duplication map of the point \(2P=(\xi,\eta)\) of order \(2^{n}\) in \(\langle P\rangle\), hence by [24, III.2.3(d)] the \(x\)-coordinate of \(P\) is \(k\)-rational if and only if the quartic polynomial (whose roots are the \(x\)-coordinates of these points of order \(2^{n+1}\)) \[x^{4}-4\xi x^{3}-2ax^{2}+(-4\xi a-8b)x+(a^{2}-4\xi b)\] has all its roots defined over \(k\). ### The Proportion of Anomalous Primes Fix \(m\geq 2\). The key step in proving Theorem 1.3.4 is the following. **Theorem 5.2.1**.: _Suppose \(E\) and \(E^{\prime}\) are rationally 2-isogenous elliptic curves defined over \(\mathbf{Q}\) such that \(G\) and \(G^{\prime}\) each have index 3 in \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\). Let \(m\geq 2\). Then the proportion of anomalous primes of defect \((m+1,m)\) is_ \[\frac{1}{2}\cdot\frac{1}{|G(2^{m})|}=\frac{1}{2^{4m-2}}.\] We break this proof into two steps, starting with a lemma. **Lemma 5.2.2**.: _Suppose \(p\) is a prime for which \(F\equiv-I+2^{m}M\pmod{2^{m+1}}\) with \(M\not\equiv 0\pmod{2}\). Then \(p\) is anomalous of defect \((m+1,m)\) if and only if \(\mathbf{F}_{p}(x(E^{\prime}[2^{m}]))\neq\mathbf{F}_{p}\)._ Proof.: This follows from the results of the previous section.
We have that \(p\) is anomalous of defect \((m+1,m)\) if and only if \(E(\mathbf{F}_{p})[2^{\infty}]\simeq E^{\prime}(\mathbf{F}_{p})[2^{\infty}]\simeq\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\), \(E(\mathbf{F}_{p^{2}})[2^{\infty}]\simeq\mathbf{Z}/2^{a}\mathbf{Z}\times\mathbf{Z}/2^{m+1}\mathbf{Z}\), and \(E^{\prime}(\mathbf{F}_{p^{2}})[2^{\infty}]\simeq\mathbf{Z}/2^{a+1}\mathbf{Z}\times\mathbf{Z}/2^{m}\mathbf{Z}\). By our matrix calculations, this is true if and only if \(F\equiv-I+2^{m}M\pmod{2^{m+1}}\) and \(F^{\prime}\equiv-I+2^{m-1}M^{\prime}\pmod{2^{m}}\) with neither \(M\) nor \(M^{\prime}\equiv 0\pmod{2}\). By Proposition 5.1.6, \(F\equiv-I+2^{m}M\pmod{2^{m+1}}\) implies \(F^{\prime}\equiv-I+2^{m-1}M^{\prime}\pmod{2^{m}}\). By the Galois theory of torsion point fields from Lemma 5.1.2, \(M^{\prime}\not\equiv 0\pmod{2}\) if and only if \(\mathbf{F}_{p}(x(E^{\prime}[2^{m}]))\neq\mathbf{F}_{p}\). We now make some global choices for \(E\) that we use in the next proof. Fix a basis \(P,Q\) for \(T_{2}E\) and write \(P_{2^{k}}\), \(Q_{2^{k}}\) for the reductions modulo \(2^{k}\) of \(P\) and \(Q\), respectively. Since \(E\) and \(E^{\prime}\) are rationally 2-isogenous, there exists a 2-isogeny \(\varphi:E\to E^{\prime}\) defined over \(\mathbf{Q}\). Since \(\varphi\) is defined over \(\mathbf{Q}\), we may write \(E^{\prime}=E/\langle S\rangle\) for some \(\mathbf{Q}\)-rational 2-torsion point \(S\) of \(E\). We choose our basis so that \(S=P_{2}\). Let \(P^{\prime}=\varphi(P)\) and \(Q^{\prime}=\varphi(Q)\) with \(P^{\prime}_{2^{k}}\) and \(Q^{\prime}_{2^{k}}\) defined similarly. We have that \(Q^{\prime}_{2^{k}}=Q_{2^{k}}+\langle P_{2}\rangle\) is a \(2^{k}\)-torsion point of \(E^{\prime}\), but \(P^{\prime}_{2^{k}}\) is not necessarily independent of \(Q^{\prime}_{2^{k}}\). We fix a basis \(Q^{\prime},R^{\prime}\) for \(T_{2}E^{\prime}\) so that for all \(m,\ Q^{\prime}_{2^{m}}\) and \(R^{\prime}_{2^{m}}\) form a basis for \(E^{\prime}[2^{m}]\). By applying Vélu's explicit formulas, we see that there exists a change of coordinates such that \(E\) and \(E^{\prime}\) are given by the explicit Weierstrass equations \[E:\ y^{2} =(x+a_{2})(x^{2}-4a_{4})\] \[E^{\prime}:\ y^{2} =x(x^{2}+a_{2}x+a_{4}),\] where \(P_{2}=(-a_{2},0)\) and \(P^{\prime}_{2}=(0,0)\). Write \(R^{\prime}_{2^{m-1}}=(\xi_{m-1},\eta_{m-1})\) with \(\xi_{m-1},\eta_{m-1}\in\overline{\mathbf{Q}}\). Then the \(x\)-coordinate \(\xi_{m}\) of \(R^{\prime}_{2^{m}}\) is given by one of the roots of the quartic \[x^{4}-4\xi_{m-1}x^{3}-(4\xi_{m-1}a_{2}+2a_{4})x^{2}-4\xi_{m-1}a_{4}x+a_{4}^{2}. \tag{5.2.3}\] Next we show the existence of anomalous primes of defect \((m+1,m)\) for all \(m\geq 2\). **Theorem 5.2.4**.: _Let \(E\) and \(E^{\prime}\) be rationally 2-isogenous elliptic curves over \(\mathbf{Q}\) and suppose that \(G\) and \(G^{\prime}\) each have index 3 in \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\). Then for all \(m\geq 2\) there exist anomalous primes of defect \((m+1,m)\)._ Proof.: Fix \(m\geq 2\). By the assumption on the size of \(G\) and \(G^{\prime}\) and the Chebotarev Density Theorem, there exist infinitely many primes \(p\) for which \(F\pmod{2^{m+1}}\) is in the conjugacy class of \[-I+2^{m}\begin{pmatrix}1&1\\ 1&0\end{pmatrix}.\] Let \(V_{p}\) be the isogeny volcano which contains \(E\) and \(E^{\prime}\) as adjacent vertices and let \(V_{p^{2}}\) be the corresponding volcano over \(\mathbf{F}_{p^{2}}\).
By our work in Section 4 we know that \(E\) is on level at least \(m\) of \(V_{p}\). **Claim**.: The height of \(V_{p}\) is \(m\) and \(\operatorname{disc}\mathcal{O}_{0}\equiv 5\pmod{8}\). We now prove the claim. Write \(F=-I+2^{m}\left(\begin{smallmatrix}x&y\\ z&w\end{smallmatrix}\right)\in\operatorname{GL}_{2}(\mathbf{Z}_{2})\). As in the end of Section 4, \[t^{2}-4p\equiv 2^{2m}((x-w)^{2}+4yz)\pmod{2^{2m+1}}.\] Therefore \(v_{2}(t^{2}-4p)=2m+v_{2}((x-w)^{2}+4yz)\). We have \(\mathsf{sqf}(t^{2}-4p)\equiv\mathsf{sqf}((x-w)^{2}+4yz)\pmod{8}\). If \(x-w\) is odd, then \(v_{2}((x-w)^{2}+4yz)=0\) and \(v_{2}(t^{2}-4p)=2m\). In this case it is also true that \(\mathsf{sqf}((x-w)^{2}+4yz)\equiv 5\pmod{8}\) if and only if \(yz\) is odd. If this holds, \(\operatorname{disc}\mathcal{O}_{0}\equiv 5\pmod{8}\). Consequently, since \(\operatorname{disc}\mathcal{O}_{0}\equiv 1\pmod{4}\), equation (4.0.1) shows that the height of \(V_{p}\) is \(v_{2}(t^{2}-4p)/2=m\). Now set \(\left(\begin{smallmatrix}x&y\\ z&w\end{smallmatrix}\right)=\left(\begin{smallmatrix}1&1\\ 1&0\end{smallmatrix}\right)\) as above. We saw in Section 4 that for a volcano \(V_{p}\) in which \(\operatorname{disc}\mathcal{O}_{0}\equiv 5\pmod{8}\), there is a unique vertex on the crater. Since \(E\) lies on level at least \(m\) of \(V_{p}\) and \(h(V_{p})=m\), we see that \(E\) is the unique vertex on the crater of \(V_{p}\). Returning to the proof of the theorem, we conclude from the Claim that \(V_{p^{2}}\) has height \(m+1\) and the group structure on the crater is \(\mathbf{Z}/2^{a}\mathbf{Z}\times\mathbf{Z}/2^{m+1}\mathbf{Z}\). Since a vertex on the floor of \(V_{p^{2}}\) has a cyclic group of rational points, it must be the case that the curves on each level of \(V_{p^{2}}\) have different group structures. So in particular, \(E^{\prime}(\mathbf{F}_{p^{2}})[2^{\infty}]=\mathbf{Z}/2^{a+1}\mathbf{Z}\times\mathbf{Z}/2^{m}\mathbf{Z}\). This means that \(p\) must have defect \((m+1,m)\). We now finish the proof of Theorem 5.2.1. Proof of Theorem 5.2.1.: By Lemma 5.2.2 a prime \(p\) is anomalous of defect \((m+1,m)\) if and only if \(F=-I+2^{m}M\) and \(F^{\prime}=-I+2^{m-1}M^{\prime}\), with neither \(M\) nor \(M^{\prime}\equiv 0\pmod{2}\). We will interpret our proportion \(1/2^{4m-2}\) as a conditional probability. Suppose \(p\) is a prime such that \(F=-I+2^{m}M\). By the Chebotarev Density Theorem, the proportion of such primes is \(1/|G(2^{m})|=1/2^{4m-3}\). By Proposition 5.1.6, we have that \(F^{\prime}\equiv-I\pmod{2^{m-1}}\) at these primes as well. We will show that for a proportion of \(1/2\) of these primes we have \(F^{\prime}\not\equiv-I\pmod{2^{m}}\) and so \(p\) is anomalous with defect \((m+1,m)\). Now we compute in the basis we have set above: \[F(P_{2^{m}})=-P_{2^{m}}\] \[F(Q_{2^{m}})=-Q_{2^{m}}\] so that \[F^{\prime}(Q_{2^{m}}^{\prime})=F^{\prime}(Q_{2^{m}}+\langle P_{2}\rangle)=F(Q_{2^{m}})+F(\langle P_{2}\rangle)=-Q_{2^{m}}+\langle P_{2}\rangle=-Q_{2^{m}}^{\prime}\] because \(P_{2}\) is defined over \(\mathbf{Q}\). Therefore, we have determined that \(F^{\prime}\) acts on \(E^{\prime}[2^{m}]\) via \[F^{\prime}\equiv\begin{pmatrix}-1&*\\ 0&*\end{pmatrix}\pmod{2^{m}}.\] But since \(\det(F^{\prime})\equiv p\equiv\det(F)\equiv 1\pmod{2^{m}}\), we must additionally have \(F^{\prime}\equiv\left(\begin{smallmatrix}-1&*\\ 0&-1\end{smallmatrix}\right)\pmod{2^{m}}\).
Therefore, Proposition 5.1.1 shows that in this setting \(p\) is not anomalous of defect \((m+1,m)\) if and only if all the \(x\)-coordinates of the \(2^{m}\)-torsion points of \(E^{\prime}\) are defined over \(\mathbf{F}_{p}\). To determine when this happens, we examine the quartic (5.2.3). By Propositions 5.1.1 and 5.1.6, we know that the \(x\)-coordinates of the \(2^{m-1}\)-torsion points on \(E^{\prime}\) are \(\mathbf{F}_{p}\)-rational. Any two choices of \(R^{\prime}_{2^{m}}\) differ by a \(2^{m-1}\)-torsion point. Therefore, the \(x\)-coordinate of any choice of \(R^{\prime}_{2^{m}}\) is defined over \(\mathbf{F}_{p}\) if and only if the \(x\)-coordinate of one choice of \(R^{\prime}_{2^{m}}\) is defined over \(\mathbf{F}_{p}\). With notation as above, consider the quartic polynomial given in (5.2.3). The roots of this polynomial are the \(x\)-coordinates of the preimages of \(R^{\prime}_{2^{m-1}}\) under the duplication map, all of which are \(2^{m}\)-torsion points of \(E^{\prime}\). Since the \(x\)-coordinates of all of the \(2^{m}\)-torsion points of \(E^{\prime}\) are defined over \(\mathbf{F}_{p^{2}}\), the quartic polynomial in (5.2.3) must factor over \(\mathbf{F}_{p}\) as a product of irreducible polynomials each of degree at most \(2\). In particular, it is reducible over \(\mathbf{F}_{p}\). We have shown that if the quartic (5.2.3) has one root defined over \(\mathbf{F}_{p}\), then it splits completely into linear factors over \(\mathbf{F}_{p}\). Therefore, since (5.2.3) is reducible over \(\mathbf{F}_{p}\), it factors as a product of two conjugate quadratic polynomials over \(\mathbf{F}_{p}\). If it were the case that these polynomials split into linear factors over \(\mathbf{F}_{p}\) for every \(p\), there would not exist any primes of defect \((m+1,m)\), contradicting Theorem 5.2.4. Thus they must be irreducible for \(1/2\) of the primes considered in this proof and split for the complementary primes, and so the proportion of primes of defect \((m+1,m)\) is \((1/2)\cdot(1/2^{4m-3})=1/2^{4m-2}\), as claimed. We now complete the proof of Theorem 1.3.4 as a corollary. **Corollary 5.2.5**.: _With all notation as above, we have \(\mathcal{P}=1/30\)._ Proof.: For all \(m\geq 2\), Theorem 5.2.1 shows that the proportion of anomalous primes of defect \((m+1,m)\) is \(2^{-(4m-2)}\). By symmetry via the dual isogeny, the proportion of anomalous primes of defect \((m,m+1)\) is \(2^{-(4m-2)}\) as well. Therefore, the proportion of anomalous primes \(\mathcal{P}\) is given by the geometric series \[\mathcal{P}=2\sum_{m=2}^{\infty}\frac{1}{2^{4m-2}}=\frac{1}{32}\sum_{k=0}^{\infty}\frac{1}{16^{k}}=\frac{1}{30}.\] ## 6. The Distribution of Anomalous Primes by Volcano Height In this section we take a different point of view and explore how the defect of an anomalous prime corresponds to the height and shape of the associated volcano. These results are motivated by experiments with the pair \((E,E^{\prime})\) of rationally \(2\)-isogenous elliptic curves over \(\mathbf{Q}\) where \(E\) has LMFDB label 69a2 and \(E^{\prime}\) has label 69a1. We computed the anomalous primes \(p\) up to \(2\cdot 10^{7}\) and divided them up by defect, the height of the associated volcano \(h(V_{p})\), and \(\operatorname{disc}\mathcal{O}_{0}\pmod{8}\), which determines the shape of the crater of \(V_{p}\). We include the data for anomalous primes of defect \((3,2)\) and for anomalous primes of defect \((4,3)\) in Appendix A. Let \(S_{m}\) be the set of anomalous primes of defect \((m+1,m)\).
For \(i\in\{0,1,4,5\}\) and a positive integer \(H\geq m\), let \(S_{m}(i,H)\) be the subset of \(p\in S_{m}\) for which \(\operatorname{disc}\mathcal{O}_{0}\equiv i\pmod{8}\) and \(h(V_{p})=H\). Let \(S^{\prime}_{m}(i,H)\) denote the proportion of primes in \(S_{m}\) that lie in \(S_{m}(i,H)\). The data we have collected strongly suggest the following results. **Conjecture 6.0.1**.: _Let \(E\) and \(E^{\prime}\) be rationally 2-isogenous elliptic curves over \(\mathbf{Q}\) such that \([\operatorname{GL}_{2}(\mathbf{Z}_{2}):\operatorname{im}\rho_{E,2}]=[\operatorname{GL}_{2}(\mathbf{Z}_{2}):\operatorname{im}\rho_{E^{\prime},2}]=3\). For any \(H\geq m\), we have_ \[S^{\prime}_{m}(1,H)=S^{\prime}_{m}(5,H)=4^{-(H-(m-1))}\] _and_ \[S^{\prime}_{m}(0,H)=S^{\prime}_{m}(4,H)=\frac{1}{2}\cdot 4^{-(H-(m-1))}.\] We give one quick check that this conjecture is reasonable. Since every \(p\in S_{m}\) lies in exactly one of the sets \(S_{m}(i,H)\), it must be the case that \[\sum_{i\in\{0,1,4,5\}}\sum_{H\geq m}S^{\prime}_{m}(i,H)=1.\] For \(i\in\{1,5\}\) we have \[\sum_{H\geq m}S^{\prime}_{m}(i,H)=\sum_{H\geq m}4^{-(H-(m-1))}=\frac{1}{4}\cdot\frac{1}{1-\frac{1}{4}}=\frac{1}{3}.\] For \(i\in\{0,4\}\) we have \[\sum_{H\geq m}S^{\prime}_{m}(i,H)=\sum_{H\geq m}\frac{1}{2}\cdot 4^{-(H-(m-1))}=\frac{1}{8}\cdot\frac{1}{1-\frac{1}{4}}=\frac{1}{6}.\] There are three possibilities for the shape of the crater of the volcano \(V_{p}\), depending on whether \(\operatorname{disc}\mathcal{O}_{0}\) is congruent to 1 modulo 8, 5 modulo 8, or 0 modulo 4. This calculation suggests that among the set of all anomalous primes, these three shapes are equally likely, and further that if we divide up the volcanoes of a fixed height \(H\geq m\), all three crater shapes are equally likely. Another nice consequence of this conjecture is that for any fixed \(i\in\{0,1,4,5\}\) it is clear how \(S^{\prime}_{m}(i,H)\) changes with \(H\), as it predicts that \[\frac{S^{\prime}_{m}(i,H+1)}{S^{\prime}_{m}(i,H)}=\frac{1}{4}.\] In Section 5 we saw that if \(p\) is anomalous of defect \((m+1,m)\) and \(F\in\operatorname{GL}_{2}(\mathbf{Z}_{2})\) is in the conjugacy class of Frobenius, then \[F=-I+2^{m}\left(\begin{array}{cc}x&y\\ z&w\end{array}\right)\] where \(x,y,z,w\) are not all 0 (mod 2). At the end of Section 4 we saw that \(\operatorname{disc}\mathcal{O}_{0}\pmod{8}\) is determined by \(\mathsf{sqf}((x-w)^{2}+4yz)\pmod{8}\) and that \(h(V_{p})\) is determined by both \(\operatorname{disc}\mathcal{O}_{0}\pmod{8}\) and \(v_{2}\left((x-w)^{2}+4yz\right)\). The goal of this section is to show that if the matrix \(\left(\begin{smallmatrix}x&y\\ z&w\end{smallmatrix}\right)\) were distributed like a Haar random matrix in \(\operatorname{M}_{2}(\mathbf{Z}_{2})\) subject to the additional constraint that \(v_{2}(y)=0\), we would see the behavior predicted in Conjecture 6.0.1. We do not currently have a satisfactory explanation of why Frobenius at anomalous primes of defect \((m+1,m)\) should correspond to these 'random matrices with \(y\) odd'. Fix a positive integer \(m\geq 2\). We now explain our model for anomalous primes of defect \((m+1,m)\). Let \(E\) be an elliptic curve over \(\mathbf{F}_{p}\) with trace of Frobenius \(t\) and \(V_{p}\) be the associated \(2\)-isogeny volcano over \(\mathbf{F}_{p}\). Let \(K=\mathbf{Q}(\sqrt{t^{2}-4p})=\mathbf{Q}(\sqrt{D})\) where \(D=\mathsf{sqf}(t^{2}-4p)\).
Recall from Section 4 that \(h(V_{p})=H\) if and only if \[v_{2}(t^{2}-4p)=2m+v_{2}((x-w)^{2}+4yz)=\begin{cases}2H&\text{if }D\equiv 1\pmod{4}\\ 2H+2&\text{if }D\equiv 3\pmod{4}\\ 2H+3&\text{if }D\equiv 2\pmod{4}.\end{cases}\] Also recall that \(\operatorname{disc}\mathcal{O}_{0}\equiv\operatorname{disc}\mathcal{O}_{K}\pmod{8}\). Instead of starting from an elliptic curve over \(\mathbf{F}_{p}\) we consider a Haar random matrix \(M=\left(\begin{smallmatrix}x&y\\ z&w\end{smallmatrix}\right)\) with entries in \(\mathbf{Z}_{2}\) subject to the additional constraint that \(v_{2}(y)=0\). We use \(\det(-I+2^{m}M)\) in place of \(p\) and \(\operatorname{trace}\left(-I+2^{m}M\right)=-2+2^{m}(x+w)\) in place of \(t\). Note that for any fixed \(x\), the map taking \(w\) to \(\alpha=x-w\) is a bijection on \(\mathbf{Z}_{2}\). For the rest of the section we usually do not refer to \(x\) and \(w\), but only to \(\alpha\). Let \(\alpha\) and \(z\) be random elements of \(\mathbf{Z}_{2}\) distributed with respect to Haar measure, and \(y\in\mathbf{Z}_{2}^{*}\) be a random unit in \(\mathbf{Z}_{2}\). We write \(\operatorname{Prob}(\cdot)\) to denote the proportion of \(\alpha,y,z\) for which some property holds. We define a kind of height associated to the matrix \(M\). Let \[H_{M}=\begin{cases}m+\frac{v_{2}((x-w)^{2}+4yz)}{2}&\text{if }\mathsf{sqf}((x-w)^{2}+4yz)\equiv 1\pmod{4}\\ m-1+\frac{v_{2}((x-w)^{2}+4yz)}{2}&\text{if }\mathsf{sqf}((x-w)^{2}+4yz)\equiv 3\pmod{4}\\ m-1+\frac{v_{2}((x-w)^{2}+4yz)-1}{2}&\text{if }\mathsf{sqf}((x-w)^{2}+4yz)\equiv 2\pmod{4}\end{cases}.\] **Theorem 6.0.2**.: _Let \(m\geq 2\) and \(H\geq m\) be positive integers. Let \(M=\left(\begin{smallmatrix}x&y\\ z&w\end{smallmatrix}\right)\in\operatorname{M}_{2}(\mathbf{Z}_{2})\) be a Haar random matrix subject to the additional constraint that \(v_{2}(y)=0\)._ 1. _For_ \(i\in\{1,5\}\)_, the probability that_ \(\mathsf{sqf}((x-w)^{2}+4yz)\equiv i\pmod{8}\) _and_ \(H_{M}=H\) _is_ \(4^{-(H-(m-1))}\)_._ 2. _For_ \(i\in\{2,3\}\)_, the probability that_ \(\mathsf{sqf}((x-w)^{2}+4yz)\equiv i\pmod{4}\) _and_ \(H_{M}=H\) _is_ \(\frac{1}{2}\cdot 4^{-(H-(m-1))}\)_._ Theorem 6.0.2 follows from the following stronger result. **Theorem 6.0.3**.: 1. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&1\pmod{8}\end{array}\right) = \operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&5\pmod{8}\end{array}\right)\] \[= \begin{cases}0&\text{if $k$ is odd},\\ 2^{-(k+2)}&\text{if $k$ is even}.\end{cases}\] 2. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&3\pmod{4}\end{array}\right)=\begin{cases}0&\text{if $k$ is odd or $k=0$},\\ 2^{-(k+1)}&\text{if $k\geq 2$ is even}.\end{cases}\] 3. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&2\pmod{4}\end{array}\right)=\begin{cases}0&\text{if $k$ is even or $k=1$},\\ 2^{-k}&\text{if $k\geq 3$ is odd}.\end{cases}\] It is straightforward to check that this result implies Theorem 6.0.2 by checking the base case \(H=m\) and then thinking about what happens to these probabilities as we increase \(H\), dividing things into cases based on \(\mathsf{sqf}(\alpha^{2}+4yz)\pmod{8}\) and using the definition of \(H_{M}\).
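The probabilities in Theorem 6.0.3 are easy to test numerically. The following short Monte Carlo script is only an illustrative sketch (it is not part of the argument): it draws the low-order bits of \(\alpha,z\in\mathbf{Z}_{2}\) and of a unit \(y\in\mathbf{Z}_{2}^{*}\) uniformly at random (truncating to a fixed bit precision, which approximates Haar measure for the quantities involved), classifies \(\alpha^{2}+4yz\) by its \(2\)-adic valuation and the class of its squarefree part, and compares the empirical frequencies with \(2^{-(k+2)}\), \(2^{-(k+1)}\), and \(2^{-k}\). The bit precision and sample size are arbitrary choices.

```python
import random

def sqf_class(s):
    """Return (k, c): k = v_2(s); c = 1 or 5 means sqf(s) = c (mod 8) (k even),
    c = 3 means sqf(s) = 3 (mod 4), and c = 2 means sqf(s) = 2 (mod 4) (k odd)."""
    k = (s & -s).bit_length() - 1   # 2-adic valuation of the positive integer s
    u = s >> k                      # odd part of s
    if k % 2 == 1:
        return k, 2                 # odd power of 2 times a unit: sqf(s) = 2 (mod 4)
    if u % 4 == 3:
        return k, 3
    return k, u % 8                 # 1 or 5 (mod 8), since odd squares are 1 (mod 8)

B, N = 128, 10**6                   # bit precision and sample size (assumed values)
rng = random.Random(0)
counts = {}
for _ in range(N):
    alpha = rng.getrandbits(B)
    z = rng.getrandbits(B)
    y = rng.getrandbits(B) | 1      # force v_2(y) = 0
    s = alpha * alpha + 4 * y * z   # exact integer; low bits agree with the 2-adic value
    if s == 0:
        continue
    key = sqf_class(s)
    counts[key] = counts.get(key, 0) + 1

def predicted(k, c):
    """Probabilities claimed in Theorem 6.0.3."""
    if c in (1, 5):
        return 2.0 ** -(k + 2) if k % 2 == 0 else 0.0
    if c == 3:
        return 2.0 ** -(k + 1) if (k % 2 == 0 and k >= 2) else 0.0
    return 2.0 ** -k if (k % 2 == 1 and k >= 3) else 0.0

for k in range(6):
    for c in (1, 5, 3, 2):
        emp = counts.get((k, c), 0) / N
        print(f"k={k}, class={c}: empirical {emp:.5f}, predicted {predicted(k, c):.5f}")
```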
We prove this result by dividing the set of all \(\alpha,z\in\mathbf{Z}_{2}\) and \(y\in\mathbf{Z}_{2}^{*}\) based on the relative sizes of \(v_{2}(\alpha^{2})\) and \(v_{2}(4yz)=2+v_{2}(z)\). More precisely, we prove Theorem 6.0.3 in three parts, where each part is divided into cases based on \(\mathsf{sqf}(\alpha^{2}+4yz)\pmod{8}\). **Lemma 6.0.4**.: 1. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&1\pmod{8}\\ v_{2}(\alpha^{2})<v_{2}(4yz)&&\end{array}\right)=\begin{cases}0&\text{if $k$ is odd},\\ 2^{-(3k/2+2)}&\text{otherwise}.\end{cases}\] 2. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&5\pmod{8}\\ v_{2}(\alpha^{2})<v_{2}(4yz)&&\end{array}\right)=\begin{cases}0&\text{if $k$ is odd},\\ 2^{-(3k/2+2)}&\text{otherwise}.\end{cases}\] 3. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&3\pmod{4}\\ v_{2}(\alpha^{2})<v_{2}(4yz)&&\end{array}\right)=\begin{cases}0&\text{if $k$ is odd or $k=0$},\\ 2^{-(3k/2+1)}&\text{otherwise}.\end{cases}\] 4. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&2\pmod{4}\\ v_{2}(\alpha^{2})<v_{2}(4yz)&&\end{array}\right)=0.\] **Lemma 6.0.5**.: 1. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&1\pmod{8}\\ v_{2}(\alpha^{2})>v_{2}(4yz)&&\end{array}\right)=\begin{cases}0&\text{if $k$ is odd},\\ 2^{-(3k/2+2)}&\text{otherwise}.\end{cases}\] 2. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&5\pmod{8}\\ v_{2}(\alpha^{2})>v_{2}(4yz)&&\end{array}\right)=\begin{cases}0&\text{if $k$ is odd},\\ 2^{-(3k/2+2)}&\text{otherwise}.\end{cases}\] 3. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&3\pmod{4}\\ v_{2}(\alpha^{2})>v_{2}(4yz)&&\end{array}\right)=\begin{cases}0&\text{if $k$ is odd or $k=0$},\\ 2^{-(3k/2+1)}&\text{otherwise}.\end{cases}\] 4. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&2\pmod{4}\\ v_{2}(\alpha^{2})>v_{2}(4yz)&&\end{array}\right)=\begin{cases}0&\text{if $k$ is even or $k=1$},\\ 2^{-(3k/2-1/2)}&\text{otherwise}.\end{cases}\] **Lemma 6.0.6**.: 1. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&1\pmod{8}\\ v_{2}(\alpha^{2})=v_{2}(4yz)=2\beta\end{array}\right)=\begin{cases}0&\text{ if $k$ is odd or $k\in\{0,2\}$},\\ 2^{-(k+\beta+2)}&\text{ if $k$ is even and $2\leq 2\beta\leq k-1$}.\end{cases}\] 2. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&5\pmod{8}\\ v_{2}(\alpha^{2})=v_{2}(4yz)=2\beta\end{array}\right)=\begin{cases}0&\text{ if $k$ is odd or $k\in\{0,2\}$},\\ 2^{-(k+\beta+2)}&\text{ if $k$ is even and $2\leq 2\beta\leq k-1$}.\end{cases}\] 3. \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&3\pmod{4}\\ v_{2}(\alpha^{2})=v_{2}(4yz)=2\beta\end{array}\right)=\begin{cases}0&\text{ if $k$ is odd or $k\in\{0,2\}$},\\ 2^{-(k+\beta+1)}&\text{ if $k$ is even and $2\leq 2\beta\leq k-1$}.\end{cases}\] 4. 
\[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&2\pmod{4}\\ v_{2}(\alpha^{2})=v_{2}(4yz)=2\beta\end{array}\right)=\begin{cases}0&\text{ if $k$ is even or $k=1$},\\ 2^{-(k+\beta)}&\text{ if $k$ is odd and $2\leq 2\beta\leq k-1$}.\end{cases}\] Before proving these individual results, we see how they imply Theorem 6.0.3. We divide this argument into cases. Combining these three lemmas, it is clear that \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&1\pmod{8}\end{array}\right)=\operatorname {Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&5\pmod{8}\end{array}\right),\] and that these probabilities are \(0\) when \(k\) is odd. When \(k=0\) we have \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&0\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&1\pmod{8}\end{array}\right)=2^{-2}+0+0,\] and when \(k=2\) we have \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&2\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&1\pmod{8}\end{array}\right)=2^{-5}+2^{-5 }+0=2^{-4}.\] Now suppose \(k\geq 4\) is even. Note that \(\lfloor\frac{k-1}{2}\rfloor=k/2-1\). We have \[\operatorname{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&1\pmod{8}\end{array}\right) = 2^{-(3k/2+2)}+2^{-(3k/2+2)}+\sum_{\beta=1}^{\lfloor\frac{k-1}{2} \rfloor}2^{-(k+\beta+2)}\] \[= 2^{-(3k/2+1)}+2^{-(k+2)}\sum_{\beta=1}^{k/2-1}2^{-\beta}.\] We write \[\sum_{\beta=1}^{k/2-1}2^{-\beta}=2^{-1}\sum_{\beta=0}^{k/2-2}2^{-\beta}=2^{-1 }\left(2^{-(k/2-2)}+2^{-(k/2-3)}+\cdots+2^{-1}+2^{0}\right).\] We see that \[\mathrm{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&1\pmod{8}\end{array}\right) = 2^{-(3k/2+1)}+2^{-(k+3)}(2^{-(k/2-2)}+2^{-(k/2-3)}+\cdots+2^{-1}+2 ^{0})\] \[= 2^{-(3k/2+1)}+2^{-(3k/2+1)}+2^{-(3k/2+2)}+\cdots+2^{-(k+3)}\] \[= 2^{-(k+2)}.\] We next consider the analogous computation for the case where \(\mathsf{sqf}(\alpha^{2}+4yz)\equiv 3\pmod{4}\). Combining the lemmas above, we see that \[\mathrm{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&3\pmod{4}\end{array}\right)=0\] if \(k\) is odd or \(k=0\). Suppose that \(k\geq 2\) is even. We have \[\mathrm{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&3\pmod{4}\end{array}\right) = 2^{-(3k/2+1)}+2^{-(3k/2+1)}+2^{-(k+2)}\sum_{\beta=1}^{k/2-1}2^{- \beta},\] where for \(k=2\) the empty sum in the final term is \(0\). Arguing as above, it is now clear that this sum is \(2\) times the analogous one for \(\mathsf{sqf}(\alpha^{2}+4yz)\equiv 1\pmod{8}\). Finally, we consider the computation for the case where \(\mathsf{sqf}(\alpha^{2}+4yz)\equiv 2\pmod{4}\). Combining the lemmas above, we see that \[\mathrm{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&2\pmod{4}\end{array}\right)=0\] if \(k\) is even or \(k=1\). Suppose \(k\geq 3\) is odd. 
We have \[\mathrm{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&2\pmod{4}\end{array}\right)=2^{-(3k/2-1 /2)}+\sum_{\beta=1}^{\frac{k-1}{2}}2^{-(k+\beta)}.\] Note that \[\sum_{\beta=1}^{\frac{k-1}{2}}2^{-(k+\beta)}=2^{-(k+1)}\sum_{\beta=0}^{\frac{ k-3}{2}}2^{-\beta}=2^{-(k+1)}\left(2^{-(k/2-3/2)}+2^{-(k/2-4/2)}+\cdots+2^{-1}+2 ^{0}\right).\] This gives \[\mathrm{Prob}\left(\begin{array}{ccc}v_{2}(\alpha^{2}+4yz)&=&k\\ \mathsf{sqf}(\alpha^{2}+4yz)&\equiv&2\pmod{4}\end{array}\right) = 2^{-(3k/2-1/2)}+\left(2^{-(3k/2-1/2)}+2^{-(3k/2-3/2)}+\cdots+2^{-( k+1)}\right)\] \[= 2^{-k}.\] We now prove the three lemmas. Proof of Lemma 6.0.: Suppose \(v_{2}(\alpha^{2})=k<v_{2}(4yz)\). Therefore \(k\) is even and \(v_{2}(\alpha^{2}+4yz)=k\). The probability that \(v_{2}(\alpha)=k/2\) is \(2^{-(k/2+1)}\). The probability that \(v_{2}(4yz)=2+v_{2}(z)>k\) is the probability that \(v_{2}(z)\geq k-1\), which is \(1\) if \(k=0\) and is \(2^{-(k-1)}\) if \(k\geq 2\) is even. Write \(\alpha=2^{k/2}u\) where \(u\in\mathbf{Z}_{2}^{*}\) and \(4yz=2^{k+1}\nu\) where \(\nu\in\mathbf{Z}_{2}\). If \(k=0\) we must have \(v_{2}(\nu)\geq 1\). By varying \(z\), we see that \(\nu\) is a Haar random element of \(2\mathbf{Z}_{2}\) when \(k=0\) and is a Haar random element of \(\mathbf{Z}_{2}\) otherwise. We have \[\mathsf{sqf}(\alpha^{2}+4yz)=\mathsf{sqf}(u^{2}+2\nu)\equiv u^{2}+2\nu\pmod{8}.\] Since \(u^{2}\equiv 1\pmod{8}\) we see that \[u^{2}+2\nu\equiv\begin{cases}1\pmod{8}&\text{ if }v_{2}(\nu)\geq 2,\\ 5\pmod{8}&\text{ if }v_{2}(\nu)=1,\\ 3\pmod{4}&\text{ if }v_{2}(\nu)=0.\end{cases}\] We see that \(\mathsf{sqf}(\alpha^{2}+4yz)\equiv 1\pmod{8}\) if and only if \(v_{2}(z)\geq 2k+1\), which happens with probability \(2^{-(2k+1)}\), that \(\mathsf{sqf}(\alpha^{2}+4yz)\equiv 5\pmod{8}\) if and only if \(v_{2}(z)=2k\), which happens with probability \(2^{-(2k+1)}\), and that \(\mathsf{sqf}(\alpha^{2}+4yz)\equiv 3\pmod{4}\) if and only if \(v_{2}(z)=2k-1\), which happens with probability \(2^{-2k}\) if \(k\geq 2\) and probability \(0\) if \(k=0\). Proof of Lemma 6.0.5.: Suppose \(v_{2}(4yz)=2+v_{2}(z)=k<v_{2}(\alpha^{2})\). So \(v_{2}(\alpha^{2}+4yz)=k\). The probability that \(v_{2}(z)=k-2\) is \(2^{-(k-1)}\) if \(k\geq 2\) and is \(0\) otherwise. If \(k\geq 2\) is even, \[\operatorname{Prob}(v_{2}(\alpha^{2})>k)=\operatorname{Prob}(v_{2}(\alpha) \geq k/2+1)=2^{-(k/2+1)},\] and if \(k\) is odd, \[\operatorname{Prob}(v_{2}(\alpha^{2})>k)=\operatorname{Prob}(v_{2}(\alpha) \geq k/2+1/2)=2^{-(k/2+1/2)}.\] Suppose \(v_{2}(z)=k-2\) where \(k\geq 2\). We write \(z=2^{k-2}u\) where \(u\in\mathbf{Z}_{2}^{*}\), so \(4yz=uy2^{k}\). Suppose \(v_{2}(\alpha^{2})>k\). If \(k\) is even, then \(\alpha=\gamma 2^{k/2+1}\) where \(\gamma\in\mathbf{Z}_{2}\) is not necessarily a unit. In this case, \(\mathsf{sqf}(\alpha^{2}+4yz)=\mathsf{sqf}(4\gamma+yu)\). For a fixed value of \(z\), by varying \(y\) we see that \(yu\) is a Haar random element of \(\mathbf{Z}_{2}^{*}\). Therefore, the probability that \(\mathsf{sqf}(\alpha^{2}+4yz)\equiv 3\pmod{4}\) is \(1/2\), and the probability that \(\mathsf{sqf}(\alpha^{2}+4yz)\equiv i\pmod{8}\) is \(1/4\) for \(i\in\{1,5\}\). This completes the proof in the case that \(k\) is even. If \(k\) is odd, then \(\alpha=\gamma 2^{k/2+1/2}\) where \(\gamma\in\mathbf{Z}_{2}\) is not necessarily a unit. In this case, \(\mathsf{sqf}(\alpha^{2}+4yz)=\mathsf{sqf}(4\gamma+2yu)\equiv 2\mod{4}\). This completes the proof when \(k\geq 3\) is odd. 
Proof of Lemma 6.0.6.: Suppose that \(v_{2}(\alpha^{2})=v_{2}(4yz)=2+v_{2}(z)\). Since \(v_{2}(\alpha^{2})=2v_{2}(\alpha)\), we must have \(v_{2}(\alpha^{2})=v_{2}(4yz)=2+v_{2}(z)=2\beta\) with \(\beta\geq 1\). Suppose \(v_{2}(\alpha)=\beta\) and write \(\alpha=2^{\beta}u\) where \(u\in\mathbf{Z}_{2}^{*}\). Suppose that \(v_{2}(z)=2\beta-2\) and write \(z=2^{2\beta-2}\nu\) where \(\nu\in\mathbf{Z}_{2}^{*}\). So \(4yz=2^{2\beta}y\nu\). For a fixed value of \(z\), varying \(y\) shows that \(y\nu\) is a Haar random element of \(\mathbf{Z}_{2}^{*}\). We have \(v_{2}(\alpha^{2}+4yz)=2\beta+v_{2}(u^{2}+y\nu)\). Since \(u^{2}\) and \(y\nu\) are both units, \(v_{2}(u^{2}+y\nu)\geq 1\) and we can write \(u^{2}+y\nu=2\delta\) where \(\delta\in\mathbf{Z}_{2}\). Since \(y\nu\) is a Haar random element of \(\mathbf{Z}_{2}^{*}\), we see that \(\delta\) is a Haar random element of \(\mathbf{Z}_{2}\). Suppose that \(k-2\beta\geq 0\). Therefore, \[\operatorname{Prob}(v_{2}(u^{2}+y\nu)=k-2\beta)=\begin{cases}0&\text{if }k-2\beta=0\\ 2^{-(k-2\beta)}&\text{otherwise}.\end{cases}\] We have \[\mathsf{sqf}(\alpha^{2}+4yz)=\mathsf{sqf}(u^{2}+y\nu)=\mathsf{sqf}(2\delta).\] If \(v_{2}(\delta)\) is even, then \(\mathsf{sqf}(2\delta)\equiv 2\pmod{4}\). If \(v_{2}(\delta)\) is odd, then for some nonnegative integer \(r\) we have \(2\delta=2^{2r}\delta^{\prime}\), where \(\delta^{\prime}\in\mathbf{Z}_{2}^{*}\), and \(\mathsf{sqf}(2\delta)=\mathsf{sqf}(\delta^{\prime})\). If we restrict to any particular value of \(r\), since \(\delta\) is a Haar random element of \(\mathbf{Z}_{2}\), we see that \(\delta^{\prime}\) is a Haar random element of \(\mathbf{Z}_{2}^{*}\). In particular, the probability that \(\mathsf{sqf}(\alpha^{2}+4yz)\equiv 3\pmod{4}\) is \(1/2\) and the probability that \(\mathsf{sqf}(\alpha^{2}+4yz)\equiv i\pmod{8}\) is \(1/4\) for \(i\in\{1,5\}\). The probability that \(v_{2}(\alpha)=\beta\) is \(2^{-(\beta+1)}\). The probability that \(v_{2}(z)=2\beta-2\) is \(2^{-(2\beta-1)}\) if \(\beta\geq 1\) and is \(0\) if \(\beta=0\). We note that \[2^{-(\beta+1)}2^{-(2\beta-1)}2^{-(k-2\beta)}=2^{-(k+\beta)}.\] Considering the different cases for \(\mathsf{sqf}(\alpha^{2}+4yz)\) modulo \(4\) and \(8\) completes the proof. ## 7. Future Work If the groups \(G\) and \(G^{\prime}\) are not as large as possible (_i.e._, do not have index \(3\) in \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\)), or if \(G\not\simeq G^{\prime}\), then the proportion \(\mathcal{P}\) of anomalous primes might be quite different than \(1/30\), as the following example shows. _Example 7.0.1_.: Let \(E\) be the elliptic curve 1200e5 and \(E^{\prime}\) the curve 1200e2. Both mod \(4\) representations have order \(4\) and neither mod \(8\) representation contains \(-I\). By inspecting the \(2\)-adic representations, one can check that the only possible defects of anomalous primes are \((3,2)\) and \((2,3)\). In fact, more is true. If we look explicitly at the images of the mod \(4\) representations, we see \[G(4) =\left\{\begin{pmatrix}\pm 1&0\\ 0&\pm 1\end{pmatrix}\right\}\] \[G^{\prime}(4) =\left\{\begin{pmatrix}1&0\\ 0&\pm 1\end{pmatrix},\begin{pmatrix}-1&2\\ 0&\pm 1\end{pmatrix}\right\}.\] If \(p\) is anomalous, then using the fact that \(p\equiv 1\pmod{4}\) and that the \(2\)-Sylow subgroups of \(E(\mathbf{F}_{p})\) and \(E^{\prime}(\mathbf{F}_{p})\) are both \(\mathbf{Z}/2\mathbf{Z}\times\mathbf{Z}/2\mathbf{Z}\), we must have \(F\equiv-I\pmod{4}\) and \(F^{\prime}\equiv\left(\begin{smallmatrix}-1&2\\ 0&-1\end{smallmatrix}\right)\pmod{4}\). 
Therefore, every anomalous prime has defect \((3,2)\) and by the Chebotarev density theorem this is exactly \(1/4\) of all primes. In a forthcoming paper [9], we take up the problem of determining all possible values of \(\mathcal{P}\), for all pairs of rationally \(2\)-isogenous elliptic curves over \(\mathbf{Q}\), including the case where \(E\) and \(E^{\prime}\) have CM. What makes this a finite task is that 1. all images of \(2\)-adic representations have been classified ([22] for the non-CM case and [18] for the CM case), and 2. all isogeny-torsion graphs over \(\mathbf{Q}\) have been classified in [4] and [5]. There are additional consequences for the isogeny volcanoes attached to these curves that we explore as well, including how the torsion point fields \(\mathbf{Q}(E[2^{m}])\) and \(\mathbf{Q}(E^{\prime}[2^{m}])\) are "entangled". For example, we are able to show the following two results. * If \(\mathbf{Q}(E[2])=\mathbf{Q}(E^{\prime}[2])\) then \(G\) and \(G^{\prime}\) must each have index greater than \(3\) in \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\). * If there are no primes of defect \((m+1,m)\) then we must have \(\mathbf{Q}(E[2^{m}])=\mathbf{Q}(E^{\prime}[2^{m}])\) and \(\mathbf{Q}(x(E[2^{m}]))=\mathbf{Q}(x(E^{\prime}[2^{m}]))\). We explore the consequences of these and similar results for anomalous primes. ## Appendix A Sample Calculations Here we present some corroborating evidence for Theorem 1.3.4 which served as the impetus for this project. In the table below we present \(15\) pairs of curves whose \(2\)-adic images have index \(3\) in \(\operatorname{GL}_{2}(\mathbf{Z}_{2})\) and list the number of anomalous primes up to \(2^{30}\). The proportions listed are the number of anomalous primes divided by \(\pi(2^{30})=54400028\). One can see the \(1/30\) proportion very clearly emerging in the data. These calculations were performed in Magma [19] by Andrew Sutherland and we thank him for allowing us to include these data in this paper. \begin{tabular}{|l|l|l|l|} \hline \(E\) & \(E^{\prime}\) & Anomalous & Proportion \\ \hline 69a1 & 69a2 & 1814517 & 0.033355075 \\ \hline 77c1 & 77c2 & 1812315 & 0.033314597 \\ \hline 84b1 & 84b2 & 1813293 & 0.033332575 \\ \hline 99a1 & 99a2 & 1812977 & 0.033326766 \\ \hline 99c1 & 99c2 & 1812977 & 0.033326766 \\ \hline 132a1 & 132a2 & 1812966 & 0.033326564 \\ \hline 132b1 & 132b2 & 1812959 & 0.033326435 \\ \hline 138a1 & 138a2 & 1813813 & 0.033342134 \\ \hline 141b1 & 141b2 & 1812863 & 0.033324670 \\ \hline 154a1 & 154a2 & 1812080 & 0.033310277 \\ \hline 154c1 & 154c2 & 1813344 & 0.033333512 \\ \hline 155b1 & 155b2 & 1813606 & 0.033338328 \\ \hline 156a1 & 156a2 & 1813340 & 0.033333439 \\ \hline 10608y1 & 10608y2 & 1812615 & 0.033320112 \\ \hline 10608j1 & 10608j2 & 1814206 & 0.033349358 \\ \hline \end{tabular} We also include some data for the pair \((E,E^{\prime})\) of rationally 2-isogenous elliptic curves over \(\mathbf{Q}\) where \(E\) has LMFDB label 69a2 and \(E^{\prime}\) has label 69a1. We computed that there were 42298 anomalous primes less than \(2\cdot 10^{7}\), a proportion of approximately 0.0333 among all primes.
They are distributed by defect as follows: \begin{tabular}{|l l|l|l|} \hline (3,2): & 19821 & & \\ (2,3): & 19831 & **Total:** & 39652 \\ \hline (4,3): & 1264 & & \\ (3,4): & 1205 & **Total:** & 2469 \\ \hline (5,4): & 84 & & \\ (4,5): & 86 & **Total:** & 170 \\ \hline (6,5): & 3 & & \\ (5,6): & 4 & **Total:** & 7 \\ \hline \end{tabular} We now look more closely at the 19821 of these anomalous primes with defect \((3,2)\) and divide them up into rows based on \(\operatorname{disc}\mathcal{O}_{0}\pmod{8}\) and columns based on the height of \(V_{p}\), the isogeny volcano associated to \((E,E^{\prime})\): \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(\operatorname{disc}\mathcal{O}_{0}\pmod{8}\ \backslash\ h(V_{p})\) & 2 & 3 & 4 & 5 & 6 & \(\geq 7\) \\ \hline 1 & 4930 & 1279 & 322 & 76 & 22 & 7 \\ \hline 5 & 5024 & 1225 & 308 & 82 & 31 & 4 \\ \hline 0 & 2501 & 570 & 168 & 45 & 10 & 3 \\ \hline 4 & 2363 & 628 & 172 & 38 & 8 & 5 \\ \hline \end{tabular} We give an analogous table for the 1264 of these anomalous primes with defect \((4,3)\): \begin{tabular}{|c|c|c|c|c|c|} \hline \(\operatorname{disc}\mathcal{O}_{0}\pmod{8}\setminus h(V_{p})\) & 3 & 4 & 5 & 6 & \(\geq\) 7 \\ \hline 1 & 305 & 73 & 20 & 5 & 2 \\ \hline 5 & 318 & 85 & 18 & 5 & 1 \\ \hline 0 & 155 & 40 & 13 & 5 & 0 \\ \hline 4 & 158 & 28 & 9 & 2 & 2 \\ \hline \end{tabular} In both cases observe that the values decrease roughly by a factor of 4 as we move along a row, as predicted by Conjecture 6.0.1.
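As a rough consistency check, the short script below (ours, not part of the original Magma computation) compares the observed counts of anomalous primes below \(2\cdot 10^{7}\) for this pair, taken from the table above, with the proportions \(2^{-(4m-2)}\) per defect \((m+1,m)\) predicted by Theorem 5.2.1 (and the same for defect \((m,m+1)\) via the dual isogeny), as well as with the overall proportion \(1/30\) from Corollary 5.2.5.

```python
from sympy import primepi

# Observed counts below 2*10^7 for the pair (69a2, 69a1), copied from the table above;
# each entry sums the (m+1, m) and (m, m+1) counts.
observed = {2: 19821 + 19831, 3: 1264 + 1205, 4: 84 + 86, 5: 3 + 4}

total_primes = int(primepi(2 * 10**7))   # number of primes below 2*10^7

grand_total = 0
for m, obs in sorted(observed.items()):
    predicted = 2 * total_primes / 2 ** (4 * m - 2)   # both defect orientations
    grand_total += obs
    print(f"m = {m}: observed {obs}, predicted {predicted:.1f}")

print(f"all defects: observed {grand_total}, predicted {total_primes / 30:.1f}")
```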
2306.02084
The zoo of isolated neutron stars
In this brief review I summarize our basic knowledge about different types of isolated neutron stars. I discuss radio pulsars, central compact objects in supernova remnants, magnetars, near-by cooling neutron stars (aka the Magnificent seven), and sources of fast radio bursts. Several scenarios of magneto-rotational evolution are presented. Recent observational data, in the first place -- discovery of long period radio pulsar, require non-trivial evolution of magnetics fields or/and spin periods of neutron stars. In some detail I discuss different models of magnetic field decay and interaction of young neutron stars with fallback matter.
Sergei B. Popov
2023-06-03T11:11:59Z
http://arxiv.org/abs/2306.02084v1
# The zoo of isolated neutron stars ###### Abstract In this brief review I summarize our basic knowledge about different types of isolated neutron stars. I discuss radio pulsars, central compact objects in supernova remnants, magnetars, near-by cooling neutron stars (aka the Magnificent seven), and sources of fast radio bursts. Several scenarios of magneto-rotational evolution are presented. Recent observational data, in the first place - discovery of long period radio pulsar, require non-trivial evolution of magnetics fields or/and spin periods of neutron stars. In some detail I discuss different models of magnetic field decay and interaction of young neutron stars with fallback matter. neutron stars; radio pulsars; magnetars; fast radio bursts 1 Footnote 1: email: {[email protected]; Tel.: +39 040 2240 368 ## 1 Introduction Neutron stars (NSs) are very fascinating objects. More we study them - more interesting they seem to be. There are numerous types of sources related to NSs. Many of them contain a NS as a member of a binary system. In the first place, these are X-ray binaries with accretion onto the compact object. The first discovery (but not identification!) of such source - Sco X-1, - has happened already in 1962 [1]. The first robust identification of a NS in an accreting binary - Cen X-3, - was done a few years later thanks to observations on-board of UHURU satellite [2]. Even as accreting objects NSs in close binary systems can appear as sources with very different properties. This is possible because binary evolution provides many possibilities to obtain systems with various parameters [3]. In low-mass X-ray binaries (LMXBs) mass transfer from a low-mass component spins up the NS. This can result in formation of a millisecond radio pulsar (mPSR) [4]. The first mPSR was discovered in 1982 [5]. Spin up in a binary system was confirmed after identification of so-called transitional mPSRs [6]. Such NSs are characterized by very short spin periods (\(\sim\)1-10 msec) and low magnetic fields (\(\sim 10^{8}\)-\(10^{10}\) G). The later leads to large characteristic ages, \(\tau_{\rm ch}=P/(2\dot{P})\sim\) few billion years, and long life time. Some of PSRs (ordinary and mPSRs) are found in binary systems with other NSs [7]. There are several evolutionary channels which can result in formation of such systems [8]. Some of these binary NSs are doomed to end there lives in a spectacular coalescence accompanied by a short gamma-ray burst (sGRB), a gravitational wave burst, and a kilonova [9]. NSs are usually found in binary systems due to their own activity (as in the case of PSRs), or due to their interaction with the companion (accretion or coalescence). However, recently several candidates to inactive/non-interacting NSs in binaries have been reported on the base of astrometric and spectroscopic data, see [10] and references therein to other proposed candidates. Observations of binary systems provide plethora of data on NSs. Still, in many cases it is necessary to study isolated NSs, as in this case properties of compact objects are not modified by presence of a companion. In this review I focus on different types of isolated (mostly - young) NSs and their evolution. ## 2 Main species in the Zoo Observations in the whole range of electromagnetic waves - from radio to gamma, - demonstrate very different phenomena related to isolated NSs. Various observational appearances and intrinsic properties (spin period, magnetic field, surface temperature, etc.) 
result in classification of isolated NSs into several main types, see Fig. 1. In this section I briefly present descriptions of them. ### Radio pulsars Radio pulsars (PSRs) are the best known (and the most numerous if we speak about observed objects) type of isolated NSs. They were discovered 55 years ago [11] by detection of their periodic radio pulses. The periodicity is due to rotation of these compact objects. Spin periods cover the range \(P\sim 0.001-100\) s. Presently, the ATNF catalogue 1 contains \(\sim 3000\) of these sources [12]. Magnetic fields determined from the standard spin-down equation \(B_{\rm p}=3.2\times 10^{19}\left(P\dot{P}\right)^{1/2}\) are in the range \(\sim 10^{8}-10^{14}\) G. These are dipolar fields on magnetic poles. On the equator the field is twice smaller: \(B=B_{\rm p}/2\). Non-dipolar components might also exist, but their determination in the case of radio pulsars is not possible with any confidence, at least now. Footnote 1: https://www.atnf.csiro.au/people/pulsar/psrcat/. Low fields together with small spin periods correspond to old "recycled" mPSRs originating from LMXBs. In this section we do not discuss them, focusing on young "normal" PSRs (in many respects, the group of so-called Rotating Radio Transients - RRATs, see [13] - can be unified with PSRs as they share the main properties). Radio emission of PSRs is generated in magnetospheric processes which are not completely understood yet (see a review and references to early studies in [14]). Magnetospheric emission might have a very wide spectrum. In some cases, e.g. the Crab pulsar, it is observed in the whole spectral range: from radio to gamma-rays. The energy reservoir is related to rotation of a PSR. This allows one to relate key properties of a NS with observed parameters in a simplified magneto-dipole formula: \[I\omega\dot{\omega}=\frac{2\mu^{2}\omega^{4}}{3c^{3}}. \tag{1}\] Here \(I\) is the moment of inertia of a NS, \(\omega=2\pi/P\) is the spin frequency, \(\mu=BR_{\rm NS}^{3}\) is the magnetic moment, and \(c\) is the speed of light. \(R_{\rm NS}\) is the NS radius. Detailed calculations broadly confirm this equation [15]; still, alternative views also exist, e.g. [16]. Different studies indicate that the majority of young NSs passes through the stage of a PSR. Thus, the birth rate of PSRs is not much smaller than the birth rate of NSs in general. The latter, in its turn, is not much smaller than the rate of core-collapse supernovae (CCSN) which is about \(1/60\) yr\({}^{-1}\)[17]. All these numbers are known with an uncertainty smaller than a factor \(\sim 2\). Let us assume for a simple estimate that the birth rate of PSRs is \(1/100\) yr\({}^{-1}\). Near the so-called "death line", where the number of observed pulsars in the \(P-\dot{P}\) diagram drops (this might be related to a drop in efficiency of \(e^{+}e^{-}\)-pair production in the magnetosphere), a typical pulsar has \(P\sim 2\) s and \(\dot{P}\sim 3\times 10^{-16}\) s/s. Then its characteristic age is about \(10^{8}\) yrs. So, the total number of PSRs in the Galaxy is \(\sim 10^{6}\). As radio emission of PSRs is significantly beamed, just a small fraction (about 10%, [18]) of them can be observed even with infinite sensitivity. At the moment, the Five-hundred-meter Aperture Spherical radio Telescope (FAST) is the most sensitive instrument looking for new PSRs [19; 20]. It is expected that SKA will allow us to detect most of the Galactic pulsars potentially visible from the Earth [21].
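To illustrate the numbers quoted above, here is a minimal numerical sketch of the two working formulas of this section: the dipole-field estimate \(B_{\rm p}=3.2\times 10^{19}(P\dot{P})^{1/2}\) G and the characteristic age \(\tau_{\rm ch}=P/(2\dot{P})\). The near-death-line values are the ones quoted in the text; the Crab-like values (\(P=0.033\) s, \(\dot{P}=4.2\times 10^{-13}\) s/s) are standard approximate numbers, not taken from this review.

```python
import math

YEAR = 3.156e7  # seconds per year

def dipole_field(P, Pdot):
    """Polar dipole field in Gauss from B_p = 3.2e19 * (P * Pdot)^(1/2), P in seconds."""
    return 3.2e19 * math.sqrt(P * Pdot)

def characteristic_age(P, Pdot):
    """Characteristic age tau_ch = P / (2 * Pdot), returned in years."""
    return P / (2.0 * Pdot) / YEAR

examples = [("Crab-like pulsar", 0.033, 4.2e-13),
            ("near the death line", 2.0, 3.0e-16)]
for name, P, Pdot in examples:
    print(f"{name}: B_p ~ {dipole_field(P, Pdot):.1e} G, "
          f"tau_ch ~ {characteristic_age(P, Pdot):.1e} yr")
```

For the death-line values this reproduces the characteristic age of about \(10^{8}\) yrs used in the estimate of the total number of Galactic PSRs.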
PSRs are characterised by large spatial velocities, \(\sim\) few hundred km s\({}^{-1}\) - about an order of magnitude larger than in the case of their progenitors [22]. Additional velocity, kick, is received by a NS during the SN explosion [23]. For young NSs spatial velocity is not changing significantly. So, there were hopes that properties of velocity distributions of different subpopulations of isolated NSs could shed light on their origin and causes of diversity of their properties. However, it seems that velocities of different types of isolated NSs are similar to each other. Young ages of normal PSRs are confirmed by their associations with supernova remnants (SNRs) [24] and spiral arms [25]. Some types of isolated NSs that we are going to discuss are even younger on average and are associated with SNRs by definition. ### Central compact objects Central compact objects in SNRs (CCOs for short) are young NSs observed due to their thermal emission inside supernova remnants. Known CCOs are not numerous, there about a dozen of them 2[26; 27]. But their very small ages - about few thousand years, - point to a significant birth rate. So, this is a non-negligible subpopulation of isolated NSs. Footnote 2: See the on-line catalogue at [http://www.iasf-milano.inaf.it/](http://www.iasf-milano.inaf.it/)\(\sim\)deluca/cco/main.htm. Mostly, parameters of CCOs mentioned in this subsection refer to this catalogue. CCOs can be an inhomogeneous group of sources united by their appearance inside SNRs and absence of any traces of radio pulsar activity. These objects are observed as soft X-ray sources with a thermal spectrum [28]. The origin of emission is attributed to hot areas of a NS surface with a typical temperature \(\sim 10^{6}\) K. As sources are young (ages \(\sim\) few thousand years), the most obvious source of energy is the residual heat. This assumption is in correspondence with modeling of thermal evolution of NSs [29; 30]. However, at least in few cases an additional source of energy is suspected. Standard cooling is determined by neutrino emission from interiors and photon emission from the surface [31]. At early stage of evolution, typically up to \(10^{4}\) - \(10^{5}\) yrs, the neutrino emission prevails. The neutrino emissivity strongly depends on the NS mass: low-mass objects can stay hot for a longer time. Still, residual heat can explain only temperatures \(\lesssim 3\times 10^{6}\) K for ages \(\lesssim 10^{5}\) yrs and \(\lesssim 10^{6}\) K for ages \(\lesssim 10^{6}\) yrs. Larger temperatures at young ages are typically explained by magnetic energy release, e.g. [32]. Measurable temperature in mature NSs (e.g., [33]) can be explained by chemical [34] or rotochemical heating [35]. In three CCOs spin periods and their derivatives are measured due to the X-ray flux variability. Periods \(\sim\) 0.1 - 0.4 s and \(\dot{P}\sim 10^{-17}\) s/s according to the magneto-dipole Figure 1: \(P-\dot{P}\) diagram. The main types of young isolated NSs are shown together with recycled mPSRs (lower left part of the plot) and the simplest tracks corresponding to Eq. (1) with constant magnetic field. Scales on axes are approximate. formula correspond to fields \(\sim 10^{10}-10^{11}\) G. Because of their position in the \(P-\dot{P}\) diagram these CCOs are often called "antimagnetars". Still, some of CCOs can be real magnetars. 
Analysis of the light curve of PSR J1852+0040 in the SNR Kes 79 allowed Shabaltas & Lai [36] to state that large pulse fraction observed in this source requires crustal field of the magnetar scale (see also [37]). Additional energy release due to the field decay in the crust, or modification of the surface temperature distribution due to the influence of the magnetic field on the heat transfer might be responsible for the small emitting area in PSR J1852+0040 which is necessary to explain large pulsations of the flux in presence of the light bending effect in strong gravitational field of the compact object. As no magnetospheric activity was ever observed from this source, it was proposed in [36] that the NS is a "hidden" magnetar. I.e., the strong field is "screened" by the matter which fallback onto the NS soon after the SN explosion (see [38] and references therein to early papers on this scenario). Another magnetar among CCOs is the famous NS inside the RCW 103 remnant. This source has been discovered years ago with the Einstein observatory [39]. A prominent feature of the central source - its variability on a time scale about a few years, - was successfully described in the model of magnetic energy release in the crust [40], and it was suspected that the source can also belong to the class of "hidden" magnetars. However, soon it was demonstrated that it is not so hidden. Bright short high energy bursts, quite similar to the bursts of soft gamma-ray repeaters, were detected from this source [41; 42]. This clearly points towards the magnetar nature of the source. In addition, the 6.67 hour spin period was measured for the NS in RCW103 [43]. Such long spin periods in young presently non-accreting objects can be explained in a model where a strong magnetic field of the NS interacts with a fallback disc [44]. Despite the majority of CCOs demonstrate just surface thermal emission, at least in few cases CCOs can have an additional source of energy - its magnetic field. I.e., they can be related to magnetars. ### Magnetars A NS is called a magnetar if its observational appearance is mainly due to magnetic energy release. There are two main manifestations of these sources: high energy bursts and surface thermal emission. Often magnetars are considered as the most extreme and interesting type of NSs. Their unusual properties manifest themselves in spectacular observational appearance, unfamiliar to other types of compact objects. Indeed, during the hyper flare of SGR 1806-20 in 2004 the peak luminosity was above \(10^{47}\) erg s\({}^{-1}\) and the total energy release in the event was \(\sim 10^{46}\) erg [45]. This tremendous burst demonstrates that magnetic energy in NSs can reach very large values and also can be rapidly released in a huge amount. It is not easy to say exactly when magnetars were discovered. May be, the most reasonable approach is to attribute it to the identification of the source which was called in 1979 as "X-ray burster 0525.9-66.1" [46; 47], and which is now mostly known as SGR 0525-66. Observations with gamma-ray detectors demonstrated existence of subsequent powerful flares from the same source including one giant flare with \(L\gtrsim 10^{44}\) erg s\({}^{-1}\). A stable period of \(\sim 8\) s was found and the object was localised in a SNR in the Large Magellanic cloud (LMC). Now the McGill catalogue of magnetars3 lists about 30 sources [48]. They belong to two main subclasses: anomalous X-ray pulsars (AXPs) and soft gamma-ray repeaters (SGRs). 
A recent review with large bibliography dedicated specifically to these types of sources can be found in [49]. Footnote 3: [http://www.physics.mcgill.ca/](http://www.physics.mcgill.ca/) pulsar/magnetar/main.html Initially, division into AXPs and SGRs has been very clear. Anomalous X-ray pulsars were characterized by relatively stable X-ray emission with luminosity \(\sim 10^{35}\) erg s\({}^{-1}\) (i.e., substantially smaller than in most of accreting X-ray pulsars in binary systems); they had spin periods about a few seconds, which were always increasing; they did not have any optical or IR counterparts. Soft gamma-ray repeaters, in the first place, were characterised by intense bursts observed in hard X-rays and/or in soft gamma-ray range, AXPs did not show this type of activity. However, step by step it became clear that AXPs and SGRs share similar properties. Out of active periods SGRs often resemble AXPs. And in 2002 Gavrili et al. demonstrated that well-known AXPs can have bursting activity identical to that of SGRs [50]. Typical spin periods of magnetars are about few seconds. However, there are several important examples of outliers. A high-B young pulsar in a Crab-like plerion PSR J1846-0258 which started to demonstrate SGR-like flares [51] has spin period 0.327 s. Oppositely, the source 1E 161348-5055 in the SNR RCW 103 (which I described above) has spin period \(\sim 6.67\) hours. Situation with magnetic fields is also not so univocal. "Classical" magnetars have fields \(\sim 10^{14}-10^{15}\) G. However, there are several so-call "low-field magnetars". Up to now three objects are reported (see a review in [52]). According to estimates based on the usual magneto-dipole equation, these sources has dipole field well below \(10^{13}\) G. But phase-resolved spectroscopy demonstrated existence of proton cyclotron lines which indicates local surface fields \(\sim 10^{14}-10^{15}\) G [53; 54]. These might be small-scale non-dipolar components of the magnetic field. The best definition of a magnetar involves magnetic field dissipation. That is, a magnetar is not just a NS with large field, but such a compact object that magnetic energy release dominates in its luminosity at least for some period of time. The total energy budget can be roughly estimated as follows: \[E_{\rm mag}\sim\frac{4}{3}\pi R_{\rm NS}^{3}\bigg{(}\frac{B^{2}}{8\,\pi}\bigg{)} =1.7\times 10^{47}\,B_{15}^{2}\,{\rm erg}. \tag{2}\] Naive estimates of magnetar ages based on the characteristic age \(\tau_{\rm ch}\sim P/(2\dot{P})\) are not valid as this simple equation is written for a constant field. However, association of some of the magnetars with SNRs and there position in the Galaxy [25] robustly confirm their young ages \(\sim 10^{3}-10^{5}\) yrs. Young ages of magnetars indicate that an active period of magnetic energy dissipation does not last long. Magnetars might constitute a significant fraction of young NSs. Many studies indicate their fraction \(\lesssim 10\%\), see e.g. [55] and references therein. However, at least one study suggests a much higher fraction of these objects: \(\sim 40\%\)[56]. The question of the magnetar fraction is closely related to the problem of formation of these NSs. It is still unknown what defines if a newborn NSs is a magnetar. In one framework it is necessary to have a strongly magnetized progenitor [57]. In another, the magnetic field is amplified by several orders of magnitude via a dynamo mechanism operating in a newborn NS [58; 59]. 
In both scenarios it is quite probable that evolution in binaries can play a role. For example, observations of the magnetic star \(\tau\) Sco suggest that its magnetic field was substantially increased due to coalescence of two main sequence stars in a binary, and with magnetic flux conservation this star can become a magnetar in future [60]. On other hand, evolution in binaries can result in significant spin-up of the stellar core which later might make the dynamo mechanism efficient enough to produce a magnetar-scale magnetic field [61]. Rapid rotation which is necessary for production of large dipolar fields with the dynamo mechanism [62] can be obtained in different ways. For example, a newborn NS can be spun-up due to fallback accretion [63]. Modeling of the magnetar evolution demonstrates [64; 65] that after some time \(\sim 10^{4}-10^{5}\) yrs the rate of magnetic energy dissipation decreases, all types of activity of a NS ceases. Thus, magnetars become sources of a different type. Most probably, one of their descendants are X-ray dim isolated NSs (XDINS), also known as the Magnificent seven (M7). ### Magnificent seven The M7 (or XDINSs) is group of near-by (\(\lesssim\) a few hundred pc, see [66]) young isolated NSs observed due to their thermal surface emission, see a review in [67; 68]. The first member of this class of NSs was discovered in 1996 [69]. The first period of the history of studies of the M7 is dominated by results from the ROSAT satellite, see [70] for an early brief review and discussion (early ideas included, e.g., the possibility that these NSs can be accreting sources with decayed magnetic field [71]). Since then many observations in different wavelengths were obtained (in addition to X-rays, many sources are detected as dim optical sources with magnitudes \(\sim 26-28\) and in near-UV, near-IR; in radio deep upper limits are obtained [72; 73]). Now, for all but one of these sources spin periods and their derivatives are measured, see e.g., Table 5 in [74]. The magneto-dipole formula provides an estimate of the magnetic field \(\sim 10^{13}-10^{14}\) G. The observed thermal emission can be either due to the residual heat, e.g. [75], or there is some contribution from magnetic field decay [55]. Evolutionary, M7 might be descendants of magnetars [55]. It was shown by population synthesis modeling that the population of the M7 originated mainly from the Gould Belt - the local (\(\sim 500\) pc) starforming structure [76]. As these NSs are relatively weak (\(L\sim 10^{31}-10^{32}\) erg s\({}^{-1}\)) and soft (\(kT\lesssim 100\) eV), it is difficult to detect such sources at large distances, mostly due to the interstellar absorption. Despite intensive searches (e.g., [77] and references therein) very few NSs similar to the M7 were found and none of them ideally resembles the original seven sources. The first one was Calvera - a soft X-ray source high above the Galactic plane [78]. But later it was shown that this NS has a short spin period (0.06 s) and also in some other respects is different from the M7 sources [79]. The next one is 2XMM J104608.7-594306 [80]. Again, the spin period (18.6 msec) does not fit the M7 family [81]. Finally, the latest discovery is the source 4XMM J022141.5-735632 [82]. For this object the spin period is not reported, yet. There have been hopes that the eROSITA telescope can find much more M7-like sources [83; 84], but as it has been switched off just after two years of the survey program these hopes more or less disappeared. 
## 3 Standard evolution and its problems It is convenient to discuss evolution of young NSs in terms of \(P\), \(\dot{P}\), and \(B\); and to illustrate it with the \(P-\dot{P}\) diagram. In the simplest and the most standard way the evolution is described by Eq. (1) for \(\mu=\)const. Then NSs evolve in the \(P-\dot{P}\) diagram along strait tracks, Fig. 1. Absence of significant field decay in normal PSRs was found in many papers, e.g. [85] (see, however, the next section). Also, in PSRs we do not see any evidence for additional release of magnetic energy (situation with magnetars is drastically different, of course). Typically, in this standard approach it is assumed that the initial spin periods are very short. Sometimes authors can assume that the initial period is close to the limiting rotation (e.g., 1 ms). Sometimes, these initial spin periods are assumed to be close to the initial period of the Crab pulsar. Such assumptions were very popular, for example, in early models of binary population synthesis, e.g. [86]. Due to gradual progress in our understanding of initial parameters of NSs such simplified approaches were replaced by more advanced ones. If \(P_{0}\ll P\) and the field is constant then the real age of a pulsar is close to the characteristic age \(\tau_{\rm ch}\). Population synthesis studies and analysis of young NSs in SNRs with known ages indicate that typical initial periods of the majority of PSRs are of the order of 0.1 s [24; 85; 87]. Thus, for many standard PSRs with observed \(P\sim 1\) s the assumption of small initial period can be acceptable. Still, in many cases it does not work well and results e.g., in a significant discrepancy between the real age and \(\tau_{\rm ch}\). The simplest model of magneto-rotational evolution nearly excludes links between different subpopulations: a CCO cannot become an M7-like object, and a magnetar cannot appear later in its life as a standard radio pulsar. This feature leads to an interesting controversy. The sum of birthrates of different subpopulations is larger than the rate of CCSN [88] (note, in that paper the authors do not include CCOs in their calculations, with this subpopulation the problem is even more severe). Also, in the simplest model it is difficult to explain lack of magnetars with periods larger than \(\sim 10\) s, e.g. [89]. As well as the absence of descendants of CCOs which might be visible at ages \(\lesssim 10^{6}\) yrs when a SNR is already dissolved [90]. Thus, it necessary to consider more complicated evolutionary paths. ## 4 Double nature and non-standard evolution In this section I discuss two possible features of NSs evolution: magnetic field decay and fallback. Simplified tracks are shown in Fig. 2. Also I present several examples of sources which absolutely do not fit the simplified NS evolution but require either field decay, or fallback, or both. ### Magnetic field decay It is quite natural to expect that magnetic fields of NSs might decay in time. In addition to obvious general physics arguments that electric currents which are responsible for the magnetic field might decay due to finite conductivity in the crust (and maybe due to other processes, like transport of magnetic flux tubes from the core to the crust, e.g. [91]) there are observational arguments. The existence of low fields in old NSs was demonstrated e.g., by discovery of mPSRs [5]. Theory of magnetic field decay in NSs and related observational data were reviewed many times, see e.g. [64; 65] and references therein. 
Here I just briefly recall several basic features. Magnetic field can exist in the solid NS crust and/or in the liquid (and, most probably, superconducting) core. In the first case the magnetic field is produced by electric currents. In the second, the field is confined in magnetic flux tubes, as the core is expected to be a type-II superconductor, see e.g. [92]. Thus, the physics of the field evolution is very different in these two cases. The physics of the core is much less understood; partly because of that, often only field evolution in the crust is considered. The basics of crustal field evolution are perfectly described in [93]. Two main time scales can be defined: the Ohmic (\(\tau_{\rm Ohm}\)) and the Hall one (\(\tau_{\rm Hall}\)).

Figure 2: The same as Fig. 1, but with evolutionary tracks corresponding to field decay in magnetars and re-emerging fields in CCOs.

The Ohmic scale can be written as: \[\tau_{\rm Ohm}=\frac{4\pi\sigma L^{2}}{c^{2}}. \tag{3}\] Here \(\sigma\) is the conductivity and \(L\) is a length scale in the crust. The Ohmic time scale depends on the temperature of the crust, as electrons can scatter off phonons, and on the crustal composition (impurities). For usual field configurations, \(L\) is sufficiently large (\(\sim\) a few hundred meters in deeper layers, comparable to the crust thickness, see [93]) to make the time scale relatively long. In young, hot NSs \(\tau_{\rm Ohm}\) can be about \(10^{5}\) yrs, see e.g. Fig. 4 in [94]. In older, cold NSs this time scale is long, \(\sim 10^{9}\) yrs.

Rapid release of magnetic energy can proceed via the Hall cascade [95]. This is a non-dissipative process. But it reconfigures the field such that \(L\) decreases, so that the field can then decay faster. The time scale of this process is: \[\tau_{\rm Hall}=\frac{4\pi en_{e}L^{2}}{cB(t)}, \tag{4}\] where \(e\) is the elementary charge and \(n_{e}\) is the concentration of electrons. For fields \(\sim 10^{15}\) G it can be as small as \(\sim 100\) yrs, e.g. [32]. However, the value of \(\tau_{\rm Hall}\) is not well known, and it can be two orders of magnitude larger for the same field of \(10^{15}\) G. It is widely accepted that magnetar activity is related to the Hall cascade in the NS crust.

The evolution of magnetic fields in the core is described in a more complicated way (see a brief review in Sec. 3.2 of [65]). Recently, Gusakov, Kantor, and Ofengeim developed a new approach to calculate the field behaviour in superconducting cores [96; 97]. In particular, their results suggest that in the vicinity of the crust the time scale can be as short as \(\sim 100\) yrs [98]. This is intriguing, as it potentially gives an opportunity to explain magnetar activity in the framework of the core field evolution.

From the observational point of view there are many arguments in favour of decaying fields in young NSs of different types. In the first place, magnetar activity provides evidence for the field decay, as obviously the magnetic energy is released in bursts and it is responsible for the crust heating. The active lifetime of magnetars might be short, as follows from independent age estimates of these sources (SNR and kinematic ages, associations with clusters of young stars, etc.). However, bursts can also be produced by older NSs, but more rarely [99]. Thermal properties of NSs also provide arguments in favour of the field decay. E.g., Pons et al. [100] demonstrated that typically a high-B NS cannot have a low surface temperature due to additional heating related to the magnetic energy release in the crust. 
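As a quick sanity check of the orders of magnitude quoted above, Eqs. (2)-(4) can be evaluated directly. The minimal Python sketch below does this in CGS units; the stellar radius (10 km), crustal conductivity \(\sigma\), electron density \(n_{e}\), and length scale \(L\) are illustrative round numbers chosen for the example and are not values taken from the text.

```python
# Minimal evaluation of Eqs. (2)-(4) in CGS units.
# R_NS, sigma, n_e and L below are illustrative round numbers (assumed),
# chosen only to reproduce the orders of magnitude quoted in the text.
import math

R_NS  = 1.0e6     # NS radius, cm (10 km)
c     = 3.0e10    # speed of light, cm/s
e_chg = 4.8e-10   # elementary charge, esu
yr    = 3.15e7    # seconds per year

def E_mag(B):
    """Total magnetic energy budget, Eq. (2), in erg."""
    return (4.0 / 3.0) * math.pi * R_NS**3 * B**2 / (8.0 * math.pi)

def tau_ohm(sigma, L):
    """Ohmic decay time scale, Eq. (3), in years."""
    return 4.0 * math.pi * sigma * L**2 / c**2 / yr

def tau_hall(n_e, L, B):
    """Hall time scale, Eq. (4), in years."""
    return 4.0 * math.pi * e_chg * n_e * L**2 / (c * B) / yr

print(f"E_mag(B = 1e15 G)    ~ {E_mag(1e15):.1e} erg")                   # ~1.7e47 erg
print(f"tau_Ohm (hot crust)  ~ {tau_ohm(1e24, 2.5e4):.1e} yr")           # a few 1e5 yr
print(f"tau_Hall(B = 1e15 G) ~ {tau_hall(2.5e34, 2.5e4, 1e15):.1e} yr")  # ~1e2 yr
```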
Analysis of properties of high mass X-ray binaries (HMXBs) showed that distribution of magnetic fields of NSs in these systems is compatible with models of crustal field evolution [101]. Finally, even for normal PSRs some modeling favoured decaying field along their evolution, see e.g., [102] and references therein. A different conclusion was made in [103]. These authors constructed a modified model of so-called "pulsar current" [104; 105] and concluded that a significant fraction of normal pulsars experiences an episode of field decay with a time scale \(\sim 4\times 10^{5}\) yrs at ages \(\lesssim 10^{6}\) yrs. Later this decay might be terminated. This points to the Ohmic decay due to electron scattering off phonons as this type of decay disappears when a NS becomes sufficiently cold. This happens at ages \(\lesssim 10^{6}\) yrs even for low-mass objects. In the same framework anomalous braking indices of PSRs can be explained, too [106]. To summarize, magnetic field decay in NSs is now a standard ingredient of modeling their evolution. Different modes of decay with different time scales are possible. So, presently the situation is far from being clear. That is why observations of peculiar objects are important, and I discuss some of them in the following subsection. ### Werewolves and secret agents Till the beginning of this century it was possible to attribute each young NS to some well-defined category (PSR, AXP, SGR, CCO, M7, etc.). In 2002 the discovery of SGR-like bursts from an AXP has been announced [50]. That was the first, but not very prominent example of "double nature". Not so prominent because for some time is had been already suspected that AXPs and SGRs form the same family of objects - magnetars. May be, SGRs are slightly younger - and so, more active. But later on more pronounced examples of transition from one subpopulation to another were found. Here I give some examples. PSR 1846-0258 is observed only as an X-ray source - the radio beam is not pointing towards the Earth. The NS has \(B\sim 5\times 10^{13}\) G and the largest rotational energy losses \(\dot{E}_{\rm rot}\) among PSRs. A plerion and a SNR are observed around the source. The characteristic age is \(\sim 884\) yrs. The spin period is \(\sim 0.33\) s. So, it looked like a Crab-like PSR with an order of magnitude large field and an order of magnitude longer period. But in 2008 a magnetar-like activity was reported from this object [51; 107]. X-ray luminosity of the PSR significantly increased and it started to produce SGR-like bursts. This came out to be the first example, when a radio pulsar became a magnetar. Another example of pulsar \(\rightarrow\) magnetar transition is PSR 1622-4950. It has \(P=4.3\) s and a large period derivative corresponding to \(B\sim 3\times 10^{14}\) G. Since its discovery it has been suspected [108] that the source can be a magnetar in a quiescent state. Indeed, in 2017 the source re-activated [109]. Its X-ray luminosity significantly increased, however, no bursts were detected. Above I already described a very peculiar source in the SNR RCW 103 which initially looked like an atypical CCO, but then appeared to be an active magnetar. Its activity continued in 2016 with an outburst [110] which then decayed following the general scenario of crustal magnetic energy release. PSR J1852+0040 in Kes 79 also was presented above. It is a candidate to "hidden" magnetars, i.e. 
a magnetar covered during a fallback episode such that only its crustal (but not magnetospheric) activity can be visible. This NS has a spin period \(\sim 0.1\) s [111]. If we assume that the magnetar scale field was buried by fallback, then the initial episode has been very rapid so that the NS had no time to increase the spin period due to interaction with the fallback flow (as it, most probably, also happened in RCW 103). Then, the spin period of PSR J1852+0040 could be "frozen". So, the present day value can be similar to the initial spin. Then, it means that magnetar formation does not necessary require rotation with a rate much smaller than 0.1 s [112]. It is tempting to say that a compact remnant of the SN 1987A also can be a "hidden" magnetar [40] as its progenitor was a product of a coalescence of two massive stars in a binary system [113]. This coalescence could enhance spin rate and magnetic field of the stellar core of the progenitor. Discovery of more "frozen" magnetars can shed light on their initial rotation rate and so - on the mechanism of magnetar formation. After a fallback episode the field is expected to diffuse out on a time scale \(\sim 10^{3}\) - \(10^{5}\) yrs [114]. While the field is diffusing out, its external structure can be changed which can prevent appearance of radio pulsar emission [115]. Still, there is a possibility to observe a PSR on the stage of field re-emergence. Then, it might have a very non-standard track in the \(P-\dot{P}\) diagram. And such objects are known! The most famous example is PSR 1734-3333 [116]. This is a standard PSR, but its period derivative is rapidly increasing. The rate corresponds to the braking index \(\sim 1\), \(\dot{P}\propto P^{2-n}\) (the standard Eq. 1 corresponds to \(n=3\)). With \(P=1.17\) s and a large period derivative \(\dot{P}=2.28\times 10^{-12}\) s/s it can after some time - \(\sim 20\) - 30 kyr, - enter the region of magnetars, see [116]. Finally, it is necessary to say few words about low-field magnetars which were also mentioned above. As proposed in [117] these sources can form a significant subpopulation of relatively old magnetars. These sources might demonstrate activity just very seldom and have low quiescent luminosities. Their existence points to the possibility of quasistationary configuration of magnetic field with relatively low dipolar component. In the following subsection I discuss one of recently proposed stable field configurations - the Hall attractor. ### Fallback and Hall attractor Examples of peculiar sources described above suggest that NS field evolution can follow non-standard routes. In this subsection we consider two important features of such evolution: fallback and Hall attractor. The idea that some fraction of matter ejected after a bounce in a SN explosion can later fallback onto the NS was proposed in early 1970s (see a brief historical review in the introductory section of [118]). In 1995 Muslimov and Page [119] suggested that fallback can significantly influence external magnetic field of NSs and delay switch-on of radio pulsar emission mechanism. The scenario with magnetic field submergence due to fallback became popular when it was applied to CCOs by Ho [114] and then by Vigano and Pons [120]. These authors demonstrated that for a realistic fallback amount (\(\Delta M\sim 10^{-6}-10^{-4}\)\(M_{\odot}\)) magnetic field can be significantly submerged and diffuses out on the time scale \(\sim 10^{3}-10^{4}\) yrs. 
This perfectly fits properties of CCOs and explains why "evolved CCOs" are not observed as purely thermal emitters with \(P\lesssim 1\) s, e.g. [90]. Bernal et al. presented 2D and 3D simulations of magnetic field submergence due to fallback [38]. They modeled dynamics of interaction between falling matter and magnetic field on the scale \(\lesssim 100\) msec for different fallback rates. The authors show that for rates \(\dot{M}\gtrsim 10\)\(M_{\odot}\) yr\({}^{-1}\) the total submergence happens, for lower rates the field is submerged just partially. Such rates are realistic at early stages of fallback, so in some fraction of young NSs the external magnetic field can be smaller than the crustal field by several orders of magnitude. Fallback can be prevented by activity of the central source. This possibility has been neglected in e.g. [38], but later on it was studied by the Japanese group [121; 122]. These authors attribute diversity of young NSs mainly to different amount of fallback. In particular, in [121] they define criteria according to which in a simplified 1D model for a given fallback rate a NS becomes a CCO, a PSR, or a magnetar depending on its spin and magnetic field. In [122] the same situation was studied in more details with a numerical approach, but again in the 1D approximation and without accounting for instabilities. Advanced fallback calculations are performed in a framework of SN explosion modeling [123]. A specific feature of this modeling is motion of the NS relative to the ejecta. It is shown, that this results in spin-up of a newborn NS due to fallback and in spin-velocity alignment. In this framework a NS spin in mostly determined by the fallback (and not by the spin rate of the progenitor). Influence of fallback on the spin of a newborn NS is studied also in [63]. In this scenario a NS formed from a slowly rotating progenitor star can be spun-up so significantly that conditions necessary for a magnetar formation are fulfilled. Thus, in this model fallback also produces rapidly rotating compact objects. An opposite situation is also possible in other scenarios. Interaction between a fallback disc and magnetic field of a magnetar can result in significant spin-down of the NS. This possibility was recently analysed in [44]. For a wide range of realistic fallback rates the disc can penetrate within the light cylinder. Thus, the NS enters the propeller stage of magneto-rotational evolution. At this stage a compact object can spin-down rapidly. Periods \(\sim 10^{2}-10^{4}\) s can be easily reached even within a lifetime of a SNR (\(\lesssim 10^{5}\) yrs). This scenario is applicable for recently discovered long-period pulsars, discussed in the following section. Long spin periods also can be reached if magnetic field remains large for a long time. This is possible if the Hall cascade in a magnetar crust is terminated or at least significantly slowed down. Such situation has been found numerically and the stage was named "the Hall attractor" [124; 125]. Later, it was confirmed in [126; 127]. In the original paper [125] the authors obtained that the attractor is reached in \(\lesssim 1\) Myr for initial fields \(\sim 10^{14}\) G. For larger fields it is reached faster. At the attractor stage the dipolar field is about \(\exp(-3)\times B_{0}\), where \(B_{0}\) is the initial field. Details significantly depend on the model, in particular - on the initial conditions (see a review of magnetic field evolution in NSs in [65]). 
The bottom line is the following: the rapid initial Hall evolution of large magnetic fields can be significantly slowed, which potentially allows the existence of NSs with relatively large fields at ages of at least \(\sim 1\) Myr (this is also important for the explanation of magnetar candidates in accreting binary systems [128]). In this case such objects can reach relatively large spin periods just due to standard losses. This can help to explain some sources discussed in the next section.

## 5 New puzzle - new tracks

Recent discoveries of long spin period pulsars demand new non-trivial evolutionary tracks in comparison with those shown in Fig. 2. MeerKAT observations allowed Caleb et al. to discover a radio pulsar, PSR J0901-4046, with a record-long spin period of 76 s [129]. With \(\dot{P}=2.25\times 10^{-13}\) s s\({}^{-1}\) the source has a characteristic age of 5.3 Myr. The magneto-dipole field estimate provides the value \(1.3\times 10^{14}\) G. In the standard scenario of magneto-rotational evolution such a combination of parameters is impossible due to short initial spin periods and rapid decay of large magnetic fields.

GLEAM-X J162759.5-523504.3 is even more exotic, with a radio pulsation period of \(\sim 18\) minutes [130]. This object was discovered with the help of the Murchison Widefield Array. The period derivative has not been measured yet, which prevents a robust determination of the source's nature. Still, most probably it is a NS and not a WD, see the discussion and references in [131]. Moreover, it might be a magnetar, as the upper limit \(\dot{P}\lesssim 10^{-9}\) s s\({}^{-1}\) implies that the observed luminosity is larger than the rotational energy losses. Thus, an additional source of energy is necessary, and it can be the magnetic energy of the magnetar. If \(\dot{P}\) of GLEAM-X J162759.5-523504.3 is close to the upper limit, then the dipolar field is \(\sim 3\times 10^{16}\) G. Such values have never been observed. The characteristic age for such a field is \(\tau_{\rm ch}\sim 10^{4}\) yrs, which is significantly longer than the expected time scale of the Hall cascade for such huge fields. If \(\dot{P}\sim 10^{-12}\) s s\({}^{-1}\), then the field is \(\sim 10^{15}\) G and \(\tau_{\rm ch}\gtrsim 10^{7}\) yrs. Again, such a combination is not part of the standard scenario of NS evolution. Possible solutions are related to the physical processes discussed in the previous section. Either the real ages of both objects are much smaller than their characteristic ages due to large initial spin periods, or the dipolar magnetic field in both cases could survive for a very long time, much longer than the initial time scale of the Hall cascade.

Figure 3: The same as Fig. 2, but with recently discovered long period pulsars (the star symbol – PSR J0901-4046; the arrow corresponds to the spin period and the upper limit on the period derivative of GLEAM-X J162759.5-523504.3) and tracks which can illustrate their evolution (tracks with large initial spin periods), see text for details. The initial short-dashed part of the tracks pointing towards PSR J0901-4046 and GLEAM-X J162759.5-523504.3 corresponds to a rapid spin-down from a short period just after the core collapse to longer periods, e.g. due to interaction with the fallback disc (of course, this spin-down does not proceed with a constant period derivative, so do not take this part of the tracks literally). For PSR J0901-4046 two variants of the further evolution are shown: standard field decay and stalled decay. 
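The numbers quoted in this section are straightforward to reproduce. The sketch below computes the characteristic age \(\tau_{\rm ch}=P/(2\dot{P})\) and the conventional magneto-dipole field estimate \(B\simeq 3.2\times 10^{19}\sqrt{P\dot{P}}\) G, and also asks how long a constant dipole field would have to be sustained for ordinary magneto-dipole braking to reach a 76-s period. The \(3.2\times 10^{19}\) normalization (moment of inertia \(10^{45}\) g cm\({}^{2}\), radius 10 km, orthogonal rotator) is an assumption made here and is not spelled out in the text.

```python
# Characteristic age and dipole-field estimates for the long-period pulsars
# discussed above, plus the time needed to spin down to period P under a
# constant field, obtained by integrating P dP = (B / 3.2e19 G)^2 dt from P0.
# The 3.2e19 normalization is an assumption (I = 1e45 g cm^2, R = 10 km).
import math

yr = 3.15e7  # seconds per year

def tau_ch(P, Pdot):
    """Characteristic age P / (2 Pdot), in years."""
    return P / (2.0 * Pdot) / yr

def B_dip(P, Pdot):
    """Conventional magneto-dipole field estimate, in Gauss."""
    return 3.2e19 * math.sqrt(P * Pdot)

def t_spin_down(P, B, P0=0.1):
    """Years needed to reach period P [s] with a constant dipole field B [G]."""
    return (P**2 - P0**2) / (2.0 * (B / 3.2e19)**2) / yr

# PSR J0901-4046: P = 76 s, Pdot = 2.25e-13 s/s  ->  ~5.3 Myr, ~1.3e14 G
print(tau_ch(76.0, 2.25e-13), B_dip(76.0, 2.25e-13))

# GLEAM-X J162759.5-523504.3: P ~ 18 min, Pdot between ~1e-12 s/s
# and the ~1e-9 s/s upper limit
P_g = 18.0 * 60.0
print(tau_ch(P_g, 1e-9),  B_dip(P_g, 1e-9))    # ~1.7e4 yr, ~3e16 G
print(tau_ch(P_g, 1e-12), B_dip(P_g, 1e-12))   # ~1.7e7 yr, ~1e15 G

# A field of ~1e15 G sustained for ~1e5 yr is enough to reach 76 s,
# while a standard 1e14 G field would need ~1e7 yr.
for B in (1e14, 1e15):
    print(f"B = {B:.0e} G : t(76 s) ~ {t_spin_down(76.0, B):.1e} yr")
```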
Note, that above we discussed only scenarios involving single stars. Evolution in a binary system can open an additional channel of producing long spin periods of NSs. If a NS in a HMXB system rapidly starts to accrete [132], or at least reaches the propeller stage, then its period can be rapidly increased up to hundreds or thousand of seconds in case of large magnetic fields and accretion from a stellar wind, see a catalogue of HMXBs in [133]. If this happens close to the moment of explosion of the secondary component then we can expect a "birth" of an isolated NS with a large spin period. NSs which can rapidly reach long spin periods (and which, probably, save large value of their dipolar magnetic fields for a long time) can be of special interest for a long-term evolution of NSs. I discuss it in the following section. ## 6 Towards accretion from the ISM Already more than 50 years ago it was suggested that isolated NSs sooner or later can start to accrete gas from the interstellar medium (ISM) [134; 135]. More than 30 years ago it has been proposed that e.g., ROSAT could detect thousands of accreting isolated NSs (AINSs) [136], but none were found. This was explained in [137] as an evolutionary effect: most of isolated NSs under the standard assumptions cannot reach the stage of accretion during lifetime of the Galaxy. In addition, the rate of accretion onto surface of a NS can be much lower than the standard Bondi value \(\dot{M}\propto\eta\frac{(GM)^{2}}{v^{3}}\rho\sim 10^{11}\left(\frac{10\,\mathrm{ km\,s^{-1}}}{v}\right)^{3}\!\left(\frac{\rho}{10^{-24}\,\mathrm{g\,cm^{-3}}} \right)\mathrm{g\,s^{-1}}\) due to magnetic inhibition [138]. In the formula \(v\) is the NS velocity relative to the ISM and \(\rho\) is the ISM density, the coefficient \(\eta\sim 10\) depends on details of accretion flow around the NS. Velocity distribution is an important ingredient of isolated NS evolution modeling as interaction of a compact object with the interstellar medium strongly depends on this parameter. Also, the initial velocity distribution determines spatial distribution of NSs in the Galaxy, see e.g. [139]. Already early observations demonstrated that NSs can have spatial velocities significantly large than their progenitors [140]. It is assumed that NSs obtain an additional velocity at birth (so-called "kick"). The origin of kick, shape of the velocity distribution, and possible correlations of the kick velocity with other parameters are not completely understood, yet. During last \(\lesssim 50\) yrs many attempt were made to derive the kick velocity distribution from observations or to obtain it from theoretical considerations, e.g. SN explosion models (see a brief review and references to early studies in the introductory part of [141]). It is quite popular to use bimodal velocity distributions as they fit better various data on radio pulsars and X-ray binaries (especially those with a Be-star donor). Recently, in [142] the authors presented a new analysis where they investigated properties of radio pulsars and HMXBs. Their best fit is a bimodal distribution with \(\sigma_{1}\sim\) 30-70 km s\({}^{-1}\) and \(\sigma_{2}=336\) km s\({}^{-1}\), where 10-30% of NSs come from the low velocity component. If isolated, such NSs can become potentially observable accreting sources within the Galactic life time. NSs with larger magnetic fields can start to accrete faster. This was studied in detail in [143]. The problem of low accretion luminosity can be solved in the settling accretion scenario [144]. 
In this framework an accreting isolated NS can be observed as a relatively bright (e.g., for eROSITA) transient source [145]. But still, the number of isolated accretors is not expected to be very high which makes their searches problematic. Discovery of isolated accreting NSs is very much welcomed as it can open a unique possibility to study old isolated NSs and to learn a lot about their properties and evolution. eROSITA could be a perfect instrument to reach this goal [84]. On other hand, it is important to provide better estimates of the number of accreting isolated NSs and their properties in order to simplify identification of these objects. In the standard approach [137] the main obstacle on the way to accretion is related to relatively slow spin-down of an isolated NS with a standard magnetic field \(\sim 10^{12}\,\)G. Recently discovered young very long period NSs are good candidates to reach the stage of accretion in a relatively short time. Thus, estimation of the number of such objects is of great interest for long term NS evolution. In general, all NSs with long spin periods, large long lived magnetic fields, and/or low spatial velocities (e.g., those born in e\({}^{-}\)-capture SN) have good chances to reach the stage of accretion from the ISM. ## 7 Magnetars and FRBs Fast radio bursts (FRBs) are millisecond-scale radio transients discovered in 2007 [146], see a recent comprehensive review in [147]. A possible link to NSs, in particular - to magnetars, - was proposed already in 2007 [148]. In 2020 it was confirmed by detection of simultaneous radio and high-energy flares from the Galactic magnetar SGR 1935+2154 [149; 150; 151; 152; 153; 154]. The number of known sources of FRBs is rapidly growing and now it is about \(\sim 10^{3}\). About 50 of the known sources demonstrate repeating activity [155], four of the repeaters show very high rate of events producing up to several hundred bursts per hour [156]. In near future FRBs might become the most numerous known sources related to NSs, what is also important - they are extragalactic up to \(z\gtrsim 1\). Thus, they will be one of the main sources of information about the universal population of NSs [157]. NSs producing FRBs can have different peculiar properties and origin. In the first place, it is expected that FRB sources are extreme magnetars with large fields producing hyperflares with total energy release \(\sim 10^{44}\) erg and peak luminosities \(\sim 10^{47}\) erg s\({}^{-1}\), which correspond to a millisecond radio burst with \(L\sim 10^{43}\) erg s\({}^{-1}\) with a ratio \(\frac{L_{\rm radio}}{L_{\rm total}}\sim 10^{-4}\) (see e.g. [153]). Four of the most active FRB sources demonstrate such a huge rate of flares (hundreds per hour) that from the energetic point of view such behavior cannot last longer than few years, as the whole magnetic energy \(\gtrsim 10^{47}\left(\frac{B}{10^{15}\,{\rm G}}\right)^{2}\) erg, see Eq. (2), would be emitted in this period [158]. Such intense outbursts are not observed among Galactic magnetars. Figure 4: The same as Fig. 3, but with addition of the region of AINSs and illustration of corresponding evolutionary tracks. See text for details. Position of AINSs is added out of scale. Two of the repeating sources of FRBs demonstrate periodicity on the scale \(\sim 16\)[159] and \(\sim 160\)[160] days. The origin of this periodicity is unknown. Among the proposed hypotheses there are the following: binarity [161; 162], NS precession [163; 164], and extra-long spin periods [165]. 
All of these opportunities are very intriguing as we do not know robust examples of active magnetars in binary system (see a review in [166]), we have just a few unconfirmed candidates for precessing magnetars (see e.g., [167] and references therein), and we do not know any examples of so long spin periods of NSs. Emission mechanism of FRBs is not figured out, yet. Presently, two main frameworks are discussed: magnetospheric emission and external relativistic shocks, see reviews in [168; 169]. Advanced theoretical scenarios are proposed in large number for both families of models. Growing variety of observational data (including polarization measurements, burst structure, spectra and their evolution during bursts) on the one hand poses many questions, and on the other hand - provides lots of opportunities to test model predictions. Probably, observations of simultaneous radio and X/\(\gamma\)-ray flares from Galactic magnetars will help to select the correct approach. Understanding of the origin of FRB radiation might shed light on important properties related to NS emission properties, in general. The Galactic population of magnetars is consistent with an assumption that all these sources originated from core-collapse SN. Indeed, these NSs demonstrate clear correlation with young stellar populations and sometimes are situated inside standard SNRs, see e.g. [170] for a review. However, a magnetar (or a NS, in general) can be formed via several other channels. Mostly, they are related to coalescence of compact objects: NSs or/and WDs. FRB sources are identified in different types of host galaxies in various environment [171], including a source in a globular cluster [172]. Localisation of FRBs at sites of very low star formation points towards alternative evolutionary channels related to old stellar populations. Coalescence NS-NS, NS-WD, WD-WD altogether can produce NSs with a rate at most \(\sim 10^{-4}\) yrs\({}^{-1}\) per a Milky way-like galaxy (see references in e.g., [157]). Thus, the probability to find at least one active magnetar with such origins in our Galaxy is not high. Observations of FRBs allow us to study these sources, even in different epochs of cosmic history. Moreover, in near future new sensitive low-frequency radio telescopes might allow to observe FRBs from objects originated from Pop III stars! Understanding properties of sources of FRBs can bring us new surprises about NS physics and observational appearances. ## 8 Conclusions The field of NS astrophysics actively develops, in the first place thanks to discoveries of new peculiar sources (like long spin period pulsars) and types of sources (like FRBs). Phenomenology of NSs becomes richer and richer and this requires more advanced theoretical approaches. We see more and more evolutionary links between different beasts in the zoo of NSs. Understanding of this diverse population of sources is a fascinating task and we keep going on. **Funding:** SP acknowledges support from the Simons Foundation which made possible the visit to the ICTP. **Acknowledgments:** I am grateful to the Organizers of the 2nd International Electronic Conference on Universe (ECU 2023), and personally to Nicholas Chamel, for the invitation to present a talk about different types of isolated NSs which became the basis for the present review. **Conflicts of Interest:** The author declare no conflict of interest. ## Abbreviations The following abbreviations are used in this manuscript:
2302.10177
Analysis of the intra-night variability of BL Lacertae during its August 2020 flare
We present an analysis of the $BVRI$ photometry of the blazar BL Lacertae on diverse timescales from mid-July to mid-September 2020. We have used 11 different optical telescopes around the world and have collected data over 84 observational nights. The observations cover the onset of a new activity phase of BL Lacertae started in August 2020 (termed as the August 2020 flare by us), and the analysis is focused on the intra-night variability. On short-term timescales, (i) flux varied with ~2.2\,mag in $R$ band, (ii) the spectral index was found to be weakly dependent on the flux (i.e., the variations could be considered mildly chromatic) and (iii) no periodicity was detected. On intra-night timescales, BL Lacertae was found to show bluer-when-brighter chromatism predominantly. We also found two cases of significant inter-band time lags of the order of a few minutes. The duty cycle of the blazar during the August 2020 flare was estimated to be quite high (~90\% or higher). We decomposed the intra-night light curves into individual flares and determined their characteristics. On the basis of our analysis and assuming the turbulent jet model, we determined some characteristics of the emitting regions: Doppler factor, magnetic field strength, electron Lorentz factor, and radius. The radii determined were discussed in the framework of the Kolmogorov theory of turbulence. We also estimated the weighted mean structure function slope on intra-night timescales, related it to the slope of the power spectral density, and discussed it with regard to the origin of intra-night variability.
Aditi Agarwal, B. Mihov, Vipul Agrawal, S. Zola, Aykut Ozdonmez, Ergun Ege, L. Slavcheva-Mihova, D. E. Reichart, D. B. Caton, Avik Kumar Das
2023-02-15T04:02:50Z
http://arxiv.org/abs/2302.10177v2
# Analysis of the intra-night variability of BL Lacertae during its August 2020 flare ###### Abstract We present an analysis of the \(BVRI\) photometry of the blazar BL Lacertae on diverse timescales from mid-July to mid-September 2020. We have used 11 different optical telescopes around the world and have collected data over 84 observational nights. The observations cover the onset of a new activity phase of BL Lacertae started in August 2020 (termed as the August 2020 flare by us), and the analysis is focused on the intra-night variability. On short-term timescales, (i) flux varied with \(\sim\)2.2 mag in \(R\) band, (ii) the spectral index was found to be weakly dependent on the flux (i.e., the variations could be considered mildly chromatic) and (iii) no periodicity was detected. On intra-night timescales, BL Lacertae was found to show bluer-when-brighter chromatism predominantly. We also found two cases of significant inter-band time lags of the order of a few minutes. The duty cycle of the blazar during the August 2020 flare was estimated to be quite high (\(\sim\)90% or higher). We decomposed the intra-night light curves into individual flares and determined their characteristics. On the basis of our analysis and assuming the turbulent jet model, we determined some characteristics of the emitting regions: Doppler factor, magnetic field strength, electron Lorentz factor, and radius. The radii determined were discussed in the framework of the Kolmogorov theory of turbulence. We also estimated the weighted mean structure function slope on intra-night timescales, related it to the slope of the power spectral density, and discussed it with regard to the origin of intra-night variability. galaxies: general - galaxies: active - BL Lacertae objects: general - BL Lacertae objects: individual: BL Lacertae ## 1 Introduction Blazars are a subclass of radio-loud active galactic nuclei whose relativistic jets are closely aligned with the line of sight (Urry and Padovani, 1995). Blazars display peculiar characteristics across the entire electromagnetic spectrum, including non-thermal continuum emission variables on timescales ranging from a few minutes to years (e.g. Wagner and Witzel, 1995; Gupta et al., 2008; Mohan et al., 2015; Bhatta and Dhital, 2020; Agarwal et al., 2021), strong optical linear polarization, and superluminal motions (Lister et al., 2019). Blazars are divided into two categories, namely BL Lacertae objects (BL Lacs) and flat-spectrum radio quasars, based on their optical spectra and compact radio morphology. Flat-spectrum radio quasars show strong emission lines, while BL Lacs display very weak or no emission lines in their optical spectra. The observed spectral energy distribution (SED) of blazars shows two broad humps: the first one extends from \(10^{12}\) Hz to \(10^{17}\) Hz, while the second one is peaking between \(10^{21}\) Hz and \(10^{26}\) Hz (e.g. Abdo et al., 2010). The low-frequency hump is attributed to the synchrotron radiation of the relativistic electrons in the magnetic field of Doppler-boosted jets. 
On the other hand, the high-energy hump is generally associated with the inverse Compton scattering of the infrared/optical/ultraviolet photons by the jet electrons (Sikora et al., 2009). The seed photons for the inverse Compton scattering could be originating from the synchrotron emission within the jet, commonly known as synchrotron self-Compton (Bottcher et al., 2002), or from the external photon fields such as accretion disk, broad emission line region, and dusty torus and named as external Compton (Sikora et al., 1994). Blazars are further classified based on the location of their synchrotron peak as follows (Abdo et al., 2010): high synchrotron peaked (\(\nu_{\rm peak}\geq 10^{15}\,\rm Hz\)), intermediate synchrotron peaked (\(10^{14}\,\rm Hz\leq\nu_{peak}\leq 10^{15}\,\rm Hz\)), and low synchrotron peaked (\(\nu_{\rm peak}\leq 10^{14}\,\rm Hz\)). BL Lacertae is the prototype of the BL Lac class of blazars and has a redshift of \(z=0.0686\pm 0.0004\)(Vermeulen et al., 1995). It is classified as a low-synchrotron-peaked blazar (Nilsson et al., 2018). BL Lacertae has been of great interest for numerous intense multi-wavelength (MWL) campaigns (e.g. Villata et al., 2002, 2003; Bottcher et al., 2003; Raiteri et al., 2010; Wierzcholska et al., 2015; Agarwal et al., 2017; MAGIC Collaboration et al., 2019; Weaver et al., 2020; Jorstad et al., 2022; Kalita et al., 2023; Shablovinskaya et al., 2023); in particular, BL Lacertae is one of the favorite targets of the campaigns organized by the Whole Earth Blazar Telescope collaboration. More-than-century-long observations of BL Lacertae reveal intense variability on diverse timescales ranging from a few minutes (e.g. Villata et al., 2002; Gaur et al., 2015; Meng et al., 2017; Fang et al., 2022) to years (Carini et al., 1992; Villata et al., 2004, 2004, 2009; Raiteri et al., 2013). As an example of yearly variability, Carini et al. (1992) detected an erratic behavior of the source with a \(V\) band magnitude ranging from 14 to 16 over about 17 years of observations. BL Lacertae shows outbursts of a few magnitudes, which is typical for blazars; for example, Villata et al. (2004) reported a brightness excursion of about 3 mag in all bands during the 1997 outburst (see also Bachev, 2018). BL Lacertae generally shows a bluer-when-brighter (BWB) chromatism, whose strength was found to be related to the timescale considered: Villata et al. (2002) reported strongly BWB chromatic, fast flares on intra-night timescales and mildly chromatic variations on longer timescales (see also Villata et al., 2004; Bhatta and Webb, 2018; Gaur et al., 2019). The mildly chromatic component was explained as arising because of the Doppler factor change, while the strongly chromatic flares were assumed to be of synchrotron origin. Previous studies of BL Lacertae in optical bands show both the lack (e.g. Nesci et al., 1998; Li et al., 2021) and the presence of inter-band time lags, \(\tau\): Papadakis et al. (2003) found a time lag of \(\tau=13.8^{+11.4}_{-9.0}\) min between \(B\) and \(I\) bands (\(B\) band leads), Hu et al. (2006) found a lag of 11.6 min between \(e\) and \(m\) bands (\(e\) band leads), Meng et al. (2017) found a lag of 11.8 min between \(R\) and \(V\) bands (\(R\) band leads), and Fang et al. (2022) found a lag of \(\sim\)16 min between \(B\) and \(V\) bands (\(B\) band leads) and a lag of \(\sim\)18 min between \(B\) and \(R\) bands (\(B\) band leads). 
Therefore, the so-called soft lag - that is, the lower-frequency/softer energy emission variations are lagging - dominates the inter-band time lags observed in BL Lacertae. The Doppler factor, \(\delta\), is an important jet characteristic, and for BL Lacertae it was determined by a number of authors using various approaches. Jorstad et al. (2017) used the observed variability timescale and the angular size of the six moving knots, observed by the Very Long Baseline Array, to get Doppler factors of \(6.2\pm 1.5\), \(11.0\pm 5.6\), \(5.6\pm 3.3\), \(8.4\pm 1.7\), \(8.6\pm 2.6\), and \(7.1\pm 4.3\). Liodakis et al. (2017) and Liodakis et al. (2018) compared observed and intrinsic brightness temperatures and got the following variability Doppler factors \(6.1\pm 0.8\) and \(12.17^{+3.44}_{-2.81}\), respectively, while Chen (2018) used broadband SED to derive \(\delta=3.8\). Zhang et al. (2020) proposed a new method to estimate the Doppler factor for a source of known \(\gamma\)-rays and broad emission line luminosities; the authors got \(\delta=8.13\) for BL Lacertae. Ye and Fan (2021) used the relation between the core and extended radio luminosities to estimate \(\delta=14.22\) for a continuous jet and \(\delta=6.66\) for a moving blob; to get these values, the authors assumed a spectral index \(\alpha=0.5\) (\(F_{\nu}\propto\nu^{-\alpha}\), where \(F_{\nu}\) is the monochromatic flux density). Generally, the different methods result in different Doppler factors because of the different assumptions made. During the summer of 2020, a new phase1 of the BL Lacertae activity began, which continued throughout 2021. The source was reported as flaring during August 2020 in the optical (Grishina and Larionov, 2020; Jankowsky and Wagner, 2020; Steineke et al., 2020) and high-energy \(\gamma\)-rays (Cheung, 2020; Ojha and Valverd, 2020). The MAGIC system of Cherenkov telescopes detected very high energy \(\gamma\)-rays during the night of Aug 19 (Blanch, 2020); the next peak of the very high energy \(\gamma\)-rays was detected on Sep 19 (Blanch, 2020). A significant optical intra-night variability (INV) was also observed (Jankowsky and Wagner, 2020). Footnote 1: During this long-lasting activity phase BL Lacertae reached its historical maximum of \(R=11.271\pm 0.003\) mag at JD 2459426.4930 (Jul 30, 2021, Kunkel et al., 2021) In this paper, we report the results from our observations of BL Lacertae on intra-night timescales during its August 2020 flare; the mid-August to mid-September BL Lacertae activity will be termed by us as an August 2020 flare throughout the paper. In particular, we focus on the analysis of the individual intra-night light curves (INLCs) recorded in the course of our monitoring. The paper is organized as follows. In Section 2 we describe our observations and data reduction. In Section 3 the analysis techniques used by us are described in detail. In Section 4 we present the results obtained, and in Section 5 we discuss them. ## 2 Observations and Data Reductions To understand the source behavior in the optical regime, we carried out optical observations of BL Lacertae from July to September 2020 using 11 different optical telescopes around the globe over 84 observational nights and gathering \(\sim\)12 800 frames in \(BVRI\) bands. 
The telescopes used are as follows: 50 cm OAUJDK500 (Corrected Dall-Kirkham Astrograph, telescope A) of the Astronomical Observatory operated by the Jagiellonian University, Krakow, Poland; Kirkham astrograph telescope (KRK, telescope B) of the Jagiellonian University, Krakow, Poland; 40 cm PROMPTSUASK telescope of Sleaford Observatory (PASK, Telescope C); 60 cm Rapid Response Robotic Telescope (RRRT, telescope D) of the Fan Mountain Observatory, SUH (telescope E); 50/70 cm Schmidt telescope at the Rozhen National Astronomical Observatory, Bulgaria (telescope F, Kostov, 2010); 2.01 m RC Himalayan Chandra Telescope (HCT, telescope G) at Indian Astronomical Observatory, Hanle, India; 40 cm telescope of the Dark Sky Observatory (DSO, telescope H); 40 cm telescope of the Montana Learning Center (MLC-COS16, telescope I); 60 cm RC robotic telescope, Turkey (telescope J); and 1.0 m RC telescope, Turkey (telescope K). Telescopes F and G are described in Agarwal et al. (2019), and telescopes J and K are described in Agarwal et al. (2021). The technical details about the rest of the telescopes are given in Table 1. Telescopes A, C, D, H, and I work in the robotic mode under the Skynet Robotic Telescope Network software (Zola et al., 2021). The complete log of our observations is presented in Table 2. The data reduction procedure includes bias/dark subtraction, flat-fielding, and cosmic-ray treatment which was performed using the standard IRAF2 tasks. This was followed by the extraction of the instrumental magnitudes of the source and standard stars in the field using the Dominion Astronomical Observatory Photometry (DOPHOT II) software (Stetson, 1987, 1992). To perform differential photometry, we finally chose stars B and C from the source field3 that are in close proximity to the target and with magnitudes similar to the blazar. A more detailed data reduction procedure is discussed in Agarwal et al. (2019). Footnote 2: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation. Footnote 3: [https://www.lsw.uni-heidelberg.de/projects/extragalactic/charts/2200+420](https://www.lsw.uni-heidelberg.de/projects/extragalactic/charts/2200+420) To get the optimum aperture for each night, we performed aperture photometry for different radii: 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.5, and 3.0 times the full width at the half-maximum (FWHM) of the field stars. For background subtraction, we selected the sky annulus to approximately 5\(\times\)FWHM. We finally selected the aperture with the best signal-to-noise ratio and minimum standard deviation of the difference between instrumental magnitudes of standard stars. The above procedure was applied on all the \(BVRI\) frames, and the calibrated magnitudes of the source were derived. The calibrated \(BVRI\) magnitudes of the blazar were dereddened by subtracting the Galactic extinction values from the NASA/IPAC Extragalactic Database: \(A_{B}=0.43\) mag, \(A_{V}=0.54\) mag, \(A_{R}=0.64\) mag, and \(A_{I}=0.80\) mag. The flux from the nucleus of the source is contaminated by its elliptical host galaxy. Hence, to perform host galaxy subtraction, we converted extinction-corrected magnitudes to fluxes using the zero point values from Bessell et al. (1998). Thereafter using the measurements from Nilsson et al. (2007), we estimated the host galaxy emission in the \(R\) band. 
This \(R\) band value is further used to obtain the corresponding contributions for the \(BVI\) bands by using the galaxy colors (Fukugita et al., 1995) as \(B-V=0.96\) mag, \(V-R=0.61\) mag, and \(R-I=0.70\) mag.

Table 2: Log of photometric observations for the blazar BL Lacertae (date, telescope, and number of \(B\), \(V\), \(R\), and \(I\) data points for each night from 2020 July 13 to 2020 September 14).

## 3 Analysis Techniques

Having obtained the light curves (LCs) in flux units, we

1. combined the LCs in the case in which multi-telescope data are available and cleaned the combined LCs of the outliers if any; and
2. corrected the combined LCs for the smooth flux variation in the case in which the LCs show two variability components.

The corrected LCs were further

1. decomposed into individual flares; and
2. used to build the structure functions (SFs).

In addition, the corrected MWL LCs were

1. used to build the color-magnitude diagrams (CMDs); and
2. used to search for inter-band time lags.

Below we shall describe in detail the analysis techniques used in each of the above steps.

### Variability Detection and Amplitude

We quantified the flux variability of BL Lacertae using \(C\)-, \(F\)-, and \(\chi^{2}\)-tests and the percentage amplitude variation, \(A\). A brief introduction to these methods is given below.

#### 3.1.1 C-test

The most frequently used variability detection criterion is the \(C\)-test (Romero et al., 1999), which is defined as \[C_{1}=\frac{\sigma(\rm BL-S_{B})}{\sigma(\rm S_{B}-S_{C})},\quad C_{2}=\frac{\sigma(\rm BL-S_{C})}{\sigma(\rm S_{B}-S_{C})}, \tag{1}\] where BL\(-\)S\({}_{\rm B}\), BL\(-\)S\({}_{\rm C}\), and S\({}_{\rm B}-\)S\({}_{\rm C}\) are the differential instrumental LCs of the blazar (BL) against the standard star B (S\({}_{\rm B}\)), BL against the standard star C (S\({}_{\rm C}\)), and S\({}_{\rm B}\) against S\({}_{\rm C}\), respectively, while \(\sigma(\rm BL-S_{B})\), \(\sigma(\rm BL-S_{C})\), and \(\sigma(\rm S_{B}-S_{C})\) are the standard deviations of the respective LCs. If \(C\geq 2.576\), then we marked the LC as a variable at a confidence level of 99.5% or greater; otherwise, we call it a non-variable (here \(C\) is a mean over \(C_{1}\) and \(C_{2}\)). 
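For concreteness, the sketch below shows how the \(C\)-test of Eq. (1) can be applied to a night of differential photometry. The array names and the toy light curves are illustrative, and the 2.576 threshold is the 99.5% confidence cut adopted above.

```python
# Minimal sketch of the C-test, Eq. (1): compare the scatter of the
# blazar-minus-star differential light curves with that of the
# star-minus-star light curve.  The input arrays are illustrative.
import numpy as np

def c_test(blazar, star_B, star_C):
    """Return (C1, C2, mean C) for instrumental-magnitude light curves."""
    sigma_BC = np.std(star_B - star_C, ddof=1)
    C1 = np.std(blazar - star_B, ddof=1) / sigma_BC
    C2 = np.std(blazar - star_C, ddof=1) / sigma_BC
    return C1, C2, 0.5 * (C1 + C2)

# toy example: a non-variable night with 0.01-mag photometric scatter
rng = np.random.default_rng(1)
n = 200
star_B = 13.0 + rng.normal(0.0, 0.01, n)
star_C = 13.2 + rng.normal(0.0, 0.01, n)
blazar = 13.5 + rng.normal(0.0, 0.01, n)

C1, C2, C = c_test(blazar, star_B, star_C)
print(f"C = {C:.2f} -> {'variable' if C >= 2.576 else 'non-variable'}")
```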
As pointed out by Zibecchi et al. (2017), through their study of INV in active galactic nuclei using various statistical methods, the \(C\)-test could be considered a suitable test to detect variability with more reliable results as compared to the \(F\)-test.

#### 3.1.2 F-test

The \(F\)-test (Zibecchi et al., 2017) is a powerful tool to quantify variability at diverse timescales and is defined as \[F_{1}=\frac{\sigma^{2}(\mathrm{BL-S_{B}})}{\sigma^{2}(\mathrm{S_{B}-S_{C}})},\quad F_{2}=\frac{\sigma^{2}(\mathrm{BL-S_{C}})}{\sigma^{2}(\mathrm{S_{B}-S_{C}})}, \tag{2}\] where \(\mathrm{BL-S_{B}}\), \(\mathrm{BL-S_{C}}\), and \(\mathrm{S_{B}-S_{C}}\) are the differential instrumental LCs of \(\mathrm{BL}\) against \(\mathrm{S_{B}}\), \(\mathrm{BL}\) against \(\mathrm{S_{C}}\), and \(\mathrm{S_{B}}\) against \(\mathrm{S_{C}}\), respectively, while \(\sigma^{2}(\mathrm{BL-S_{B}})\), \(\sigma^{2}(\mathrm{BL-S_{C}})\), and \(\sigma^{2}(\mathrm{S_{B}-S_{C}})\) are the variances of the respective LCs. Averaging \(F_{1}\) and \(F_{2}\) gives the mean observational \(F\) value, which is then compared with the critical value, \(F_{\mathrm{c}}=F_{\nu_{\mathrm{BL}},\nu_{\mathrm{S}}}^{(\alpha)}\), where \(\nu_{\mathrm{BL}}\) and \(\nu_{\mathrm{S}}\) give the number of degrees of freedom for the blazar and star LCs, respectively, estimated as the number of measurements, \(N_{\mathrm{data}}\), minus 1 (\(\nu=N_{\mathrm{data}}-1\)). The significance level, \(\alpha\), is set as 0.1% and 1% (i.e. 3\(\sigma\) and 2.6\(\sigma\)) for this work. If the mean \(F\) value is larger than the critical value, the null hypothesis (i.e. no variability) is rejected and the LC is marked as variable.

#### 3.1.3 \(\chi^{2}\)-test

Further, to detect genuine variability in our source, we also used the \(\chi^{2}\)-test, which is defined as: \[\chi^{2}=\sum_{i=1}^{N}\frac{(V_{i}-\overline{V})^{2}}{e_{i}^{2}}, \tag{3}\] where \(\overline{V}\) is the mean magnitude and \(V_{i}\) the magnitude corresponding to the \(i\)-th observation with a respective uncertainty \(e_{i}\). Estimating the exact values of the uncertainties is unattainable in the IRAF package used for data reduction; the theoretical uncertainties have been found to be smaller by a factor of 1.3-1.75 (Gopal-Krishna et al., 2003). For our data, the factor is around 1.6, on average. Therefore, for a better estimate of the photometric uncertainties, we should multiply the uncertainties obtained from the data analysis by the above factor. The obtained \(\chi^{2}\) value is then compared with a critical value \(\chi^{2}_{\alpha,\nu}\), where \(\alpha\) is the significance level and \(\nu=N_{\mathrm{data}}-1\) is the number of degrees of freedom. When \(\chi^{2}>\chi^{2}_{\alpha,\nu}\), the presence of variability is indicated. Depending on the sampling of the individual INLCs and on the monitoring duration, it could happen that the blazar has both variable and non-variable status for one and the same night; the nights of Sep 2 and Sep 7 are examples in this context. 
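Before describing how such conflicting verdicts are resolved, here is a compact sketch of the \(F\)- and \(\chi^{2}\)-tests just defined. The toy light curves, array names, and significance level are illustrative; the 1.6 error-rescaling factor is the one quoted above, and scipy.stats is used only to obtain the critical values \(F_{\mathrm{c}}\) and \(\chi^{2}_{\alpha,\nu}\).

```python
# Sketch of the F-test (Eq. 2) and chi^2-test (Eq. 3).  The light curves and
# the significance level are illustrative; the error-rescaling factor 1.6 is
# the mean value quoted in the text.
import numpy as np
from scipy import stats

def f_test(blazar, star_B, star_C, alpha=0.001):
    var_BC = np.var(star_B - star_C, ddof=1)
    F1 = np.var(blazar - star_B, ddof=1) / var_BC
    F2 = np.var(blazar - star_C, ddof=1) / var_BC
    F  = 0.5 * (F1 + F2)
    F_crit = stats.f.ppf(1.0 - alpha, len(blazar) - 1, len(star_B) - 1)
    return F, F_crit, F > F_crit

def chi2_test(mag, err, alpha=0.001, err_factor=1.6):
    e = err_factor * err                          # rescaled photometric uncertainties
    chi2 = np.sum((mag - mag.mean())**2 / e**2)
    chi2_crit = stats.chi2.ppf(1.0 - alpha, len(mag) - 1)
    return chi2, chi2_crit, chi2 > chi2_crit

# toy night: a variable blazar against two constant comparison stars
rng = np.random.default_rng(2)
n = 200
star_B = 13.0 + rng.normal(0.0, 0.01, n)
star_C = 13.2 + rng.normal(0.0, 0.01, n)
blazar = 13.5 + 0.08 * np.sin(np.linspace(0.0, 3.0, n)) + rng.normal(0.0, 0.01, n)
err    = np.full(n, 0.01)

print(f_test(blazar, star_B, star_C))
print(chi2_test(blazar, err))
```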
In such cases, we adopted the status obtained by testing the better LCs in terms of sampling and/or duration. #### 3.1.4 Percentage amplitude variation To estimate the percentage amplitude change in our LCs, we calculated the variability amplitude parameter \(A\)(Heidt & Wagner, 1996): \[A=100\times\sqrt{(m_{\mathrm{max}}-m_{\mathrm{min}})^{2}-2\langle e^{2}\rangle} \ [\%], \tag{4}\] where \(m_{\mathrm{max}}\) and \(m_{\mathrm{min}}\) are the maximum and minimum magnitudes attained by the blazar and \(\langle e^{2}\rangle\) the mean squared uncertainty of the measurements. ### Combination of the Light Curves The INLCs obtained with two or more telescopes were combined in order to get a single LC for the given night and band. If the individual LCs have overlapping parts, then, before the combination, the LCs were adjusted such that (i) a single band LC was adjusted to match the corresponding LC from a MWL data set (in order to avoid the systematic uncertainties when the LCs are used to build CMDs) and (ii) a poorly sampled LC was adjusted to match the densely sampled one (if it does not contradict the first condition). Technically, the adjustment was made as follows: we interpolated the first LC over the second one in their overlapping parts, computed the median offset and its standard uncertainty, and applied the so-obtained offset according to the above conditions. If the LCs have no overlapping parts, then the LCs were combined without adjustment. Finally, the observations consisting of a few data points per band were (adjusted and) combined with the so-built composite INLCs (an exception were the telescope J data, see below). During the combination of the LCs, a few outlying measurements were identified and cleaned. The so-combined INLCs were merged with the rest of the data to build the short-term4 variability LCs (STLCs) of \(\mathrm{BL}\) Lacertae for each band. To these STLCs, the telescope J STLCs were adjusted (actually, the adjustment was needed only for the \(BV\) bands) and combined. Footnote 4: Variability on timescales from days to weeks/months is usually termed as the short-term variability (STV, Singh & Meintjes, 2020). We are interested in the analysis of the INV, and so the above procedure is optimized for the accurate combination of the individual INLCs, but not for the STLCs of the individual telescopes. This would result in increased night-to-night scatter in the STLCs, but this is not an issue for the presented research. ### Correction for the Smooth Flux Variation Generally, the INLCs obtained in the course of our study could be described as flares superimposed onto a smooth flux variation; that is, the LCs show two variable components. The flare timescales are much shorter than the smooth component timescale. The latter timescale is usually longer than several hours, which is longer than the typical duration of a single-telescope intra-night monitoring session. We are interested in the analysis of the flaring activity of BL Lacertae and so a correction has to be done in order to minimize the contribution of the smooth variability component. For example, to make flares more evident, Ghisellini et al. (1997) divided their LC by a curve interpolated through the local minima of the same LC. The correction of the LCs for the smooth flux variation (or detrending for short) was done following an approach closely related to that of Villata et al. (2004a); see also Xiong et al. (2020) and Raiteri et al. (2021). 
Firstly, we selected the regions of the LC that are free of flares - they were assumed to be related to the smooth component we want to correct for. Secondly, we fitted to these regions a low-degree polynomial. For more complicated LCs, the fitting was done by splitting the LC into segments and fitting a polynomial to each segment. The polynomials could be of different degrees for different segments, or, for some of the segments, the polynomial could be replaced by another fitting function (e.g. cubic spline or Gaussian). Upon completion of the fit, care was taken to ensure the individual fitting functions were joined smoothly. If MWL data are available for a given night, then the fitted regions and the fitting functions are one and the same for all bands. Finally, we rescaled each data point of the LC by dividing the corresponding flux value by the scaling factor \(C_{k}(t)=F_{k,\mathrm{fit}}(t)/F_{k,\mathrm{min}}\) (here \(k\) represents the \(BVRI\) bands), which is the ratio between the value of the (composite) fitting function at the corresponding time and the fitting function minimum value. That minimum value served as the base level in the LC decompositions. ### Decomposition of the INLCs The INLCs that show flaring activity were decomposed using the following double exponential function (DE, Abdo et al., 2010): \[F(\Delta t)=F_{\mathrm{base}}+\\ F_{0}\left[\exp\left(\frac{\Delta t_{0}-\Delta t}{\mathcal{T}_{ \mathrm{r}}}\right)+\exp\left(\frac{\Delta t-\Delta t_{0}}{\mathcal{T}_{ \mathrm{d}}}\right)\right]^{-1}, \tag{5}\] where \(F_{\mathrm{base}}\) is the constant base level, \(F_{0}\) twice the flare amplitude (with respect to the base level), \(\Delta t_{0}\) the approximate position in the time of the flare peak, and \(\{\mathcal{T}_{\mathrm{r}},\mathcal{T}_{\mathrm{d}}\}\) the rise and decay timescales. If the LC has been detrended, then the base level was set to the minimal value of the function, used to fit the smooth component, and was held fixed during the decomposition. If no detrending has been done, then the base level is left free (we, however, have no such LCs). The time variable, \(\Delta t=t-t_{0}\), we used represents the time since the earliest observation (taken at \(t_{0}\)) among the available data sets for the given night; the JD of the earliest observation is indicated in the LC plots. The characteristics of the DE function can be summarized as follows. The actual position in the time of the flare maximum is \[\Delta t_{\mathrm{max}}=\Delta t_{0}+\frac{\mathcal{T}_{\mathrm{r}}\mathcal{T }_{\mathrm{d}}}{\mathcal{T}_{\mathrm{r}}+\mathcal{T}_{\mathrm{d}}}\ln\left( \frac{\mathcal{T}_{\mathrm{d}}}{\mathcal{T}_{\mathrm{r}}}\right) \tag{6}\] and it is equal to \(\Delta t_{0}\) in the case of symmetric flares, \(\mathcal{T}_{\mathrm{r}}=\mathcal{T}_{\mathrm{d}}\). An estimate of the total duration of the flare could be found as \(\Delta\mathcal{T}\simeq 2\left(\mathcal{T}_{\mathrm{r}}+\mathcal{T}_{\mathrm{d}}\right)\). The asymmetry parameter is defined as \[\xi=\frac{\mathcal{T}_{\mathrm{d}}-\mathcal{T}_{\mathrm{r}}}{\mathcal{T}_{ \mathrm{d}}+\mathcal{T}_{\mathrm{r}}}\qquad\begin{cases}\xi\in[-1,1];\\ \xi=0\implies\text{symmetric flare}.\end{cases} \tag{7}\] Finally, the doubling and halving timescales are equal to \(\ln(2)\mathcal{T}_{\mathrm{r}}\) and \(\ln(2)\mathcal{T}_{\mathrm{d}}\), respectively (Albert et al., 2007). ### Structure Function The SF was introduced by Simonetti et al. 
(1985) and is particularly useful for analyzing unevenly sampled astronomical data (e.g. Bhatta and Webb, 2018). Various aspects of the SF application are thoroughly discussed by Emmanoulopoulos et al. (2010) and Kozlowski (2016). For a time separation \(\delta t\) and a bin of size \(\mathrm{d}t\), we calculated the first-order SF as
\[D^{1}(\delta t,\mathrm{d}t)=\frac{1}{N(\delta t,\mathrm{d}t)}\sum_{i>j}[F(t_{i})-F(t_{j})]^{2}, \tag{8}\]
where \(N(\delta t,\mathrm{d}t)\) is the number of pairs \((t_{i},t_{j})\) for which \(\delta t<t_{i}-t_{j}<\delta t+\mathrm{d}t\). The choice of bin size depends on the LC sampling. The uncertainties of the SF were calculated simply as the standard uncertainty of the mean in the bins (see Sergison et al., 2020, for a discussion of the SF uncertainties). The value of \(\delta t\) in each bin was set to the middle of the bin.

Ideally, the SF has two plateaus connected by a curve, whose slope depends on the nature of the observed flux variation (shot noise, flicker noise, etc.; see Hughes et al., 1992; Sergison et al., 2020). Let us assume that the LC can be represented by the sum \((s+n)\), where \(s\) is the signal and \(n\) is the noise, both having Gaussian distributions. Then, the first plateau (at \(\delta t\to 0\)) equals \(2\sigma_{n}^{2}\) and the second one equals \(2\sigma_{s}^{2}\), where \(\sigma^{2}\) represents the corresponding variances. These plateaus bracket the time separations over which the flux variations are correlated. The upward-sloping curve between the plateaus is usually characterized by its logarithmic slope \(\mathrm{d}[\log(D^{1})]/\mathrm{d}[\log(\delta t)]\). The time separation at which this curve flattens could be considered a robust characteristic variability timescale; if the second plateau is not reached, then the timescale is longer than the observation span. Next, the SF could be used to study the time asymmetry of the LCs (Kawaguchi et al., 1998; Bachev et al., 2017, 2021). Finally, if the LC shows periodicity, then the SF has a dip at the time separation equal to the corresponding period.

It is common practice for the measurement uncertainties to be subtracted off during the SF build, and there are various ways to do that (see Kozlowski, 2016, for a discussion on this topic). If the measurement uncertainties, \(e\), are assumed to follow a Gaussian distribution, then \(\sigma_{n}^{2}\) could be approximated as \(\sigma_{n}^{2}\simeq\langle e^{2}\rangle\) and, therefore, \(D^{1}(\delta t)-2\langle e^{2}\rangle\) is the noise-free SF estimate we want. The problem here is that any inaccuracy in the measurement uncertainty estimation affects the slope of the SF. Hence, we prefer to add \(2\sigma_{n}^{2}\) as a free parameter during the SF fitting rather than subtracting \(2\langle e^{2}\rangle\) from the SF. In particular, in this way, we could obtain an independent estimate of the mean measurement uncertainty. In the case of no noise subtraction, we fitted the SF using a single power-law (SPL) model plus a noise term to determine the SF slope:
\[D^{1}(\delta t)=2\sigma_{n}^{2}+D_{0}^{1}\left(\frac{\delta t}{\delta t_{0}}\right)^{\varrho}, \tag{9}\]
where \(D_{0}^{1}\) is the variability amplitude at the fixed timescale \(\delta t_{0}\) (we arbitrarily chose \(\delta t_{0}=1\,\mathrm{min}\)), \(\varrho\) the power-law index, and \(\sigma_{n}^{2}\) the variance of the measurement noise.
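The following minimal Python sketch shows one way the SF of Eq. (8) might be computed from an unevenly sampled LC and fitted with the SPL model of Eq. (9) in logarithmic form; the binning details, variable names, and initial guesses are illustrative assumptions, not the exact procedure used for the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def structure_function(t, flux, dt_bin):
    """First-order SF, Eq. (8): mean squared flux difference per time-separation bin."""
    i, j = np.triu_indices(len(t), k=1)
    lags = np.abs(t[j] - t[i])
    diffs2 = (flux[j] - flux[i])**2
    edges = np.arange(0.0, lags.max() + dt_bin, dt_bin)
    tau, sf, dsf = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (lags > lo) & (lags <= hi)
        if m.sum() < 2:
            continue
        tau.append(0.5 * (lo + hi))                            # bin middle
        sf.append(diffs2[m].mean())
        dsf.append(diffs2[m].std(ddof=1) / np.sqrt(m.sum()))   # standard uncertainty of the mean
    return np.array(tau), np.array(sf), np.array(dsf)

def log_spl(log_tau, log_noise, log_d0, rho, tau0=1.0):
    """log10 of Eq. (9): a noise plateau plus a power law anchored at tau0 = 1 min."""
    tau = 10.0**log_tau
    return np.log10(10.0**log_noise + 10.0**log_d0 * (tau / tau0)**rho)

def fit_spl(tau, sf, tau_to):
    """Fit log D^1 up to the turnover point tau_to (minutes); returns best-fit values and errors."""
    m = tau <= tau_to
    p0 = [np.log10(sf[m][0]), np.log10(sf[m][-1]), 1.5]
    popt, pcov = curve_fit(log_spl, np.log10(tau[m]), np.log10(sf[m]), p0=p0)
    return popt, np.sqrt(np.diag(pcov))
```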
The fitting was done up to the turnover point, \(\delta t_{\mathrm{to}}\), at which the SF changes its slope and starts to flatten. After that point, the SPL overestimates the SF. It is worth mentioning two issues that affect the SF fitting, namely the lack of statistical independence and of Gaussianity; that is, the individual SF estimates are not independent of each other, and the distribution of the SF estimates within the individual bins is not Gaussian (Emmanoulopoulos et al., 2010). The latter problem could be alleviated in particular by fitting not \(D^{1}(\delta t)\), but \(\log[D^{1}(\delta t)]\), as we actually did; see Emmanoulopoulos et al. (2010) and Kasliwal et al. (2015) for details about these issues. There is an approximate relation between the slopes of the power spectral density (PSD), \(\varkappa\), and SF, namely \(\varkappa\simeq\varrho+1\) (the equality is obtained under special conditions, see Emmanoulopoulos et al., 2010, for details).

### Color-magnitude Diagram

Given the BL Lacertae fluxes \(F_{\nu}\), we built the following CMDs: \(F_{\nu_{1}}/F_{\nu_{3}}\) vs \(F_{\nu_{2}}\) if three- or four-band data are available (\(\nu_{1}>\nu_{2}>\nu_{3}\), where \(\nu_{i}\) is the frequency corresponding to the \(i\)-th band) and \(F_{\nu_{1}}/F_{\nu_{2}}\) vs \((F_{\nu_{1}}+F_{\nu_{2}})/2\) if two-band data are available. The CMD forms were chosen to minimize the possibility of introducing spurious effects if we are correlating the flux ratio with one of the fluxes used to build the ratio itself (Massaro and Trevese, 1996; Papadakis et al., 2007). The flux ratios we used are representative of the two-point spectral index, \(\alpha_{\nu_{1}\nu_{2}}\propto-\log(F_{\nu_{1}}/F_{\nu_{2}})\), under the assumption that \(F_{\nu}\propto\nu^{-\alpha}\).

The CMDs were built by selecting the data points from the corresponding LCs closest to each other. In addition, we required the time intervals among the data points used to get a single CMD data point to be smaller than a predefined threshold, which depends on the sampling of the LCs used and was typically set to a few minutes. The CMDs were fitted by the power-law model \(F_{\nu_{1}}/F_{\nu_{2}}\propto X^{\varpi}\), where \(\varpi\) is the power-law index5 and \(X\) is either the flux or the mean flux depending on the CMD form used. Further analysis of the CMDs was done after taking the logarithm of both sides of the above equation.

Footnote 5: The power-law index corresponds to the slope of the CMD in magnitude units (e.g. Papadakis et al., 2007).

To consider a CMD trend significant at the 99% confidence level, we required (i) the linear Pearson correlation coefficient to be \(|r|\geq 0.5\) and (ii) the probability to get such a correlation coefficient by chance to be \(p\leq 0.01\) (e.g. Gupta et al., 2016; Agarwal et al., 2021). For the nights for which we have \(BVRI\) band data, we used the following CMD forms: \(F_{i}/F_{I}\) vs \(F_{R}\) (\(i=B,V\)). To assign a significant BWB or redder-when-brighter CMD trend for these nights, we further required both CMDs to show a significant correlation.

### Cross-correlation Analysis

To search for inter-band time lags, we used a Python implementation pyDCF6 (Robertson et al., 2015) of the discrete cross-correlation function (DCF, Edelson and Krolik, 1988), which is suitable for cross-correlating unevenly sampled time series.
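For orientation, a minimal, self-contained sketch of a discrete cross-correlation function in the spirit of Edelson and Krolik (1988) is given below; it is not the pyDCF interface, the measurement-error term is omitted, and the binning and sign conventions are simplified assumptions that would have to be matched to those used in the text.

```python
import numpy as np

def dcf(t1, f1, t2, f2, lag_bins):
    """Schematic DCF: bin all unbinned cross-correlation pairs by their time lag.

    The sign convention of the lag is arbitrary here and should be matched
    to the one adopted for the observed LCs.
    """
    f1 = (f1 - f1.mean()) / f1.std(ddof=1)
    f2 = (f2 - f2.mean()) / f2.std(ddof=1)
    lags = t1[:, None] - t2[None, :]        # all pairwise lags
    udcf = f1[:, None] * f2[None, :]        # unbinned correlation coefficients
    centers, values, errors = [], [], []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        m = (lags >= lo) & (lags < hi)
        if m.sum() < 2:
            continue
        centers.append(0.5 * (lo + hi))
        values.append(udcf[m].mean())
        errors.append(udcf[m].std(ddof=1) / np.sqrt(m.sum()))
    return np.array(centers), np.array(values), np.array(errors)
```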
In our runs (i) the measurement uncertainties were not taken into account in the build of the DCF following White & Peterson (1994) and (ii) the Gaussian weighting scheme was applied in order to assign higher importance to the values closer to the bin center. The estimation of the time lag and its uncertainty was done utilizing the flux randomization/random subset selection method (FR/RSS, Peterson et al., 1998, 2004) based on Monte Carlo simulations. During the RSS process, the data points counted more than once were rejected. At the end of each FR/RSS run, the time lag was found as the centroid of the DCF, defined as the DCF-weighted mean lag. The centroid was calculated using DCF points above a predefined threshold, which was set to the DCF peak value of less than one to three times its uncertainty - we varied the threshold value so as to ensure at least ten data points for the centroid calculation. We ran a total of 2500 cycles, and the resulting time lags were used to build the cross-correlation centroid distribution (CCCD). Given the CCCD, the time lag is estimated as the 50th percentile (or the median) of the CCCD, while the 16th and 84th percentiles serve as the \(1\sigma\) uncertainties. The significance of the cross-correlation results was estimated by means of Monte Carlo simulation following the approach of Max-Moerbeck et al. (2014). To generate the LCs, we used a Python implementation DELCgen7(Connolly, 2015) of the method of Emmanoulopoulos et al. (2013), which accounts for the flux probability density function (PDF) and PSD of the observed LC; the alternative LC generation method of Timmer & Koenig (1995) produces LCs having a Gaussian flux PDF. To produce evenly sampled LCs needed for the PSD build, we used interpolation onto a regular grid having a time interval of 2 min. We fitted the PSD by a single-slope power law, PSD \(\propto f^{-\varkappa}\)(here \(f\) is the temporal frequency; Vaughan, 2005, 2010; Gonzalez-Martin & Vaughan, 2012). The PDF was approximated either with a Gaussian or with a sum of Gaussians. Each simulated LC has the same statistical properties and sampling as the observed one. In addition, the noise was added to each simulated LC according to the mean observational uncertainty. We generated a total of 2500 LCs for each of the bands involved in the cross-correlation. Then, we cross-correlated the simulated LCs in the same way as we have done for the observed ones. Finally, the distribution of the simulated cross-correlation coefficients for each time lag bin was used to estimate the significance levels of the observed coefficients. Footnote 7: [https://github.com/samconnolly/DELightcurveSimulation](https://github.com/samconnolly/DELightcurveSimulation) The LCs produced during a typical intra-night monitoring session are of good sampling, so it is worth trying the interpolated cross-correlation function (ICF) for the time lag search. We used a Python implementation PyCCF8(Sun et al., 2018) of the method of Peterson et al. (1998). To estimate the lag and its uncertainty, we used the cross-correlation peak distribution (CCPD) because there is no need for additional free parameters, namely the bin size and threshold. Footnote 8: [https://bitbucket.org/cgrier/python_ccf_code](https://bitbucket.org/cgrier/python_ccf_code) ## 4 Results ### Short-term Variability The LCs from Jul 11 to Sep 14, 2020 (built as described in Section 3.2) are shown in Figure 1. 
In Figure 2, we show the \(R\) band LC along with the \(\gamma\)-rays9 LC in the 0.1-300 GeV band for inter-band comparison. The comparison reveals a good correlation between the optical and \(\gamma\)-rays LCs. In general, the LCs could be split visually into two parts - a pre-flare and a flare (see also Shablovinskaya et al., 2023).

Footnote 9: The \(\gamma\)-rays LC is derived at the Large Area Telescope Instrument Science Operations center in a “quick-look” analysis. These preliminary flux estimates should be used with caution, so we shall use them only for illustrative purposes.

The pre-flare LCs (until the end of July 2020 = JD 2459062, the top panel of Figure 1) are characterized by a smooth and gradual flux increase. Since the beginning of August 2020, the flux increase has continued, but it is not as smooth as in July. During the pre-flare period, we recorded the minimal \(R\) band flux of 13.37 mJy (or calibrated magnitude of 14.0545 \(\pm\) 0.0016, telescope J) at JD 2459045.52063.

The pre-flare is followed by a period of flaring activity, namely the August 2020 flare, which starts in the first decade of August and continues beyond the end of the time interval considered in this paper. The maximal \(R\) band flux of 109.88 mJy (or calibrated magnitude of 11.8190 \(\pm\) 0.0033, telescope A) for the monitoring period was reached at JD 2459083.45823 - that is, soon after the August 2020 flare onset. Unfortunately, the period between the flare onset and the flare peak is very sparsely covered by data points, so we cannot study the shape of the rising part of the August 2020 flare. According to the preliminary \(\gamma\)-rays LC, it seems that the flux rise is steeper than the flux decay; that is, there is an asymmetry. We also have no information about the optical intra-night activity of BL Lacertae at that period - we have detected only a non-well-sampled flare on Jul 31. We cannot, however, rule out the presence of other flares during the rising phase of the August 2020 flare because of the sparse sampling and the lack of intra-night monitoring sessions. On the other hand, the decaying phase of the August 2020 flare shows the high activity of BL Lacertae on intra-night timescales. That activity will be our focus from now on: in what follows, we shall not consider the pre-flare, and all analysis will be related to the August 2020 flare.

Figure 1: Light curves in \(BVRI\) bands from Jul 11 to Sep 14, 2020. The LCs are ordered as indicated in the top panel; \(RI\) band LCs are shifted by the corresponding offsets for display purposes. The blue dashed lines are the fits used to determine the shape of the smooth component for the corresponding nights – see Section 4.2 for the description of the \(BRI\) band LCs around JD = 2459088 and of the \(R\) band LC around JD = 2459104.

#### 4.1.1 Searching for Periodicity

To search for periodicity in the STLCs of BL Lacertae, we used the Lomb-Scargle periodogram (Lomb, 1976; Scargle, 1982) and weighted wavelet \(Z\)-transform (WWZ, Foster, 1996) techniques. Before the periodicity search, we performed nightly binning of our data following the approach of Agarwal et al. (2021) - in this way, we removed the influence of the different number of data points per night on the search results. We also cut out the weakly variable part of the LCs (namely before JD = 2459075). Given our data, we found no signs of periodicity in any band using both techniques (Figure 3). Recently, Jorstad et al.
(2022) reported a detection of a transient periodicity of 0.55 days in the \(R\) band LC generated by the Whole Earth Blazar Telescope; their WWZ time interval encompasses ours.

Figure 2: Optical (\(R\) band, red circles) and the “quick-look” \(\gamma\)-rays (0.1–300 GeV band, black stepped curve) LCs from Jul 11 to Sep 14, 2020. The \(R\) band flux is in units of mJy, while the \(\gamma\)-rays flux is in units of \(10^{-7}\) photons s\({}^{-1}\) cm\({}^{-2}\).

Figure 3: Weighted wavelet \(Z\)-transform of the nightly binned and cut \(R\) band LC (see text). _Left panel_: the colored WWZ power in the time-period plane. _Right panel_: the time-averaged WWZ power as a function of the period. The colored dashed curves represent the corresponding local significance contours.

#### 4.1.2 Spectral Energy Distribution

For the nights of \(BVRI\) observations, we built the SEDs as follows. If a single measurement is available for the given night, then we use the corresponding flux directly. If repeating observations were performed during the given night, then we calculated the weighted mean fluxes for the corresponding bands. The averaging was done over the same time interval for the corresponding bands to avoid the influence of the different duration of the INLCs on the mean value obtained. This time interval was taken to be the duration of the shortest LC for the given night. The uncertainty of the mean flux was taken to be the larger between (i) the weighted standard deviation and (ii) the standard uncertainty of the weighted mean. The effective wavelengths for the \(BVRI\) bands were taken from Bessell et al. (1998). The so-derived SEDs have been plotted in Figure 4 for all nights jointly.

Figure 4: Spectral energy distribution for the individual nights. Note the scatter in the \(B\) band fluxes (see text).

Figure 5: Dependence of the spectral index on the \(R\) band flux. The blue circles are the spectral indices calculated using the \(VRI\) bands, while the red squares are the spectral indices calculated using the \(BVRI\) ones. The error bars reflect the variability amplitude in the cases when the intra-night monitoring data are included in the spectral index calculation (see text).

To estimate the spectral index, we fitted a linear polynomial of the form \(\log(F_{\nu})=-\alpha\log(\nu)+\mathrm{const}\) to each SED. We used only the \(VRI\) bands in the fitting because of the large scatter of the \(B\) band fluxes: for most of the nights the \(B\) band flux is below the power-law model expectation. The similar behavior of the \(B\) band measurements was discussed by Weaver et al. (2020). They attributed this behavior to the combination of the wide \(B\) filter band and the spectral shape of BL Lacertae.

We show in Figure 5 the relation between the spectral index and the \(R\) band flux. There is a hint of steepening of the spectral index as the flux decreases. However, the overall spectral index behavior of BL Lacertae on short-term timescales could be considered mildly chromatic - the dependence of \(\alpha\) on the flux level is weak. The median spectral index over the August 2020 flare was found to be \(\langle\alpha_{VRI}\rangle_{\rm med}=0.885\pm 0.020\) (a standard deviation of 0.096). For six nights, we were able to calculate the spectral index using the \(BVRI\) bands - for these nights, the \(B\) band flux behaves not so unusually (see above).
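As a minimal illustration of the per-night SED slope fit described above, the sketch below estimates \(\alpha\) from the \(VRI\) fluxes of a single night; the effective wavelengths are approximate values assumed here for illustration, and the example fluxes in the usage line are hypothetical.

```python
import numpy as np

# approximate VRI effective wavelengths in Angstroms (assumed here for illustration)
LAMBDA_EFF = {"V": 5450.0, "R": 6410.0, "I": 7980.0}
C_ANGSTROM = 2.998e18  # speed of light in Angstrom/s

def spectral_index(fluxes_mjy):
    """Fit log F_nu = -alpha * log nu + const to the fluxes of one night."""
    bands = sorted(fluxes_mjy, key=lambda b: LAMBDA_EFF[b])
    nu = np.array([C_ANGSTROM / LAMBDA_EFF[b] for b in bands])
    f = np.array([fluxes_mjy[b] for b in bands])
    slope, const = np.polyfit(np.log10(nu), np.log10(f), 1)
    return -slope  # alpha, defined through F_nu ~ nu^(-alpha)

# hypothetical weighted-mean VRI fluxes (mJy), for illustration only
alpha = spectral_index({"V": 35.1, "R": 47.3, "I": 63.8})
```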
The corresponding median spectral index was calculated to be \(\langle\alpha_{BVRI}\rangle_{\rm med}=1.038\pm 0.025\) (a standard deviation of 0.061). In any case, the inclusion of the \(B\) band leads to slightly steeper indices (Figure 5).

Figure 6: Intra-night LCs of BL Lacertae. The blue, green, red, and black colored data points code the \(BVRI\) bands, respectively; the \(B\) band offsets are indicated. In each plot, the JDs are along the \(x\)-axis and the BL Lacertae brightness in magnitudes is along the \(y\)-axis. The observation date and the telescope used are indicated in each plot.

### Intra-night Variability

To study the INV of BL Lacertae, we included those nights that have more than two hours of monitoring. In this way, we got a total of 48 INLCs. They are shown in Figure 6 and the results from the INV tests are summarized in Table 3. We tested for variability in the INLCs of each telescope individually for a total of 25 nights. For 22 of them, BL Lacertae was found to show variable status, for two of them, probably variable status, and for one of them, non-variable status. If we define10 the duty cycle as the number of nights the blazar shows INV over the total number of nights the blazar was monitored, then we found a duty cycle of 96% (the probably variable cases considered variable) or 88% (the probably variable cases considered non-variable).

Footnote 10: A discussion about the duty cycle definition could be found in Webb et al. (2021).

After the magnitudes were transformed into fluxes, the multi-telescope data for the given night and band were combined. In what follows, we shall use the combined LCs unless otherwise specified. After the combination, we selected a total of 18 nights of intra-night monitoring suitable to perform an analysis of the INV of BL Lacertae; the corresponding LCs are of good sampling and show flaring activity (Figure 7). The so-combined LCs were then detrended - the (composite) fitting functions used are shown in Figure 7 along with the LCs.

The detrending of the Aug 26 \(BI\) and Sep 11 \(R\) band LCs deserves special attention. For these LCs, we were not able to derive the shape of the smooth components to be fitted because of the shape of the LCs themselves (Figure 7). So, we had to take into account the data for the preceding night to get an idea of what the smooth component looks like. According to Figure 1, the Aug 26 \(R\) band flux variations are superimposed onto a linearly decaying flux trend marked by a blue dashed line. We used that fit to determine what regions to fit for the \(BI\) bands. For Sep 11, we also assumed a linear trend, but it is obvious that alternative functional forms are also possible (Figure 1).

The above considerations show that the main source of uncertainty in the detrending process is the unknown shape of the underlying, smooth variable component. In general, the shape assumed by us for each night should be considered an approximate one; however, the determination of the accurate shape of the smooth component is beyond the scope of the presented paper. To test the influence of that shape on the LC decomposition, a few LCs were detrended using alternative fitting functions (these functions are denoted in Figure 7 with dashed lines). Another source of uncertainty is the choice of regions free of flares.
However, the choice of these regions is dependent to some extent on the assumed shape of the underlying component, and so we shall consider it an uncertainty source of lower importance. Generally, the presence of enough data points on the LC that could be attributed to the smooth component is of utmost importance to estimate its shape accurately. This requires dense sampling and a long duration of the LCs, which could be achieved by performing "around-the-world" observations (e.g. Bhatta et al., 2013).

#### 4.2.1 Color Behaviour

The CMDs of BL Lacertae are shown in Figure 8 and the fitting results are listed in Table 4; CMDs for the nights at which the MWL LCs are probably variable or non-variable according to Table 3 were not analyzed. Most of the non-corrected CMDs show significant BWB trends on intra-night timescales, already observed by other authors (e.g. Papadakis et al., 2003). We found no loops in the CMDs.

#### 4.2.2 Structure Function

The SFs built using the corrected LCs are presented in Figure 9, and the results from the SPL fits are listed in Table 5. We found no dependence of the SF slopes on the bands, and so we weight-averaged all slopes together - their mean value is \(\langle\varrho\rangle_{\rm wt}=1.624\pm 0.007\) (a weighted standard deviation of 0.275). Regarding the turnover point, its median value (in the observer's frame) over all nights and bands is \(\langle\delta t_{\rm to}\rangle_{\rm med}=36.1\pm 3.7\) min (a standard deviation of 19.8 min).

#### 4.2.3 Cross-correlation Analysis

For each night of MWL LCs of good sampling, we calculated DCFs using the original and detrended LCs and ICFs using the detrended LCs. For our further analysis, we shall consider only the time lags obtained using the DCF based on the detrended LCs, while the results from the other two cross-correlation functions will serve as a check: the consistency among the various values for a given night and bands supports the reliability of the lag obtained. The DCFs of BL Lacertae are shown in Figure 10, and the resulting lags are listed in Table 6. We have a total of seven nights suitable for cross-correlation analysis.

To consider a given time lag real, we require the lag under consideration to be larger than (i) the modal sampling of the LCs, (ii) the bin size used to build the DCF, and (iii) the lag uncertainties obtained by the FR/RSS method; in addition, the DCF should exceed the 99% confidence limit, and there should be consistency among the different cross-correlation functions used (see above). From Figure 10 and Table 6 one can see that the lags satisfying the above conditions are those for Aug 20 and Aug 26.
In both cases, the variability at shorter wavelengths is leading; that is, we have soft lags. The lag values themselves are consistent with the previous lag estimates for BL Lacertae. For Aug 20, the \(VI\) band LCs sampling is larger, while the \(R\) band LC sampling is smaller than the lag found (Table 6). To check the reliability of the lags obtained using such LCs, we performed the following test. We shifted the detrended \(R\) band LC by the measured \(V\) vs \(R\) time lag (2.2 min); we chose the \(V\) band LC for this test because it is of worse sampling compared to the \(I\) band one (Figure 7). Then, the shifted \(R\) band LC was interpolated onto the \(V\) band JDs. Finally, the \(V\) band LC uncertainties were assigned to the transformed \(R\) band LC. The so-generated fake \(V\) band LC was cross-correlated with the original \(R\) band LC - the time lag found is \(2.9^{+6.0}_{-4.8}\) min; that is, it is consistent with the lag found using the original detrended \(V\) band LC. Hence, we can conclude that the lags obtained for Aug 20 are reliable and could be used for further analysis. Regarding Aug 26, we were not able to estimate the significance levels because of the specific LC shape (Figure 7).

Figure 8: Color-magnitude diagrams built using the non-corrected LCs that show INV. The fitted power-law models are overplotted.

The LCs used for the cross-correlation analysis are a combination of various numbers of flares, and so the measured time lags are a kind of weighted average of the lags over the individual flares (Xu et al., 2019). The attempts to measure the lags using the individual flares forming the INLCs lead to inaccurate results either because of the flare overlapping (mainly) or because of the bad flare sampling.

#### 4.2.4 Decomposition of the INLCs

The decomposition of the detrended LCs was done employing a non-linear least-squares technique implemented in the MPFIT fitter (Markwardt, 2009). If (i) a flare is not fully recorded, (ii) a flare is of low amplitude, or (iii) flares overlap to a great extent, then we used a symmetric DE function for the fitting. In addition, if, for a flare, the fitted uncertainties are comparable to or larger than the fitted values after a general DE fit, then we redid the decomposition using the symmetric DE function.

Once we have the flare model at hand, we need to estimate how many flares to fit. For most of the LCs, the number of flares to be fitted, \(N_{\rm fla}\), could easily be obtained; for complex or noisy LCs, however, that task could be difficult. Hence, to avoid overfitting, we used the Bayesian Information Criterion (BIC, Schwarz, 1978) to get the final estimate of \(N_{\rm fla}\). The BIC penalizes the \(\chi^{2}\) of the fit for the newly added parameters as follows:
\[\mathrm{BIC}=\chi^{2}+N_{\rm pars}\,\ln(N_{\rm data}), \tag{10}\]
where \(N_{\rm pars}\) is the number of model free parameters and \(N_{\rm data}\) the number of data points of the fitted LC. Using BIC, we could identify the number of flares beyond which the addition of a new flare does not significantly improve the fit. To accept the addition of a new flare, we required BIC to decrease by ten or more: \(\Delta\mathrm{BIC}=\mathrm{BIC}(N_{\rm fla})-\mathrm{BIC}(N_{\rm fla}+1)\geq 10\).
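To make the decomposition procedure concrete, the minimal sketch below implements the DE flare model of Eq. (5), the derived quantities of Eqs. (6)-(7), and the BIC criterion of Eq. (10); the model-selection loop is schematic, and the `fit_n_flares` callback is a hypothetical stand-in for the actual least-squares fit (done with MPFIT in the paper).

```python
import numpy as np

def de_flare(dt, f0, dt0, t_rise, t_decay):
    """Single double-exponential flare of Eq. (5), without the base level."""
    return f0 / (np.exp((dt0 - dt) / t_rise) + np.exp((dt - dt0) / t_decay))

def lc_model(dt, f_base, params):
    """Sum of flares on top of a fixed base level; params is a flat array of flare parameters."""
    flux = np.zeros_like(dt, dtype=float) + f_base
    for f0, dt0, tr, td in np.reshape(params, (-1, 4)):
        flux += de_flare(dt, f0, dt0, tr, td)
    return flux

def flare_properties(dt0, t_rise, t_decay):
    """Eq. (6): time of maximum; Eq. (7): asymmetry; plus the total-duration estimate."""
    dt_max = dt0 + t_rise * t_decay / (t_rise + t_decay) * np.log(t_decay / t_rise)
    xi = (t_decay - t_rise) / (t_decay + t_rise)
    duration = 2.0 * (t_rise + t_decay)
    return dt_max, xi, duration

def bic(chi2, n_pars, n_data):
    """Eq. (10)."""
    return chi2 + n_pars * np.log(n_data)

def select_n_flares(fit_n_flares, n_data, n_max=15):
    """Keep adding flares while BIC drops by at least 10 relative to the previous best fit."""
    best_n, best_bic = 0, np.inf
    for n in range(1, n_max + 1):
        chi2, n_pars = fit_n_flares(n)   # hypothetical fitting routine returning (chi2, n_pars)
        b = bic(chi2, n_pars, n_data)
        if best_bic - b >= 10.0:
            best_n, best_bic = n, b
        else:
            break
    return best_n
```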
The decompositions are shown in Figures 11 and 12; the fitted parameters are listed in Table 7. As we mentioned in Section 4.2, the unknown shape of the smooth variability component is the main source of systematic uncertainties in the timescales. To make a crude estimate of these uncertainties, we compare in Figure 13 the timescales obtained using two alternative fitting functions to detrend the original LC (see Figure 7). The mean difference between the timescales was found to be 1.4 min with a standard deviation of 3.9 min; these values were obtained after the most deviant data points were clipped out. These results give a crude estimate of the systematic uncertainty of the timescales due to the unknown shape of the underlying smooth component. The difference, however, is within the scatter of individual data points, and so we shall neglect it in our further considerations.

Next, we searched for the dependence of the derived decay timescales on the band. We plot in Figure 14 the \(I\) band timescales against the \(BR\) band ones: one can see the lack of significant dependence of \(\mathcal{T}_{\rm d}\) on the band; the same applies for the rise timescales as long as all of the flare fits are done using symmetric DE functions (we have four exceptions of this). Hence, we plot the distribution of the decay timescales jointly for all bands (Figure 15) - the clipped modal value is \(\langle\mathcal{T}_{\rm d}\rangle_{\rm mode}=11.6^{+10.5}_{-5.1}\) min. The lack of dependence on the band was found for the flare duration as well, and so we plot in Figure 16 the distribution of the flare duration altogether for all bands - the clipped modal value is \(\langle\Delta\mathcal{T}\rangle_{\rm mode}=46.6^{+41.0}_{-20.6}\) min. The parameter uncertainties listed above represent the 16th and 84th percentiles of the corresponding distributions. Finally, using the four asymmetric flares, we calculated a weighted mean asymmetry parameter \(\langle\xi\rangle_{\rm wt}=0.49\pm 0.10\).

Figure 9: Structure functions built using the corrected LCs. For MWL LCs, only \(R\) (or \(I\)) band SFs are shown. The SPL function fits are overplotted with a red line.

Figure 10: Results from the cross-correlation analysis of the corrected MWL LCs. In each plot the left panel shows the DCF (black lines) and its uncertainties (green shaded area). The red solid and dashed lines indicate the significance levels of 99% and 95%, respectively, while the red dotted line indicates the zero correlation. The right panel in each plot shows the corresponding CCCD.

Figure 11: Decomposition of the corrected MWL LCs. In each plot, we indicate the evening date, the value of \(t_{0}\), the bands plotted, and the corresponding offsets used for display purposes. The bands are coded as follows: \(B\) – blue, \(V\) – green, \(R\) – red, \(I\) – magenta. The blue dashed lines are the individual flares to which the LC is decomposed, while the black solid line is the model LC. The error bars are not shown for the sake of clarity.

Figure 12: Same as in Figure 11, but for the \(R\)-band-only LCs.
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Date, 2020 & CMD & \(\varpi\) & \(r\) & \(p\) & Trend \\ \hline Aug 20 & \(F_{B}/F_{I}\) vs \(F_{R}\) & 0.265 \(\pm\) 0.020 & 0.636 & \(<\)10\({}^{-5}\) & BWB \\ & \(F_{V}/F_{I}\) vs \(F_{R}\) & 0.211 \(\pm\) 0.011 & 0.836 & \(<\)10\({}^{-5}\) & \\ Aug 23 & \(F_{B}/F_{I}\) vs \(F_{R}\) & 0.260 \(\pm\) 0.045 & 0.555 & \(<\)10\({}^{-3}\) & \\ & \(F_{V}/F_{I}\) vs \(F_{R}\) & 0.036 \(\pm\) 0.032 & 0.058 & 0.739 & \\ Aug 25 & \(F_{B}/F_{I}\) vs \(F_{R}\) & 0.222 \(\pm\) 0.010 & 0.909 & \(<\)10\({}^{-5}\) & BWB \\ & \(F_{V}/F_{I}\) vs \(F_{R}\) & 0.147 \(\pm\) 0.007 & 0.898 & \(<\)10\({}^{-5}\) & \\ Aug 26 & \(F_{B}/F_{I}\) vs \(\frac{F_{B}+F_{I}}{2}\) & 0.260 \(\pm\) 0.006 & 0.916 & \(<\)10\({}^{-5}\) & BWB \\ Aug 27 & \(F_{B}/F_{I}\) vs \(\frac{F_{B}+F_{I}}{2}\) & 0.357 \(\pm\) 0.014 & 0.757 & \(<\)10\({}^{-5}\) & BWB \\ Aug 28 & \(F_{B}/F_{I}\) vs \(\frac{F_{B}+F_{I}}{2}\) & 0.525 \(\pm\) 0.012 & 0.879 & \(<\)10\({}^{-5}\) & BWB \\ Aug 31 & \(F_{V}/F_{I}\) vs \(F_{R}\) & 0.201 \(\pm\) 0.024 & 0.533 & \(<\)10\({}^{-5}\) & BWB \\ Sep 3 & \(F_{B}/F_{I}\) vs \(F_{R}\) & 0.307 \(\pm\) 0.132 & 0.548 & 0.004 & BWB \\ & \(F_{V}/F_{I}\) vs \(F_{R}\) & 0.336 \(\pm\) 0.060 & 0.751 & \(<\)10\({}^{-5}\) & \\ Sep 8 & \(F_{B}/F_{I}\) vs \(F_{R}\) & 0.289 \(\pm\) 0.051 & 0.688 & \(<\)10\({}^{-5}\) & BWB \\ & \(F_{V}/F_{I}\) vs \(F_{R}\) & 0.296 \(\pm\) 0.035 & 0.866 & \(<\)10\({}^{-5}\) & \\ Sep 10 & \(F_{B}/F_{I}\) vs \(F_{R}\) & \(-\)0.030 \(\pm\) 0.015 & \(-\)0.173 & 0.210 & \\ & \(F_{V}/F_{I}\) vs \(F_{R}\) & 0.062 \(\pm\) 0.010 & 0.512 & \(<\)10\({}^{-3}\) & \\ Sep 11 & \(F_{B}/F_{I}\) vs \(F_{R}\) & 0.317 \(\pm\) 0.011 & 0.923 & \(<\)10\({}^{-5}\) & BWB \\ & \(F_{V}/F_{I}\) vs \(F_{R}\) & 0.189 \(\pm\) 0.007 & 0.936 & \(<\)10\({}^{-5}\) & \\ Sep 13 & \(F_{V}/F_{I}\) vs \(F_{R}\) & 0.195 \(\pm\) 0.014 & 0.666 & \(<\)10\({}^{-5}\) & BWB \\ Sep 14 & \(F_{V}/F_{I}\) vs \(F_{R}\) & 0.206 \(\pm\) 0.005 & 0.901 & \(<\)10\({}^{-5}\) & BWB \\ \hline \end{tabular} Note. – To derive the values of \(\varpi\), \(r\), and \(p\), the CMDs were fitted in a “log-log” form. \end{table} Table 4: Results from the power-law fits to the non-corrected CMDs Figure 14: Plot of the \(I\) band decay timescales against the \(B\) band (blue diamonds) and \(R\) band (red circles) ones. The dotted line is the line of exact correspondence. Figure 13: Comparison of the timescales obtained after the decomposition of the LCs detrended using two alternative fitting functions. The symbols denote the bands as follows: \(B\) – blue diamonds, \(V\) – green triangles, \(R\) – red circles, \(I\) – magenta squares. The timescales along the \(x\)-axis are those adopted by us for the further analysis. The dotted line is the line of exact correspondence. The solid line is the line corresponding to the clipped mean difference between the timescales of 1.4 min. Figure 15: Distribution of the decay timescales jointly for all bands. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline Date, 2020 & Band & Bin Size & \(\varrho\) & \(\delta t_{\rm to}\) \\ & & (min) & & (min) \\ (1) & (2) & (3) & (4) & (5) \\ \hline Aug 20 & \(B\) & 2.50 & \(0.86\pm 0.11\) & 46.8 \\ & \(V\) & 2.50 & \(1.72\pm 0.08\) & 44.1 \\ & \(R\) & 2.50 & \(1.31\pm 0.03\) & 44.1 \\ & \(I\) & 2.50 & \(1.61\pm 0.06\) & 44.1 \\ Aug 21 & \(R\) & 1.50 & \(1.34\pm 0.06\) & 28.1 \\ Aug 25 & \(R\) & 1.50 & \(1.55\pm 0.03\) & 29.7 \\ Aug 26 & \(B\) & 2.50 & \(2.00\pm 0.02\) & 94.8 \\ & \(I\) & 2.50 & \(2.00\pm 0.02\) & 94.8 \\ Aug 27 & \(B\) & 2.50 & \(0.89\pm 0.08\) & 52.1 \\ & \(I\) & 2.50 & \(1.17\pm 0.09\) & 33.4 \\ Aug 28 & \(B\) & 2.50 & \(1.62\pm 0.04\) & 49.4 \\ & \(I\) & 2.50 & \(1.49\pm 0.04\) & 52.1 \\ Aug 30 & \(R\) & 1.00 & \(1.15\pm 0.09\) & 16.6 \\ Aug 31 & \(V\) & 2.00 & \(1.30\pm 0.10\) & 26.7 \\ & \(R\) & 2.00 & \(1.86\pm 0.18\) & 28.9 \\ & \(I\) & 2.00 & \(1.79\pm 0.20\) & 28.9 \\ Sep 2 & \(R\) & 1.50 & \(1.07\pm 0.06\) & 21.6 \\ Sep 3 & \(R\) & 2.50 & \(1.36\pm 0.05\) & 36.1 \\ Sep 8 & \(R\) & 1.50 & \(1.51\pm 0.05\) & 24.8 \\ Sep 9 & \(R\) & 2.50 & \(1.58\pm 0.02\) & 49.4 \\ Sep 10 & \(R\) & 1.50 & \(1.55\pm 0.05\) & 18.4 \\ Sep 11 & \(R\) & 2.50 & \(1.64\pm 0.02\) & 46.8 \\ Sep 12 & \(R\) & 2.00 & \(0.93\pm 0.10\) & 31.0 \\ Sep 13 & \(V\) & 2.00 & \(1.16\pm 0.08\) & 65.2 \\ & \(R\) & 1.75 & \(1.30\pm 0.06\) & 43.9 \\ & \(I\) & 2.00 & \(1.36\pm 0.08\) & 52.4 \\ Sep 14 & \(V\) & 1.75 & \(1.57\pm 0.17\) & 23.4 \\ & \(R\) & 1.25 & \(1.50\pm 0.02\) & 31.4 \\ & \(I\) & 1.75 & \(1.97\pm 0.26\) & 29.0 \\ \hline \end{tabular} Note. – Column 3: Bin sizes used to build the SFs. Column 5: Position of the SF turn-off point in the observer’s frame; the SPL is fitted up to this point. \end{table} Table 5: Results from the SF fits \begin{table} \begin{tabular}{c c c c c c} \hline \hline Date, 2020 & DCF & Sampling & \(\tau\) & Bin Size & Detrended? 
\\ & & (min) & (min) & (min) & \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline Aug 20 & \(B\) vs \(R\) & 3.24, 0.63 & \(+4.9^{+2.1}_{-2.5}\) & 2.00 & Yes \\ & & & \(+4.4^{+4.1}_{-2.9}\) & – & Yes \\ & & & \(+4.3^{+2.7}_{-2.3}\) & 2.00 & No \\ & \(V\) vs \(R\) & 3.24, 0.63 & \(+2.2^{+1.9}_{-1.8}\) & 2.00 & Yes \\ & & & \(+1.5^{+1.8}_{-2.9}\) & – & Yes \\ & & & \(+1.0^{+2.0}_{-2.0}\) & 2.00 & No \\ & \(I\) vs \(R\) & 3.24, 0.63 & \(-2.9^{+2.0}_{-2.0}\) & 2.00 & Yes \\ & & & \(-2.6^{+2.3}_{-1.2}\) & – & Yes \\ & & & \(-1.0^{+2.0}_{-2.0}\) & 2.00 & No \\ Aug 26 & \(B\) vs \(I\) & 1.40, 1.41 & \(+3.8^{+2.5}_{-1.3}\) & 2.50 & Yes \\ & & & \(+3.4^{+2.8}_{-2.8}\) & – & Yes \\ & & & \(+6.2^{+2.5}_{-2.5}\) & 2.50 & No \\ Aug 27 & \(B\) vs \(I\) & 1.44, 1.44 & \(-2.5^{+2.2}_{-2.1}\) & 2.50 & Yes \\ & & & \(-0.8^{+2.8}_{-2.8}\) & – & Yes \\ & & & \(+1.0^{+2.0}_{-2.1}\) & 2.00 & No \\ Aug 28 & \(B\) vs \(I\) & 1.44, 1.43 & \(+0.4^{+2.2}_{-2.2}\) & 1.75 & Yes \\ & & & \(+0.6^{+1.4}_{-0.0}\) & – & Yes \\ & & & \(+4.4^{+2.3}_{-1.8}\) & 1.75 & No \\ Aug 31 & \(V\) vs \(I\) & 2.01, 2.01 & \(+1.8^{+2.3}_{-2.1}\) & 2.00 & Yes \\ & & & \(+1.6^{+1.6}_{-1.6}\) & – & Yes \\ & & & \(+0.0^{+2.7}_{-2.0}\) & 2.00 & No \\ & \(R\) vs \(I\) & 2.01, 2.01 & \(-0.3^{+2.0}_{-2.4}\) & 2.00 & Yes \\ & & & \(-2.3^{+2.3}_{-0.0}\) & – & Yes \\ & & & \(+0.0^{+2.0}_{-2.1}\) & 2.00 & No \\ Sep 13 & \(V\) vs \(R\) & 1.86, 0.72 & \(-4.5^{+4.0}_{-3.0}\) & 1.50 & Yes \\ & & & \(+0.7^{+2.7}_{-3.3}\) & – & Yes \\ & & & \(+3.1^{+3.6}_{-2.3}\) & 1.50 & No \\ & \(I\) vs \(R\) & 1.87, 0.72 & \(-3.6^{+2.4}_{-1.9}\) & 1.50 & Yes \\ & & & \(-1.3^{+3.3}_{-3.3}\) & – & Yes \\ & & & \(+4.5^{+2.3}_{-2.1}\) & 1.50 & No \\ Sep 14 & \(V\) vs \(R\) & 1.86, 0.68 & \(+1.3^{+1.2}_{-1.2}\) & 1.25 & Yes \\ & & & \(+1.3^{+1.3}_{-1.3}\) & – & Yes \\ & & & \(+1.5^{+1.5}_{-0.8}\) & 1.50 & No \\ & \(I\) vs \(R\) & 1.87, 0.68 & \(+0.0^{+1.2}_{-0.9}\) & 1.25 & Yes \\ & & & \(+0.7^{+1.3}_{-2.0}\) & – & Yes \\ & & & \(+0.0^{+1.5}_{-1.5}\) & 1.50 & No \\ \hline \end{tabular} Note. – Time lags are in the observer’s frame. In our DCF notation, namely “band1” vs “band2”, the positive lag means that the variability at “band1” is the leading one (see also Section 5.1). Column 2: Cross-correlated LCs. Column 3: Modal sampling of the cross-correlated LCs. Column 4: Time lag and its lower and upper uncertainties. Zero lower uncertainties are due to the strongly asymmetric shape of the lag distribution. Column 5: Bin size used to build the DCF. The lags with no bin size specified are obtained by means of the ICF. Column 6: Indication whether the used LCs are detrended or not. 
\end{table} Table 6: Results from the cross-correlation analysis of the LCs \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Date, 2020 & Band & \(F_{0}\) & \(\Delta t_{0}\) & \(\mathcal{T}_{\rm r}\) & \(\mathcal{T}_{\rm d}\) & \(\Delta\mathcal{T}\) & \(\sigma_{\rm fit}\) \\ & & (mJy) & (min) & (min) & (min) & (min) & (mJy) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline Aug 20 & \(R\) & 16.6 \(\pm\) 0.2 & 240.6 \(\pm\) 0.7 & 20.3 \(\pm\) 0.6 & 20.3 \(\pm\) 0.6 & 81.2 \(\pm\) 1.7 & 0.41 \\ & & 6.8 \(\pm\) 0.3 & 434.1 \(\pm\) 1.2 & 15.8 \(\pm\) 0.9 & 15.8 \(\pm\) 0.9 & 63.2 \(\pm\) 2.5 & \\ & & 4.8 \(\pm\) 0.6 & 472.9 \(\pm\) 1.8 & 12.7 \(\pm\) 2.7 & 12.7 \(\pm\) 2.7 & 50.8 \(\pm\) 7.6 & \\ & & 3.4 \(\pm\) 0.8 & 497.5 \(\pm\) 1.8 & 9.9 \(\pm\) 1.5 & 9.9 \(\pm\) 1.5 & 39.6 \(\pm\) 4.2 & \\ & & 5.0 \(\pm\) 0.2 & 583.9 \(\pm\) 0.9 & 15.0 \(\pm\) 0.8 & 15.0 \(\pm\) 0.8 & 60.0 \(\pm\) 2.3 & \\ & & 4.5 \(\pm\) 0.2 & 620.6 \(\pm\) 0.6 & 10.3 \(\pm\) 0.9 & 10.3 \(\pm\) 0.9 & 41.2 \(\pm\) 2.5 & \\ \(I\) & 19.8 \(\pm\) 0.3 & 237.7 \(\pm\) 1.1 & 21.8 \(\pm\) 0.8 & 21.8 \(\pm\) 0.8 & 87.2 \(\pm\) 2.3 & 0.36 \\ & & 1.9 \(\pm\) 0.4 & 324.4 \(\pm\) 2.0 & 6.8 \(\pm\) 2.0 & 6.8 \(\pm\) 2.0 & 27.2 \(\pm\) 5.7 & \\ & & 5.9 \(\pm\) 0.4 & 435.0 \(\pm\) 2.1 & 17.4 \(\pm\) 1.7 & 17.4 \(\pm\) 1.7 & 69.6 \(\pm\) 4.8 & \\ & & 5.0 \(\pm\) 0.4 & 479.4 \(\pm\) 1.7 & 13.5 \(\pm\) 2.5 & 13.5 \(\pm\) 2.5 & 54.0 \(\pm\) 7.1 & \\ & & 2.8 \(\pm\) 0.6 & 508.5 \(\pm\) 1.1 & 3.9 \(\pm\) 1.5 & 3.9 \(\pm\) 1.5 & 15.6 \(\pm\) 4.2 & \\ & & 3.4 \(\pm\) 0.3 & 587.6 \(\pm\) 2.0 & 14.1 \(\pm\) 2.3 & 14.1 \(\pm\) 2.3 & 56.4 \(\pm\) 6.5 & \\ & & 2.6 \(\pm\) 0.5 & 620.5 \(\pm\) 1.4 & 5.5 \(\pm\) 1.9 & 5.5 \(\pm\) 1.9 & 22.0 \(\pm\) 5.4 & \\ Aug 21 & \(R\) & 2.0 \(\pm\) 0.1 & 56.4 \(\pm\) 0.7 & 5.3 \(\pm\) 0.4 & 16.5 \(\pm\) 1.1 & 43.6 \(\pm\) 2.3 & 0.41 \\ & & 2.4 \(\pm\) 0.2 & 432.6 \(\pm\) 0.6 & 5.0 \(\pm\) 0.7 & 5.0 \(\pm\) 0.7 & 20.0 \(\pm\) 2.0 & \\ & & 3.3 \(\pm\) 0.2 & 459.2 \(\pm\) 0.8 & 9.8 \(\pm\) 0.8 & 9.8 \(\pm\) 0.8 & 39.2 \(\pm\) 2.3 & \\ & & 2.8 \(\pm\) 0.2 & 565.6 \(\pm\) 0.6 & 6.5 \(\pm\) 0.6 & 6.5 \(\pm\) 0.6 & 26.0 \(\pm\) 1.7 & \\ & & 4.4 \(\pm\) 0.1 & 595.0 \(\pm\) 0.4 & 8.1 \(\pm\) 0.4 & 8.1 \(\pm\) 0.4 & 32.4 \(\pm\) 1.1 & \\ & & 9.0 \(\pm\) 0.9 & 805.3 \(\pm\) 2.0 & 8.0 \(\pm\) 1.3 & 27.5 \(\pm\) 4.2 & 71.0 \(\pm\) 8.8 & \\ Aug 25 & \(R\) & 3.3 \(\pm\) 0.1 & 146.4 \(\pm\) 0.7 & 13.4 \(\pm\) 0.7 & 13.4 \(\pm\) 0.7 & 53.6 \(\pm\) 2.0 & 0.35 \\ & & 5.0 \(\pm\) 0.1 & 190.4 \(\pm\) 0.5 & 15.9 \(\pm\) 0.6 & 15.9 \(\pm\) 0.6 & 63.6 \(\pm\) 1.7 & \\ & & 2.0 \(\pm\) 0.1 & 271.1 \(\pm\) 0.9 & 13.1 \(\pm\) 1.0 & 13.1 \(\pm\) 1.0 & 52.4 \(\pm\) 2.8 & \\ & & 2.7 \(\pm\) 0.1 & 320.3 \(\pm\) 0.7 & 14.3 \(\pm\) 0.8 & 14.3 \(\pm\) 0.8 & 57.2 \(\pm\) 2.3 & \\ Aug 26 & \(B\) & 13.2 \(\pm\) 5.0 & 4.8 \(\pm\) 8.7 & 47.8 \(\pm\) 34.5 & 47.8 \(\pm\) 34.5 & 191.2 \(\pm\) 97.6 & 0.54 \\ & & 14.5 \(\pm\) 7.5 & 86.2 \(\pm\) 10.7 & 45.8 \(\pm\) 8.1 & 45.8 \(\pm\) 8.1 & 183.2 \(\pm\) 22.9 & \\ & & 3.9 \(\pm\) 0.4 & 212.9 \(\pm\) 4.3 & 37.2 \(\pm\) 3.6 & 37.2 \(\pm\) 3.6 & 148.8 \(\pm\) 10.2 & \\ & & 15.3 \(\pm\) 0.5 & 604.8 \(\pm\) 1.2 & 30.4 \(\pm\) 1.0 & 30.4 \(\pm\) 1.0 & 121.6 \(\pm\) 2.8 & 0.58 \\ & & 13.4 \(\pm\) 0.5 & 716.0 \(\pm\) 1.5 & 30.4 \(\pm\) 1.0 & 30.4 \(\pm\) 1.0 & 121.6 \(\pm\) 2.8 & \\ & & 20.0 \(\pm\) 3.3 & 7.5 \(\pm\) 4.0 & 44.3 \(\pm\) 14.2 & 44.3 \(\pm\) 14.2 & 177.2 \(\pm\) 40.2 & 0.76 \\ & & 23.7 \(\pm\) 4.8 & 88.3 \(\pm\) 4.7 & 43.2 \(\pm\) 4.1 & 43.2 \(\pm\) 4.1 & 172.8 \(\pm\) 11.6 & \\ & & 
7.3 \(\pm\) 0.5 & 201.8 \(\pm\) 2.6 & 35.8 \(\pm\) 2.2 & 35.8 \(\pm\) 2.2 & 143.2 \(\pm\) 6.2 & \\ Aug 27 & \(B\) & 2.7 \(\pm\) 0.4 & 37.0 \(\pm\) 1.3 & 6.1 \(\pm\) 1.5 & 6.1 \(\pm\) 1.5 & 24.4 \(\pm\) 4.2 & 0.53 \\ & & 3.2 \(\pm\) 0.3 & 76.1 \(\pm\) 2.2 & 15.8 \(\pm\) 2.6 & 15.8 \(\pm\) 2.6 & 63.2 \(\pm\) 7.4 & \\ & & 4.0 \(\pm\) 0.3 & 175.2 \(\pm\) 2.1 & 17.7 \(\pm\) 2.2 & 17.7 \(\pm\) 2.2 & 70.8 \(\pm\) 6.2 & \\ & \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Date, 2020 & Band & \(F_{0}\) & \(\Delta t_{0}\) & \({\cal T}_{\rm r}\) & \({\cal T}_{\rm d}\) & \(\Delta{\cal T}\) & \(\sigma_{\rm fit}\) \\ & & (mJy) & (min) & (min) & (min) & (min) & (mJy) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline & & 2.3 \(\pm\) 0.6 & 388.2 \(\pm\) 1.0 & 3.1 \(\pm\) 1.0 & 3.1 \(\pm\) 1.0 & 12.4 \(\pm\) 2.8 & \\ & & 2.9 \(\pm\) 0.5 & 495.3 \(\pm\) 1.5 & 5.5 \(\pm\) 1.2 & 5.5 \(\pm\) 1.2 & 22.0 \(\pm\) 3.4 & \\ & \(I\) & 4.9 \(\pm\) 0.5 & 37.5 \(\pm\) 1.0 & 6.8 \(\pm\) 1.1 & 6.8 \(\pm\) 1.1 & 27.2 \(\pm\) 3.1 & 0.68 \\ & & 5.0 \(\pm\) 0.3 & 77.9 \(\pm\) 1.5 & 15.0 \(\pm\) 1.7 & 15.0 \(\pm\) 1.7 & 60.0 \(\pm\) 4.8 & \\ & & 7.6 \(\pm\) 0.3 & 179.8 \(\pm\) 1.1 & 16.6 \(\pm\) 1.1 & 16.6 \(\pm\) 1.1 & 66.4 \(\pm\) 3.1 & \\ & & 5.1 \(\pm\) 0.3 & 230.5 \(\pm\) 1.5 & 14.4 \(\pm\) 1.5 & 14.4 \(\pm\) 1.5 & 57.6 \(\pm\) 4.2 & \\ & & 4.9 \(\pm\) 0.7 & 390.2 \(\pm\) 0.4 & 2.0 \(\pm\) 0.4 & 2.0 \(\pm\) 0.4 & 8.0 \(\pm\) 1.1 & \\ & & 6.4 \(\pm\) 0.4 & 494.0 \(\pm\) 0.8 & 8.7 \(\pm\) 0.6 & 8.7 \(\pm\) 0.6 & 34.8 \(\pm\) 1.7 & \\ Aug 28 & \(B\) & 4.2 \(\pm\) 0.2 & 87.0 \(\pm\) 2.8 & 29.4 \(\pm\) 2.6 & 29.4 \(\pm\) 2.6 & 117.6 \(\pm\) 7.4 & 0.57 \\ & & 9.8 \(\pm\) 0.3 & 171.8 \(\pm\) 0.9 & 22.6 \(\pm\) 1.2 & 22.6 \(\pm\) 1.2 & 90.4 \(\pm\) 3.4 & \\ & & 9.0 \(\pm\) 0.2 & 298.3 \(\pm\) 1.2 & 41.3 \(\pm\) 1.2 & 41.3 \(\pm\) 1.2 & 165.2 \(\pm\) 3.4 & \\ & \(I\) & 6.0 \(\pm\) 0.2 & 78.5 \(\pm\) 1.7 & 27.0 \(\pm\) 1.8 & 27.0 \(\pm\) 1.8 & 108.0 \(\pm\) 5.1 & 0.85 \\ & & 13.5 \(\pm\) 0.2 & 171.5 \(\pm\) 0.7 & 24.8 \(\pm\) 0.9 & 24.8 \(\pm\) 0.9 & 99.2 \(\pm\) 2.5 & \\ & & 14.3 \(\pm\) 0.2 & 295.6 \(\pm\) 0.8 & 41.6 \(\pm\) 0.9 & 41.6 \(\pm\) 0.9 & 166.4 \(\pm\) 2.5 & \\ Aug 30 & \(R\) & 2.4 \(\pm\) 0.2 & 30.5 \(\pm\) 0.5 & 8.6 \(\pm\) 0.8 & 8.6 \(\pm\) 0.8 & 34.4 \(\pm\) 2.3 & 0.54 \\ & & 3.1 \(\pm\) 0.2 & 76.3 \(\pm\) 2.4 & 17.5 \(\pm\) 2.5 & 17.5 \(\pm\) 2.5 & 70.0 \(\pm\) 7.1 & \\ & & 3.2 \(\pm\) 0.4 & 101.8 \(\pm\) 0.5 & 7.4 \(\pm\) 1.0 & 7.4 \(\pm\) 1.0 & 29.6 \(\pm\) 2.8 & \\ & & 1.3 \(\pm\) 0.3 & 140.8 \(\pm\) 1.0 & 5.5 \(\pm\) 1.7 & 5.5 \(\pm\) 1.7 & 22.0 \(\pm\) 4.8 & \\ & & 2.3 \(\pm\) 0.2 & 166.1 \(\pm\) 1.8 & 14.3 \(\pm\) 3.7 & 14.3 \(\pm\) 3.7 & 57.2 \(\pm\) 10.5 & \\ & & 5.3 \(\pm\) 0.3 & 199.5 \(\pm\) 1.0 & 14.5 \(\pm\) 0.7 & 14.5 \(\pm\) 0.7 & 58.0 \(\pm\) 2.0 & \\ Aug 31 & \(V\) & 4.5 \(\pm\) 0.4 & 58.5 \(\pm\) 1.0 & 9.0 \(\pm\) 1.0 & 9.0 \(\pm\) 1.0 & 36.0 \(\pm\) 2.8 & 0.36 \\ & & 3.8 \(\pm\) 0.2 & 87.4 \(\pm\) 1.8 & 13.1 \(\pm\) 1.6 & 13.1 \(\pm\) 1.6 & 52.4 \(\pm\) 4.5 & \\ & & 1.5 \(\pm\) 0.3 & 136.6 \(\pm\) 1.3 & 4.2 \(\pm\) 1.3 & 4.2 \(\pm\) 1.3 & 16.8 \(\pm\) 3.7 & \\ & & 3.8 \(\pm\) 0.6 & 58.5 \(\pm\) 1.2 & 9.8 \(\pm\) 1.1 & 9.8 \(\pm\) 1.1 & 39.2 \(\pm\) 3.1 & 0.30 \\ & & 4.4 \(\pm\) 0.4 & 82.8 \(\pm\) 1.7 & 13.2 \(\pm\) 1.8 & 13.2 \(\pm\) 1.8 & 52.8 \(\pm\) 5.1 & \\ & & 1.8 \(\pm\) 0.2 & 129.0 \(\pm\) 2.1 & 14.6 \(\pm\) 2.5 & 14.6 \(\pm\) 2.5 & 58.4 \(\pm\) 7.1 & \\ & & 5.5 \(\pm\) 0.2 & 61.3 \(\pm\) 0.9 & 10.5 \(\pm\) 0.7 & 10.5 \(\pm\) 0.7 & 42.0 \(\pm\) 2.0 & 0.50 \\ & & 3.5 
\(\pm\) 0.6 & 85.1 \(\pm\) 0.9 & 6.8 \(\pm\) 1.6 & 6.8 \(\pm\) 1.6 & 27.2 \(\pm\) 4.5 & \\ & & 2.2 \(\pm\) 0.3 & 109.5 \(\pm\) 4.7 & 15.6 \(\pm\) 3.1 & 15.6 \(\pm\) 3.1 & 62.4 \(\pm\) 8.8 & \\ Sep 2 & \(R\) & 4.8 \(\pm\) 0.2 & 49.9 \(\pm\) 0.8 & 13.5 \(\pm\) 0.6 & 13.5 \(\pm\) 0.6 & 54.0 \(\pm\) 1.7 & 0.48 \\ & & 5.3 \(\pm\) 0.4 & 86.5 \(\pm\) 1.0 & 10.6 \(\pm\) 1.1 & 10.6 \(\pm\) 1.1 & 42.4 \(\pm\) 3.1 & \\ & & 6.1 \(\pm\) 0.7 & 108.7 \(\pm\) 0.6 & 8.1 \(\pm\) 1.1 & 8.1 \(\pm\) 1.1 & 32.4 \(\pm\) 3.1 & \\ & & 4.0 \(\pm\) 0.3 & 134.0 \(\pm\) 1.6 & 13.1 \(\pm\) 2.3 & 13.1 \(\pm\) 2.3 & 52.4 \(\pm\) 6.5 & \\ & & 3.4 \(\pm\) 0.3 & 165.2 \(\pm\) 0.6 & 7.3 \(\pm\) 1.0 & 7.3 \(\pm\) 1.0 & 29.2 \(\pm\) 2.8 & \\ & & 5.6 \(\pm\) 0.3 & 201.7 \(\pm\) 2.0 \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Date, 2020 & Band & \(F_{0}\) & \(\Delta t_{0}\) & \({\cal T}_{\rm r}\) & \({\cal T}_{\rm d}\) & \(\Delta{\cal T}\) & \(\sigma_{\rm fit}\) \\ & & (mJy) & (min) & (min) & (min) & (min) & (mJy) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline & & 7.0 \(\pm\) 0.1 & 370.2 \(\pm\) 0.6 & 15.1 \(\pm\) 1.0 & 15.1 \(\pm\) 1.0 & 60.4 \(\pm\) 2.8 & \\ & & 6.4 \(\pm\) 0.2 & 409.8 \(\pm\) 0.4 & 8.8 \(\pm\) 0.6 & 8.8 \(\pm\) 0.6 & 35.2 \(\pm\) 1.7 & \\ & & 3.8 \(\pm\) 0.2 & 436.9 \(\pm\) 0.6 & 8.4 \(\pm\) 0.7 & 8.4 \(\pm\) 0.7 & 33.6 \(\pm\) 2.0 & \\ & & 3.3 \(\pm\) 0.1 & 481.7 \(\pm\) 0.6 & 9.1 \(\pm\) 0.7 & 9.1 \(\pm\) 0.7 & 36.4 \(\pm\) 2.0 & \\ Sep 3 & \(R\) & 1.7 \(\pm\) 0.1 & 58.3 \(\pm\) 1.5 & 24.5 \(\pm\) 2.0 & 24.5 \(\pm\) 2.0 & 98.0 \(\pm\) 5.7 & 0.37 \\ & & 1.1 \(\pm\) 0.1 & 113.1 \(\pm\) 0.9 & 5.0 \(\pm\) 1.0 & 5.0 \(\pm\) 1.0 & 20.0 \(\pm\) 2.8 & \\ & & 3.8 \(\pm\) 0.6 & 174.3 \(\pm\) 1.3 & 14.4 \(\pm\) 1.7 & 14.4 \(\pm\) 1.7 & 57.6 \(\pm\) 4.8 & \\ & & 5.1 \(\pm\) 0.8 & 215.9 \(\pm\) 1.9 & 22.6 \(\pm\) 4.6 & 22.6 \(\pm\) 4.6 & 90.4 \(\pm\) 13.0 & \\ & & 4.0 \(\pm\) 0.8 & 262.4 \(\pm\) 4.2 & 24.7 \(\pm\) 3.7 & 24.7 \(\pm\) 3.7 & 98.8 \(\pm\) 10.5 & \\ & & 4.2 \(\pm\) 0.9 & 317.6 \(\pm\) 1.0 & 6.5 \(\pm\) 1.4 & 6.5 \(\pm\) 1.4 & 26.0 \(\pm\) 4.0 & \\ & & 3.6 \(\pm\) 0.3 & 386.0 \(\pm\) 2.1 & 15.5 \(\pm\) 1.9 & 15.5 \(\pm\) 1.9 & 62.0 \(\pm\) 5.4 & \\ Sep 6 & \(R\) & 4.2 \(\pm\) 0.3 & 574.4 \(\pm\) 1.7 & 15.8 \(\pm\) 1.5 & 15.8 \(\pm\) 1.5 & 63.2 \(\pm\) 4.2 & 0.47 \\ & & 5.5 \(\pm\) 0.4 & 652.3 \(\pm\) 0.9 & 12.2 \(\pm\) 1.2 & 12.2 \(\pm\) 1.2 & 48.8 \(\pm\) 3.4 & \\ Sep 8 & \(R\) & 1.1 \(\pm\) 0.2 & 21.0 \(\pm\) 0.6 & 3.2 \(\pm\) 0.7 & 3.2 \(\pm\) 0.7 & 12.8 \(\pm\) 2.0 & 0.20 \\ & & 2.1 \(\pm\) 0.1 & 41.7 \(\pm\) 0.6 & 8.4 \(\pm\) 0.6 & 8.4 \(\pm\) 0.6 & 33.6 \(\pm\) 1.7 & \\ & & 1.3 \(\pm\) 0.1 & 122.0 \(\pm\) 0.9 & 12.9 \(\pm\) 1.0 & 12.9 \(\pm\) 1.0 & 51.6 \(\pm\) 2.8 & \\ Sep 9 & \(R\) & 1.4 \(\pm\) 0.1 & 115.9 \(\pm\) 1.3 & 16.8 \(\pm\) 1.9 & 16.8 \(\pm\) 1.9 & 67.2 \(\pm\) 5.4 & 0.21 \\ & & 4.9 \(\pm\) 0.2 & 215.5 \(\pm\) 2.7 & 40.6 \(\pm\) 2.4 & 40.6 \(\pm\) 2.4 & 162.4 \(\pm\) 6.8 & \\ & & 1.4 \(\pm\) 0.7 & 286.9 \(\pm\) 8.4 & 22.1 \(\pm\) 9.8 & 22.1 \(\pm\) 9.8 & 88.4 \(\pm\) 27.7 & \\ & & 2.3 \(\pm\) 0.7 & 321.9 \(\pm\) 3.6 & 18.7 \(\pm\) 2.3 & 18.7 \(\pm\) 2.3 & 74.8 \(\pm\) 6.5 & \\ Sep 10 & \(R\) & 0.8 \(\pm\) 0.2 & 31.3 \(\pm\) 0.9 & 3.1 \(\pm\) 0.9 & 3.1 \(\pm\) 0.9 & 12.4 \(\pm\) 2.5 & 0.26 \\ & & 2.1 \(\pm\) 0.1 & 90.1 \(\pm\) 0.9 & 23.4 \(\pm\) 1.1 & 23.4 \(\pm\) 1.1 & 93.6 \(\pm\) 3.1 & \\ & & 3.0 \(\pm\) 0.1 & 200.8 \(\pm\) 0.3 & 8.3 \(\pm\) 0.3 & 8.3 \(\pm\) 0.3 & 33.2 \(\pm\) 0.8 & \\ & & 2.2 \(\pm\) 0.1 & 261.1 \(\pm\) 0.2 & 4.4 \(\pm\) 0.2 & 4.4 \(\pm\) 0.2 
& 17.6 \(\pm\) 0.6 & \\ & & 7.5 \(\pm\) 0.0 & 314.4 \(\pm\) 0.3 & 15.2 \(\pm\) 0.2 & 15.2 \(\pm\) 0.2 & 60.8 \(\pm\) 0.6 & \\ & & 2.7 \(\pm\) 0.1 & 339.5 \(\pm\) 0.2 & 6.1 \(\pm\) 0.3 & 6.1 \(\pm\) 0.3 & 24.4 \(\pm\) 0.8 & \\ & & 2.3 \(\pm\) 0.3 & 390.1 \(\pm\) 0.6 & 5.9 \(\pm\) 0.7 & 5.9 \(\pm\) 0.7 & 23.6 \(\pm\) 2.0 & \\ & & 4.8 \(\pm\) 0.3 & 408.8 \(\pm\) 0.6 & 9.2 \(\pm\) 1.4 & 9.2 \(\pm\) 1.4 & 36.8 \(\pm\) 4.0 & \\ & & 4.3 \(\pm\) 0.3 & 440.6 \(\pm\) 1.2 & 14.7 \(\pm\) 2.0 & 14.7 \(\pm\) 2.0 & 58.8 \(\pm\) 5.7 & \\ & & 3.0 \(\pm\) 0.2 & 481.1 \(\pm\) 1.5 & 16.3 \(\pm\) 1.4 & 16.3 \(\pm\) 1.4 & 65.2 \(\pm\) 4.0 & \\ Sep 11 & \(R\) & 6.5 \(\pm\) 0.3 & 10.3 \(\pm\) 1.4 & 9.7 \(\pm\) 0.9 & 40.4 \(\pm\) 5.0 & 100.2 \(\pm\) 10.2 & 0.26 \\ & & 1.1 \(\pm\) 0.2 & 63.4 \(\pm\) 0.7 & 4.3 \(\pm\) 1.0 & 4.3 \(\pm\) 1.0 & 17.2 \(\pm\) 2.8 & \\ & & 10.6 \(\pm\) 0.5 & 99.0 \(\pm\) 0.8 & 19.2 \(\pm\) 1.0 & 19.2 \(\pm\) 1.0 & 76.8 \(\pm\) 2.8 & \\ & & 7.6 \(\pm\) 1.8 & 134.9 \(\pm\) 1.4 & 11.7 \(\pm\) 1.7 & 11.7 \(\pm\) 1.7 & 46.8 \(\pm\) 4.8 & \\ & & 9.7 \(\pm\) 2.9 & 157.3 \(\pm\) 1.3 \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Date, 2020 & Band & \(F_{0}\) & \(\Delta t_{0}\) & \(\mathcal{T}_{\rm r}\) & \(\mathcal{T}_{\rm d}\) & \(\Delta\mathcal{T}\) & \(\sigma_{\rm fit}\) \\ & & (mJy) & (min) & (min) & (min) & (min) & (mJy) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline & & 3.6 \(\pm\) 0.1 & 306.1 \(\pm\) 0.6 & 15.3 \(\pm\) 0.7 & 15.3 \(\pm\) 0.7 & 61.2 \(\pm\) 2.0 & \\ & & 1.4 \(\pm\) 0.2 & 364.3 \(\pm\) 0.5 & 3.4 \(\pm\) 0.7 & 3.4 \(\pm\) 0.7 & 13.6 \(\pm\) 2.0 & \\ & & 1.6 \(\pm\) 0.1 & 400.7 \(\pm\) 2.0 & 15.8 \(\pm\) 2.7 & 15.8 \(\pm\) 2.7 & 63.2 \(\pm\) 7.6 & \\ & & 1.5 \(\pm\) 0.1 & 451.1 \(\pm\) 3.3 & 19.7 \(\pm\) 3.4 & 19.7 \(\pm\) 3.4 & 78.8 \(\pm\) 9.6 & \\ Sep 12 & \(R\) & 1.2 \(\pm\) 0.1 & 83.5 \(\pm\) 1.8 & 6.0 \(\pm\) 1.3 & 6.0 \(\pm\) 1.3 & 24.0 \(\pm\) 3.7 & 0.22 \\ & & 1.2 \(\pm\) 0.3 & 191.5 \(\pm\) 1.5 & 9.7 \(\pm\) 2.9 & 9.7 \(\pm\) 2.9 & 38.8 \(\pm\) 8.2 & \\ & & 1.3 \(\pm\) 0.1 & 225.2 \(\pm\) 4.6 & 22.0 \(\pm\) 4.5 & 22.0 \(\pm\) 4.5 & 88.0 \(\pm\) 12.7 & \\ & & 1.4 \(\pm\) 0.1 & 276.5 \(\pm\) 0.9 & 14.4 \(\pm\) 1.2 & 14.4 \(\pm\) 1.2 & 57.6 \(\pm\) 3.4 & \\ & & 1.7 \(\pm\) 0.1 & 328.9 \(\pm\) 0.4 & 10.7 \(\pm\) 0.4 & 10.7 \(\pm\) 0.4 & 42.8 \(\pm\) 1.1 & \\ & & 0.6 \(\pm\) 0.1 & 370.9 \(\pm\) 0.5 & 2.9 \(\pm\) 0.6 & 2.9 \(\pm\) 0.6 & 11.6 \(\pm\) 1.7 & \\ & & 0.7 \(\pm\) 0.1 & 389.1 \(\pm\) 0.8 & 7.1 \(\pm\) 0.8 & 7.1 \(\pm\) 0.8 & 28.4 \(\pm\) 2.3 & \\ & & 0.5 \(\pm\) 0.1 & 434.3 \(\pm\) 0.6 & 2.5 \(\pm\) 0.6 & 2.5 \(\pm\) 0.6 & 10.0 \(\pm\) 1.7 & \\ & & 1.3 \(\pm\) 0.1 & 461.1 \(\pm\) 0.4 & 7.7 \(\pm\) 0.4 & 7.7 \(\pm\) 0.4 & 30.8 \(\pm\) 1.1 & \\ Sep 13 & \(V\) & 2.5 \(\pm\) 0.2 & 119.9 \(\pm\) 2.5 & 21.7 \(\pm\) 2.3 & 21.7 \(\pm\) 2.3 & 86.8 \(\pm\) 6.5 & 0.30 \\ & & 2.7 \(\pm\) 0.2 & 376.6 \(\pm\) 1.5 & 19.3 \(\pm\) 1.9 & 19.3 \(\pm\) 1.9 & 77.2 \(\pm\) 5.4 & \\ & & 1.5 \(\pm\) 0.2 & 443.0 \(\pm\) 2.1 & 11.2 \(\pm\) 2.1 & 11.2 \(\pm\) 2.1 & 44.8 \(\pm\) 5.9 & \\ & & 0.5 \(\pm\) 0.1 & 434.3 \(\pm\) 0.6 & 2.5 \(\pm\) 0.6 & 2.5 \(\pm\) 0.6 & 10.0 \(\pm\) 1.7 & \\ & & 1.3 \(\pm\) 0.1 & 461.1 \(\pm\) 0.4 & 7.7 \(\pm\) 0.4 & 7.7 \(\pm\) 0.4 & 30.8 \(\pm\) 1.1 & \\ & & 2.7 \(\pm\) 0.2 & 119.9 \(\pm\) 2.5 & 21.7 \(\pm\) 2.3 & 21.7 \(\pm\) 2.3 & 86.8 \(\pm\) 6.5 & 0.30 \\ & & 2.7 \(\pm\) 0.2 & 376.6 \(\pm\) 1.5 & 19.3 \(\pm\) 1.9 & 19.3 \(\pm\) 1.9 & 77.2 \(\pm\) 5.4 & \\ & & 1.5 \(\pm\) 0.2 & 443.0 \(\pm\) 2.1 & 11.2 \(\pm\) 2.1 & 11.2 \(\pm\) 2.1 
& 44.8 \(\pm\) 5.9 & \\ & & 0.5 \(\pm\) 0.1 & 436.7 \(\pm\) 0.9 & 2.6 \(\pm\) 0.7 & 2.6 \(\pm\) 0.7 & 10.4 \(\pm\) 2.0 & 0.35 \\ & & 3.7 \(\pm\) 0.3 & 91.2 \(\pm\) 1.1 & 11.6 \(\pm\) 1.0 & 11.6 \(\pm\) 1.0 & 46.4 \(\pm\) 2.8 & \\ & & 3.9 \(\pm\) 0.3 & 123.6 \(\pm\) 1.4 & 14.2 \(\pm\) 1.7 & 14.2 \(\pm\) 1.7 & 56.8 \(\pm\) 4.8 & \\ & & 1.0 \(\pm\) 0.1 & 177.2 \(\pm\) 6.8 & 23.9 \(\pm\) 4.7 & 23.9 \(\pm\) 4.7 & 95.6 \(\pm\) 13.3 & \\ & & 1.3 \(\pm\) 0.1 & 260.7 \(\pm\) 0.6 & 5.5 \(\pm\) 0.5 & 5.5 \(\pm\) 0.5 & 22.0 \(\pm\) 1.4 & \\ & & 2.0 \(\pm\) 0.0 & 384.3 \(\pm\) 0.4 & 13.7 \(\pm\) 0.6 & 13.7 \(\pm\) 0.6 & 54.8 \(\pm\) 1.7 & \\ & & 0.6 \(\pm\) 0.1 & 436.7 \(\pm\) 0.9 & 7.9 \(\pm\) 0.9 & 7.9 \(\pm\) 0.9 & 31.6 \(\pm\) 2.5 & \\ & & 2.8 \(\pm\) 0.3 & 103.7 \(\pm\) 2.7 & 11.9 \(\pm\) 2.5 & 11.9 \(\pm\) 2.5 & 47.6 \(\pm\) 7.1 & 0.38 \\ & & 3.6 \(\pm\) 0.4 & 135.2 \(\pm\) 1.8 & 11.8 \(\pm\) 1.6 & 11.8 \(\pm\) 1.6 & 47.2 \(\pm\) 4.5 & \\ & & 3.6 \(\pm\) 0.1 & 374.1 \(\pm\) 1.5 & 32.0 \(\pm\) 1.8 & 32.0 \(\pm\) 1.8 & 128.0 \(\pm\) 5.1 & \\ & & 1.6 \(\pm\) 0.2 & 446.9 \(\pm\) 1.6 & 9.4 \(\pm\) 1.8 & 9.4 \(\pm\) 1.8 & 37.6 \(\pm\) 5.1 & \\ Sep 14 & \(V\) & 4.3 \(\pm\) 0.5 & 395.0 \(\pm\) 2.0 & 12.5 \(\pm\) 1.3 & 12.5 \(\pm\) 1.3 & 50.0 \(\pm\) 3.7 & 0.36 \\ & & 5.5 \(\pm\) 0.4 & 422.0 \(\pm\) 1.3 & 11.5 \(\pm\) 1.2 & 11.5 \(\pm\) 1.2 & 46.0 \(\pm\) 3.4 & \\ & & 6.3 \(\pm\) 0.2 & 489.5 \(\pm\) 0.7 & 17.9 \(\pm\) 0.9 & ## 5 Discussion In this paper, we have presented the results from the optical monitoring of the blazar BL Lacertae for the period Jul 11 - Sep 14, 2020, which encompasses the August 2020 flare. During this period (more specifically, starting from the second half of August), we have performed intense intra-night monitoring of BL Lacertae. The blazar showed very high intra-night activity with a duty cycle over that period of 96% or 88%, depending on whether the probably variable cases are considered variable or not. We performed a thorough analysis of the INV of BL Lacertae during the August 2020 flare, and now we shall discuss some constraints that the results from our analysis can place on the blazar jet parameters. ### Emitting Region Parameters First of all, we adopted the turbulent jet model (e.g. Bhatta et al., 2013) in order to interpret the INV observed. Within this model, a plane shock hits a turbulent cell and accelerates (energize) the cell electrons, which are then cooled by synchrotron emission. In this way, a flux pulse is produced, which manifests itself as a flare on the LC. The combination of the individual pulses coming from cells of various characteristics leads to the observed INV. Within this model, the high duty cycle obtained by us means that there is well-developed turbulence within the jet (e.g. Webb et al., 2021). In a recent study, Kalita et al. (2023) reported results from the BL Lacertae monitoring from Oct 1 to Nov 23, 2020 in the optical. According to their Table 2, the source showed INV during four nights out of ten (the probably variable cases considered non-variable); see also Shabolvinskaya et al. (2023) regarding the source monitoring in that period. Therefore, the duty cycle could be estimated as \(\sim\)40%, which is significantly lower than ours. 
\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
Date, 2020 & Band & \(F_{0}\) & \(\Delta t_{0}\) & \(\mathcal{T}_{\rm r}\) & \(\mathcal{T}_{\rm d}\) & \(\Delta\mathcal{T}\) & \(\sigma_{\rm fit}\) \\
 & & (mJy) & (min) & (min) & (min) & (min) & (mJy) \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\
\hline
 & \(I\) & 4.7 \(\pm\) 0.7 & 398.4 \(\pm\) 1.8 & 12.3 \(\pm\) 1.2 & 12.3 \(\pm\) 1.2 & 49.2 \(\pm\) 3.4 & 0.42 \\
 & & 5.2 \(\pm\) 0.5 & 424.6 \(\pm\) 1.9 & 13.6 \(\pm\) 1.6 & 13.6 \(\pm\) 1.6 & 54.4 \(\pm\) 4.5 & \\
 & & 7.4 \(\pm\) 0.2 & 490.1 \(\pm\) 0.6 & 18.4 \(\pm\) 0.8 & 18.4 \(\pm\) 0.8 & 73.6 \(\pm\) 2.3 & \\
\hline
\end{tabular}
Note. – Timescales are in the observer’s frame. Column 3: Twice the flare amplitude. Column 4: Approximate position of the flare maximum (the actual position is equal to \(\Delta t_{0}\) only for symmetric flares). Column 5: \(e\)-folding rise timescale. Column 6: \(e\)-folding decay timescale. Column 7: Approximate duration of the flare. Column 8: Standard deviation about the fitted sum of DE functions.
\end{table}
Table 7: (continued)

Figure 17: Time lag of the \(BVI\) band variations (with respect to the \(R\) band ones) against the frequency of the corresponding bands (squares). The black solid curve is the fit to this frequency dependence. The black plus signs mark the randomized lag values, while the green lines are the fits to each set of randomized time lags (see text). We show the 3\(\sigma\) error bars for the sake of comparison with the randomized lag values.

The probably variable cases, however, are associated with the intra-night monitoring duration of \(\lesssim\)3 h, which could affect the source variability status and, hence, the duty cycle estimate. In any case, the above-obtained value could be considered as a lower limit. If, however, we assume that the duty cycle decrease is real, and not an artifact of the insufficient monitoring duration, then, following the turbulent jet model, the turbulence within the jet subsides significantly within about two months of the August 2020 flare onset.

The details about the processes of particle acceleration taking place in the jet are not directly relevant to the present scenario, and so we assumed for simplicity a quasi-instantaneous injection within a time \(t^{\prime}_{\rm inj}\leq\mathcal{R}/c\) of a mono-energetic population of high-energy electrons in a homogeneous region of radius \(\mathcal{R}\) (here \(c\) stands for the speed of light); here and below the primed quantities are in the rest frame. These electrons cool by synchrotron emission and lose half of their energy within the cooling time, \(t_{\rm cool}(\nu)\):
\[t_{\rm cool}(\nu)\simeq 4.73\times 10^{4}\,\mathcal{B}^{-3/2}\,\nu_{15}^{-1/2}\,\left(\frac{\delta}{1+z}\right)^{-1/2}\ \ \mbox{[s]}, \tag{11}\]
where \(\nu_{15}\) is the observed photon frequency (in units of \(10^{15}\,\)Hz, \(\nu=10^{15}\nu_{15}\,\)Hz) and \(\mathcal{B}\) the magnetic field strength (in units of Gauss). Here we neglected the cooling by the inverse Compton processes; that is, a zero Compton dominance parameter was assumed. This assumption is justified because Abdo et al. (2010a) reported a Compton dominance parameter of 0.2 for BL Lacertae. In the framework of this scenario, the low-energy electrons result from initially more energetic ones after their synchrotron cooling, thereby leading to the soft time lag (e.g.
Urry et al., 1997; Tavecchio et al., 1998): the time lag between two bands corresponding to frequencies \(\nu_{1}\) and \(\nu_{2}\) (\(\nu_{1}>\nu_{2}\)) is equal to \(\tau(\nu_{2},\nu_{1})=t_{\rm cool}(\nu_{2})-t_{\rm cool}(\nu_{1})\). Therefore, if we have estimated the time lags among the \(BVRI\) bands, then we can derive \(\mathcal{B}\) and \(\delta\) simultaneously.

Figure 19: Distribution of the maximal electron Lorentz factors.

Figure 20: Distribution of the maximal radii.

This technique was applied using the Aug 20 lags (Table 6), and so we have \(\tau(\nu_{R},\nu_{k})=t_{\rm cool}(\nu_{R})-t_{\rm cool}(\nu_{k})\), \(k=B,V,I\). In this notation the lags \(\tau(\nu_{R},\nu_{B})\) and \(\tau(\nu_{R},\nu_{V})\) are positive, while the lag \(\tau(\nu_{R},\nu_{I})\) is negative. The frequency dependence of the observed lags is shown in Figure 17. Technically, we randomized the time lags within the corresponding asymmetric lag uncertainties to estimate the parameters and their uncertainties. For each set of randomly drawn lags, we estimated \(\mathcal{B}\) and \(\delta\) by performing an unweighted fit using the Nelder-Mead fitting method; we ran a total of 2500 cycles. Finally, we built the parameter distributions and used them to get \(\mathcal{B}=5.6^{+1.3}_{-0.8}\,\rm G\) and \(\delta=11.0^{+0.3}_{-0.3}\); the weighted Nelder-Mead fit without randomization gave very similar results. The parameter uncertainties represent the 16th and 84th percentiles of the corresponding distributions, and the fit corresponding to the so-derived parameters is drawn in Figure 17 with a black line. Using the same approach and MWL time lags from \(\gamma\)-rays to optical, Weaver et al. (2020) obtained a magnetic field strength of \(\sim\)3.0 G for BL Lacertae.

We have only the \(B\) vs \(I\) time lag for Aug 26, and so we can apply the following expression to derive the magnetic field strength (e.g. Tavecchio et al., 1998; Papadakis et al., 2003):
\[\mathcal{B}\,\delta^{1/3}\simeq 1.31\!\times\!10^{3}\,\left(\frac{1+z}{\nu_{15,I}}\right)^{1/3}\times\left[\frac{1-(\nu_{15,I}/\nu_{15,B})^{1/2}}{\tau(\nu_{I},\nu_{B})}\right]^{2/3}\,\,\mathrm{[G]}, \tag{12}\]
where \(\nu_{15,B}\) and \(\nu_{15,I}\) are the frequencies corresponding to the \(B\) and \(I\) bands, respectively (in units of \(10^{15}\,\rm Hz\)) and \(\tau(\nu_{I},\nu_{B})\) the \(B\) vs \(I\) time lag (in units of seconds). Equation (12) follows from Equation (11) by writing \(\tau(\nu_{I},\nu_{B})=t_{\rm cool}(\nu_{I})-t_{\rm cool}(\nu_{B})\) and solving for \(\mathcal{B}\,\delta^{1/3}\); the numerical coefficient is \((4.73\times 10^{4})^{2/3}\simeq 1.31\times 10^{3}\). Having a \(B\) vs \(I\) time lag of \(3.8^{+2.5}_{-1.3}\,\rm min\), we got \(\mathcal{B}\,\delta^{1/3}\simeq 20.3^{+6.4}_{-5.7}\,\rm G\) or \(\mathcal{B}\simeq 9.1^{+2.9}_{-2.6}\,\rm G\) if we assume a Doppler factor of \(11.0^{+0.3}_{-0.3}\) as estimated above (see also Shablovinskaya et al., 2023). The uncertainties of \(\mathcal{B}\) were derived using the lag and Doppler factor randomization.

As we mentioned in Section 3.7, the measured time lags are INLC lags rather than individual flare lags. Therefore, we shall assume the parameters determined above to be an average over the emitting regions contributing to the given INLC. In this regard, our estimate of the Doppler factor is a kind of local estimate related to the regions contributing to the Aug 20 INLC. Nevertheless, it is consistent with the literature values of \(\delta\) for BL Lacertae as mentioned before. We see that the various Doppler factor estimates for BL Lacertae are consistent with each other irrespective of the band and method used to get them.
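To make the lag-based estimate concrete, the sketch below fits \(\mathcal{B}\) and \(\delta\) to a set of inter-band lags using Equation (11) and a Nelder-Mead minimization, as described above. It is only an illustration of the procedure, not the pipeline used in the paper: the band frequencies are rough effective values and the lags are placeholder numbers of the right order of magnitude.

```python
import numpy as np
from scipy.optimize import minimize

Z = 0.0686  # redshift of BL Lacertae (assumed here)

def t_cool(nu15, B, delta):
    """Synchrotron cooling time of Eq. (11), in seconds."""
    return 4.73e4 * B**-1.5 * nu15**-0.5 * (delta / (1.0 + Z))**-0.5

# Approximate effective band frequencies in units of 1e15 Hz (assumed values)
NU15 = {"B": 0.68, "V": 0.55, "R": 0.47, "I": 0.38}

# Placeholder lags of B, V, I with respect to R, in seconds (not the measured ones)
lags = {"B": 4.0 * 60.0, "V": 2.0 * 60.0, "I": -3.0 * 60.0}

def sum_of_squares(params):
    B, delta = params
    if B <= 0.0 or delta <= 0.0:   # keep the simplex in the physical region
        return 1e30
    # tau(nu_R, nu_k) = t_cool(nu_R) - t_cool(nu_k), positive for k = B, V
    model = {k: t_cool(NU15["R"], B, delta) - t_cool(NU15[k], B, delta) for k in lags}
    return sum((lags[k] - model[k]) ** 2 for k in lags)

res = minimize(sum_of_squares, x0=[5.0, 10.0], method="Nelder-Mead")
B_fit, delta_fit = res.x
print(f"B = {B_fit:.1f} G, delta = {delta_fit:.1f}")
```

In the analysis itself this fit is repeated 2500 times with the lags randomized within their asymmetric uncertainties, and \(\mathcal{B}\) and \(\delta\) are then read off the resulting parameter distributions.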
This consistency is in contrast with the estimates of \(\delta\) for the high-energy synchrotron-peaked blazars, for which a dependence on the band and method used is observed (this is termed the "Doppler crisis", e.g. Piner and Edwards, 2018; Agarwal et al., 2021). An explanation of that dependence could lie in the more complex internal jet structure in these sources compared to the other kinds of blazars. Hence, our results are in support of this scenario as far as BL Lacertae is classified as a low-energy synchrotron-peaked blazar: the lack of discrepancy among the Doppler factor estimates could mean a simple structure of its jet.

An independent magnetic field strength estimate could be obtained using the results from the LC decompositions and considering the decay timescale, \(\mathcal{T}_{\rm d}\), as an upper limit of \(t_{\rm cool}\); that is, \(\mathcal{T}_{\rm d}\geq t_{\rm cool}\) (e.g. Fan et al., 2021). Thus, the lower limit (or the minimum value) of the magnetic field strength, \(\mathcal{B}_{\rm min}(\delta)\), inside the emitting region could be derived by rewriting Equation (11) as follows:
\[\begin{split}\widetilde{\mathcal{B}}_{\rm min}&=1.31\!\times\!10^{3}\,\mathcal{T}_{\rm d}^{-2/3}\,\nu_{15}^{-1/3}\,(1+z)^{1/3}\,\,\mathrm{[G]};\\ \mathcal{B}_{\rm min}(\delta)&=\widetilde{\mathcal{B}}_{\rm min}\,\delta^{-1/3};\\ \mathcal{B}&\geq\mathcal{B}_{\rm min}(\delta),\end{split} \tag{13}\]
where \(\mathcal{T}_{\rm d}\) is in units of seconds. In addition, the results from the LC decompositions could also be used to set limits on the electron Lorentz factor in the emitting region and on the radius of the emitting region. The electron Lorentz factor, \(\gamma_{\rm e}\), which is the electron energy in units of \(m_{\rm e}c^{2}\), can be associated with the observed frequency of the emitted synchrotron radiation via (Ghisellini et al., 1997)
\[\nu=\frac{4}{3}\,\gamma_{\rm e}^{2}\,\nu_{\mathcal{B}}\frac{\delta}{1+z}, \tag{14}\]
where \(\nu_{\mathcal{B}}=2.80\times 10^{6}\mathcal{B}\) is the cyclotron frequency. This equation, coupled with Equation (11), yields \(\gamma_{\rm e}\propto t_{\rm cool}^{1/3}(\nu)\). Assuming again that \(\mathcal{T}_{\rm d}\geq t_{\rm cool}\), we get an upper limit (or a maximal value) of the electron Lorentz factor for the corresponding frequency:
\[\begin{split}\widetilde{\gamma}_{\rm e,max}&=4.53\!\times\!10^{2}\,\nu_{15}^{2/3}\,[\mathcal{T}_{\rm d}\,(1+z)]^{1/3}\,;\\ \gamma_{\rm e,max}(\delta)&=\widetilde{\gamma}_{\rm e,max}\,\delta^{-1/3};\\ \gamma_{\rm e}&\leq\gamma_{\rm e,max}(\delta),\end{split} \tag{15}\]
where \(\mathcal{T}_{\rm d}\) is in units of seconds. Accounting for our assumption about the injection time of electrons, the rising part of the flare LC constrains the light-crossing time, \(t_{\rm cross}\) (\(\mathcal{T}_{\rm r}\geq t_{\rm cross}\)), thus setting an upper limit (or a maximal value) on the emitting region radius as follows:
Taking the values of \(\mathcal{T}_{\mathrm{r}}\) and \(\mathcal{T}_{\mathrm{d}}\) from Table 7, assuming \(\delta=11.0\), and using Equations (13), (15), and (16), we obtained the minimal values of the magnetic field strength, maximal values of the electron Lorentz factor, and maximal values of the radius that characterize the emitting regions. The distributions of \(\mathcal{B}_{\mathrm{min}}(\delta=11.0)\equiv\mathcal{B}_{\mathrm{min}}(11.0)\), \(\gamma_{\mathrm{e,max}}(\delta=11.0)\equiv\gamma_{\mathrm{e,max}}(11.0)\), and \(\mathcal{R}_{\mathrm{max}}(\delta=11.0)\equiv\mathcal{R}_{\mathrm{max}}(11.0)\) are shown in Figures 18, 19, and 20. Some characteristics of the emitting regions are listed in Table 8. Using the same approach, Covino et al. (2015) found the following characteristics for the emitting regions of BL Lacertae assuming \(\delta=10.0\) and a Compton dominance parameter of unity: a lower limit for the magnetic field strength of \(6.0\,\mathrm{G}\) and an upper limit for the radius of \(3\times 10^{-5}\,\mathrm{pc}=6.2\,\mathrm{AU}\). In addition, Weaver et al. (2020) obtained a magnetic field strength of \(\sim\)\(3.0\,\mathrm{G}\) using a minimal timescale of \(\sim\)\(30.0\,\mathrm{min}\), derived on the basis of the BL Lacertae the PSD slope approximation used by us - this should be accounted for in the discussion that follows. Papadakis et al. (2003) estimated the PSD slope for BL Lacertae on intra-night timescales to be \(\varkappa=1.87\pm 0.16\); the individual PSDs were averaged over nights and bands before the fitting. Carini et al. (2011) found the SF slopes for the blazar S5 0716+714 to lie mostly between 1 and 2 (corresponding to the PSD slopes in the range 2-3). Recently, Goyal (2021) found a mean PSD slope of \(3.1\pm 0.3\) for a sample of seven BL Lacs. Our result is consistent with that of Carini et al. (2011) and Goyal (2021) to within the scatter quoted and steeper than the PSD slope obtained by Papadakis et al. (2003). The above groups, however, did not apply any detrending procedure, and so their results could be affected by the long-term component when present: the results will be dependent on the number of the INLCs showing a long-term component (the INLCs without such component could be considered as being detrended already). Our assumption about the INLC generation is related to the turbulent jet model as we mentioned above. In this regard, Calafut & Wiita (2015) and Pollack et al. (2016) estimated the PSD slopes expected from the turbulence within the jet flow. Their computations are based on the numerical 2D modeling of relativistic jet propagation, and both groups found the PSD slopes to average around \(\varkappa=2\) for timescales from a few days to years. Our mean PSD slope is steeper, but it is derived on the intra-night timescales. However, the detailed analysis of the PSDs for our data is beyond the scope of the present paper. ## 6 Summary The main results of the presented study could be summarized as follows: 1. Short-timescale flux variations displayed a total amplitude variation of \(\sim\)2.2 mag in \(R\) band. In addition, we found that on a short-term basis the spectral index has a weak dependence on the flux level and the variations could be mildly chromatic; 2. During the August 2020 flare, the median spectral index was calculated to be \(\langle\alpha_{VRI}\rangle_{\rm med}=0.885\pm 0.020\); 3. We did not find any significant periodicity; 4. The source was found to display BWB chromatism on intra-night timescales; 5. 
Using the same approach, Covino et al. (2015) found the following characteristics for the emitting regions of BL Lacertae assuming \(\delta=10.0\) and a Compton dominance parameter of unity: a lower limit for the magnetic field strength of \(6.0\,\mathrm{G}\) and an upper limit for the radius of \(3\times 10^{-5}\,\mathrm{pc}=6.2\,\mathrm{AU}\). In addition, Weaver et al. (2020) obtained a magnetic field strength of \(\sim\)\(3.0\,\mathrm{G}\) using a minimal timescale of \(\sim\)\(30.0\,\mathrm{min}\), derived on the basis of the BL Lacertae the PSD slope approximation used by us - this should be accounted for in the discussion that follows. Papadakis et al. (2003) estimated the PSD slope for BL Lacertae on intra-night timescales to be \(\varkappa=1.87\pm 0.16\); the individual PSDs were averaged over nights and bands before the fitting. Carini et al. (2011) found the SF slopes for the blazar S5 0716+714 to lie mostly between 1 and 2 (corresponding to PSD slopes in the range 2-3). Recently, Goyal (2021) found a mean PSD slope of \(3.1\pm 0.3\) for a sample of seven BL Lacs. Our result is consistent with that of Carini et al. (2011) and Goyal (2021) to within the quoted scatter and steeper than the PSD slope obtained by Papadakis et al. (2003). The above groups, however, did not apply any detrending procedure, and so their results could be affected by the long-term component when present: the results will depend on the number of INLCs showing a long-term component (the INLCs without such a component could be considered as being detrended already).

Our assumption about the INLC generation is related to the turbulent jet model, as we mentioned above. In this regard, Calafut & Wiita (2015) and Pollack et al. (2016) estimated the PSD slopes expected from the turbulence within the jet flow. Their computations are based on the numerical 2D modeling of relativistic jet propagation, and both groups found the PSD slopes to average around \(\varkappa=2\) for timescales from a few days to years. Our mean PSD slope is steeper, but it is derived on intra-night timescales. However, the detailed analysis of the PSDs for our data is beyond the scope of the present paper.

## 6 Summary

The main results of the presented study could be summarized as follows:

1. Short-timescale flux variations displayed a total amplitude variation of \(\sim\)2.2 mag in the \(R\) band. In addition, we found that on a short-term basis the spectral index has a weak dependence on the flux level and the variations could be mildly chromatic;
2. During the August 2020 flare, the median spectral index was calculated to be \(\langle\alpha_{VRI}\rangle_{\rm med}=0.885\pm 0.020\);
3. We did not find any significant periodicity;
4. The source was found to display BWB chromatism on intra-night timescales;
5. The duty cycle was estimated to be \(\sim\)90% or higher;
6. The weighted mean SF slope was found to be \(\langle\varrho\rangle_{\rm wt}=1.624\pm 0.007\);
7. The cross-correlation analysis resulted in two cases of significant inter-band time lags - the lags were of the order of a few minutes;
8. We obtained an estimate of the Doppler factor, \(\delta=11.0^{+0.3}_{-0.3}\), using the inter-band time lags;
9. We derived the values or limits for the magnetic field strength in the emitting regions using the inter-band time lags or the LC decomposition results, respectively. The typical values/limits for \(\mathcal{B}\) were found to be \(\sim\)10.0 G if we assume \(\delta=11.0\);
10. Using the LC decomposition results, we obtained limits for the Lorentz factors of the emitting electrons and the radii of the emitting regions. In particular, the smallest upper limit on the radius is 2.2 AU, which we related to the Kolmogorov scale of the turbulent flow;
11. The mean slope of the power spectral density on intra-night timescales, roughly estimated from the mean SF slope, is steeper than that of a pure random walk/red-noise process.

## Acknowledgments

We thank the anonymous referee for valuable comments and suggestions, which helped in improving the paper. The work is partly supported by the NCN grant No 2018/29/B/ST9/01793. A.A. and A.O. were supported by the Scientific and Technological Research Council of Turkey (TUBITAK), Project No. 121F427. EE was supported by the Scientific Research Project Coordination Unit of Istanbul University, Project No. FDK-2022-19145. We thank TUBITAK National Observatory for partial support in using the T60 and T100 telescopes with project numbers 19BT60-1505 and 19AT100-1486, respectively.
2303.07722
Early Career Developers' Perceptions of Code Understandability. A Study of Complexity Metrics
Context. Code understandability is fundamental. Developers need to understand the code they are modifying clearly. A low understandability can increase the amount of coding effort, and misinterpreting code impacts the entire development process. Ideally, developers should write clear and understandable code with the least effort. Aim. Our work investigates whether the McCabe Cyclomatic Complexity or the Cognitive Complexity can be a good predictor for the developers' perceived code understandability to understand which of the two complexities can be used as criteria to evaluate if a piece of code is understandable. Method. We designed and conducted an empirical study among 216 early career developers with professional experience ranging from one to four years. We asked them to manually inspect and rate the understandability of 12 Java classes that exhibit different levels of Cyclomatic and Cognitive Complexity. Results. Our findings showed that while the old-fashioned McCabe Cyclomatic Complexity and the most recent Cognitive Complexity are modest predictors for code understandability when considering the complexity perceived by early-career developers, they are not for problem severity. Conclusions. Based on our results, early-career developers should not be left alone when performing code-reviewing tasks due to their scarce experience. Moreover, low complexity measures indicate good understandability, but having either CoC or CyC high makes understandability unpredictable. Nevertheless, there is no evidence that CyC or CoC are indicators of early-career perceived severity.Future research efforts will focus on expanding the population to experienced developers to confront whether seniority influences the predictive power of the chosen metrics.
Matteo Esposito, Andrea Janes, Terhi Kilamo, Valentina Lenarduzzi
2023-03-14T09:11:10Z
http://arxiv.org/abs/2303.07722v2
# Does Cyclomatic or Cognitive Complexity Better Represent Code Understandability?

###### Abstract

_Background._ Code understandability is fundamental. Developers need to clearly understand the code they are modifying. Low understandability can increase the amount of coding effort, and misinterpretation of code has an impact on the entire development process. Ideally, developers should write clear and understandable code with the least possible effort. _Objective._ The goal of this work is to investigate if the McCabe Cyclomatic Complexity or the Cognitive Complexity can be a good predictor for the developers' perceived code understandability, in order to understand which of the two complexities can be used as a criterion to evaluate if a piece of code is understandable. _Method._ We designed and conducted an empirical study among 216 junior developers with professional experience ranging from one to four years. We asked them to manually inspect and rate the understandability of 12 Java classes that exhibit different levels of Cyclomatic and Cognitive Complexity. _Results._ Cognitive Complexity slightly outperforms the Cyclomatic Complexity in predicting the developers' perceived understandability. _Conclusion._ The identification of a clear and validated measure for code complexity is still an open issue. Neither the old-fashioned McCabe Cyclomatic Complexity nor the more recent Cognitive Complexity is a good predictor for code understandability, at least when considering the complexity perceived by junior developers.

keywords: Cyclomatic Complexity, Cognitive Complexity, Empirical Study

Journal: Journal of Systems and Software

## 1 Introduction

Code understandability is the ability of software developers to comprehend and effectively work with code written by others or by themselves in the past. In other words, it refers to how easy it is to read and interpret a piece of code. Code understandability is an essential aspect of software development, as it can greatly impact the efficiency and effectiveness of the development process. When code is easy to understand, developers can more easily identify and fix errors, modify existing code, and integrate new code into existing projects. On the other hand, code that is difficult to understand can lead to confusion, errors, and time-consuming troubleshooting. Several factors contribute to code understandability, including the use of clear and concise syntax, consistent formatting and naming conventions, and a well-organized code structure. Additionally, documentation and comments can also play a crucial role in improving code understandability.

Code understandability can be defined as the measure to which "code possesses the characteristic of understandability to the extent that its purpose is clear to the inspector" [1]. Poor understandability of the program code can increase the amount of coding effort by more than 50% [2; 3], and any misinterpretation of the code will influence the entire development process. To avoid misinterpretation of the code, developers should write code that requires the least amount of effort to be understood [4]. Different metrics, such as the McCabe Cyclomatic Complexity [5] and the Cognitive Complexity [6], have been proposed in the past to evaluate the complexity of the code. Current static analysis tools allow developers to keep track of these metrics in their code in real time.
Cognitive Complexity has been introduced by SonarQube\({}^{1}\) as an extension of the McCabe Cyclomatic Complexity, to better evaluate code understandability [6]. The effect of Cognitive Complexity on code understandability was investigated by two recent studies [7; 4]. Based on their results, Cognitive Complexity seems to be a good indicator of understandability, where a higher value means a reduction of understandability. However, both studies did not consider the opinion of the developers on the perceived complexity of the code. Yet, we believe that only two studies (of which one was conducted by the original authors of the Cognitive Complexity metric) are not enough to demonstrate the effectiveness of a new metric. Moreover, as highlighted by Munoz [4], the different complexity and understandability metrics are not deeply investigated and validated. In particular, it is still not evident which of the metrics better supports the prediction of code understandability [7]. As a consequence, Lavazza et al. [8] extended [4] by correlating Cognitive and Cyclomatic Complexity to identify which metric provides an advantage for code understandability. Unfortunately, the results obtained do not favor a particular metric.

Code can also be complex due to problems in the code, such as design issues or code smells. As highlighted by Politowski et al. [9], the presence of anti-patterns in the code can decrease code understandability and increase the effort needed to modify the code. Therefore, if the complexity metrics are correlated with code understandability, problems in the code can also be correlated with the complexity measures.

Since the previous studies highlighted the need to understand whether Cognitive Complexity is correlated with understandability better than the other existing metrics, and the previous results, based on mining software repository studies, were not able to tip the scales, we decided to investigate the impact of these two metrics on code understandability from the point of view of the developers' perception. To this purpose, we designed and conducted an empirical study involving 216 developers with at least one year of experience. We asked them to manually inspect twelve Java classes that exhibit different levels of Cyclomatic and Cognitive Complexity as measured by SonarQube. The task requested for each class was to rate the code understandability. Moreover, if a positive correlation exists between complexity measures and code understandability, we also aim at understanding if complexity measures are correlated with the developers' perceived severity of problems in the Java code.

While there seem to be some differences between developers' opinions on the perceived complexity of the code, the overall data indicate that Cognitive Complexity is a better indicator of the perceived understandability of the code.

The paper is structured as follows: In Section 2 we introduce the background of this work, while in Section 3 we outline the research methodology adopted in this study. Section 4 presents and discusses the obtained results. Section 6 identifies the threats to validity, while Section 7 describes the related work. Finally, Section 8 draws the conclusions.

## 2 Background

In this Section, we briefly describe the two complexity measures we considered in this work. Both measures are included in the SonarQube suite.

### Cyclomatic Complexity

Cyclomatic Complexity is a metric introduced by McCabe already in 1976 [5]. It is a graph-theoretical measure of program complexity.
The idea behind Cyclomatic Complexity is to measure the number of linearly independent paths in the program. It is based on the assumption that the more independent paths there are in a program, the more complex the program is likely to be. The definition of Cyclomatic Complexity is based on representing program code as a control flow graph, i.e., a directed graph with all execution paths of the program depicted. Each node in the graph represents a basic code block and each edge a pass of control between the blocks. Based on the graph, Cyclomatic Complexity \(M\) is calculated as \(M=E-N+P\), where \(E\) is the number of edges, \(N\) the number of nodes, and \(P\) the number of strongly connected components in the graph.

While Cyclomatic Complexity is a widely used metric to indicate the error proneness of program code, it fails to address certain code issues, especially when it comes to computational complexity. Cyclomatic Complexity is poor at handling nested conditional and iterative structures [10]. It has also been regarded as a poor metric for code understandability [6]. In SonarQube, the Complexity measure is calculated based on the Cyclomatic Complexity of the code [11], where each split in the control flow of a function increments the complexity measure by one. However, there are small differences between languages in how the complexity gets calculated due to differences in language structures.

Cyclomatic Complexity can be used as an indicator of how difficult a program is to test, maintain, or modify. Programs with high Cyclomatic Complexity are generally more difficult to understand, analyze, and change, as they contain more decision points and potential paths through the code. As such, Cyclomatic Complexity is often used as a quality metric to evaluate the maintainability and overall complexity of software programs.

### Cognitive Complexity

Cognitive Complexity is based on the idea that not all decision points in a program are equally difficult for a human to understand. Some decisions are simple and easy to reason about, while others are more complex and require more mental effort. Therefore, Cognitive Complexity assigns a weight to each decision point in the code based on its level of complexity, with more complex decisions receiving a higher weight. In SonarQube, Cognitive Complexity was introduced as "a new metric for measuring the understandability of any given piece of code" [6]. Based on the documentation [12], Cognitive Complexity exhibits some similarity with the Cyclomatic Complexity defined by McCabe [5], since Cognitive Complexity can address some of the "common critiques and shortcomings belonging to Cyclomatic Complexity" [10]. Moreover, Cognitive Complexity can fill the gap related to understandability present in the Cyclomatic Complexity [6].

Investigating the construction model, Cognitive Complexity is based on three basic rules [6]:

1. "Ignore structures that allow multiple statements to be readably shorthanded into one";
2. "Increment for each break in the linear flow of the code";
3. "Increment when flow-breaking structures are nested".

The first rule implies that there is no complexity increment for a method declaration or for null-coalescing operators like "??" in C# or PHP, so as not to penalize developers who write shorter code compared to those who write the same operations on multiple lines.
The second rule increments complexity whenever the flow of statements is broken, i.e., by [6]:

* switch, if, else if, else, ternary operator
* for, foreach, while, do while
* catch
* goto LABEL, break LABEL, continue LABEL
* sequences of like binary logical operators
* each method in a recursion cycle.

The last rule increases the complexity value to take the level of nesting of control flow structures into account. The following structures increment the nesting level [6]:

* switch, if, else if, else, ternary operator
* for, foreach, while, do while
* catch
* nested methods and method-like structures

## 3 The Empirical Study

We designed and conducted an empirical study by following the guidelines proposed by Runeson and Host [13]. In this section, we present the goal, the research questions, the metrics, and the hypotheses for the empirical study. We outline the study context, the data collection, and the data analysis.

### Goal and Research Questions

The _goal_ of this study is to compare Cyclomatic Complexity and Cognitive Complexity with the _purpose_ of understanding which complexity metric better represents the developers' perceived complexity of Java code. The _perspective_ is that of researchers, since they are interested in understanding which complexity metrics can be more helpful to assess code complexity. Based on the aforementioned goal, we derived the following research questions:

* Which complexity metric has a higher correlation with the perceived understandability level of a given developer for a specific code snippet?
* Is there a correlation between the complexity metrics and the perceived severity of an existing problem in the code?

These research questions are further divided into the following sub-research questions:

* What is the correlation between the _Cyclomatic Complexity_ and the perceived understandability level of a given developer for a specific code snippet?
* What is the correlation between the _Cognitive Complexity_ and the perceived understandability level of a given developer for a specific code snippet?
* What is the correlation between the _Cyclomatic Complexity_ and the perceived severity of existing problems in the code?
* Is there a correlation between the _Cognitive Complexity_ and the perceived severity of existing problems in the code?

In \(\mathbf{RQ}_{1}\), we investigated the correlations between the perceived code understandability and the Cyclomatic (\(\mathbf{RQ}_{1.1}\)) and Cognitive (\(\mathbf{RQ}_{1.2}\)) Complexities. The goal of this question is to understand if it is possible to use only one of the two complexities to represent code understandability. In particular, since Cognitive Complexity is considered a "more contextualized form of quantitative data on code complexity", we are interested in understanding if Cognitive Complexity is a better predictor for code understandability. Since Cognitive Complexity was built upon the Cyclomatic Complexity, we hypothesized that it might better represent code understandability.

Complex code is considered hard to modify [2; 3]. Moreover, code affected by high levels of Cyclomatic Complexity is usually affected by more severe problems [2; 3]. Therefore, in our second research question (\(\mathbf{RQ}_{2}\)), we aim at understanding if Cognitive Complexity can better represent the severity of the problems in the code (\(\mathbf{RQ}_{2.2}\)) compared to Cyclomatic Complexity (\(\mathbf{RQ}_{2.1}\)). Moreover, we considered that lower code understandability can lead to misleading problem identification in the inspected code and, consequently, to a wrong perception of its severity (\(\mathbf{RQ}_{2}\)).
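To make the difference between the two metrics concrete before turning to the study design, consider the small function below. The study itself inspects Java classes, but the counting rules summarized in Section 2 apply analogously to this Python sketch; the complexity values in the comments are manual counts following those rules, not SonarQube output, so the exact tool figures may differ slightly.

```python
def classify_rows(grid, threshold):
    labels = []
    for row in grid:               # Cyclomatic +1 | Cognitive +1 (top level)
        for value in row:          # Cyclomatic +1 | Cognitive +2 (+1, nested once)
            if value > threshold:  # Cyclomatic +1 | Cognitive +3 (+1, nested twice)
                labels.append("high")
            else:                  # Cyclomatic +0 | Cognitive +1 (flat increment, no nesting penalty)
                labels.append("low")
    return labels

# Cyclomatic Complexity: 1 (function) + 3 decision points = 4
# Cognitive Complexity:  1 + 2 + 3 + 1                    = 7
print(classify_rows([[1, 5], [3, 8]], 4))  # -> ['low', 'high', 'low', 'high']
```

The nesting rule is what drives the two metrics apart: each additional nesting level raises the Cognitive increment of the inner structures, whereas the Cyclomatic count grows by one per decision point regardless of depth.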
### Empirical Study Design

To answer our research questions, we designed our empirical study consisting of the five steps below. Fig. 1 illustrates the process using the Business Process Model and Notation (BPMN) [14] specification language.

1. _Code Selection:_ We selected Java code affected by problems of different severity from Apache Software Foundation projects.
2. _Complexity Measurement:_ We measured the Cyclomatic and Cognitive Complexity of the selected Java code using SonarQube.
3. _Developers Selection:_ We identified the junior developers to be included in our study.
4. _Code Inspection:_ We asked developers to inspect the selected Java code and to provide their opinion on the understandability of the code, on the presence of issues, and to rate the severity of the existing problem, if any.
5. _Data Analysis:_ We analyzed the developers' answers and correlated the developers' perceived understandability with the Cyclomatic and Cognitive Complexity.

In the remainder of this Section, we describe all the aforementioned steps in detail.

#### 3.2.1 Code Selection

In this section, we report the case and subject selection for this study. We selected classes written in Java, affected by different problems that can influence code understandability, from Apache Software Foundation projects. Two of the three authors, together with a senior Java developer, independently evaluated the presence of issues in the code. Then, all three persons discussed possible inconsistencies and finally defined a list of 12 classes for which all of them agreed on the presence of the same issues (Table 2). More details of the selected classes and of the problems identified in the code are available in Table 1.

#### 3.2.2 Complexity Measurement

We measured the code complexity by means of Cognitive Complexity and Cyclomatic Complexity, applying SonarQube version 7.5\({}^{1}\).

Footnote 1: [https://www.cs.uc.edu/~census/](https://www.cs.uc.edu/~census/)

#### 3.2.3 Developers Selection

As for the participants, we selected junior developers. The reason for selecting them, instead of senior developers, is that they are the developers who most frequently need to approach new code. In particular, junior developers commonly need to extend existing code in the company they work for, fixing bugs or integrating new features. Therefore, we selected master or bachelor students in their last year of studies, with at least one year of experience as a developer in a company. The selected participants are exactly those developers who are working on existing code and who need to understand problems in the code when extending it or when they are fixing bugs. We finally involved 216 junior developers with an experience in Java that ranges from one to four years. We did not present the Cyclomatic Complexity and Cognitive Complexity values to the participants, in order not to influence their ability to recognize a potential design problem just because they saw the complexity values in advance.

#### 3.2.4 Code Inspection

We asked developers to manually inspect the 12 Java classes and provide their opinion about the code understandability.
To collect the information, we organized the questionnaire into four sections:

* _Respondents' Background_. We collected the profile of the respondents considering their development experience.
* _Code Inspection_. In this section of the questionnaire we asked participants to manually inspect a Java class and provide their opinion about their perceived _Code Understandability_ through a five-point Likert scale (1 means "very easy" and 5 means "very difficult").
* _Perceived Problem Criticality_. We asked participants to report whether a problem exists in the class and to rate its _severity_ through a five-point Likert scale (1 means "very low severity" and 5 means "very high severity").

\begin{table}
\begin{tabular}{l|l}
\hline
**Class** & **Validated problem in the code** \\
\hline
C1 & Maintainability low because of code smells present in the code \\
\hline
C2 & As C1, and in addition, cognitive complexity exceeds the threshold defined by SonarQube \\
\hline
C3 & Code is not tested \\
\hline
C4 & As C2, and in addition, code contains faults \\
\hline
C5 & Duplicated code \\
\hline
C6 & Combination of C1 (constants missing) and C6 \\
\hline
C7 & Variation of C1 with a higher criticality (unimplemented functions) \\
\hline
C8 & Variation of C1 with a lower criticality \\
\hline
C9 & Code smell: exception handling \\
\hline
C10 & Code is not tested \\
\hline
C11 & Code is not tested \\
\hline
C12 & Minor code smell \\
\hline
\end{tabular}
\end{table}
Table 1: Validated problem in the selected cases

Figure 1: Empirical study design process

We implemented the questionnaire using Google Forms. The questionnaire is available in the replication package\({}^{4}\).

### Study Execution

We provided the participants with instructions describing how to access the classes and how to answer the survey. The participants were allowed to inspect the classes and fill out the online questionnaire in a single round. We informed the participants, according to the GDPR\({}^{2}\), about their rights and that they could abandon the study at any time. Moreover, all information provided by each participant has been treated as confidential, without disclosing any sensitive data, such as names and surnames.

Footnote 2: [https://gdpr-info.eu](https://gdpr-info.eu)

### Data Analysis

Concerning the results of the code inspection phase, we first verified the participants' background by analyzing the distribution of their education level (bachelor or master) and their experience as developers in software companies.

To answer our RQs, we first quantitatively analyzed the perceived code understandability reported by the developers. Then, we investigated the correlations between the perceived code understandability (dependent variable) and the Cyclomatic Complexity (**RQ\({}_{1.1}\)**) and Cognitive Complexity (**RQ\({}_{1.2}\)**) as independent variables. We adopted the Spearman rank correlation coefficient \(\rho\) [15], which measures how well a monotonic function can be fitted between two groups of values measured from the same samples. This is a non-parametric method and the values of \(\rho\) range between -1 and 1, where 1 means perfect positive monotonic association, -1 means perfect negative monotonic association, and 0 means there is no monotonic association between the groups. For interpreting the other values of \(\rho\), we followed the guideline suggested by Cohen [16]: no correlation if \(0\leq\rho<0.1\), small correlation if \(0.1\leq\rho<0.3\), medium correlation if \(0.3\leq\rho<0.5\), and large correlation if \(0.5\leq\rho\leq 1\).
Corresponding limits apply for negative correlation coefficients. We determined the statistical significance of the correlations by checking the p-values, which should be lower than 0.05 (significance level alpha). Therefore, we adopted the value of \(\rho\) to compare the correlations obtained in (**RQ\({}_{1.1}\)**) and (**RQ\({}_{1.2}\)**). To visualize the key values of our results, we plotted them as box-plots. We applied statistical tests to verify whether the differences are statistically significant. Since the data are not normally distributed, we exploited the Friedman test with the Nemenyi post-hoc test [17]. This is a post-hoc test that identifies the groups of data that differ after a statistical test of multiple comparisons has rejected the null hypothesis (that the groups are similar), performing pair-wise comparisons. We selected this test because it is robust to multiple comparisons, which is our case since we had to compare multiple groups on multiple factors. To conduct the statistical analysis, we used the Nemenyi package for Python\({}^{3}\).

Footnote 3: The Nemenyi Python package: [https://scikit-posthocs.readthedocs.io/en/latest/](https://scikit-posthocs.readthedocs.io/en/latest/).

Footnote 4: [https://figshare.com/s/0044c83c4fcb45dd831f](https://figshare.com/s/0044c83c4fcb45dd831f)

Then, we manually validated whether the problems reported by the developers in the code refer to actual problems in the code (Table 1). As for (**RQ\({}_{2}\)**), the qualitative data analysis was conducted individually by each author. Moreover, in order to get a fair/good agreement on the first iteration of this process, pairwise inter-rater reliability was measured across the three sets of decisions. Based on the disagreements, we clarified possible discrepancies and different classifications. A second iteration resulted in 100% agreement among all the authors. We conducted the next analysis only for the cases where participants correctly identified a problem in the code. We analyzed the correlations between the perceived problem severity (dependent variable) and the Cyclomatic Complexity (**RQ\({}_{2.1}\)**) and Cognitive Complexity (**RQ\({}_{2.2}\)**) as independent variables with the Spearman rank correlation coefficient \(\rho\) [15], following the same approach adopted in (**RQ\({}_{1.1}\)**).

### Replicability

In order to allow our study to be replicated, we have published the complete raw data together with the instructions of the assignment and the complete questionnaire in the replication package\({}^{5}\).

Footnote 5: [https://figshare.com/s/0044c83c4fcb45dd831f](https://figshare.com/s/0044c83c4fcb45dd831f)

## 4 Results

In this section, we report the results obtained answering our RQs. We collected information from 216 students.

**Background Information.** The respondents were 170 master students (79%) and 46 (21%) bachelor students. 96 (79.34%) of them had between 1 and 2 years of development experience, while the remaining 25 (20.66%) had between 3 and 4 years (Table 4).

### What is the correlation between complexity metrics and the perceived code understandability (RQ1)

As we can see from Table 5, the respondents considered the vast majority of the classes (83%) neither too easy nor too hard to understand (median 3), while the remaining classes (C3 and C4) were easy to understand (median 2).
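For reference, the correlation analysis described in Section 3.4 amounts to the following minimal sketch. The ratings and complexity values below are synthetic, generated only to show the mechanics; they are not the study data, and the Nemenyi post-hoc step is only indicated in a comment.

```python
import numpy as np
from scipy.stats import spearmanr, friedmanchisquare

rng = np.random.default_rng(0)

# Synthetic observations: one row per (participant, class) pair
complexity = rng.integers(1, 40, size=180)                           # e.g. Cognitive Complexity
ratings = np.clip(np.round(1 + complexity / 10.0
                           + rng.normal(0.0, 1.0, size=180)), 1, 5)  # 5-point understandability

rho, p_value = spearmanr(ratings, complexity)

def cohen_label(rho):
    """Interpretation thresholds of Cohen [16] used in the paper."""
    r = abs(rho)
    if r < 0.1:
        return "no correlation"
    if r < 0.3:
        return "small"
    if r < 0.5:
        return "medium"
    return "large"

print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f}) -> {cohen_label(rho)} correlation")

# Friedman test over related groups of ratings (here: three synthetic blocks);
# a Nemenyi post-hoc test from the scikit-posthocs package would then identify
# which groups actually differ.
stat, p_friedman = friedmanchisquare(ratings[:60], ratings[60:120], ratings[120:180])
print(f"Friedman chi2 = {stat:.2f}, p = {p_friedman:.3f}")
```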
Investigating the correlation between the overall code understandability and the Cyclomatic and Cognitive Complexities, the results are statistically significant, since the p-values are equal to 0.000 (Table 6). Overall, both metrics show a medium correlation with the perceived understandability (\(\rho\) equal to 0.364 for the Cyclomatic and 0.466 for the Cognitive Complexity [16]).

\begin{table}
\begin{tabular}{l|l|l|l}
\hline \hline
\multicolumn{2}{c|}{**Role**} & \multicolumn{2}{c}{**Developer Experience**} \\
\hline
Bachelor & 21\% & less than 2 years & 84\% \\
\hline
Master & 79\% & 3 and 4 years & 16\% \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Background Information

\begin{table}
\begin{tabular}{l|l|l}
\hline \hline
**RQ** & **Question** & **Answer type** \\
\hline
RQ1 & How easy is it to understand the code of this class? & Five-point Likert scale \\
\hline
RQ2 & Does, in your opinion, this class have design, coding style, or any other problems? & Yes/No \\
\hline
 & If YES, please rate the severity of the problem & Five-point Likert scale \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Questions of the code inspection section

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c}
\hline \hline
**Class** & **1-Very Easy** & **2-Easy** & **3-Neither Easy nor Hard** & **4-Hard** & **5-Very Hard** & **Mode** \\
\hline
C1 & 20 & 54 & 76 & 42 & 8 & 3 \\
C2 & 48 & 55 & 58 & 28 & 3 & 2 \\
C3 & 51 & 49 & 58 & 29 & 7 & 2 \\
C4 & 27 & 46 & 65 & 40 & 12 & 3 \\
C5 & 22 & 35 & 67 & 49 & 23 & 4 \\
C6 & 8 & 21 & 34 & 72 & 60 & 3 \\
C7 & 18 & 35 & 61 & 52 & 29 & 4 \\
C8 & 7 & 14 & 39 & 50 & 83 & 4 \\
C9 & 29 & – & 38 & 35 & 66 & 4 \\
C10 & 10 & 18 & 35 & 52 & 75 & 4 \\
C11 & 6 & 6 & 31 & 57 & 88 & 4 \\
C12 & 6 & 19 & 28 & 51 & 92 & 4 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Perceived Code Understandability (RQ1); cells give the number of respondents per rating.

### Which complexity metric better represents the severity of an existing problem in the code (RQ2)

To answer **RQ\({}_{2}\)**, we first investigated whether the participants perceived a design or coding style problem and whether they correctly detected and identified it.

**Perceived design or coding style problem.** The percentage of respondents that consider a class affected by a design or coding style problem is less than 75%.

**Design or coding style problem identification.** It is interesting to note that almost all the participants who identified a problem in the classes correctly identified at least one of the actual problems. The only exceptions are classes C7 and C12, for which not all the developers provided a description: 77.85% of them identified the correct problem for C7 and 89% for C12. Table 7 shows the results for the identification of the problems grouped by class (C). Therefore, as also highlighted by Table 7, we can conclude that the understandability of the code is independent of the perception of a problem.

**Design or coding style problem severity.** The participants rated how concerned they were with respect to the design problem identified in the inspected code for each class. The participants rated their evaluation based on a 5-point Likert scale (1 means _"very low"_ and 5 means _"very high"_). Table 7 shows the obtained results grouped by class (C), 5-point Likert scale levels (from 1 to 5), and number of respondents. We report the average and the median of the perceived severity. Almost all the respondents that perceived a problem in the inspected classes considered it to be of at least _medium severity_ (median 3).
## 5 Discussion

As evidence on the application of Cognitive Complexity as an understandability measure is scarce, we set out to study how junior developers perceive code with different Cyclomatic and Cognitive Complexity levels. Our results indicate that Cognitive Complexity seems a better indicator of severity across developers and that, while there is quite a lot of variance, Cognitive Complexity is also better agreed upon as a complexity indicator.

It was evident that less complex classes were considered easy to understand, indicating that low Cyclomatic and Cognitive Complexity supports the understandability of the code. However, if the Cyclomatic or Cognitive Complexity was high, the opinions on understandability varied. This is a very interesting result and requires further investigation. It does seem that low Cognitive Complexity makes the code more understandable despite a high Cyclomatic Complexity, but reducing the Cognitive Complexity does not make the understandability of the code universally better for all developers. What is especially eye-opening is that, when both complexity measures are high, the perception of understandability varies. Understandability appears to be a little more correlated with Cognitive Complexity. However, the difference to Cyclomatic Complexity was not drastic. The developers agreed more on Cognitive Complexity as a complexity measure, which means that it could be the more useful of the two.

Prior results [8; 7; 18] have indicated that the metrics themselves do not indicate understandability and that the different proposed metrics are not positively correlated [8]. Based on our findings, low complexity measures do seem to indicate good understandability, but having either the Cognitive or the Cyclomatic Complexity high makes understandability unpredictable. Moreover, our results also confirm that both complexity metrics are not correlated with the perceived severity of the problems in the code.

When looking into RQ\({}_{2}\), increased complexity increased the perceived severity as well. However, there was a large variance, especially with Cyclomatic Complexity. The perception of the code issues was good among the junior developers. If a class was considered to be affected by a design problem, the developers were also able to describe what the problem was. This shows that highlighting the issues contributing to the complexity measures can help in keeping understandability high. The understandability, however, is not dependent on the type of the problem. This may indicate that the developers should also take the more minor design issues more seriously.
[Table 7: Problem identification and description and perceived Severity (RQ\({}_{2}\)). For each class, the table reports the number and percentage of respondents who perceived a problem, the number and percentage who described it, the distribution of the perceived severity on the 5-point Likert scale, and its mode. For example, 146 respondents (69%) perceived a problem in C1, 154 (74%) in C3, and 127 (60%) in C4, and in each of these cases 100% of them described it.]

| **Spearman** | **Cyclomatic** | **Cognitive** |
| --- | --- | --- |
| **r** | -0.268 | -0.152 |
| **p-value** | 0.000 | 0.001 |

Table 8: Perceived Problem Severity - Spearman correlation (RQ\({}_{2}\))

## 6 Threats to Validity

In this section, we introduce the threats to validity, following the structure suggested by Yin [19], reporting construct validity, internal validity, external validity, and reliability. Moreover, we also discuss the different tactics adopted to mitigate them.

**Construct validity.** Concerning the set of tasks, we considered classes whose code complexity was measured by the same tool (SonarQube), which allows computing both complexities considered in this work (Cyclomatic and Cognitive Complexity). We checked each question to avoid potential misunderstandings, negative questions, and threats.
The perceived priority of the design problem was collected by first asking the participants to describe the problem they perceived, in order to understand whether their perception is actually related to the identified problem and not to other potential issues in the code. We asked the participants to rate the severity of the problem by means of a Likert scale, to allow us to compare the responses on a homogeneous scale. To reduce this threat, we checked the correctness of the identification both manually and by means of automated tools.

**Internal validity.** Considering the respondents, we selected junior developers with at most 4 years of programming experience to better focus on our goal. However, we are aware that the results could be biased by the selection of participants belonging to a set of developers more deeply trained in these tasks.

**External validity.** This concerns the subjects of the study and the selected objects. To mitigate this threat, we adopted a set of classes for which it was possible to use the same tool to measure Cognitive and Cyclomatic Complexity. Moreover, we are aware that further studies with different analyzed classes, also considering the missing groups, are needed to confirm our results.

**Conclusion validity.** Conclusion validity focuses on how sure we can be that the tasks we adopted are related to the actual outcome we observed. The survey was checked by three experts on empirical studies. Moreover, it was ensured that the subjects of both groups had similar backgrounds and knowledge regarding code understandability and code inspection.

## 7 Related Work

Code understandability is described as the measure of how well "code possesses the characteristic of understandability to the extent that its purpose is clear to the inspector" [1]. To better understand a piece of code, legibility is one of the main factors to keep under control, since if code is harder to read, it could be harder to understand [1]. Code understanding requires building high-level abstractions from code statements, visualizations, or models [20; 21]. However, even readable code can be difficult to understand [7]. Code understandability can be measured considering several different factors. One possibility is based on the perceived understandability reported by developers answering comprehension questions [22; 23], filling out blank program parts [24], or extending and/or modifying existing pieces of code [25]. To be more accurate, some studies traced the time to perform the assigned task, both for question-answering and for development tasks [18; 26; 25]. Other approaches evaluate code understandability focusing on physiological metrics detected by biometric sensors [27; 28] or eye-tracking devices [29; 30]. Moreover, considering the perceived understandability by having developers rate the different pieces of code under analysis can provide a positive step forward in this field [7]. Different factors can positively or negatively influence how developers perceive the understandability of a piece of code [7], which can be useful to develop a model to automatically measure understandability. Several studies investigated the role of software metrics, focusing on complexity as well as source-level metrics such as LOC [7] and Cyclomatic Complexity [31; 7] or Cognitive Complexity [6], during the development process or during maintenance tasks [18].
Moreover, other types of metrics, such as documentation-related metrics (e.g., comment readability) and metrics relating to a developer's experience, have been considered by researchers [7]. Results showed that none of the investigated metrics accurately represents code understandability [18; 7]. However, all the software metrics considered in these studies lack empirical validation of their ability to measure code understandability; in particular, Cognitive Complexity needs more accurate validation [6]. Nevertheless, the results demonstrated that such metrics can improve the effectiveness of evaluating code understandability [7].

A deeper investigation of Cognitive Complexity has been performed by Munoz et al. [4] and later by Lavazza et al. [8]. Munoz et al. [4] considered as Cognitive Complexity the metric measured by SonarQube and evaluated its association with different code understandability metrics: the time taken to understand a code snippet, the percentage of correctly answered comprehension questions on a code snippet, subjective ratings of a comprehension task, and physiological measures on the subjects engaged in understanding code. Results showed that Cognitive Complexity is correlated with the time spent by a developer to understand source code. However, they did not compare the magnitude of this correlation against different complexity metrics. As Lavazza et al. [8] reported in their work, "_before embracing the use of Cognitive Complexity, we need to understand whether Cognitive Complexity is really correlated with understandability better than the measures that were proposed in the past for the same purpose_". To assess this, Lavazza et al. [8] conducted an empirical study extending [4]. They correlated Cognitive and Cyclomatic Complexity to identify which metric provides an advantage for code understandability. Unfortunately, the achieved results do not favor a particular metric.

## 8 Conclusion

We designed and conducted a case study among 216 junior developers (bachelor and master level students). We asked them to manually inspect 12 Java classes that exhibit different code complexity levels, as measured by SonarQube in terms of Cognitive and Cyclomatic Complexity. For each class, developers had to rate the code understandability. Our findings show that:

* Cognitive Complexity better represents code understandability than Cyclomatic Complexity, even if its correlation with code understandability is not high.
* The severity of problems in the code is not correlated with complexity (either Cyclomatic or Cognitive). We expected to find more problems in classes with higher levels of complexity, mainly because we were expecting these classes to be harder to understand. Therefore, we cannot claim that classes with higher Cyclomatic or Cognitive Complexity are affected by more severe problems than those with lower levels of complexity.

Future work will include a replication of this study with more developers, asking them to suggest the refactoring action needed to fix the identified problem. Moreover, future work will include a comparison between the perceived understandability of the code of junior and senior developers, and the consideration of other programming languages such as Python and Javascript.
2304.02825
Mind the $\tilde{\mathcal{O}}$: Asymptotically Better, but Still Impractical, Quantum Distributed Algorithms
The CONGEST and CONGEST-CLIQUE models have been carefully studied to represent situations where the communication bandwidth between processors in a network is severely limited. Messages of only $O(log(n))$ bits of information each may be sent between processors in each round. The quantum versions of these models allow the processors instead to communicate and compute with quantum bits under the same bandwidth limitations. This leads to the following natural research question: What problems can be solved more efficiently in these quantum models than in the classical ones? Building on existing work, we contribute to this question in two ways. Firstly, we present two algorithms in the Quantum CONGEST-CLIQUE model of distributed computation that succeed with high probability; one for producing an approximately optimal Steiner Tree, and one for producing an exact directed minimum spanning tree, each of which uses $\tilde{O}(n^{1/4})$ rounds of communication and $\tilde{O}(n^{9/4})$ messages, where $n$ is the number of nodes in the network. The algorithms thus achieve a lower asymptotic round and message complexity than any known algorithms in the classical CONGEST-CLIQUE model. At a high level, we achieve these results by combining classical algorithmic frameworks with quantum subroutines. An existing framework for using distributed version of Grover's search algorithm to accelerate triangle finding lies at the core of the asymptotic speedup. Secondly, we carefully characterize the constants and logarithmic factors involved in our algorithms as well as related algorithms, otherwise commonly obscured by $\tilde{O}$ notation. The analysis shows that some improvements are needed to render both our and existing related quantum and classical algorithms practical, as their asymptotic speedups only help for very large values of $n$.
Phillip A. Kerger, David E. Bernal Neira, Zoe Gonzalez Izquierdo, Eleanor G. Rieffel
2023-04-06T02:18:52Z
http://arxiv.org/abs/2304.02825v6
Mind the \(\tilde{\mathcal{O}}\): asymptotically better, but still impractical, quantum distributed algorithms

###### Abstract

The CONGEST and CONGEST-CLIQUE models have been carefully studied to represent situations where the communication bandwidth between processors in a network is severely limited. Messages of only \(\mathcal{O}(\log(n))\) bits of information each may be sent between processors in each round. The quantum versions of these models allow the processors instead to communicate and compute with quantum bits under the same bandwidth limitations. This leads to the following natural research question: What problems can be solved more efficiently in these quantum models than in the classical ones? Building on existing work, we contribute to this question in two ways. Firstly, we present two algorithms in the Quantum CONGEST-CLIQUE model of distributed computation that succeed with high probability; one for producing an approximately optimal Steiner Tree, and one for producing an exact directed minimum spanning tree, each of which uses \(\tilde{\mathcal{O}}(n^{1/4})\) rounds of communication and \(\tilde{\mathcal{O}}(n^{9/4})\) messages, where \(n\) is the number of nodes in the network. The algorithms thus achieve a lower asymptotic round and message complexity than any known algorithms in the classical CONGEST-CLIQUE model. At a high level, we achieve these results by combining classical algorithmic frameworks with quantum subroutines. An existing framework for using a distributed version of Grover's search algorithm to accelerate triangle finding lies at the core of the asymptotic speedup. Secondly, we carefully characterize the constants and logarithmic factors involved in our algorithms as well as related algorithms, otherwise commonly obscured by \(\tilde{O}\) notation. The analysis shows that some improvements are needed to render both our and existing related quantum and classical algorithms practical, as their asymptotic speedups only help for very large values of \(n\).

_Keywords--_ Quantum Computing, Distributed Computing, Steiner Tree, Directed Minimum Spanning Tree

## 1 Introduction

The classical CONGEST-CLIQUE Model (cCCM henceforth) in distributed computing has been carefully studied as a model central to the field, e.g., (Korhonen and Suomela, 2017; Saikia and Karmakar, 2019; Fischer and Oshman, 2021; Lenzen, 2012; Dolev, Lenzen, and Peled, 2012; Nowicki, 2019). In this model, processors in a network solve a problem whose input is distributed across the nodes under significant communication limitations, described in detail in §2. For example, a network of aircraft or spacecraft, satellites, and control stations, all with large distances between them, may have severely limited communication bandwidth, which can be modeled in this way. The quantum version of this model, in which quantum bits can be sent between processors, the quantum CONGEST-CLIQUE Model (qCCM), as well as the quantum CONGEST model, have been the subject of recent research (Izumi and Gall, 2019; Censor-Hillel, Fischer, Le Gall, Leitersdorf, and Oshman, 2022; van Apeldoorn and de Vos, 2022; Elkin, Klauck, Nanongkai, and Pandurangan, 2012) in an effort to understand how quantum communication may help in these distributed computing frameworks. For the quantum CONGEST Model, however, (Elkin et al., 2012) showed that many problems cannot be solved more quickly than in the classical model.
These include shortest paths, minimum spanning trees, Steiner trees, min-cut, and more; the computational advantages of quantum communication are thus severely limited in the CONGEST setting, though a notable positive result is sub-linear diameter computation in (Le Gall and Magniez, 2018). No comparable negative results exist for the qCCM, and in fact, (Izumi and Gall, 2019) provides an asymptotic quantum speedup for computing all-pairs shortest path (APSP henceforth) distances. Hence, it is apparent that the negative results of (Elkin et al., 2012) cannot transfer over to the qCCM, so investigating these problems in the qCCM presents an opportunity for contribution to the understanding of how quantum communication may help in these distributed computing frameworks. In this paper, we contribute to this understanding by formulating algorithms in the qCCM for finding approximately optimal Steiner trees and exact directed minimum spanning trees using \(\tilde{\mathcal{O}}(n^{1/4})\) rounds - asymptotically fewer rounds than any known classical algorithms. This is done by augmenting the APSP algorithm of (Izumi and Gall, 2019) with an efficient routing table scheme, which is necessary to make use of the shortest _paths_ information instead of only the APSP _distances_, and using the resulting subroutine with existing classical algorithmic frameworks. Beyond asymptotics, we also characterize the complexity of our algorithms as well as those of (Izumi and Gall, 2019; Censor-Hillel et al., 2016; Saikia and Karmakar, 2019; Fischer and Oshman, 2021) to include the logarithmic and constant factors involved to estimate the scales at which they would be practical, which was not included in the previous work. It should be noted that, like APSP, these problems cannot see quantum speedups in the CONGEST (non-clique) setting as shown in (Elkin et al., 2012). Our Steiner tree algorithm is approximate and based on a classical polynomial-time centralized algorithm of (Kou, Markowsky, and Berman, 1981). Our directed minimum spanning tree problem algorithm follows an approach similar to (Fischer and Oshman, 2021), which effectively has its centralized roots in (Lovasz, 1985). ## 2 Background and Setting This section provides the necessary background for our algorithms' settings and the problems they solve. ### The CONGEST and CONGEST-CLIQUE Models of Distributed Computing In the standard CONGEST model, we consider a graph of \(n\) processor nodes whose edges represent communication channels. Initially, each node knows only its neighbors in the graph and associated edge weights. In rounds, each processor node executes computation locally and then communicates with its neighbors before executing further local computation. The congestion limitation restricts this communication, with each node able to send only one message of \(\mathcal{O}(\log(n))\) classical bits in each round to its neighbors, though the messages to each neighbor may differ. In the cCCM, we separate the communication graph from the problem input graph by allowing all nodes to communicate with each other, though the same \(\mathcal{O}(\log(n))\) bits-per-message congestion limitation remains. Hence, a processor node could send \(n-1\) different messages to the other \(n-1\) nodes in the graph, with a single node distributing up to \(\mathcal{O}(n\cdot\log(n))\) bits of information in a single round. Taking advantage of this way of dispersing information to the network is paramount in many efficient CONGEST-CLIQUE algorithms. 
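To make the bandwidth constraint concrete, the following toy sketch (not from the paper; the function names and the bandwidth constant are illustrative choices) simulates a single CONGEST-CLIQUE round in which every node may send a distinct message of at most a small multiple of \(\log_{2}n\) bits to every other node, so that a single node can disperse on the order of \(n\log n\) bits of information per round.

```python
# Toy illustration of one CONGEST-CLIQUE round: all-to-all messaging with
# a per-message bandwidth cap of c * ceil(log2(n)) bits.
import math

def run_round(n, outbox, c=2):
    """outbox[u][v] is the bit-string node u wants to send to node v this round.
    Returns inbox[v][u], the messages received, after enforcing the bandwidth cap."""
    cap = c * math.ceil(math.log2(n))
    inbox = {v: {} for v in range(n)}
    for u in range(n):
        for v, msg in outbox.get(u, {}).items():
            if len(msg) > cap:
                raise ValueError(f"message {u}->{v} exceeds {cap} bits")
            inbox[v][u] = msg
    return inbox

# Example: node 0 disperses a distinct log(n)-bit value to each other node in a single round.
n = 8
outbox = {0: {v: format(v, "b").zfill(math.ceil(math.log2(n))) for v in range(1, n)}}
inbox = run_round(n, outbox)
assert inbox[5][0] == "101"
```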
The efficiency of algorithms in these distributed models is commonly measured in terms of the _round complexity_, the number of rounds of communication used in an algorithm to solve the problem in question. A good overview of these distributed models can be found in (Ghaffari, 2020).

### Quantum Versions of CONGEST and CONGEST-CLIQUE

The quantum models we work in are obtained via the following modification: Instead of restricting to messages of \(\mathcal{O}(\log(n))\) classical bits, we allow messages to consist of \(\mathcal{O}(\log(n))\) quantum bits, qubits. For background on qubits and the fundamentals of quantum computing, we refer the reader to (Rieffel and Polak, 2011). We formally define the qCCM, the setting for our algorithms, as follows:

**Definition 2.1** (Quantum CONGEST-CLIQUE).: The Quantum CONGEST-CLIQUE Model (qCCM) is a distributed computation model in which an input graph \(G=(V,E,W)\) is distributed over a network of \(n\) processors, where each processor is represented by a node in \(V\). Each node is assigned a unique ID number in \([n]\). Time passes in _rounds_, each of which consists of the following:

1. Each node may execute unlimited local computation.
2. Each node may send a message consisting of either a register of \(\mathcal{O}(\log n)\) qubits or a string of \(\mathcal{O}(\log n)\) classical bits to each other node in the network. Each of those messages may be distinct.
3. Each node receives and saves the messages the other nodes send it.

The input graph \(G\) is distributed across the nodes as follows: Each node knows its own ID number, the ID numbers of its neighbors in \(G\), the number of nodes \(n\) in \(G\), and the weights corresponding to the edges it is incident upon. The output solution to a problem must be given by having each node \(u\in V\) return the restriction of the global output to \(\mathcal{N}_{G}(u):=\{v:uv\in E\}\), its neighborhood in \(G\). No entanglement is shared across nodes initially.

This is an analog of the cCCM, except that quantum bits may be sent in place of classical bits. To clarify the output requirement, in the Steiner tree problem, we require node \(u\) to output the edges of the solution tree that are incident upon \(u\). Since many messages in our algorithms need not be sent as qubits, we define the qCCM slightly unconventionally, allowing either quantum or classical bits to be sent. We specify those that may be sent classically. However, even without this modification, the quantum versions of CONGEST and cCCM are at least as powerful as their classical counterparts. This is because any \(n\)-bit classical message can instead be sent as an \(n\)-qubit message of unentangled qubits; for a classical bit reading \(0\) or \(1\), we can send a qubit in the state \(\left|0\right\rangle\) or \(\left|1\right\rangle\) respectively, and then take measurements with respect to the \(\{\left|0\right\rangle,\left|1\right\rangle\}\) basis to read the same message the classical bits would have communicated. Hence, one can also freely make use of existing classical algorithms in the qCCM. Further, the assumption that IDs are in \([n]\), with \(n\) known, is not necessary but is convenient; without this assumption, we could have all nodes broadcast their IDs to the entire network and then assign a new label in \([n]\) to each node according to an ordering of the original IDs, resulting in our assumed situation.

**Remark 2.2**.: Definition 2.1 does not account for how the information needs to be stored.
In this paper, it suffices for all information regarding the input graph to be stored classically as long as there is quantum access to that data. We provide some details on this in SS8.4 of the appendix. **Remark 2.3**.: No entanglement being shared across nodes initially in definition 2.1 results in quantum teleportation not being a trivial way to solve problems in the qCCM. **Example 2.4**.: To provide some intuition on how allowing communication through qubits in this distributed setting can be helpful, we now describe and give an example of distributed Grover search, first described in (Le Gall and Magniez, 2018). The high-level intuition for why quantum computing gives an advantage for search is that quantum operations use quantum interference effects to have canceling effects among non-solutions. Grover search has a generalization called "amplitude amplification" we will use; see (Rieffel and Polak, 2011) for details on these algorithms. Now, for a processor node \(u\) in the network and a Boolean function \(g:X\rightarrow\{0,1\}\), suppose there exists a classical procedure \(\mathcal{C}\) in the cCCM that allows \(u\) to compute \(g(x)\), for any \(x\in X\) in \(r\) rounds. The quantum speedup will come from computing \(\mathcal{C}\) in a quantum superposition, which enables \(g\) to be evaluated with inputs in superposition so that amplitude amplification can be used for inputs to \(g\). Let \(A_{i}:\{x\in X:g(x)=i\}\), for \(i=0,1\), and suppose that \(0<|A_{1}|\leq|X|/2\). Then classically, node \(u\) can find an \(x\in A_{1}\) in \(\Theta(r|X|)\) rounds by checking each element of \(X\). Using the quantum distributed Grover search of (Le Gall and Magniez, 2018) enables \(u\) to find such an \(x\) with high probability in only \(\tilde{\mathcal{O}}(r\sqrt{|X|})\) rounds by evaluating the result of computing \(g\) on a superposition of inputs. We illustrate this procedure in an example case where a node \(u\) wants to inquire whether one of its edges \(uv\) is part of a triangle in \(G\). We first describe a classical procedure for this, followed by the corresponding quantum-distributed search version. For \(v\in\mathcal{N}_{G}(u)\), denote by \(\mathcal{I}_{v}:V\rightarrow\{0,1\}\) the indicator function of \(\mathcal{N}_{G}(v)\), and by \(g_{uv}:\mathcal{N}_{G}(u)\rightarrow\{0,1\}\) its restriction to inputs in \(\mathcal{N}_{G}(u)\). Classically, node \(u\) can evaluate \(g_{uv}(w)\) in two rounds for any \(w\in\mathcal{N}_{G}(u)\) by sending the ID of \(w\) (of length \(\log n\)) to \(v\), and having \(v\) send back the answer \(\mathcal{I}_{v}(w)\). Then \(u\) can check \(g_{uv}(w)\) for each \(w\in\mathcal{N}_{G}(u)\) one at a time to determine whether \(uv\) is part of a triangle in \(G\) or not in \(2\cdot|\mathcal{N}_{G}(u)|\) rounds. For the distributed quantum implementation, \(u\) can instead initialize a register of \(\log n\) qubits as \(|\psi\rangle_{0}:=\frac{1}{\sqrt{|\mathcal{N}_{G}(u)|}}\sum_{x\in\mathcal{N}_{G }(u)}|x\rangle\), all the inputs for \(g_{uv}\) in equal superposition. To do a Grover search, \(u\) needs to be able to evaluate \(g_{uv}\) with inputs \(|\psi\rangle\) in superposition. For the quantum implementation of \(\mathcal{C}\), \(u\) sends a quantum register in state \(|\psi\rangle|0\rangle\) to node \(v\), and has node \(v\) evaluate a quantum implementation of \(\mathcal{I}_{v}\), which we will consider as a call to an oracle mapping \(|x\rangle|0\rangle\) to \(|x\rangle|\mathcal{I}_{v}(x)\rangle\) for all \(x\in V\). 
Node \(v\) sends back the resulting qubit register, and node \(u\) has evaluated \(g_{uv}(|\psi\rangle)\) in 2 rounds. Now, since \(u\) can evaluate \(g_{uv}\) in superposition, node \(u\) may proceed using standard amplitude amplification, using 2 rounds of communication for each evaluation of \(g_{uv}\), so that \(u\) can find an element \(w\in\mathcal{N}_{G}(u)\) satisfying \(g_{uv}(w)=1\) with high probability in \(\tilde{\mathcal{O}}(r\sqrt{|\mathcal{N}_{G}(u)|})\) rounds if one exists. We note that in this example, \(v\) cannot execute this procedure by itself since it does not know \(\mathcal{N}_{G}(u)\) (and sending this information to \(v\) would take \(|\mathcal{N}_{G}(u)|\) rounds), though it is able to evaluate \(\mathcal{I}_{v}\) in superposition for any \(w\in\mathcal{N}_{G}(u)\). For any classical procedure \(\mathcal{C}\) evaluating a different function from this specific \(g\) (that can be implemented efficiently classically and, therefore, translated to an efficient quantum implementation), the same idea results in the square-root advantage to find a desired element such that \(g\) evaluates to \(1\). ### Notation and Problem Definitions For an integer-weighted graph \(G=(V,E,W)\), we will denote \(n:=|V|,m:=|E|\), and \(W_{e}\) the weight of an edge \(e\in E\) throughout the paper. Let \(\delta(v)\subset V\) be the set of edges incident on node \(v\), and \(\mathcal{N}_{G}(u):=\{v:uv\in E\}\) the neighborhood of \(u\in G\). Denote by \(d_{G}(u,v)\) the shortest-path distance in \(G\) from \(u\) to \(v\). For a graph \(G=(V,E,W)\) two sets of nodes \(U\) and \(U^{\prime}\), let \(\mathcal{P}_{G}(U,U^{\prime}):=\{uv\in E:u\in U,w\in U^{\prime}\}\) be the set of edges connecting \(U\) to \(U^{\prime}\). Let \(\mathcal{P}(U):=\mathcal{P}(U,U)\) as shorthand. All logarithms will be taken with respect to base \(2\), unless otherwise stated. **Definition 2.5** (Steiner Tree Problem).: Given a weighted, undirected graph \(G=(V,E,W)\), and a set of nodes \(\mathcal{Z}\subset V\), referred to as _Steiner Terminals_, output the minimum weight tree in \(G\) that contains \(\mathcal{Z}\). **Definition 2.6** (Approximate Steiner Tree).: For a Steiner Tree Problem with terminals \(\mathcal{Z}\) and solution \(\mathcal{S}_{OPT}\) with edge set \(E_{\mathcal{S}_{OPT}}\), a tree \(T\) in \(G\) containing \(\mathcal{Z}\) with edge set \(E_{T}\) such that \[\sum_{uv\in E_{T}}W_{uv}\leq r\cdot\sum_{uv\in E_{\mathcal{S}_{OPT}}}W_{uv}\] is called an approximate Steiner Tree with approximation factor \(r\). **Definition 2.7** (Directed Minimum Spanning Tree Problem (DMST)).: Given a directed, weighted graph \(G=(V,E,W)\) and a root node \(r\in V\), output the minimum weight directed spanning tree for \(G\) rooted at \(r\). This is also known as the _minimum weight arborescence_ problem. ## 3 Contributions We provide an algorithm for the qCCM that produces an approximate Steiner Tree with high probability (w.h.p.) in \(\tilde{\mathcal{O}}(n^{1/4})\) rounds and an algorithm that produces an exact Directed Minimum Spanning Tree w.h.p. in \(\tilde{\mathcal{O}}(n^{1/4})\) rounds. To do this, we enhance the quantum APSP algorithm of (Izumi and Gall, 2019) in an efficient way to compute not only APSP distances but also the corresponding routing tables (described in SS4) that our algorithms rely on. 
Further, in addition to these \(\tilde{\mathcal{O}}\) results, in sections 4.7, 5.4, and 6.3, we characterize the constants and logarithmic factors involved in our algorithms as well as related classical algorithms to contribute to the community's understanding of their implementability. This reveals that the factors commonly obscured by \(\tilde{\mathcal{O}}\) notation in related literature, especially the logarithms, have a severe impact on practicality. We summarize the algorithmic results in the following two theorems:

**Theorem 3.1**.: There exists an algorithm in the Quantum CONGEST-CLIQUE model that, given an integer-weighted input graph \(G=(V,E,W)\), outputs a \(2(1-1/l)\) approximate Steiner Tree with probability of at least \(1-\frac{1}{poly(n)}\), and uses \(\tilde{\mathcal{O}}(n^{1/4})\) rounds of computation, where \(l\) denotes the number of terminal leaf nodes in the optimal Steiner Tree.

**Theorem 3.2**.: There exists an algorithm in the Quantum CONGEST-CLIQUE model that, given a directed and integer-weighted input graph \(G=(V,E,W)\), produces an exact Directed Minimum Spanning Tree with high probability, of at least \(1-\frac{1}{poly(n)}\), and uses \(\tilde{\mathcal{O}}(n^{1/4})\) rounds of computation.

## 4 APSP and Routing Tables

We first describe an algorithm for the APSP problem with routing tables in the qCCM, for which we combine an algorithm of (Izumi and Gall, 2019) with a routing table computation from (Zwick, 2000). For this, we reduce APSP with routing tables to triangle finding via _distance products_ as in (Censor-Hillel et al., 2016).

### Distance Products and Routing Tables

**Definition 4.1**.: A _routing table_ for a node \(v\) is a function \(R_{v}:V\to V\) mapping a vertex \(u\) to the first node visited in the shortest path going from \(v\) to \(u\) other than \(v\) itself.

**Definition 4.2**.: The _distance product_ between two \(n\times n\) matrices \(A\) and \(B\) is defined as the \(n\times n\) matrix \(A\star B\) with entries:

\[(A\star B)_{ij}=\min_{k}\{A_{ik}+B_{kj}\}. \tag{4.1}\]

The distance product is also sometimes called the min-plus or tropical product. For shortest paths, we will repeatedly square the graph adjacency matrix with respect to the distance product. For an \(n\times n\) matrix \(W\) and an integer \(k\), let us denote \(W^{k,\star}:=W\star(W\star(\ldots(W\star W))\ldots)\) as the \(k^{th}\) power of the distance product. For a graph \(G=(V,E,W)\) with weighted adjacency matrix \(W\) (assigning \(W_{uv}=\infty\) if \(uv\notin E\)), \(W^{k,\star}_{uv}\) is the length of the shortest path from \(v\) to \(u\) in \(G\) using at most \(k\) hops. Hence, for any \(N\geq n\), \(W^{N,\star}\) contains all the shortest path distances between nodes in \(G\). As these distance products obey standard exponent rules, we may take \(N=2^{\lceil\log n\rceil}\) to recursively compute the APSP distances via taking \(\lceil\log n\rceil\) distance product squares:

\[W^{2,\star}=W\star W,\;\;W^{4,\star}=\left(W^{2,\star}\right)^{2,\star},\ldots,\;\;W^{2^{\lceil\log n\rceil},\star}=\left(W^{2^{\lceil\log n\rceil-1},\star}\right)^{2,\star}. \tag{4.2}\]

This procedure reduces computing APSP distances to computing \(\lceil\log n\rceil\) distance products. In the context of the CONGEST-CLIQUE model, each node needs to learn the row of \(W^{n,\star}\) that represents it.
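As a purely centralized illustration of the distance product in Eq. (4.1) and the repeated squaring in Eq. (4.2) (it is not the distributed protocol), the following sketch computes all-pairs shortest-path distances by \(\lceil\log_{2}n\rceil\) min-plus squarings; the small example graph is made up for illustration.

```python
# Centralized sketch of the min-plus (distance) product and repeated squaring (Eqs. 4.1-4.2).
import math

INF = float("inf")

def distance_product(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def apsp_by_squaring(W):
    """W[u][v] = edge weight, INF if no edge, 0 on the diagonal."""
    n = len(W)
    D = W
    for _ in range(math.ceil(math.log2(n))):  # ceil(log2 n) squarings suffice
        D = distance_product(D, D)
    return D

# Toy 4-node example (weights chosen arbitrarily for illustration).
W = [[0, 3, INF, 7],
     [3, 0, 1, INF],
     [INF, 1, 0, 2],
     [7, INF, 2, 0]]
print(apsp_by_squaring(W))  # e.g. the distance from node 0 to node 3 is 6, via nodes 1 and 2
```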
As we also require nodes to learn their routing tables, we provide a scheme in §4.3 that is well-suited for our setting to extend (Izumi and Gall, 2019) to also compute routing tables.

### Distance Products via Triangle Finding

Having established reductions to distance products, we turn to their efficient computation. The main idea is that we can reduce distance products to a binary search in which each step in the search finds negative triangles. This procedure corresponds to (Izumi, Le Gall, & Magniez, 2020, Proposition 2), which we describe here, restricting to finding the distance product square needed for Eq. (4.2). A negative triangle in a weighted graph is a set of edges \(\Delta^{-}=(uv,vw,wu)\subset E^{3}\) such that \(\sum_{e\in\Delta^{-}}W_{e}<0\). Let us denote the set of all negative triangles in a graph \(G\) as \(\Delta_{G}^{-}\). Specifically, we will be interested in each node \(v\) being able to output edges \(vu\in\delta(v)\) such that \(vu\) is involved in at least one negative triangle in \(G\). Let us call this problem FindEdges, and define it formally as:

FindEdges Input: An integer-weighted (directed or undirected) graph \(G=(V,E,W)\) distributed among the nodes, with each node \(v\) knowing \(\mathcal{N}_{G}(v)\), as well as the weights \(W_{vu}\) for each \(u\in\mathcal{N}_{G}(v)\). Output: For each node \(v\), its output is all the edges \(vu\in E\) that are involved in at least one negative triangle in \(G\).

**Proposition 4.3**.: If FindEdges on an \(n\)-node integer-weighted graph \(G=(V,E,W)\) can be solved in \(T(n)\) rounds, then the distance product \(A\star B\) of two \(n\times n\) matrices \(A\) and \(B\) with entries in \([M]\) can be computed in \(T(3n)\cdot\lceil\log_{2}(2M)\rceil\) rounds.

Proof.: Let \(A\) and \(B\) be arbitrary \(n\times n\) integer-valued matrices, and \(D\) be an \(n\times n\) matrix initialized to \(\mathbf{0}\). Let each \(u\in V\) simulate three copies of itself, \(u_{1},u_{2},u_{3}\), writing \(V_{1},V_{2},V_{3}\) as the sets of copies of nodes in \(V\). Consider the graph \(G^{\prime}=(V_{1}\cup V_{2}\cup V_{3},E^{\prime},W^{\prime})\), by letting \(u_{i}v_{j}\in E^{\prime}\) for \(u_{i}\in V_{i},v_{j}\in V_{j},i\neq j\), taking \(W^{\prime}_{u_{1}v_{2}}=A_{uv}\) for \(u_{1}\in V_{1},v_{2}\in V_{2}\), \(W^{\prime}_{u_{2}v_{3}}=B_{uv}\) for \(u_{2}\in V_{2},v_{3}\in V_{3}\), and \(W^{\prime}_{u_{3}v_{1}}=D_{uv}\) for \(u_{3}\in V_{3},v_{1}\in V_{1}\). An edge \(zv\) is part of a negative triangle in \(G^{\prime}\) exactly whenever \[\min_{u\in V}\{A_{vu}+B_{uz}\}<-D_{zv}.\] Assuming we can compute FindEdges for a \(k\)-node graph in \(T(k)\) rounds, with the non-positive matrix \(D=\mathbf{0}\) initialized, we can apply simultaneous binary searches on \(D_{zv}\), with values between \(\{-2M,0\}\), updating it for each node \(v\) after each run of FindEdges to find \(\min_{u\in V}\{A_{vu}+B_{uz}\}\) for every other node \(z\) in \(T(3n)\cdot\lceil\log(\max_{v,z\in V}\{\min_{u\in V}\{A_{vu}+B_{uz}\}\})\rceil\) rounds, since \(G^{\prime}\) is a tripartite graph with \(3n\) nodes.

**Remark 4.4**.: This procedure can be realized in a single \(n\)-node distributed graph by letting each node represent the three copies of itself since \(G^{\prime}\) is tripartite. The \(T(3n)\) stems from each processor node possibly needing to send one message for each node it is simulating in each round of FindEdges.
If bandwidth per message is large enough (3 times the bandwidth needed for solving FindEdges in \(T(n)\) rounds), then this can be done in \(T(n)\) rounds. So for this binary search, each node \(v\) initializes and locally stores \(D_{vz}=0\) for each other \(z\in V\), after which we solve FindEdges on \(G^{\prime}\). The node then updates each \(D_{vz}\) according to whether or not the edge copies of \(vz\) were part of a negative triangle in \(G^{\prime}\), after which FindEdges is computed with the updated values for \(D\). This is repeated until all the \(\min_{u\in V}\{A_{vu}+B_{uz}\}\) have been determined.

### Routing Tables via Efficient Computation of Witness Matrices

For the routing table entries, we also need each node \(v\) to know the intermediate node \(u\) that is being used to attain \(\min_{u\in V}\{W_{vu}+W_{uz}\}\).

**Definition 4.5**.: For a distance product \(A\star B\) of two \(n\times n\) matrices \(A,B\), a _witness matrix_ \(C\) is an \(n\times n\) matrix such that \[C_{ij}\in argmin_{k\in[n]}\{A_{ik}+B_{kj}\}\]

Put simply, a witness matrix contains the intermediate entries used to attain the values in the resulting distance product. We present here a simple way of computing witness matrices along with the distance product by modifying the matrix entries appropriately, first considered by (Zwick, 2000). The approach is well-suited for our algorithm, as we only incur \(\mathcal{O}(\log n)\) additional calls to FindEdges for a distance product computation with a witness matrix. For an \(n\times n\) integer matrix \(W\), obtain matrices \(W^{\prime}\) and \(W^{\prime\prime}\) by taking \(W^{\prime}_{ij}=nW_{ij}+j-1\) and \(W^{\prime\prime}_{ji}=nW_{ji}\). Set \(K=W^{\prime}\star W^{\prime\prime}\).

**Claim 1**.: With \(W,W^{\prime},W^{\prime\prime}\), and \(K\) as defined immediately above, 1. \(\left\lfloor\frac{K}{n}\right\rfloor=W^{2,\star}\) 2. \((K\mod n)+1\) is a witness matrix for \(W^{2,\star}\).

The claim follows from routine calculations of the quantities involved and can be found in the Appendix, §8.1. Hence, we can obtain witness matrices by simply changing the entries of our matrices by no more than a multiplicative factor of \(n\) and an addition of \(n\). Since the complexity of our method depends on the magnitude of the entries of \(W\) logarithmically, we only need logarithmically many more calls to FindEdges to obtain witness matrices along with the distance products, making this simple method well-suited for our approach. More precisely, we can compute \(W^{2,\star}\) with a witness matrix using \(\left\lceil\log(2n\cdot\max_{i,j}\{W^{2,\star}_{ij}:W^{2,\star}_{ij}<\infty\})\right\rceil\) calls to FindEdges. We obtain the following corollary to proposition 4.3 to characterize the exact number of rounds needed:

**Corollary 4.6**.: If FindEdges on an \(n\)-node integer-weighted graph \(G=(V,E,W)\) can be solved in \(T(n)\) rounds, then the distance product square \(W^{2,\star}\), along with a witness matrix \(H\), can be computed in \(T(3n)\cdot\lceil\log_{2}(n\cdot\max_{v,z\in G}\{\min_{u\in V}\{W_{vu}+W_{uz}\}\}+n)\rceil\) rounds.

Proof.: This follows from claim 1 and proposition 4.3 upon observing that \[\max_{v,z\in V}\{\min_{u\in V}\{W^{\prime}_{vu}+W^{\prime\prime}_{uz}\}\}\leq n\cdot\max_{v,z\in G}\{\min_{u\in V}\{W_{vu}+W_{uz}\}\}+n.\qed\]

Once we obtain witness matrices along with the distance product computations, constructing the routing tables for each node along the way of computing APSP is straightforward. In each squaring of \(W\) in Eq.
(4.2), each node updates its routing table entries according to the corresponding witness matrix entry observed. It is worth noting that these routing table entries need only be stored and accessed classically so that we avoid using unnecessary quantum data storage.

### Triangle Finding

Given the results from sections 4.3 and 4.2, we have reduced finding both the routing tables and the distance product to having each node learn which of its edges are involved in a negative triangle in the graph. This section will thus describe the procedure to solve the FindEdges subroutine. We state here a central result from (Izumi and Gall, 2019):

**Proposition 4.7**.: There exists an algorithm in the quantum CONGEST-CLIQUE model that solves the FindEdges subroutine in \(\tilde{\mathcal{O}}(n^{1/4})\) rounds.

We will proceed to describe each step of the algorithm, giving the precise round complexity beyond the \(\tilde{\mathcal{O}}(n^{1/4})\) to characterize the constants involved, in the interest of assessing the future implementability of our algorithms. As a preliminary, we give a message routing lemma of (Dolev et al., 2012) for the congested clique, which will be used repeatedly:

**Lemma 4.8**.: Suppose each node in \(G\) is the source and destination for at most \(n\) messages of size \(\mathcal{O}(\log n)\) and that the sources and destinations of each message are known in advance to all nodes. Then all messages can be routed to their destinations in 2 rounds.

We introduce the subproblem FindEdgesWithPromise (FEWP henceforth). Let \(\Gamma(u,v)\) denote the number of nodes \(w\in V\) such that \((u,v,w)\) forms a negative triangle in \(G\).

FEWP: Input: An integer-weighted graph \(G=(V,E,W)\) distributed among the nodes and a set \(S\subset\mathcal{P}(V)\), with each node \(v\) knowing \(\mathcal{N}_{G}(v)\) and \(S\). Promise: For each \(uv\in S,\Gamma(u,v)\leq 90\log n\). Output: For each node \(v\), its output is the edges \(vu\in S\) that satisfy \(\Gamma(u,v)>0\).

We give here a description of the procedure of (Izumi and Gall, 2019) to solve FindEdges given an algorithm \(\mathcal{A}\) to solve FEWP. Let \(\varepsilon_{\mathcal{A}}\) be the failure probability of the algorithm \(\mathcal{A}\) for an instance of FEWP.

FindEdgesViaFEWP: 1. \(S:=\mathcal{P}(V);M:=\emptyset;i:=0\). 2. WHILE \(60\cdot 2^{i}\log n\leq n\): 1. Each node samples each of its edges with probability \(\sqrt{\frac{60\cdot 2^{i}\log n}{n}}\), so that we obtain a distributed subgraph \(G^{\prime}\) of \(G\) consisting of the sampled edges. 2. Run \(\mathcal{A}\) on \((G^{\prime},S)\). Denote the output by \(S^{\prime}\). 3. \(S\gets S\setminus S^{\prime};M\gets M\cup S^{\prime};i\gets i+1\). 3. Run \(\mathcal{A}\) on \((G,S)\), and call \(S^{\prime\prime}\) the output. 4. Output \(M\cup S^{\prime\prime}\).

From step 2 of the above algorithm, it is straightforward to check that this requires a maximum of \(c_{n}:=\lceil\log\left(\frac{n}{60\log n}\right)\rceil+1\) calls to the \(\mathcal{A}\) subroutine to solve FEWP. Further, it succeeds with probability at least \(1-c_{n}/n^{3}-c_{n}/n^{2}8-(c_{n}+1)\varepsilon_{\mathcal{A}}\). We refer the reader to (Izumi and Gall, 2019, §3) for the proof of correctness.

We now turn toward constructing an efficient algorithm for FEWP. To solve this subroutine, we must first introduce an additional labeling scheme over the nodes that will determine how the search for negative triangles will be split up to avoid communication congestion in the network. Assume for simplicity that \(n^{1/4},\sqrt{n},n^{3/4}\) are integers.
Let \(\mathcal{M}=[n^{1/4}]\times[n^{1/4}]\times[\sqrt{n}]\). Clearly, \(|\mathcal{M}|=n\), and \(\mathcal{M}\) admits a total ordering lexicographically. Since we assume each node \(v_{i}\in V\) is labeled with unique integer ID \(i\in[n]\), \(v_{i}\) can select the element in \(\mathcal{M}\) that has place \(i\) in the lexicographic ordering of \(\mathcal{M}\) without communication occurring. Hence, each node \(v\in V\) is associated with a unique triple \((i,j,k)\in\mathcal{M}\). We will refer to the unique node associated with \((i,j,k)\in\mathcal{M}\) as node \(v_{(i,j,k)}\). The next ingredient is a partitioning scheme of the space of possible triangles. Let \(\mathcal{U}\) be a partition of \(V\) into subsets containing \(n^{3/4}\) nodes each, by taking \(U_{i}:=\{v_{j}:j\in\{(i-1)\cdot n^{3/4},\ldots,i\cdot n^{3/4}\}\}\) for \(i=1,\ldots,n^{1/4}\), and \(\mathcal{U}:=\{U_{1},\ldots,U_{n^{1/4}}\}\). Apply the same idea to create a partition \(\mathcal{U}^{\prime}\) of \(\sqrt{n}\) sets of size \(\sqrt{n}\), by taking \(U^{\prime}_{i}:=\{v_{j}:j\in\{(i-1)\cdot\sqrt{n},\ldots,i\cdot\sqrt{n}\}\}\) for \(i=1,\ldots,\sqrt{n}\), and \(\mathcal{U}^{\prime}:=\{U^{\prime}_{1},\ldots,U^{\prime}_{\sqrt{n}}\}\). Let \(\mathbb{V}=\mathcal{U}\times\mathcal{U}\times\mathcal{U}^{\prime}\). Each node \(v_{(i,j,k)}\) can then locally determine its association with the element \((U_{i},U_{j},U^{\prime}_{k})\in\mathbb{V}\) since \(|\mathbb{V}|=n\). Further, if we use one round to have all nodes broadcast their IDs to all other nodes, each node \(v_{(i,j,k)}\) can locally compute the \((U_{i},U_{j},U^{\prime}_{k})\) it is assigned to, so this assignment can be done in one round. We present here the algorithm ComputePairs used to solve the FEWP subroutine.

ComputePairs Input: An integer-weighted graph \(G=(V,E,W)\) distributed among the nodes, a partition of \(V\times V\times V\) into triples \((U_{i},U_{j},U_{k}^{\prime})\) associated with each node as above, and a set \(S\subset\mathcal{P}(V)\) such that for \(uv\in S,\Gamma(u,v)\leq 90\log n\). Output: For each node \(v\), its output is the edges \(vu\in S\) that satisfy \(\Gamma(u,v)>0\).

1. Every node \(v_{(i,j,k)}\) receives the weights \(W_{uv}\), \(W_{vw}\) for all \(uv\in\mathcal{P}(U_{i},U_{j})\) and \(vw\in\mathcal{P}(U_{j},U_{k}^{\prime})\). 2. Every node \(v_{(i,j,k)}\) constructs the set \(\Lambda_{k}(U_{i},U_{j})\subset\mathcal{P}(U_{i},U_{j})\) by selecting every \(uv\in\mathcal{P}(U_{i},U_{j})\) with probability \(10\cdot\frac{\log n}{\sqrt{n}}\). If \(|\{v\in U_{i}:uv\in\Lambda_{k}(U_{i},U_{j})\}|>100n^{1/4}\log n\) for some \(u\in U_{j}\), abort the algorithm and report failure. Otherwise, \(v_{(i,j,k)}\) keeps all pairs \(uv\in\Lambda_{k}(U_{i},U_{j})\cap S\) and receives the weights \(W_{uv}\) for all of those pairs. Denote those elements of \(\Lambda_{k}(U_{i},U_{j})\cap S\) as \(u_{1}^{k}v_{1}^{k},\ldots,u_{m}^{k}v_{m}^{k}\). 3. Every node \(v_{(i,j,k)}\) checks for each \(l\in[m]\) whether there is some \(U\in\mathcal{U}^{\prime}\) that contains a node \(w\) such that \((u_{l}^{k},v_{l}^{k},w)\) forms a negative triangle, and outputs all pairs \(u_{l}^{k}v_{l}^{k}\) for which a negative triangle was found.

With probability at least \(1-2/n\), the algorithm ComputePairs does not terminate at step 2 and every pair \((u,v)\in S\) appears in at least one \(\Lambda_{k}(U_{i},U_{j})\). The details for this result can be found in (Izumi and Gall, 2019, Lemma 2).
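The labeling and partitioning above can be computed locally by each node from its ID without any communication. A minimal sketch follows, assuming as in the text that \(n^{1/4}\) and \(\sqrt{n}\) are integers; the function names are illustrative.

```python
# Local computation of the triple label and the block partitions used by ComputePairs.
def triple_label(node_id, n):
    """Map an ID in 1..n to the node_id-th triple (i, j, k) of
    [n^(1/4)] x [n^(1/4)] x [sqrt(n)] in lexicographic order (1-indexed)."""
    q = round(n ** 0.25)          # n^(1/4)
    s = q * q                     # sqrt(n)
    idx = node_id - 1
    i, rest = divmod(idx, q * s)
    j, k = divmod(rest, s)
    return (i + 1, j + 1, k + 1)

def blocks(n):
    """U: n^(1/4) blocks of n^(3/4) consecutive IDs; U_prime: sqrt(n) blocks of sqrt(n) IDs."""
    q = round(n ** 0.25)
    s = q * q
    U = [list(range((i - 1) * q * s + 1, i * q * s + 1)) for i in range(1, q + 1)]
    U_prime = [list(range((k - 1) * s + 1, k * s + 1)) for k in range(1, s + 1)]
    return U, U_prime

n = 16  # so n^(1/4) = 2 and sqrt(n) = 4
assert triple_label(1, n) == (1, 1, 1) and triple_label(16, n) == (2, 2, 4)
U, U_prime = blocks(n)
assert len(U) == 2 and len(U[0]) == 8 and len(U_prime) == 4
```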
Step 1 requires \(2n^{1/4}\lceil\frac{\log W}{\log n}\rceil\) rounds and can be implemented fully classically without any qubit communication. Step 2 requires at most \(200\log n\lceil\frac{\log W}{\log n}\rceil\) rounds and can also be implemented classically. Step 3 can be implemented in \(\tilde{\mathcal{O}}(n^{1/4})\) rounds quantumly taking advantage of distributed Grover search but would take \(\mathcal{O}(\sqrt{n})\) steps to implement classically. The remainder of this section is devoted to illustrating how this step can be done in \(\tilde{\mathcal{O}}(n^{1/4})\) rounds. Define the following quantity: **Definition 4.9**.: For node \(v_{(i,j,k)}\), let \[\Delta(i,j,k):=\{(u,v)\in\mathcal{P}(U_{i},U_{j})\cap S:\exists w\in U_{k}^{ \prime}\text{ with }(u,v,w)\text{ forming a negative triangle in }G\}\] For simultaneous quantum searches, we divide the nodes into different classes based on the number of negative triangles they are a part of with the following routine: IdentifyClass Input: An integer-weighted graph \(G=(V,E,W)\) distributed among the nodes, and a set \(S\subset E\) as in FEWP. Output: For each node \(v\), a class \(\alpha\) the node belongs to. 1. Every node \(u_{(i,j,k)}\in V\) samples each node in \(\{v\in V:(u_{(i,j,k)},v)\in S\}\) with probability \(\frac{10\log n}{n}\), creating a set \(\Lambda(u)\) of sampled vertices. If \(\max_{u}|\Lambda(u)|>20\log n\), abort the algorithm and report a failure. Otherwise, have each node broadcast \(\Lambda(u)\) to all other nodes, and take \(R:=\cup_{u\in V}\{uv|v\in\Lambda(u)\}\). 2. Each \(v_{(i,j,k)}\in V\) computes \(d_{i,j,k}:=|\{uv\in\mathcal{P}(i,j)\cap R:\ \exists w\in U_{k}^{\prime}\text{ such that }\{u,v,w\}\text{ forms a negative triangle in }G\}|\), then determines its class \(\alpha\) to be \(min\{c\in\mathbb{N}:d_{i,j,k}<10\cdot 2^{c}\log n\}\). This uses at most \(20\log n\) rounds (each node sends at most that many IDs to every other node) and can be implemented by having all exchanged messages consist only of classical bits. Using Chernoff's bound, one can show that the procedure succeeds with probability of at least \(1-1/n\) as seen in (Izumi et al., 2020, Proposition 5). Let us make the convenient assumption that \(\alpha=0\) for all \(v_{i,j,k}\), which avoids some technicalities around congestion in the forthcoming triangle search. Note that \(\alpha\leq\frac{1}{2}\log n\), so we can run successive searches for each \(\alpha\) for nodes in with class \(\alpha\) in the general case. The general case is discussed in SS8.2 of the appendix and can also be found in (Izumi and Gall, 2019), but this case is sufficient to convey the central ideas. We have all the necessary ingredients to describe the implementation of step 3 of the ComputePairs procedure. 1. Each node executes the IdentifyClass procedure. 2. For each \(\alpha\), for every \(l\in[m]\), every node \(v_{(i,j,k)}\) in class \(\alpha\) executes a quantum search to find whether there is a \(U_{k}^{\prime}\in\mathcal{U}^{\prime}\) with some \(w\in U_{k}^{\prime}\) forming a negative triangle \((u_{l}^{k},v_{l}^{k},w)\) in \(G\), and then reports all the pairs \(u_{l}^{k}v_{l}^{k}\) for which such a \(U_{k}^{\prime}\) was found. This provides the basis of the triangle-searching strategy. 
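The predicate that each node \(v_{(i,j,k)}\) searches over in step 2 can be phrased classically as a scan over its assigned block \(U^{\prime}_{k}\); the distributed Grover search evaluates the same predicate on inputs in superposition, using roughly \(\sqrt{|U^{\prime}_{k}|}\) evaluations instead of a linear scan. A minimal classical sketch of the predicate follows, with the weights held in a hypothetical dictionary (missing edges treated as \(+\infty\)); the names are illustrative.

```python
# Classical version of the predicate searched in step 2: does some w in U_k_prime
# close a negative triangle with the pair (u, v)? The quantum search evaluates the
# same predicate in superposition over U_k_prime, using ~sqrt(|U_k_prime|) evaluations.
INF = float("inf")

def w(W, a, b):
    return W.get((a, b), W.get((b, a), INF))  # undirected weight lookup, INF if absent

def closes_negative_triangle(W, u, v, U_k_prime):
    return any(w(W, u, v) + w(W, u, x) + w(W, x, v) < 0 for x in U_k_prime)

# Illustrative weights: the triangle (1, 2, 5) has total weight -1.
W = {(1, 2): 4, (1, 5): -3, (2, 5): -2, (1, 6): 1}
assert closes_negative_triangle(W, 1, 2, U_k_prime=[5, 6])
assert not closes_negative_triangle(W, 1, 2, U_k_prime=[6])
```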
To summarize the intuition of the asymptotic speedup in this paper: Since the \(U_{k}^{\prime}\) have size \(\sqrt{n}\) (recall that \(|\mathcal{U}^{\prime}|=\sqrt{n}\)), if each node using a quantum search can search through its assigned \(U_{k}^{\prime}\) in \(\tilde{\mathcal{O}}(n^{1/4})\) rounds, simultaneously, we will obtain our desired complexity. We will complete this argument in SS4.6 and first describe the quantum searches used therein in the following subsection. ### Distributed Quantum Searches With this intuition in mind, we now state two useful theorems of (Izumi and Gall, 2019) for the distributed quantum searches. Let \(X\) denote a finite set throughout this subsection. **Theorem 4.10**.: Let \(g:X\to\{0,1\}\), if a node \(u\) can compute \(g(x)\) in \(r\) rounds in the CONGEST-CLIQUE model for any \(x\in X\), then there exists an algorithm in the Quantum CONGEST-CLIQUE that has \(u\) output some \(x\in X\) with \(g(x)=1\) with high probability using \(\tilde{\mathcal{O}}(r\sqrt{|X|})\) rounds. This basic theorem concerns only single searches, but we need a framework that can perform multiple simultaneous searches. Let \(g_{1},\ldots,g_{m}:X\to\{0,1\}\) and \[A_{l}^{0}:=\{x\in X:g_{l}(x)=0\},A_{l}^{1}:=\{x\in X:g_{l}(x)=1\},\forall i\in [m].\] Assume there exists an \(r\)-round classical distributed algorithm \(C_{m}\) that allows a node \(u\) upon an input \(\chi=(x_{1},\ldots,x_{m})\in X^{m}\) to determine and output \((g_{1}(x_{1}),\ldots,g_{m}(x_{m}))\). In our use of distributed searches, \(X\) will consist of nodes in the network, and searches will need to communicate with those nodes for which the functions \(g_{i}\) are evaluated. To avoid congestion, we will have to consider those \(\chi\in X^{m}\) that have many repeated entries carefully. We introduce some notation for this first. Define the quantity \[\alpha(\chi):=\max_{I\subset[m]}|\{\chi_{i}=\chi_{j}\quad\forall i,j\in I\}|,\] the maximum number of entries in \(\chi\) that are all identical. Next, given some \(\beta\in\mathbb{N}\), assume that in place of \(C_{m}\) we now have a classical algorithm \(\tilde{C}_{m,\beta}\) such that upon input \(\chi=(x_{1},\ldots,x_{m})\in X^{m}\), a node \(u\) outputs \(g_{1}(x_{1}),\ldots,g_{m}(x_{m})\) if \(\alpha(\chi)\leq\beta\) and an arbitrary output otherwise. The following theorem summarizes that such a \(\tilde{C}_{m,\beta}\) with sufficiently large \(\beta\) is enough to maintain a quantum speedup as seen in the previous theorem: **Theorem 4.11**.: For a set \(X\) with \(|X|<m/(36\log m)\), suppose there exists such an evaluation algorithm \(C_{m,\beta}\) for some \(\beta>8m/|X|\) and that \(\alpha(\chi)\leq\beta\) for all \(\chi\in A_{1}^{1}\times\cdots\times A_{m}^{1}\). Then there is a \(\tilde{\mathcal{O}}(r\sqrt{|X|})\)-round quantum algorithm that outputs an element of \(A_{1}^{1}\times\cdots\times A_{m}^{1}\) with probability at least \(1-2/m^{2}\). The proof can be found in (Izumi and Gall, 2019, Theorem 3). ### Final Steps of the Triangle Finding We continue here to complete the step 3.2 of the ComputePairs procedure, armed with Theorem 4.11. We need simultaneous searches to be executed by each node \(v_{(i,j,k)}\) to determine the triangles in \(U_{i}\times U_{j}\times U_{k}^{\prime}\). We provide a short lemma first that ensures the conditions for the quantum searches: **Lemma 4.12**.: The following statements hold with probability at least \(1-2/n^{2}\): 1. \(|\Delta(i,j,k)|\leq 2n\) 2. 
\(|\Lambda_{k}(U_{i},U_{j})\cap\Delta(i,j,k)|\leq 100\cdot\sqrt{n}\log n\) for \(i,j\in[n^{1/4}]\). The proofs of these statements are technical but straightforward, making use of Chernoff's bound and union bounds; hence we skip them here. To invoke Theorem 4.11, we describe a classical procedure first, beginning with an evaluation step, EvaluationA, implementable in \(\tilde{\mathcal{O}}(1)\) rounds.

EvaluationA Input: Every node \(v_{(i,j,k)}\) receives \(m\) elements \((u_{1}^{i,j,k},\ldots,u_{m}^{i,j,k})\) of \(\mathcal{U}^{\prime}\). Promise: For every node \(v_{(i,j,k)}\) and every \(\mathbf{w}\in\mathcal{U}^{\prime},|L_{\mathbf{w}}^{i,j,k}|\leq 800\sqrt{n}\log n\). Output: Each node outputs a list of exactly those \(u_{l}^{i,j,k}\) such that there is a negative triangle in \(U_{i}\times U_{j}\times u_{l}^{i,j,k}\).

1. Every node \(v_{(i,j,k)}\), for each \(t\in[\sqrt{n}]\), routes the list \(L_{\mathbf{w}}^{i,j,t}\) to node \(v_{(i,j,t)}\). 2. Every node \(v_{(i,j,k)}\), for each \(vu\) it received in step 1, sends the truth value of the inequality \[\min_{w\in U_{k}^{\prime}}\{W_{uw}+W_{wv}\}<-W_{vu}\tag{4.3}\] to the node that sent \(vu\).

Each node is the source and destination of up to \(800n\log n\) messages in step 1, meaning that this step can be implemented in \(1600\log n\) rounds. The same goes for step 2, noting that the number of messages is the same, but they need only be single-bit messages (the truth values of the inequalities). Hence, the evaluation of Theorem 4.11 can be implemented in \(3200\log n\) rounds. Now, applying the theorem with \(X=\mathcal{U}^{\prime},\beta=800\sqrt{n}\log n\), noting that then the assumptions of the theorem hold with probability at least \(1-2/n^{2}\) due to Lemma 4.12, implies that step 3.2 is implementable in \(\tilde{\mathcal{O}}(n^{1/4})\) rounds, with a success probability of at least \(1-2/m^{2}\). For the general case in which we do not assume \(\alpha=0\) for all \(i,j,k\) in IdentifyClass, covered in the appendix, one needs to modify the EvaluationA procedure in order to implement load balancing and information duplication to avoid congestion in the simultaneous searches. These details can be found in the appendix, where a new labeling scheme and a different evaluation procedure, EvaluationB, are described for this, or in (Izumi and Gall, 2019).

### Complexity

As noted previously and in (Izumi and Gall, 2019), this APSP scheme uses \(\tilde{\mathcal{O}}(n^{1/4})\) rounds. Let us characterize the constants and logarithmic factors involved to assess this algorithm's practical utility. Suppose that in each round, \(2\cdot\log n\) qubits can be sent in each message (so that we can send two IDs or one edge with each message), where \(n\) is the number of nodes. For simplicity, let's assume \(W\ll n\) and drop \(W\). 1. APSP with routing tables needs \(\log(n)\) distance products with witness matrices. 2. Computing the \(i^{th}\) distance product square for Eq. (4.2) with a witness matrix needs up to \(\log\!\left(2^{i}\right)=i\) calls to FindEdges, since the entries of the matrix being squared may double each iteration. Then APSP and distance products together make \(\sum_{i=1}^{\lceil\log n\rceil}i=\frac{\lceil\log(n)\rceil(\lceil\log(n)\rceil+1)}{2}\) calls to FindEdges. 3. Solving FindEdges needs \(\log\!\left(\frac{n}{60\log n}\right)\) calls to FEWP, using FindEdgesViaFEWP. 4. Step 1 of ComputePairs needs up to \(2\cdot n^{1/4}\) rounds and step 2 takes up to \(200\log n\) rounds. 5.
Step 1 of IdentifyClass needs up to \(20\log n\) rounds. 6. In step 2 of IdentifyClass, the \(c_{uvw}\) are up to \(\frac{1}{2}\log n\) large, and hence \(\alpha\) may range up to \(\frac{1}{2}\log n\). 7. Step 0 of the EvaluationB procedure needs \(n^{1/4}\) rounds. Steps 1 and 2 of the EvaluationB (or EvaluationA, in the \(\alpha=0\) case) procedure use a total of \(3200\log n\) rounds. 8. EvaluationB (or EvaluationA) procedure is called up to \(\log(n)n^{1/4}\) times for each value of \(\alpha\) in step 3.2 of ComputePairs. Without any improvements, we get the following complexity, using \(3n\) in place of \(n\) for the terms of steps 3-8 due to corollary 4.6: \[\frac{\lceil\log(n)\rceil(\lceil\log(n)\rceil+1)}{2}\log\!\left( \frac{3n}{60\log 3n}\right)\!\left(2(3n)^{1/4}+220\log 3n+2(3n)^{1/4}+\right.\] \[\left.\frac{1}{2}\log 3n\cdot\log 3n\cdot(3n)^{1/4}3200(\log 3n) \right)\!, \tag{4.4}\] which we will call \(f(n)\), so that \(f(n)=\mathcal{O}(n^{1/4}\log^{6}(n))\), with the largest term being about \(800\log^{6}(n)n^{1/4}\), and we have dropped \(W\) to just consider the case \(W\ll n\). We can solve the problem trivially in the (quantum or classical) CONGEST-CLIQUE within \(n\log(W)\) rounds by having each node broadcast its neighbors and the weight on the edge. Let us again drop \(W\) for the case \(W\ll n\) so that in order for the quantum algorithm to give a real speedup, we will need \[f(n)<n,\] which requires \(n>10^{18}\) (even with the simpler under-approximation \(800\log^{6}(n)n^{1/4}\) in place of \(f\)). Hence, even with some potential improvements, the algorithm is impractical for a large regime of values of \(n\) even when compared to the trivial CONGEST-CLIQUE \(n\)-round strategy. For the algorithm of (Izumi and Gall, 2019) computing only APSP _distances_, the first term in 4.4 becomes simply \(\lceil\log n\rceil\), so that when computing only APSP distances the advantage over the trivial strategy begins at roughly \(n\approx 10^{16}\). **Remark 4.13**.: In light of logarithmic factors commonly being obscured by \(\tilde{\mathcal{O}}\) notation, we point out that even an improved algorithm needing only \(\log^{4}(n)n^{1/4}\) would not be practical unless \(n>10^{7}\), for the same reasons. Recall that \(n\) is the number of _processors_ in the distributed network - tens of millions would be needed to make this algorithm worth implementing instead of the trivial strategy. Practitioners should mind the \(\tilde{\mathcal{O}}\) if applications are of interest, since even relatively few logarithmic factors can severely limit practicality of algorithms, and researchers should be encouraged to fully write out the exact complexities of their algorithms for the same reason. #### 4.7.1 Memory Requirements Although in definition 2.1 we make no assumption on the memory capacities of each node, the trivial \(n\)-round strategy uses at least \(2\log(n)|E|^{2}\cdot\log(W)\) memory at the leader node that solves the problem. For the APSP problem in question, using the Floyd-Warshall algorithm results in memory requirements of \(2n^{2}\log(n)\cdot\log(nW)\) at the leader node. Hence, we may ask whether the quantum APSP algorithm leads to lower memory requirements. The memory requirement is largely characterized by up to \(720n^{7/4}\log(n)\log(nW)\) needed in step 0 of the EvaluationB procedure, which can be found in the appendix. This results in a memory advantage for quantum APSP over the trivial strategy beginning in the regime of \(n>1.6\cdot 10^{10}\). 
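To make the crossover claim above easy to check, the following minimal sketch (not part of the original analysis) evaluates Eq. (4.4) numerically; the function name `f_rounds`, the choice of base-2 logarithms, and the sampled values of \(n\) are our own assumptions.

```python
import math

def f_rounds(n: int) -> float:
    """Evaluate Eq. (4.4) as written, assuming base-2 logarithms and W << n."""
    m = 3 * n                               # corollary 4.6: use 3n in steps 3-8
    log_n, log_m = math.log2(n), math.log2(m)
    dist_products = math.ceil(log_n) * (math.ceil(log_n) + 1) / 2   # calls to FindEdges
    fewp_calls = math.log2(m / (60 * log_m))                        # FindEdges -> FEWP calls
    inner = (2 * m**0.25 + 220 * log_m + 2 * m**0.25
             + 0.5 * log_m * log_m * m**0.25 * 3200 * log_m)
    return dist_products * fewp_calls * inner

# Compare against the trivial n-round strategy for a few network sizes.
for k in (6, 9, 12, 15, 18, 19, 20):
    n = 10**k
    print(f"n = 1e{k:2d}:  f(n)/n = {f_rounds(n)/n:.3g}")
```

Under these assumptions the ratio first drops below one between \(10^{18}\) and \(10^{19}\), consistent with the \(n>10^{18}\) regime stated above; the exact crossover depends on the logarithm base and rounding conventions assumed here.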
#### 4.7.2 Complexity of the Classical Analogue For completeness, we provide here a characterization of the complexity of a closely related classical algorithm for APSP with routing tables in the CONGEST-CLIQUE as proposed in (Censor-Hillel et al., 2016) that has complexity \(\tilde{\mathcal{O}}(n^{1/3})\). In their framework, the approach to finding witness matrices requires \(\mathcal{O}(\log(n)^{3})\) calls to the distance product (Censor-Hillel et al., 2016, SS3.4), and similarly to our approach \(\log(n)\) distance products are required. Their classical algorithm computes distance products in \(\mathcal{O}(n^{1/3})\) rounds, or under \(2\log n\) message bandwidth in up to \[20n^{1/3}\log(n)^{4}=:g(n) \tag{4.5}\] rounds, the details of which can be found in the appendix, SS8.2.1. Then \(g(n)>n\) up until about \(n\approx 2.6\cdot 10^{11}\). As with the quantum APSP, though this algorithm gives the best known asymptotic complexity of \(\tilde{\mathcal{O}}(n^{1/3})\) in the classical CONGEST-CLIQUE, it also fails to give any real improvement over the trivial strategy across a very large regime of values of \(n\). Consequently, algorithms making use of this APSP algorithm, such as (Saikia and Karmakar, 2019) or (Fischer and Oshman, 2021), suffer from the same problem of impracticality. However, the algorithm only requires within \(4n^{4/3}\log(n)\log(nW)+n\log(n)\log(nW)\) memory per node, which is less than required for the trivial strategy even for \(n\geq 4\). ## 5 Approximately Optimal Steiner Tree Algorithm ### Algorithm Overview We present a high-level overview of the proposed algorithm to produce approximately optimal Steiner Trees, divided into four steps. **Step 1 - APSP and Routing Tables:**: Solve the APSP problem as in (Izumi and Gall, 2019) and add an efficient routing table scheme via triangle finding in \(\tilde{\mathcal{O}}(n^{1/4})\) rounds, with success probability \((1-1/poly(n))\) (this step determines the algorithm's overall success probability). **Step 2 - Shortest-path Forest:**: Construct a shortest-path forest (SPF), where each tree consists of exactly one source terminal and the shortest paths to the vertices whose closest terminal is that source terminal. This step can be completed in one round and \(n\) messages, per (Saikia and Karmakar, 2019, SS3.1). The messages can be in classical bits. **Step 3 - Weight Modifications:**: Modify the edge weights depending on whether they belong to a tree (set to 0), connect nodes in the same tree (set to \(\infty\)), or connect nodes from different trees (set to the shortest path distance between root terminals of the trees that use the edge). This uses one round and \(n\) messages. **Step 4 - Minimum Spanning Tree:**: Construct a minimum spanning tree (MST) on the modified graph in \(\mathcal{O}(1)\) rounds as in (Nowicki, 2019), and prune leaves of the MST that do not connect terminal nodes since these are not needed for the Steiner Tree. The correctness of the algorithm follows from the correctness of each step together with the analysis of the classical results of (Kou et al., 1981), which uses the same algorithmic steps of constructing a shortest path forest and building it into an approximately optimal Steiner Tree. ### Shortest Path Forest After the APSP distances and routing tables have been found, we construct a _Shortest Path Forest_ (SPF) based on the terminals of the Steiner Tree. 
**Definition 5.1**.: (Shortest Path Forest): For a weighted, undirected graph \(G=(V,E,W)\) together with a given set of terminal nodes \(Z=\{z_{1},\ldots,z_{k}\}\), a subgraph \(F=(V,E_{F},W)\) of \(G\) is called a _shortest path forest_ if it consists of \(|Z|\) disjoint trees \(T_{z_{i}}=(V_{z_{i}},E_{z_{i}},W)\) satisfying 1. \(z_{i}\in T_{z_{j}}\) if and only if \(i=j\), for \(i,j\in[k]\). 2. For each \(v\in V_{z_{i}}\), \(d_{G}(v,z_{i})=\min_{z\in Z}d_{G}(v,z)\), and a shortest path connecting \(v\) to \(z_{i}\) in \(G\) is contained in \(T_{z_{i}}\). 3. The \(V_{z_{i}}\) form a partition of \(V\), and \(E_{z_{1}}\cup E_{z_{2}}\cdots\cup E_{z_{k}}=E_{F}\subset E\). In other words, an SPF is a forest obtained by gathering, for each node, a shortest path in \(G\) connecting it to the closest Steiner terminal node. For a node \(v\) in a tree, we will let \(par(v)\) denote the parent node of \(v\) in that tree, \(s(v)\) the Steiner Terminal in the tree that \(v\) will be in, and \(ID(v)\in[n]\) the ID of node \(v\in V\). Let \(\mathcal{Q}(v):=\{z\in Z:d_{G}(v,z)=\min_{z^{\prime}\in Z}d_{G}(v,z^{\prime})\}\) be the set of Steiner Terminals closest to node \(v\). We make use of the following procedure for the SPF: DistributedSPF Input: For each node \(v\in G\), APSP distances and the corresponding routing table \(R_{v}\). Output: An SPF distributed among the nodes. 1. Each node \(v\) sets \(s(v):=\operatorname{argmin}_{z\in\mathcal{Q}(v)}ID(z)\) using the APSP information. 2. Each node \(v\) sets \(par(v):=R_{v}(s(v))\), \(R_{v}\) being the routing table of \(v\), and sends a message to \(par(v)\) to indicate this choice. If \(v\) receives such a message from another node \(u\), it registers \(u\) as its child in the SPF. Step 1 in DistributedSPF requires no communication since each node already knows the shortest path distances to all other nodes, including the Steiner Terminals, meaning it can be executed locally. Each node \(v\) choosing \(par(v)\) in step 2 can also be done locally using routing table information, and thus step 2 requires 1 round of communication of \(n-|Z|\) classical messages, since all non-Steiner nodes send one message. **Claim 2**.: After executing the DistributedSPF procedure, the trees \(T_{z_{k}}=(V_{z_{k}},E_{z_{k}},W)\) with \(V_{z_{k}}:=\{v\in V:s(v)=z_{k}\}\) and \(E_{z_{k}}:=\{\{v,par(v)\}:v\in V_{z_{k}}\}\) form an SPF. Proof.: i) holds since each Steiner Terminal is closest to itself. iii) is immediate. To see that ii) holds, note that for \(v\in V_{z_{k}}\), \(par(v)\in V_{z_{k}}\) and \(\{v,par(v)\}\in E_{z_{k}}\) as well. Then \(par(par(\ldots par(v)\ldots))=z_{k}\) and the entire path to \(z_{k}\) lies in \(T_{z_{k}}\). Hence, after this procedure, we have a distributed SPF across our graph, where each node knows its label, parent, and children of the tree it is in. ### Weight Modified MST and Pruning Finally, we introduce a modification of the edge weights before constructing an MST on that new graph that will be pruned into an approximate Steiner Tree. These remaining steps stem from a centralized algorithm first proposed by (Kou et al., 1981) whose steps can be implemented efficiently in the distributed setting, as in (Saikia and Karmakar, 2019).
We first modify the edge weights as follows: Partition the edges \(E\) into three sets - _tree edges_ \(E_{F}\) as in Definition 5.1 that are part of the edge set of the SPF, _intra-tree edges_ \(E_{IT}\) that are incident on two nodes in the same tree \(T_{i}\) of the SPF, and _inter-tree edges_ \(E_{XT}\) that are incident on two nodes in different trees of the SPF. Having each node know which of these its edges belong to can be done in one round by having each node send its neighbors the ID of the terminal it chose as the root of the tree of the SPF that it is a part of. Then the edge weights are modified as follows, denoting the modified weights as \(W^{\prime}\): 1. For \(e=(u,v)\in E_{F},W^{\prime}(u,v):=0\) 2. For \(e=(u,v)\in E_{IT},W^{\prime}(u,v):=\infty\) 3. For \(e=(u,v)\in E_{XT},W^{\prime}(u,v):=d_{G}(u,s(u))+W(u,v)+d_{G}(v,s(v))\), where \(d_{G}(u,s(u))\) is the shortest-path distance in \(G\) from \(u\) to its closest Steiner Terminal. Next, we find a minimum spanning tree on the graph \(G^{\prime}=(V,E,W^{\prime})\), for which we may implement the classical \(\mathcal{O}(1)\) round algorithm proposed by (Nowicki, 2019). On a high level, this constant-round complexity is achieved by sparsification techniques, reducing MST instances to sparse ones, and then solving those efficiently. We skip the details here and refer the interested reader to (Nowicki, 2019). After this step, each node knows which of its edges are part of this weight-modified MST, as well as the parent-child relationships in the tree for those edges. Finally, we prune this MST by removing non-terminal leaf nodes and the corresponding edges. This is done by each node \(v\) sending the ID of its parent in the MST to every other node in the graph. As a result, each node can locally compute the entire MST and then decide whether or not it connects two Steiner Terminals. If it does, it decides it is part of the Steiner Tree; otherwise, it broadcasts that it is to be pruned. Each node that has not been pruned then registers the edges connecting it to non-pruned neighbors as part of the Steiner Tree. This pruning step takes 2 rounds and up to \(n^{2}+n\) classical messages. ### Overall Complexity and Correctness In the algorithm of §5.1, after step 1, steps 2 and 3 can each be done within 2 rounds. Walking through (Nowicki, 2019) reveals that the MST for step 4 can be found in 54 rounds, with an additional 2 rounds sufficing for the pruning. Hence, the overall complexity remains dominated by Eq. (4.4), and the round complexity is \(\tilde{\mathcal{O}}(n^{1/4})\), which is faster than any known classical CONGEST-CLIQUE algorithm to produce an approximate Steiner tree of the same approximation ratio. However, as a consequence of the full complexity obtained in §4.7, the regime of \(n\) in which this algorithm beats the trivial strategy of sending all information to a single node is also \(n>10^{18}\). For the same reason, the classical algorithm provided in (Saikia and Karmakar, 2019) making use of the APSP subroutine from (Censor-Hillel et al., 2016) discussed in §4.7.2 has its complexity mostly characterized by Eq. (4.5), so that the regime in which it provides an advantage over the trivial strategy lies in \(n>10^{11}\). Our algorithm's correctness follows from the correctness of each step together with the correctness of the algorithm by (Kou et al., 1981) that implements these steps in a classical, centralized manner.
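Both the Shortest Path Forest construction and the weight modification above reduce to purely local rules once APSP distances and routing tables are available. The following centralized sketch is an illustration only; the dictionaries `dist`, `route`, `node_id`, `dist_to_terminal` and the set `spf_edges` are assumed stand-ins for each node's local knowledge, not names from any implementation.

```python
import math

def build_spf(nodes, terminals, dist, route, node_id):
    """Centralized sketch of the local rules of DistributedSPF.

    dist[v][z]  -- shortest-path distance from v to z (APSP output)
    route[v][z] -- next hop from v towards z (routing table R_v)
    node_id[v]  -- ID(v), used to break ties among equally close terminals
    Returns dictionaries s (chosen terminal) and par (parent in the SPF).
    """
    s, par = {}, {}
    for v in nodes:
        # Step 1: s(v) is the closest terminal, ties broken by smallest ID.
        s[v] = min(terminals, key=lambda z: (dist[v][z], node_id[z]))
        # Step 2: par(v) = R_v(s(v)); terminals are the roots of their trees.
        par[v] = v if v in terminals else route[v][s[v]]
    return s, par

def modified_weight(u, v, W, s, dist_to_terminal, spf_edges):
    """Return W'(u, v) for the three edge classes E_F, E_IT and E_XT.

    W[(u, v)]           -- original edge weight
    s[u]                -- closest terminal chosen for u in the SPF
    dist_to_terminal[u] -- d_G(u, s(u))
    spf_edges           -- set of frozenset({u, v}) tree edges of the SPF
    """
    if frozenset((u, v)) in spf_edges:           # tree edge (E_F)
        return 0
    if s[u] == s[v]:                             # intra-tree edge (E_IT)
        return math.inf
    return dist_to_terminal[u] + W[(u, v)] + dist_to_terminal[v]   # inter-tree (E_XT)
```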
## 6 Directed Minimum Spanning Tree Algorithm This section will be concerned with establishing Theorem 3.2 for the Directed Minimum Spanning Tree (DMST) problem, in definition 2.7. Like (Fischer and Oshman, 2021), we follow the algorithmic ideas first proposed by (Lovasz, 1985), implementing them in the quantum CONGEST-CLIQUE. Specifically, we will use \(\log n\) calls to the APSP and routing tables scheme described in SS4, so that in our case, we retrieve complexity \(\tilde{\mathcal{O}}(n^{1/4})\) and success probability \((1-\frac{1}{poly(n)})^{\log n}=1-\frac{1}{poly(n)}\). Before describing the algorithm, we need to establish some preliminaries and terminology for the procedures executed during the algorithm, especially the ideas of shrinking vertices into _super-vertices_ and tracking a set \(H\) of specific edges as first described in (Edmonds et al., 1967). We use the following language to discuss super-vertices and related objects. **Definition 6.1**.: A _super-vertex set_\(\mathbb{V}^{*}:=\{V_{1}^{*},\ldots,V_{t}^{*}\}\) for a graph \(G=(V,E,W)\) is a partition of \(V\), and each \(V_{i}^{*}\) is called a _super-vertex_. We will call a super-vertex _simple_ if \(V^{*}\) is a singleton. The corresponding _minor_\(G^{*}:=(\mathbb{V}^{*},E^{*},W^{*})\) is the graph obtained by creating edges \((V_{i}^{*},V_{j}^{*})\) with weight \(W^{*}(V_{i}^{*},V_{j}^{*}):=\min\{W(v_{i},v_{j}):v_{i}\in V_{i}^{*},v_{j}\in V _{j}^{*}\}\). Notably, we continue to follow the convention of an edge of weight \(\infty\) being equivalent to not having an edge. We will refer to creating a super-vertex \(V^{*}\) as _contracting_ the vertices in \(V^{*}\) into a super-vertex. ### Edmonds' Centralized DMST Algorithm We provide a brief overview of the algorithm proposed in (Edmonds et al., 1967), which presents the core ideas of the super-vertex-based approach. The following algorithm produces a DMST for \(G\): Edmonds DMST Algorithm Input: An integer-weighted digraph and a root node \(r\). Output: A DMST for \(G\) rooted at \(r\). 1. Initialize a subgraph \(H\) with the same vertex set as G by subtracting for each node the minimum incoming edge weight from all its incoming edges, and selecting exactly one incoming zero-weight edge for each non-root node of \(G\). Set \(G_{0}=G,H_{0}=H,t=0\). 2. WHILE \(H_{t}\) is not a tree: 1. For each cycle of \(H\), contract the nodes on that cycle into a super-vertex. Consider all non-contracted nodes as simple super-vertices, and obtain a new graph \(G_{t+1}\) as the resulting minor. 2. If there is a non-root node of \(G_{t+1}\) with no incoming edges, report a failure. Otherwise, obtain a subgraph \(H_{t+1}\) by, for each non-root node of \(G_{t+1}\), subtracting the minimum incoming edge weight from all its incoming edges, and selecting exactly one incoming zero-weight edge for each non-root, updating \(t\gets t+1\). 3. Let \(B_{t}=H_{t}\). FOR \(k\in(t,t-1,\ldots,1)\): 1. Obtain \(B^{\prime}_{k-1}\) by expanding the non-simple super-vertices of \(B_{k}\) and selecting all but one of the edges for each of the previously contracted cycles of \(H_{k}\) to add to \(B_{k-1}\). 4. Return \(B_{0}\). Note that the edge weight modifications modify the weight of all directed spanning trees equally, so optimality is unaffected. In step 2., if \(H_{t}\) is a tree, it is an optimal DMST for the current graph \(G_{t}\). Otherwise, it contains at least one directed cycle, so that indeed step 2. is valid. Hence, at the beginning of step 3., \(B_{t}\) is a DMST for \(G_{t}\). 
Then the first iteration produces \(B_{t-1}\) a DMST for \(G_{t-1}\) since only edges of zero weight were added, and \(B_{t-1}\) will have no cycles. The same holds for \(B_{t-2},B_{t-3},\ldots,B_{0}\), for which \(B_{0}\) corresponds to the DMST for the original graph \(G\). If the algorithm reports a failure at some point, no spanning tree rooted at \(r\) exists for the graph, since a failure is reported only when there is an isolated non-root connected component in \(G_{t+1}\). Note that in iteration \(t\) of step 2., \(H\) has one cycle for each of its connected components that does not contain the root node. Hence, the drawback of this algorithm is that we may apply up to \(\mathcal{O}(n)\) steps of shrinking cycles. This shortcoming is remedied by a more efficient method of selecting how to shrink nodes into super-vertices in (Lovasz, 1985), such that only \(\log n\) shrinking cycle steps take place. ### Lovasz' Shrinking Iterations We devote this subsection to discuss the shrinking step of (Lovasz, 1985) that will be repeated \(\log n\) times in place of step 2. of Edmonds' algorithm to obtain Lovasz' DMST algorithm. Lovasz' Shrinking Iteration LSI Input: A directed, weighted graph \(G=(V,E,W)\) and a root node \(r\in V\). Output: Either a new graph \(G^{*}\), or a success flag and a DMST \(H\) of \(G\). 1. If there is a non-root node of \(G\) with no incoming edges, report a failure. Otherwise, for each non-root node of \(G\), subtract the minimum incoming edge weight from all its incoming edges. Select exactly one incoming zero-weight edge for each non-root node to create a subgraph \(H\) of \(G\) with those edges. 2. Find all cycles of \(H\), and denote them \(H_{1},\dots,H_{C}\). If \(H\) has no cycles, abort the iteration and return (SUCCESS, H). For \(j=1,\dots,C\), find the set \(V_{j}\) of nodes that dipaths in \(H\) from \(H_{j}\) can reach. 3. Compute the All-Pairs-Shortest-Path distances in \(G\). 4. For each node \(v\in V\), denote \(d_{j}(v):=\min\{d(v,u):u\in H_{j}\}\). For each \(j=1,\dots,C\), set \(\beta_{j}:=\min\{d_{j}(v):v\in V(G)\setminus\mathbb{V}_{j}\}\) and \(U_{j}:=\{u\in V_{j}:d_{j}(u)\leq\beta_{j}\}\). 5. Create a minor \(G^{*}\) by contracting each \(U_{j}\) into a super-vertex \(U_{j}^{*}\), considering all other vertices of \(G\) as simple super-vertices \(V_{1}^{*},\dots,V_{k}^{*}\). For each vertex \(N^{*}\) of \(G^{*}\), let the edge weights in \(G^{*}\) be: \[W_{N^{*}U_{j}^{*}}^{*} =\min\{W_{vu}:v\in N^{*},u\in U_{j}^{*}\}-\beta_{j}+\min\{d_{j}(u ):u\in U_{j}^{*}\}\] for all \(j=1,\dots,C\), and \[W_{N^{*}V^{*}}^{*} =\min\{W_{vV^{*}}:v\in N^{*}\}\] for all the simple super-vertices \(V^{*}\) of \(G^{*}\). 6. Return \(G^{*}\). To summarize these iterations: The minimum-weight incoming edge of each node is selected. That weight is subtracted from the weights of every incoming edge to that node, and one of those edges with new weight \(0\) is selected for each node to create a subgraph \(H\). If \(H\) is a tree, we are done. Otherwise, we find all cycles of the resulting directed subgraph, then compute APSP and determine the \(V_{j},U_{j}\), and \(\beta_{j}\), which we use to define a new graph with some nodes of the original \(G\) contracted into super-vertices. The main result for the DMST problem in (Lovasz, 1985) is that replacing (a) and (b) of step 2. 
in the Edmonds DMST Algorithm, taking the new \(H\) obtained at each iteration to be \(H_{t+1}\) and the \(G^{*}\) to be \(G_{t+1}\), leads to no more than \(\lceil\log n\rceil\) such shrinking iterations needed before a success is reported. #### 6.2.1 Quantum Distributed Implementation Our goal is to implement the Lovasz iterations in the quantum distributed setting in \(\tilde{\mathcal{O}}(n^{1/4})\) rounds by making use of quantum APSP of SS4. In the distributed setting, processor nodes cannot directly be shrunk into super-vertices. As in (Fischer and Oshman, 2021), we reconcile this issue by representing the super-vertex contractions within the nodes through _soft contractions_. First, note that a convenient way to track what nodes we want to consider merging into a super-vertex is to keep a mapping \(sID:V\to S\), where \(S\) is a set of super-vertex IDs, which we can just take to be the IDs of the original nodes. We will refer to a pair of \((G,sID)\) as an _annotated graph_. An annotated graph naturally corresponds to some _minor_ of \(G\), namely, the minor obtained by contracting all vertices sharing a super-vertex ID into a super-vertex. **Definition 6.2** (Soft Contractions).: For an annotated graph \((G,sID)\), a set of active edges \(H\), and active component \(H_{i}\) with corresponding weight modifiers \(\beta_{i}\), and a subset \(A\subset S\) of super-vertices, the _soft contraction_ of \(H_{i}\) in G is the annotated graph \((G^{H_{i}},sID^{\prime})\) obtained by taking \(G^{H_{i}}=(V,E,W^{\prime})\) with * \(W^{\prime}_{uv}=0\) if \(sID(u)=sID(v)\) * \(W^{\prime}_{uv}=W_{uv}+dist_{G(A)}(v,C(H_{i}))-\beta_{i}\) if \(u\in V\setminus A\) and \(v\in A\) * \(W^{\prime}_{uv}=W_{uv}\) otherwise and updating the mapping \(sID\) to \(sID^{\prime}\) defined by \(sID^{\prime}(v)=sID(v),\forall v\notin A\), \(sID^{\prime}(v)=\min\{sID(u):u\in A\}\). #### 6.2.2 Quantum Distributed Lovasz' Iteration We provide here a quantum distributed implementation of Lovasz' iteration that we will form the core of our DMST algorithm. Quantum Distributed Lovasz' Iteration Qdlsi Input: A directed, weighted, graph \(G=(V,E,W)\) with annotations \(sID\) and a subgraph \(H\). Output: A new graph \(G^{*}\) with annotations \(sID^{\prime}\), or a success flag and a DMST \(H\) of \(G\). 1. Have all nodes learn all edges of \(H\), as well as the current super-vertices. 2. For each connected component \(H_{i}\subset H\), denote by \(C(H_{i})\) the cycle of \(H_{i}\). Let \(c(H_{i})\) be the node with maximal ID in \(C(H_{i})\), which each node can locally compute. 3. Run the quantum algorithm for APSP and routing tables described in SS4 on this graph, or report a failure if it fails. 4. For each \(i\), determine an edge \(v_{i}u_{i}\), \(v_{i}\notin H_{i},u_{i}\in H_{i}\) minimizing \(\beta_{i}:=W_{v_{i}u_{i}}+d_{G}(u_{i},c(H_{i}))\), and broadcast both to all nodes in \(H_{i}\). 5. Each node \(v_{i}\) in each \(H_{i}\) applies the following updates \(locally\): * Soft-contract \(H_{i}\) at level \(\beta_{i}\) to soft-contract all super-vertices with distance \(\beta_{i}\) to \(C(H_{i})\) into one super-vertex, with each contracted node updating its super-vertex ID to \(c(H_{i})\) * add edge \(v_{i}u_{i}\) to \(H\), effectively merging \(H_{i}\) with another active component of \(H\) We can follow exactly the steps of Lovasz's DMST algorithm, distributedly by replacing steps 2-5 of the LSI with this quantum-distributed version. 
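Step 5 of QDLSI applies the soft contraction of Definition 6.2 locally at every node. A minimal sketch of that update, written against the definition (the dictionary arguments are illustrative stand-ins for each node's local knowledge; note that QDLSI instead relabels contracted nodes with \(c(H_{i})\)), is the following:

```python
def soft_contract(W, sID, A, dist_in_A_to_cycle, beta_i):
    """Sketch of one soft contraction (Definition 6.2).

    W                  -- dict {(u, v): weight} of the current annotated graph
    sID                -- dict: node -> super-vertex ID
    A                  -- nodes whose super-vertices are being contracted
    dist_in_A_to_cycle -- dist_{G(A)}(v, C(H_i)) for every v in A
    beta_i             -- weight modifier of the active component H_i
    """
    new_W = {}
    for (u, v), w in W.items():
        if sID[u] == sID[v]:                 # same super-vertex: weight 0
            new_W[(u, v)] = 0
        elif u not in A and v in A:          # edge entering the contracted set
            new_W[(u, v)] = w + dist_in_A_to_cycle[v] - beta_i
        else:                                # all other edges unchanged
            new_W[(u, v)] = w
    new_sID = dict(sID)
    new_label = min(sID[u] for u in A)       # Definition 6.2 uses the minimum ID
    for v in A:
        new_sID[v] = new_label
    return new_W, new_sID
```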
The following ensues: **Lemma 6.3**.: If none of the APSP and routing table subroutines fail, within \(\lceil\log n\rceil\) iterations of the QDLSI, \(H\) is a single connected component. **Lemma 6.4**.: With probability \((1-\frac{1}{poly(n)})^{\log n}\), all the APSP and routing table subroutines in step 3 succeed. Lemmas 6.3 and 6.4 then together imply Theorem 3.2. Within \(\lceil\log n\rceil\) iterations, only one active component remains: the root component. This active component can then be expanded to a full DMST on \(G\) within \(\lceil\log n\rceil\) rounds, as detailed in (Fischer and Oshman, 2021, §7) or the Unpacking procedure in §8.3 of the appendix. All messages in the algorithm other than those for computing the APSP in QDLSI may be classical. We provide here the full algorithm for completeness: Quantum DMST Algorithm Input: An integer-weighted digraph and a root node \(r\). Output: A DMST for \(G\) rooted at \(r\). 1. Initialize a subgraph \(H\) with the same vertex set as \(G\) by subtracting for each node the minimum incoming edge weight from all its incoming edges, and selecting exactly one incoming zero-weight edge for each non-root node of \(G\). Set \(t=0,H_{0}=H\), and \(G_{0}=G\) with annotations \(sID_{0}\) taken to be the identity mapping. 2. WHILE \(H_{t}\) is not a single component: 1. Run QDLSI with inputs \(H_{t}\), \((G_{t},sID_{t})\) to obtain \(H_{t+1}\), \((G_{t+1},sID_{t+1})\) as outputs. Increment \(t\gets t+1\). 3. Let \(T_{t}:=H_{t}\). For \(k=t,\ldots,1\): For each super-vertex created in the \(k^{th}\) iteration of QDLSI, simultaneously run the Unpacking procedure with input tree \(T_{k}\) to obtain \(T_{k-1}\). 4. Return \(T_{0}\) as the distributed minimum spanning tree. ### Complexity In the QDLSI, all steps other than the APSP computation in step 3 of the quantum Lovasz iteration can be implemented within 2 rounds. In particular, to have all nodes know some tree on \(G\) for which each node knows its parent, every node can simply broadcast its parent edge and weight. Since this iteration is used up to \(\lceil\log(n)\rceil\) times and expanding the DMST at the end of the algorithm also takes logarithmically many rounds, we obtain a complexity dominated by the APSP computation of \(\tilde{\mathcal{O}}(n^{1/4})\), a better asymptotic rate than any known classical CONGEST-CLIQUE algorithm. However, beyond the \(\tilde{\mathcal{O}}\), the complexity is largely characterized by \(\log(n)\cdot f(n)\), with \(f(n)\) as in Eq. (4.4). In order to have \(\log(n)f(n)<n\) to improve upon the trivial strategy of having a single node solve the problem, we then need \(n>10^{21}\). Using the classical APSP from (Censor-Hillel et al., 2016) in place of the quantum APSP of §4 as done in (Fischer and Oshman, 2021) to attain the \(\tilde{\mathcal{O}}(n^{1/3})\) complexity in the cCCM, one would need \(\log(n)\cdot g(n)<n\) to beat the trivial strategy, with \(g\) as in Eq. (4.5), i.e. \(n>10^{14}\).
In particular, there exist many generalizations of the Steiner Tree problem, so these may be a natural starting point to attempt to generalize the results. A helpful overview of Steiner-type problems can be found in (Hauptmann and Karpinski, 2015). Regarding the DMST, it may be difficult to generalize a similar approach to closely related problems. Since the standard MST can be solved in a (relatively small) constant number of rounds in the classical CONGEST-CLIQUE, no significant quantum speedup is possible. Other interesting MST-type problems are the bounded-degree and minimum-degree spanning tree problems. However, even the bounded-degree decision problem on an unweighted graph, "does \(G\) have a spanning tree of degree at most \(k\)?", is NP-complete, unlike the DMST, so we suspect that other techniques would need to be employed. (Dinitz, Halldorsson, Izumi, and Newport, 2019) provides a classical distributed approximation algorithm for the problem. Additionally, we have traced many constants and \(\log\) factors throughout our description of the above algorithms, which, as shown, would need to be significantly improved for these and related algorithms to be practical. Hence, a natural avenue for future work is to pursue such practical improvements. Beyond the scope of the particular algorithms involved, we hope to help the community recognize the severity with which the practicality of algorithms is affected by logarithmic factors that may be obscured by \(\tilde{\mathcal{O}}\) notation, and thus encourage fellow researchers to present the full complexity of their algorithms beyond asymptotics. Particularly in a model like CONGEST-CLIQUE, where problems can always be solved trivially in \(n\) rounds, these logarithmic factors should clearly not be taken lightly. Further, a question of potential practical interest would be the following: which algorithms for the discussed problems are the most efficient with respect to the number of rounds needed in the CONGEST-CLIQUE in the regimes of \(n\) in which the discussed algorithms are impractical? ## Acknowledgements We are grateful for support from the NASA Ames Research Center, from the NASA SCaN program, and from DARPA under IAA 8839, Annex 130. PK and DB acknowledge support from the NASA Academic Mission Services (contract NNA16BD14C). The authors thank Ojas Parekh for helpful input and discussions regarding the arborescence problem, Shon Grabbe for ongoing useful discussions, and Filip Maciejewski for helpful feedback on the work.
2302.00364
The YODO algorithm: An efficient computational framework for sensitivity analysis in Bayesian networks
Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network, such as the probability of a variable taking a specific value. Various sensitivity measures have been defined to quantify such influence, most commonly some function of the quantity of interest's partial derivative with respect to the network's conditional probabilities. However, computing these measures in large networks with thousands of parameters can become computationally very expensive. We propose an algorithm combining automatic differentiation and exact inference to efficiently calculate the sensitivity measures in a single pass. It first marginalizes the whole network once, using e.g. variable elimination, and then backpropagates this operation to obtain the gradient with respect to all input parameters. Our method can be used for one-way and multi-way sensitivity analysis and the derivation of admissible regions. Simulation studies highlight the efficiency of our algorithm by scaling it to massive networks with up to 100'000 parameters and investigate the feasibility of generic multi-way analyses. Our routines are also showcased over two medium-sized Bayesian networks: the first modeling the country-risks of a humanitarian crisis, the second studying the relationship between the use of technology and the psychological effects of forced social isolation during the COVID-19 pandemic. An implementation of the methods using the popular machine learning library PyTorch is freely available.
Rafael Ballester-Ripoll, Manuele Leonelli
2023-02-01T10:47:31Z
http://arxiv.org/abs/2302.00364v1
# The YODO algorithm: An efficient computational framework for sensitivity analysis in Bayesian networks ###### Abstract Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network, such as the probability of a variable taking a specific value. Various sensitivity measures have been defined to quantify such influence, most commonly some function of the quantity of interest's partial derivative with respect to the network's conditional probabilities. However, computing these measures in large networks with thousands of parameters can become computationally very expensive. We propose an algorithm combining automatic differentiation and exact inference to efficiently calculate the sensitivity measures in a single pass. It first marginalizes the whole network once, using e.g. variable elimination, and then backpropagates this operation to obtain the gradient with respect to all input parameters. Our method can be used for one-way and multi-way sensitivity analysis and the derivation of admissible regions. Simulation studies highlight the efficiency of our algorithm by scaling it to massive networks with up to 100'000 parameters and investigate the feasibility of generic multi-way analyses. Our routines are also showcased over two medium-sized Bayesian networks: the first modeling the country-risks of a humanitarian crisis, the second studying the relationship between the use of technology and the psychological effects of forced social isolation during the COVID-19 pandemic. An implementation of the methods using the popular machine learning library PyTorch is freely available. Keywords: Automatic differentiation; Bayesian networks; COVID-19; PyTorch; Sensitivity analysis. ## 1 Introduction Probabilistic graphical models, and specifically Bayesian networks (BNs), are a class of models that are widely used for risk assessment of complex operational systems in a variety of domains. The main reason for their success is that they provide an efficient and intuitive framework to represent the joint probability of a vector of variables of interest using a simple graph. Their use to assess the reliability of engineering, medical and ecological systems, among many others, is becoming increasingly popular. Sensitivity analysis is a critical step for any applied real-world analysis to assess the importance of various risk factors and to evaluate the overall safety of the system under study (see e.g. Goerlandt and Islam, 2021; Makaba et al., 2021; Zio et al., 2022, for some recent examples). As noticed by Rohmer (2020), sensitivity analysis in BNs is usually _local_, in the sense that it measures the effect of a small number of parameter variations on output probabilities of interest, while other parameters are kept fixed. In the case of a single parameter variation, sensitivity analysis is usually referred to as _one-way_; otherwise, when more than one parameter is varied, it is called _multi-way_. Although recently there has been an increasing interest in proposing _global_ sensitivity methods for BNs measuring how different factors _jointly_ influence some function of the model's output (see e.g. Ballester-Ripoll and Leonelli, 2022; Li and Mahadevan, 2018), the focus of this paper still lies in local sensitivity methods.
Local sensitivity analysis in BNs can be broken down into two main steps. First, some parameters of the model are varied, and the effect of these variations on output probabilities of interest is investigated. For this purpose, a simple mathematical function, usually termed _sensitivity function_, describes an output probability of interest as a function of the BN parameters (Castillo et al., 1997; Coupe and van der Gaag, 2002). Furthermore, some specific properties of such a function can be computed, for instance, the _sensitivity value_ or the _vertex proximity_, which give an overview of how sensitive the probability of interest is to variations of the associated parameter (van der Gaag et al., 2007). Second, once parameter variations are identified, their effect is summarized by a distance or divergence measure between the original and the varied distributions underlying the BN, most commonly the Chan-Darwiche distance (Chan and Darwiche, 2005) or the well-known Kullback-Leibler divergence. As demonstrated by Kwisthout and van der Gaag (2008), the derivation of both the sensitivity function and its associated properties is computationally very demanding. In Ballester-Ripoll and Leonelli (2022), we introduced a novel, computationally highly-efficient method to compute all sensitivity measures of interest in one-way sensitivity analysis, which takes advantage of backpropagation and is easy to compute thanks to automatic differentiation. We now also demonstrate how the algorithm can be utilized for more generic multi-way sensitivity analyses and for deriving admissible regions (van der Gaag and Renooij, 2001). Simulation studies show the efficiency of the approach by processing massive networks in a few seconds and demonstrate when multi-way analyses are computationally feasible. Two practical applications from real-world datasets further showcase the insights sensitivity measures can provide and the efficiency of the implemented routines. We have open-sourced a Python implementation using the popular machine learning library PyTorch1, contributing to the recent effort of promoting sensitivity analysis (Douglas-Smith et al., 2020). ## 2 Bayesian networks and sensitivity analysis A BN is a probabilistic graphical model defining a factorization of the probability mass function (pmf) of a random vector using a directed acyclic graph (DAG) (Darwiche, 2009b; Pearl, 1988). More formally, let \([p]=\{1,\ldots,p\}\) and \(\mathbf{Y}=(Y_{i})_{i\in[p]}\) be a random vector of interest with sample space \(\mathbb{Y}=\times_{i\in[p]}\mathbb{Y}_{i}\). A BN defines the pmf \(P(\mathbf{Y}=\mathbf{y})\), for \(\mathbf{y}\in\mathbb{Y}\), as a product of simpler conditional pmfs as follows: \[P(\mathbf{Y}=\mathbf{y})=\prod_{i\in[p]}P(Y_{i}=y_{i}\mid\mathbf{Y}_{\Pi_{i}}=\mathbf{y}_{\Pi_ {i}}), \tag{1}\] where \(\mathbf{Y}_{\Pi_{i}}\) are the parents of \(Y_{i}\) in the DAG associated to the BN. The definition of the pmf over \(\mathbf{Y}\), which would require defining \(\#\mathbb{Y}-1\) probabilities, is thus simplified in terms of one-dimensional conditional pmfs. The coefficients of these functions are henceforth referred to as the parameters \(\mathbf{\theta}\) of the model. The DAG structure may be either expert-elicited or learned from data using structural learning algorithms, and the associated parameters \(\mathbf{\theta}\) can be either expert-elicited or learned using frequentist or Bayesian approaches. 
No matter the method used, we assume that a value for these parameters \(\mathbf{\theta}\) has been chosen, which we refer to as the _original value_ and denote as \(\mathbf{\theta}^{0}\). The DAG associated with a BN provides an intuitive overview of the relationships between variables of interest. It also provides a framework to assess whether any generic conditional independence statement holds for a specific subset of the variables via the so-called d-separation criterion (see e.g. Pearl, 1988). Furthermore, the DAG provides a framework for the efficient propagation of probabilities and evidence via algorithms that take advantage of the structure of the underlying DAG. ### One-way sensitivity analysis In practical applications, it is fundamental to extensively assess the implications of the chosen parameter values \(\mathbf{\theta}^{0}\) for the outputs of the model. In the context of BNs, this study is usually referred to as _sensitivity analysis_, which can be further used during the model-building process as showcased by Coupe et al. (2000). Let \(Y_{O}\) be an output variable of interest and \(\mathbf{Y}_{E}\) be _evidential_ variables, those that may be observed. The interest is then in studying how \(P(Y_{O}=y_{O}\mid\mathbf{Y}_{E}=\mathbf{y}_{E})\) varies when a parameter \(\theta_{i}\) is varied. In particular, \(P(Y_{O}=y_{O}\mid\mathbf{Y}_{E}=\mathbf{y}_{E})\) seen as a function of \(\theta_{i}\) is called the _sensitivity function_ and denoted as \(f(\theta_{i})\). ### Proportional covariation Notice that when an input \(\theta_{i}\) is varied from its original value \(\theta_{i}^{0}\), the parameters from the same conditional pmf need to _covary_ to respect the sum-to-one condition of probabilities. When variables are binary, this is automatic since one parameter must be equal to one minus the other. However, for variables taking more than two levels, this covariation can be done in several ways (Renooij, 2014). We henceforth assume that whenever a parameter is varied from its original value \(\theta_{i}^{0}\) to a new value \(\theta_{i}\), then every parameter \(\theta_{j}\) from the same conditional pmf is _proportionally covaried_ (Laskey, 1995) from its original value \(\theta_{j}^{0}\): \[\theta_{j}(\theta_{i})=\frac{1-\theta_{i}}{1-\theta_{i}^{0}}\theta_{j}^{0}. \tag{2}\] Proportional covariation has been studied extensively, and its choice is motivated by a wide array of theoretical properties (Chan and Darwiche, 2005; Leonelli et al., 2017; Leonelli and Riccomagno, 2022; Renooij, 2014). Under the assumption of proportional covariation, Castillo et al. (1997) and Coupe and van der Gaag (2002) demonstrated that the sensitivity function is the ratio of two linear functions: \[f(\theta_{i})=\frac{c_{0}+c_{i}\theta_{i}}{d_{0}+d_{i}\theta_{i}}, \tag{3}\] where \(c_{0},c_{i},d_{0},d_{i}\in\mathbb{R}_{+}\). van der Gaag et al. (2007) noticed that the above expression coincides with the fragment of a rectangular hyperbola, which can be generally written as \[f(\theta_{i})=\frac{r}{\theta_{i}-s}+t, \tag{4}\] where \[s=-\frac{d_{0}}{d_{i}},\ \ t=\frac{c_{i}}{d_{i}},\ \ r=\frac{c_{0}}{d_{i}}+st. \tag{5}\] #### 2.2.1 Sensitivity values The _sensitivity value_ describes the effect of infinitesimally small shifts in the parameter's original value on the probability of interest and is defined as the absolute value of the first derivative of the sensitivity function at the original value of the parameter, i.e. \(|f^{\prime}(\theta_{i}^{0})|\).
This can be found by simply differentiating the sensitivity function as \[|f^{{}^{\prime}}(\theta_{i}^{0})|=\frac{|c_{i}d_{0}-c_{0}d_{i}|}{(d_{i}\theta_ {i}^{0}+d_{0})^{2}}. \tag{6}\] The higher the sensitivity value, the more sensitive the output probability to small changes in the parameter's original value. As a rule of thumb, parameters having a sensitivity value larger than one may require further investigation. Notice that when \(\mathbf{Y}_{E}\) is empty, i.e. the output probability of interest is marginal, the sensitivity function is linear in \(\theta_{i}\). The sensitivity value is the same regardless of the original \(\theta_{i}^{0}\). Therefore, in this case, the absolute value of the gradient is sufficient to quantify the effect of a parameter on an output probability of interest. #### 2.2.2 Vertex proximity van der Gaag et al. (2007) further noticed that parameters for which the sensitivity value is small may still be such that the conditional output probability of interest is very sensitive to their variations. This happens when the original parameter value is close to the _vertex_ of the sensitivity function, defined as the point \(\theta_{i}^{v}\) at which the sensitivity value is equal to one, i.e. \[|f^{{}^{\prime}}(\theta_{i}^{v})|=1. \tag{7}\] The vertex can be derived from the equation of the sensitivity function as \[\theta_{i}^{v}=\left\{\begin{array}{ll}s+\sqrt{|r|},&\mbox{if }s<0,\\ s-\sqrt{|r|},&\mbox{if }s>0.\end{array}\right. \tag{8}\] Notice that the case \(s=0\) is not contemplated since it would coincide with a linear sensitivity function, not a hyperbolic one. _Vertex proximity_ is defined as the absolute difference \(|\theta_{i}^{0}-\theta_{i}^{v}|\). The smaller the vertex proximity, the more sensitive the output probabilities may be to parameter variations, even when the sensitivity value is small. #### 2.2.3 Other metrics Given the coefficients \(c_{0},c_{i},d_{0},d_{i}\) of Equation (3), it is straightforward to derive any property of the sensitivity function besides the sensitivity value and the vertex proximity. Here we propose the use of two additional metrics. The first is the absolute value of the second derivative of the sensitivity function at the original parameter value, which can be easily computed as: \[|f^{\prime\prime}(\theta_{i}^{0})|=\frac{2d_{i}\left|c_{i}d_{0}-c_{0}d_{i} \right|}{\left(d_{i}\theta_{i}^{0}+d_{0}\right)^{3}}. \tag{9}\] Similarly to the sensitivity value, high values of the second derivative at \(\theta_{i}^{0}\) indicate parameters that could highly impact the probability of interest. The second measure is the maximum of the first derivative of the sensitivity function over the interval \([0,1]\) in absolute value, which we find easily by noting that the denominator of Equation (6) is a parabola: \[\max_{\theta_{i}\in[0,1]}|f^{\prime}(\theta_{i})|=\begin{cases}\infty&\mbox{ if }-d_{0}/d_{i}\in[0,1]\\ \max\{|c_{i}d_{0}-c_{0}d_{i}|/d_{0}^{2},|c_{i}d_{0}-c_{0}d_{i}|/(d_{i}+d_{0})^ {2}\}&\mbox{ otherwise.}\end{cases} \tag{10}\] Again high values indicate parameters whose variations can lead to a significant change in the output probability of interest. ### Multi-way sensitivity analysis In many practical applications, there is interest in assessing the effect of simultaneous variations of multiple parameters on the output of interest. This is called a _multi-way sensitivity analysis_. 
Although there have been some attempts to study the theoretical properties and computational efficiency of these more generic analyses (see e.g. Bolt and Renooij, 2014; Chan and Darwiche, 2004; Kjaerulff and van der Gaag, 2000; Leonelli et al., 2017; Leonelli and Riccomagno, 2022), in practice they are not as common as one-way analyses. #### 2.3.1 General formulation Suppose now that \(n\) parameters \(\boldsymbol{\theta}_{n}=(\theta_{1},\ldots,\theta_{n})\) are simultaneously varied. By default, these parameters are taken from different conditional pmfs so that they are independent of each other (van der Gaag et al., 2007). In the binary case, this is natural since only one parameter per pmf can be varied, as the other is functionally related. The other parameters from the conditional pmfs containing \(\theta_{1},\ldots,\theta_{n}\) are proportionally covaried, as for the one-way analysis (see Leonelli and Riccomagno, 2022, for a formal discussion). The effect of varying the parameters \(\boldsymbol{\theta}_{n}\) on a probability of interest \(P(Y_{O}=y_{O}\mid\boldsymbol{Y}_{E}=\boldsymbol{y}_{E})\) is captured by the n-way sensitivity function, which is equal to \[f(\boldsymbol{\theta}_{n})=\frac{\sum_{K\in\mathcal{P}([n])}c_{K}\prod_{i\in K}\theta_{i}}{\sum_{K\in\mathcal{P}([n])}d_{K}\prod_{i\in K}\theta_{i}}, \tag{11}\] where \(\mathcal{P}\) denotes the power set and \(c_{K},d_{K}\in\mathbb{R}\), \(K\in\mathcal{P}([n])\), are constants computed from the non-varied parameters. For instance, a 2-way sensitivity function can be written as: \[f(\theta_{1},\theta_{2})=\frac{c_{0}+c_{1}\theta_{1}+c_{2}\theta_{2}+c_{12}\theta_{1}\theta_{2}}{d_{0}+d_{1}\theta_{1}+d_{2}\theta_{2}+d_{12}\theta_{1}\theta_{2}}. \tag{12}\] An n-way sensitivity function, in general, requires the computation of \(2^{n+1}\) constants and is thus computationally expensive. Furthermore, the number of combinations of parameters for which the sensitivity function has to be constructed increases: see Section 3.2 for a discussion. #### 2.3.2 Maximum n-way sensitivity values While for one-way sensitivity analysis one can uniquely talk about the derivative of the sensitivity function, for functions of several parameters there are multiple directions along which the derivative could be computed, as noted by Bolt and Renooij (2014), and hence the notion of _directional derivative_. However, basic calculus tells us that the maximum directional derivative of a function \(f\) at a point \(\boldsymbol{\theta}_{n}\) equals the length of the gradient vector at \(\boldsymbol{\theta}_{n}\), i.e. \(|\nabla f(\boldsymbol{\theta}_{n})|\). This observation led to the definition of the sensitivity value for an n-way sensitivity function as the maximum out of all possible directional derivatives (Bolt and Renooij, 2014). For a vector of parameters \(\mathbf{\theta}_{n}\) with original values \(\mathbf{\theta}_{n}^{0}\), the _maximum n-way sensitivity value_ is defined as \[sv_{\max}^{\mathbf{\theta}_{n}}=|\nabla f(\mathbf{\theta}_{n}^{0})|, \tag{13}\] where \(f\) is the associated n-way sensitivity function. By definition, the maximum n-way sensitivity value would first require the derivation of the n-way sensitivity function and, subsequently, the computation of its gradient. As noticed already, this direct approach would be computationally too expensive. However, Bolt and Renooij (2014) demonstrated that \(sv_{\max}^{\mathbf{\theta}_{n}}\) can be easily computed from the sensitivity values of one-way sensitivity functions.
Let \(c_{0}^{i},c_{i},d_{0}^{i},d_{i}\) be the coefficients of the one-way sensitivity function for the variation of the parameter \(\theta_{i}\) in \(\mathbf{\theta}_{n}\). Then: \[sv_{\max}^{\mathbf{\theta}_{n}}=\frac{1}{P(\mathbf{Y}_{E}=\mathbf{y}_{E})^{2}}\sqrt{\sum_{ i\in[n]}(c_{i}d_{0}^{i}-c_{0}^{i}d_{i})^{2}}. \tag{14}\] Therefore if an efficient method for computing the coefficients of one-way sensitivity functions exists, then maximum n-way sensitivity values can be equally efficiently derived. ### Admissible regions In many applied situations, the object of interest is not a probability per se, but rather the most likely value of a variable, possibly conditional on a specific subset of evidence. This is the case for classification problems where a Bayes classifier is used: an unlabeled observation exhibiting a specific evidence pattern is classified according to the most likely value. BNs designed explicitly for this task are usually called Bayesian network classifiers (Bielza and Larranaga, 2014; Friedman et al., 1997). Although sensitivity methods for this type of classification problem have been discussed (Bolt and van der Gaag, 2017), sensitivity values and related measures are often not particularly useful. van der Gaag and Renooij (2001) demonstrated that parameters with a small sensitivity value might induce a change in the classification rule, or equally in the most likely value, for just a slight deviation from its original value. For this reason, they introduced the concept of _admissible region_, which captures the extent to which a parameter can be varied without inducing a change in the most likely value for the variable of interest. For ease of notation, we consider here a variable of interest \(Y_{O}\) taking two possible levels \(y_{O}\) and \(y^{\prime}_{O}\) (thus, we consider the most common binary classification problem). Consider also possible evidence \(\mathbf{Y}_{E}=\mathbf{y}_{E}\), a perturbed parameter \(\theta_{i}\) and suppose that \(P(Y_{O}=y_{O}|\mathbf{Y}_{E}=\mathbf{y}_{E})>P(Y_{O}=y^{\prime}_{O}|\mathbf{Y}_{E}=\mathbf{y} _{E})\), without loss of generality. The admissible region \(R_{i}\) is formally defined as the interval of values for \(\theta_{i}\) \[(\max\{\theta_{i}^{0}-r,0\},\min\{\theta_{i}^{0}+s,1\}),\qquad r,s,\in\mathbb{ R}, \tag{15}\] for which \(P(Y_{O}=y_{O}|\mathbf{Y}_{E}=\mathbf{y}_{E})>P(Y_{O}=y^{\prime}_{O}|\mathbf{Y}_{E}=\mathbf{y}_{E})\). The wider the interval \(R_{i}\), the less influential the parameter is for the most likely value. van der Gaag and Renooij (2001) and van der Gaag et al. (2007) already demonstrated that such regions could be computed from the one-way sensitivity functions by identifying the points at which the sensitivity functions intersect. However, they did not explicitly write the admissible regions as a function of the sensitivity functions' coefficients to our knowledge. Let \(c_{0},c_{i},d_{0},d_{i}\) be the coefficients of the sensitivity function for the event \(Y_{O}=y_{O}|\mathbf{Y}_{E}=\mathbf{y}_{E}\). It follows that the sensitivity function for \(Y_{O}=y^{\prime}_{O}|\mathbf{Y}_{E}=\mathbf{y}_{E}\) must be equal to \[\frac{(d_{0}-c_{0})+(d_{i}-c_{i})\theta_{i}}{d_{0}+d_{i}\theta_{i}}. 
\tag{16}\] By equating the two sensitivity functions, we find that \[R_{i}=\left\{\begin{array}{ll}\left(0,\min\{\frac{d_{0}-2c_{0}}{2c_{i}-d_{i }},1\}\right)&\mbox{if }\theta_{i}^{0}\leq\frac{d_{0}-2c_{0}}{2c_{i}-d_{i}}\\ \left(\max\{0,\frac{d_{0}-2c_{0}}{2c_{i}-d_{i}}\},1\right)&\mbox{otherwise} \end{array}\right. \tag{17}\] In the case of \(E=\emptyset\), i.e. no evidence, the expression for the admissible regions simplifies to: \[R_{i}=\left\{\begin{array}{ll}\left(0,\min\{\frac{1-2c_{0}}{2c_{i}},1\} \right)&\mbox{if }\theta_{i}^{0}\leq\frac{1-2c_{0}}{2c_{i}}\\ \left(\max\{0,\frac{1-2c_{0}}{2c_{i}}\},1\right)&\mbox{otherwise}\end{array}\right. \tag{18}\] Therefore, given an efficient method to compute one-way sensitivity functions, admissible regions for all individual parameters can be equally efficiently derived. ## 3 The YODO method The YODO (You Only Derive Once) method was first introduced in Ballester-Ripoll and Leonelli (2022b) to compute the one-way sensitivity measures discussed in Sections 2.2.1-2.2.3. We first review it and then discuss its use in multi-way sensitivity analysis. ### YODO for one-way sensitivity analysis #### 3.1.1 First case: Marginal probability as a function of interest Suppose \(f(\theta_{i})=P(Y_{O}=y_{O})=c_{0}+c_{i}\theta_{i}\) assuming proportional covariation as \(\theta_{i}\) varies. Let \(\theta_{j_{1}},\ldots,\theta_{j_{n}}\) be the other parameters of the same conditional pmf as \(\theta_{i}\), i.e. they are all bound by the sum-to-one constraint \(\theta_{i}+\theta_{j_{1}}+\cdots+\theta_{j_{n}}=1\). First, we rewrite \(f\) as \[f(\theta_{i})=g(\theta_{i},\theta_{j_{1}}(\theta_{i}),\ldots,\theta_{j_{n}}( \theta_{i})) \tag{19}\] and we show how to obtain \(f^{\prime}(\theta_{i})\) provided that we can compute the gradient \(\nabla g\) with respect to symbols \(\theta_{i},\theta_{j_{1}},\ldots,\theta_{j_{n}}\) (see Section 3.1.3 for details on the latter). By the generalized chain rule, it holds that \[f^{\prime}(\theta_{i})=\frac{\partial g}{\partial\theta_{i}}\cdot 1+\frac{ \partial g}{\partial\theta_{j_{1}}}\cdot\frac{d\theta_{j_{1}}}{d\theta_{i}}+ \cdots+\frac{\partial g}{\partial\theta_{j_{n}}}\cdot\frac{d\theta_{j_{n}}}{d \theta_{i}}. \tag{20}\] By deriving Equation (2), we have that for all \(1\leq m\leq n\): \[\frac{d\theta_{j_{m}}}{d\theta_{i}}=\frac{-\theta_{j_{m}}^{0}}{1-\theta_{i}^{0}} \tag{21}\] and, therefore, \[f^{\prime}(\theta_{i})=\frac{\partial g}{\partial\theta_{i}}-\frac{(\partial g /\partial\theta_{j_{1}})\cdot\theta_{j_{1}}^{0}+\cdots+(\partial g/\partial \theta_{j_{n}})\cdot\theta_{j_{n}}^{0}}{1-\theta_{i}^{0}}. \tag{22}\] Last, since \(f(\theta_{i})=P(\mathbf{Y}_{O}=\mathbf{y}_{O})=c_{0}+c_{i}\theta_{i}\), we easily find the parameters \(c_{0},c_{i}\): \[\begin{cases}c_{i}=f^{\prime}(\theta_{i}^{0})\\ c_{0}=P(\mathbf{Y}_{O}=\mathbf{y}_{O})-c_{i}\theta_{i}^{0}.\end{cases} \tag{23}\] #### 3.1.2 Second case: Conditional probability as a function of interest When \(f(\theta_{i})=P(Y_{O}=y_{O}\mid\mathbf{Y}_{E}=\mathbf{y}_{E})=P(Y_{O}=y_{O},\mathbf{Y}_{E} =\mathbf{y}_{E})/P(\mathbf{Y}_{E}=\mathbf{y}_{E})\), we simply repeat the procedure from Sec. 3.1.1 twice: 1. We first apply it to \(P(Y_{O}=y_{O},\mathbf{Y}_{E}=\mathbf{y}_{E})\) to obtain \(c_{0}\) and \(c_{i}\); 2. we then apply it to \(P(\mathbf{Y}_{E}=\mathbf{y}_{E})\) to obtain \(d_{0}\) and \(d_{i}\). 
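Putting the two cases together, once the raw partial derivatives \(\partial g/\partial\theta\) are available, the coefficients and the metrics of Sections 2.2.1-2.2.2 follow from a few arithmetic operations. The sketch below is our own illustration of Eqs. (22), (23), (5), (6) and (8); the function and argument names are not taken from the released package.

```python
import math

def f_prime(dg_dtheta_i, dg_dtheta_js, theta_i0, theta_js0):
    """Eq. (22): derivative at theta_i^0 under proportional covariation.
    dg_dtheta_js / theta_js0 refer to the co-varied parameters of the same pmf."""
    return dg_dtheta_i - sum(g * t for g, t in zip(dg_dtheta_js, theta_js0)) / (1 - theta_i0)

def linear_coefficients(fprime, theta_i0, value_at_theta0):
    """Eq. (23): recover (c_0, c_i) of c_0 + c_i * theta_i from f'(theta_i^0)
    and the probability evaluated at the original parameter values."""
    c_i = fprime
    c_0 = value_at_theta0 - c_i * theta_i0
    return c_0, c_i

def one_way_metrics(c0, ci, d0, di, theta_i0):
    """Sensitivity value (Eq. 6), vertex (Eq. 8) and vertex proximity for
    f(theta_i) = (c0 + ci*theta_i) / (d0 + di*theta_i), assuming di != 0."""
    sens_value = abs(ci * d0 - c0 * di) / (di * theta_i0 + d0) ** 2
    s, t = -d0 / di, ci / di            # Eq. (5)
    r = c0 / di + s * t
    vertex = s + math.sqrt(abs(r)) if s < 0 else s - math.sqrt(abs(r))
    return sens_value, vertex, abs(theta_i0 - vertex)
```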
#### 3.1.3 Computing the gradient \(\nabla g\) Let \(\mathbf{Y}_{K}=\mathbf{y}_{K}\) be a subset of the network variables taking some evidence values (this could be \(K=O\) or \(K=O\cup E\); hence we cover the two cases above). We start by moralizing the BN into a Markov random field (MRF) \(\mathcal{M}\). This marries all variable parents together and, for each conditional probability table (now called _potential_), drops the sum-to-one constraint; see e.g. (Darwiche, 2009a) for more details. Next, we impose the evidence \(\mathbf{Y}_{K}=\mathbf{y}_{K}\) by defining \(\mathcal{M}^{\mathbf{Y}_{K}=\mathbf{y}_{K}}\) as a new MRF that results from substituting each potential \(\Phi_{i_{1},\dots,i_{M}}(x_{i_{1}},\dots,x_{i_{M}})\) by a new potential \(\widehat{\Phi}_{i_{1},\dots,i_{M}}\) defined as follows: \[\widehat{\Phi}_{i_{1},\dots,i_{M}}(Y_{i_{1}}=x_{i_{1}},\dots,Y_{i_{M}}=x_{i_{M}})=\begin{cases}\Phi_{i_{1},\dots,i_{M}}(x_{i_{1}},\dots,x_{i_{M}})&\text{if }x_{i_{m}}=y_{i_{m}}\text{ for every }i_{m}\in K,\\ 0&\text{otherwise.}\end{cases}\] In other words, we copy the original potential but zero-out all entries that do not honor the assignment of values \(\mathbf{Y}_{K}=\mathbf{y}_{K}\). See Table 1 for an example using a bivariate potential. Intuitively, the modified MRF \(\mathcal{M}^{\mathbf{Y}_{K}=\mathbf{y}_{K}}\) represents the unnormalized probability for all variable assignments that are compatible with \(\mathbf{Y}_{K}=\mathbf{y}_{K}\). In particular, if \(\mathcal{M}_{\mathbf{Y}_{K}}\) denotes the marginalization of a network \(\mathcal{M}\) over all variables in \(\mathbf{Y}_{K}\), we have that \((\mathcal{M}^{\mathbf{Y}_{K}=\mathbf{y}_{K}})_{\mathbf{Y}}=P(\mathbf{Y}_{K}=\mathbf{y}_{K})\). In other words, computing \(g\) reduces to marginalizing our MRF. In this paper, we marginalize it exactly using the variable elimination (VE) algorithm (see e.g. Darwiche, 2009a). This method is differentiable w.r.t. all parameters \(\mathbf{\theta}\) since VE only relies on variable summation and factor multiplication. Any other differentiable inference algorithm could be used as well (for instance, the junction tree algorithm as in Kjaerulff and van der Gaag, 2000). This step, evaluating the function \(g\), is known as the _forward pass_ in the neural network literature. Next, we backpropagate the previous operation (a step known as the _backward pass_) to build the gradient \(\nabla g\). Crucially, note that backpropagation yields \(\partial g/\partial\theta\) for every parameter \(\theta\in\mathbf{\theta}\) of the network at once, not just an individual \(\theta_{i}\). Last, we obtain parameters \(c_{0},c_{i},d_{0},d_{i}\) as detailed before, and use them to compute the metrics of Sections 2.2.1-2.2.3 for each \(\theta_{i}\).
For example, symbolically deriving the gradient of \(g\) would be cumbersome and depend on the target network topology and definition of the probability of interest (Darwiche, 2003). Automatic differentiation avoids this by evaluating the gradient numerically using the chain rule. Furthermore, finding the gradient using finite differences would require evaluating \(g\) twice per parameter \(\theta_{i}\). In contrast, automatic differentiation only requires a forward and a backward pass to find the entire gradient, which in our experiments takes roughly the time of just two marginalization operations (see below).

Table 1: Left: example potential of an MRF \(\mathcal{M}\) for variables \(Y_{1}\) and \(Y_{2}\), each with three levels \(\{1,2,3\}\). Right: corresponding potential for \(\mathcal{M}^{Y_{2}=3}\).

#### 3.1.4 Additional one-way information

Although YODO is specifically designed to compute the coefficients of the one-way sensitivity function of Equation (3), it further provides all the information to answer additional sensitivity questions:

* It provides the admissible regions for every parameter in \(\boldsymbol{\theta}\) concerning the event of interest \(Y_{O}=y_{O}|\boldsymbol{Y}_{E}=\boldsymbol{y}_{E}\), since they formally only depend on the coefficients \(c_{0},c_{i},d_{0},d_{i}\) as shown in Equations (17) and (18).
* It can quickly identify the parameters that do not affect the output probability of interest, namely those outside the so-called _parameter sensitivity set_ (Coupe and van der Gaag, 2002), which consists of the parameters \(\theta_{i}\) for which \(c_{i}\) and/or \(d_{i}\) are non-zero.
* It identifies whether a parameter change leads to a monotonically increasing or decreasing sensitivity function, as already addressed in Bolt and Renooij (2017). Again this can be straightforwardly derived by checking the sign of \(c_{i}d_{0}-c_{0}d_{i}\): see Equation (6).

### YODO for multi-way sensitivity analysis

Although there would be no difficulty in conceptually considering simultaneous variations of multiple parameters, we restrict our attention to 2-way sensitivity analyses where only pairs of parameters are varied. This is because: (i) sensitivity functions cannot be visualized in higher dimensions; (ii) the number of groups of parameters grows exponentially; (iii) most critically, the associated measures are challenging to interpret, similar to higher-order interactions in standard statistical models (see e.g. Hayes et al., 2012). The 2-way version of the sensitivity function considered before would entail computing the unknowns \(c_{12}\) and \(d_{12}\) from Equation (12). This can be achieved by computing the Hessian, rather than the gradient, in the previous calculations, which is supported in most modern autodifferentiation packages. However, the sheer size of the Hessian (up to \(10^{10}\) entries in the networks considered in Sec. 4.1) would make the interpretation of such indices a challenge of its own. Therefore, we advocate that the maximum n-way sensitivity value is the most valuable and versatile tool for multi-way sensitivity analysis. From its definition in Equation (13), it is clear that it can be instantaneously computed for a specific combination of parameters \(\boldsymbol{\theta}_{n}\) once the YODO algorithm has been run. Still, even when focusing on \(n=2\), the possible \(\boldsymbol{\theta}_{n}\) can become overwhelmingly large for medium-sized BNs.
To address this, we introduce an algorithm to obtain the top \(K\)\(sv_{\max}\) pairs efficiently by noting that parameters \(\boldsymbol{\theta}\) contribute to Equation (14) independently from each other. We use a priority queue and proceed in a dynamic programming fashion, whereby we start with a pool \(\mathcal{P}\) of \(K\) best candidates and keep track of \(\max_{j}sv_{\max}^{\theta_{i},\theta_{j}}\) for all \(i\in\mathcal{P}\). The top \(K\) pairs are guaranteed to be found after \(K\) steps. The algorithm relies on sorting \(n\) elements and on \(K\) insertions and deletions on the queue and runs in \(O(n\log n+K^{2}\log K)\) operations. See Algorithm 1 for all details. ``` 1:// Gather contributions to Eq. 14 from every BN parameter \(\theta_{i}\) 2:\(v\leftarrow\) empty vector 3:for\(i\gets 1\) to \(n\)do 4:\(v_{i}\leftarrow(c_{i}d_{0}^{i}-c_{0}^{i}d_{i})^{2}\) 5:endfor 6:\(v\leftarrow\) sortDescending(\(v\)) 7: 8:// Populate the queue with \(K\) initial candidates 9:\(q\leftarrow\) empty priority queue 10:for\(i\gets 1\) to \(K\)do 11:\(q.\text{put}(\frac{1}{P(\mathbf{Y}_{E}=y_{E})^{2}}\sqrt{v_{i}+v_{i+1}},i,i+1)\) // First element acts as queue's key 12:endfor 13: 14:// Read the queue's largest \(K\) keys while updating it 15:\(w\leftarrow\) empty vector 16:for\(k\gets 1\) to \(K\)do 17:\((v,i,j)\gets q.\text{get}()\) 18:\(w_{k}\gets v\) 19:if\(j<n\)then 20:\(//\) Insert next pair candidate 21:\(q.\text{put}(\frac{1}{P(\mathbf{Y}_{E}=y_{E})^{2}}\sqrt{v_{i}+v_{j+1}},i,j+1)\) 22:endif 23:endfor 24:return\(w\) ``` **Algorithm 1**Algorithm to find the top \(K\) maximum 2-way sensitivity values ### Implementation In order to perform variable elimination efficiently, we note that the problem of graphical model marginalization is equivalent to that of tensor network contraction (Robeva and Seigal, 2018), and use the library _opt_einsum_(Smith and Gray, 2018) which offers optimized heuristics for the latter. As backend, we use the state-of-the-art machine learning library _PyTorch_(Paszke et al., 2019), version 1.13.1, to do all operations between tensors and then perform backpropagation on them. We use _pgmpy_(Ankan and Panda, 2015) for reading and moralizing BNs. ## 4 Results We first study the method's scalability by testing it on large networks with hundreds of nodes and arcs and up to \(10^{5}\) parameters; we then overview the insights revealed by our method when applied to two Bayesian networks. All experiments were run on a 4-core i5-6600 3.3GHz Intel workstation with 16GB RAM. ### Simulation study First, we run our method over the 10 Bayesian networks considered in Scutari et al. (2019). As a baseline, we use the numerical estimation of each sensitivity value via finite differences, whereby we slightly perturb each parameter \(\theta_{i}\) and measure the impact on \(f\). As a probability of interest, we set \(P(A=a|B=b)\), where \(A,B,a,b\) were two variables, and two levels picked randomly, respectively. Each timing is the average of three independent runs. Results are reported in Table 2, which shows that YODO outperforms the baseline by several orders of magnitude and that computing the most relevant 2-way sensitivity values takes in the order of 2s at most. ### Risk assessment for humanitarian crises and disasters We next extend the analysis of Ballester-Ripoll and Leonelli (2022b), which only focused on one-way indices, to assess the country-level risk associated with humanitarian crises and disasters. 
The data was collected from INFORM (INFORM, 2022) and consists of 20 drivers of disaster risk covering natural, human, socio-economic, institutional, and infrastructure factors that influence the country-level risk of a disaster, together with a final country risk index which summarizes how exposed a country is to the possibility of a humanitarian disaster. Table 3 reports an overview of the twenty drivers considered, which cover three main risk dimensions: Hazard and exposure (natural/human); Vulnerability (Socio-economic/Vulnerable groups); Lack of coping capacity (institutional/infrastructure). All \begin{table} \begin{tabular}{l r r r r r r} \hline \hline & \(\#\)nodes & \(\#\)arcs & \(\#\)parameters & Treewidth & Time (fin. diff.) & Time (autodiff.) & Time (svmax) \\ Network & & & & & & & \\ \hline child & 20 & 30 & 344 & 3 & 5.901379 & 0.026062 & 0.011229 \\ water & 32 & 123 & 13484 & 10 & 246.183577 & 0.057886 & 0.242638 \\ alarm & 37 & 65 & 752 & 4 & 12.021088 & 0.039274 & 0.019980 \\ hailfinder & 56 & 99 & 3741 & 4 & 58.217657 & 0.062527 & 0.092034 \\ hepar2 & 70 & 158 & 2139 & 6 & 100.047695 & 0.089120 & 0.052889 \\ win95pts & 76 & 225 & 1148 & 8 & 38.573179 & 0.092268 & 0.031935 \\ pathfinder & 109 & 208 & 97851 & 6 & 9254.500155 & 0.182588 & 1.848624 \\ munin1 & 186 & 354 & 19226 & 11 & 148307.45340 & 16.085417 & 1.194862 \\ andes & 223 & 626 & 2314 & 17 & 238.634045 & 0.311464 & 0.070569 \\ pigs & 441 & 806 & 8427 & 10 & 1544.150893 & 0.568345 & 0.213123 \\ \hline \hline \end{tabular} \end{table} Table 2: Our method was applied to 10 Bayesian networks, here sorted by the number of nodes. All times are in seconds. The times for the baseline (third-to-last column) were estimated as the total number of parameters in the network and the time needed to estimate one sensitivity value numerically. Treewidths were found with the _NetworkX_ graph library Hagberg et al. (2008). The last column reports the time needed to find the top 20 \(sv_{\text{max}}\) pairs based on existing YODO gradients. variables take values between zero and ten. Using the equal-length method, they have been discretized into three categories (low/0, medium/1, high/2). The dataset comprises 190 countries. 
\begin{table} \begin{tabular}{l c c c} \hline \hline Variable & Abbreviation & Risk Dimension & Category \\ \hline **Earthquake** & EARTIQUAKE & Hazard and Exposure & Natural \\ **Tsunami** & TSUNAMI & Hazard and Exposure & Natural \\ **Flood** & FLOOD & Hazard and Exposure & Natural \\ **Tropical Cyclone** & TROP\_CYC & Hazard and Exposure & Natural \\ **Drought** & DRUGHT & Hazard and Exposure & Natural \\ **Epidemic** & EPIDEMIC & Hazard and Exposure & Natural \\ **Projected Conflict Risk** & PCR & Hazard and Exposure & Human \\ **Current Highly Violent Conflict Intensity** & CHVCI & Hazard and Exposure & Human \\ **Development and Deprivation** & D\_AND\_D & Vulnerability & Socio-E Economic \\ **Economic Dependency** & ECON\_DEP & Vulnerability & Socio-E Economic \\ **Unprotected People** & UNP\_PEOPLE & Vulnerability & Vulnerable Groups \\ **Other Vulnerable Groups** & OTHER\_VULN\_GROUPS & Vulnerability & Vulnerable Groups \\ **Children US** & CHILDREN\_US & Vulnerability & Vulnerable Groups \\ **Food Security** & FOOD\_ECC & Vulnerability & Vulnerable Groups \\ **Recent Shocks** & RECENT\_SHOCS & Vulnerability & Vulnerable Groups \\ **Health Conditions** & HEALTH\_COND & Vulnerability & Vulnerable Groups \\ **Governance** & GOVERRANANCE & Lack of Coping Capacity & Institutional \\ **Communication** & COMMUNICATION & Lack of Coping Capacity & Infrastructure \\ **Physical Infrastructure** & PHYS\_INFRA & Lack of Coping Capacity & Infrastructure \\ **Access to Health System** & ACCESS\_TO\_HEALTH & Lack of Coping Capacity & Infrastructure \\ \hline \hline \end{tabular} \end{table} Table 3: Variables considered for the humanitarian network from the INFORM (2022) dataset. Figure 1: BN learned over the INFORM (2022) dataset for country-level disaster risk. Similar to Qazi and Simsekler (2021), a BN is learned using the hc function of the bnlearn package and is reported in Figure 1. A complete interpretation of the learned DAG is beyond the scope of this paper. However, it can be noticed that most risk factors are independent of the overall country-risk given the development and deprivation index (D AND D). As an illustration of the YODO method, we compute here all sensitivity measures for the conditional probability of a high risk of disaster (RISK = 2) conditional on a high risk of flooding (FLOOD = 2). Computing all metrics for all 183 network parameters with our method took only 0.055 seconds. The results are reported in Table 4 for the 20 most influential parameters according to the sensitivity value. It can be noticed that the most influential parameters come from the conditional distributions of the overall risk given the development and deprivation index (D AND D), as well as from the conditional distribution of the flooding index given a projected conflict risk index (PCR) equal to low. As an additional illustration, Figure 2 reports the sensitivity value of the parameters for the output conditional probability of an overall high risk given a high earthquake risk. Blue is associated with positive sensitivity values, and red with negative ones. Out of 183 network parameters, 30 have a sensitivity value of zero, meaning that they do not affect the probability of interest. It can be noticed that the most influential parameters have a positive relationship with the output probability, and almost all are associated with the development and deprivation index. 
We further investigate in a 2-way sensitivity analysis the effect of parameters' variations over the same probability \(P(\text{RISK}=\text{high}\mid\text{EARTHQUAKE}=\text{high})\). The 15 largest maximum 2-way sensitivity values are reported in Figure 3. Since these are vector norms, they are always positive irrespective of the relationship between the parameters and the probability of interest. Thus, the coloring should not be interpreted as in Figure 2. Again \begin{table} \begin{tabular}{l l l l l l} \hline \hline Parameter & Value & \begin{tabular}{l} Sensitivity \\ value \(\downarrow\) \\ \end{tabular} & \begin{tabular}{l} Proxi. \\ unity \\ \end{tabular} & \begin{tabular}{l} 2\text{{}^{\text{nd}}} deriv. \\ \end{tabular} & \begin{tabular}{l} 1\text{{}^{\text{st}}} deriv. \\ \end{tabular} \\ \hline RISK = high \(\mid\) D\_AND\_D = low & 0.0012 & 0.914 & 0.056 & 1.437 & 0.916 \\ FLOOD = high \(\mid\) PCR = low & 0.107 & 0.722 & 0.0534 & 4.059 & 1.475 \\ FLOOD = medium \(\mid\) PCR = low & 0.469 & 0.645 & 0.718 & 3.238 & \(\infty\) \\ FLODOD = low \(\mid\) PCR = low & 0.425 & 0.645 & 0.718 & 3.238 & \(\infty\) \\ RISK = high \(\mid\) D\_AND\_D = high & 0.34 & 0.555 & 0.731 & 0.387 & 0.714 \\ RISK = high \(\mid\) D\_AND\_D = medium & 0.0868 & 0.467 & 1.002 & 0.295 & 0.488 \\ FEDEDEDATE = high \(\mid\) HERALF\_COND = low & 0.148 & 0.295 & 1.167 & 0.231 & 0.332 \\ D\_AND\_D = high \(\mid\) FEDEDMIC = medium & 0.0742 & 0.238 & 1.834 & 0.133 & 0.249 \\ PCR = high \(\mid\) RISK = medium & 0.278 & 0.226 & 0.6 & 0.395 & 0.394 \\ PCR = high \(\mid\) RISK = low & 0.0266 & 0.204 & 0.694 & 0.322 & 0.213 \\ FLOOD = high \(\mid\) PCR = high & 0.509 & 0.196 & 0.475 & 0.46 & 1.211 \\ FLOOD = high \(\mid\) PCR = medium & 0.136 & 0.167 & 0.787 & 0.25 & 0.206 \\ D\_AND\_D = high \(\mid\) EPEDMIC = high & 0.787 & 0.159 & 4.159 & 0.0459 & 0.202 \\ D\_AND\_D = high \(\mid\) EPEDMIC = low & 0.0411 & 0.153 & 2.984 & 0.0265 & 0.156 \\ RISK = low \(\mid\) D\_AND\_D = high \(\mid\) EPEDMIC = low & 0.0208 & 0.151 & 2.319 & 0.0796 & 0.274 \\ HEALF\_COND = medium \(\mid\) OTHER\_VULN_GROUPS = low & 0.05 & 0.151 & 3.026 & 0.061 & 0.154 \\ HEALTH\_COND = low \(\mid\) OTHER\_VULN_GROUPS = low & 0.949 & 0.15 & 3.036 & 0.0606 & 0.154 \\ PCR = low \(\mid\) RISK = high & 0.00521 & 0.15 & 3.023 & 0.0609 & 0.236 \\ D\_AND\_D = medium \(\mid\) EPEDMIC = high & 0.176 & 0.148 & 5.933 & 0.0338 & 0.18 \\ PCR = high \(\mid\) RISK = high & 0.943 & 0.148 & 3.092 & 0.0588 & 0.224 \\ \hline \hline \end{tabular} \end{table} Table 4: Four sensitivity metrics for the top 20 parameters of the humanitarian crisis network, when the probability of interest is \(P(\text{RISK}=\text{high}|\text{FLOOD}=\text{high})\). all parameters associated with the development and deprivation index are the ones that have the most substantial effect on the probability of a country having a high overall risk. Thanks to the efficiency of YODO, these indices are almost instantaneously computed with a total computation time of just 0.047s. ### The role of technology during COVID-19 isolation The second BN investigates the role of digital communication technology in facilitating the maintenance of meaningful social relationships and promoting the perception of social support during the COVID-19 lockdown. As reported by Gabbiodini et al. 
(2020), the data was collected through an online questionnaire in March 2020 in Italy, about two weeks from the beginning of the lockdown that the Italian Government adopted for the urgent containment and management of the COVID-19 epidemiological emergency. The data can be downloaded from Gabbiodini (2020) and includes demographic information about 464 individuals, their use of digital communication technologies, and various psychological measures characterizing their emotional status. Each variable is discretized into either two, three, or four levels using either the equal frequency method or some ad-hoc thresholds to optimize the meaning of the levels. Details are reported in Table 5. A BN is learned for this dataset using 1000 bootstrap repetitions of a tabu search algorithm and keeping the edges that have appeared more than 50% of the times. Furthermore, edges from the psychological measures to the technological and demographic variables were Figure 2: Top 20 most influential parameters for the humanitarian crisis network, color-coded by the sign of \(f^{\prime}(\theta_{i})\). The probability of interest is \(P(\text{RISK}=\text{high}\mid\text{EARTHQUAKE}=\text{high})\). Total computation time: 0.038s. \begin{table} \begin{tabular}{l r r r} \hline \hline Variable & Meaning & Group & Levels \\ \hline **AGE** & age of respondent & demographic & \(<\) 25/\(\geq\) 25 (0/1) \\ **GENDER** & gender of respondent & demographic & male/female(0/1) \\ **REGION** & region of residence & demographic & Lombardy/other(0/1) \\ **OUTSIDE** & times outside per week & demographic & 0/1/2\(\geq\)0/1/2 \\ **Square_METERS** & home square meters & demographic & \(<\)80/\(\geq\)800/(1) \\ **FAMILYSIZE** & number of individuals at home & demographic & 0/12/2 (3/0/1) \\ **DAYS_ISOLATION** & days since lockdown & demographic & 0-10/11-0/\(>\)200/1/2 \\ **OCUPATION** & occupation & demographic & Other/Smartworking/Student/ \\ & use of communication technology & & Office(0/1/2/3) \\ **TECH_FUN_PQ** & for fun pre-quarantine & technology & low/medium/high(0/1/2) \\ **TECH_FUN_Q** & use of communication technology & & \\ **TECH_WORK_PQ** & use of communication technology & & \\ & for work pre-quarantine & technology & \\ **TECH_WORK_Q** & use of communication technology & & \\ **ANXETY** & level of anxiety & psychology & low/medium/high(0/1/2) \\ **ANQ.IR** & perceived level of anger/iritability & psychology & low/medium/high(0/1/2) \\ **ELONGINGNESS** & how often the word we is used & psychology & low/medium/high(0/1/2) \\ **BOREDOOM** & level of boredom & psychology & low/medium/high(0/1/2) \\ **LONELINESS** & perceived loneliness & psychology & low/medium/high(0/1/2) \\ **SOCIAL** & perceived social support & psychology & low/medium/high(0/1/2) \\ \hline \hline \end{tabular} \end{table} Table 5: Variables considered for the COVID-19 network from Gabbiadini et al. (2020). Figure 3: Top 15 most influential pairs of parameters for the humanitarian crisis network according to \(sv_{\text{max}}\). The probability of interest is \(P(\text{RISK}=\text{high}\mid\text{EARTHQUAKE}=\text{high})\). Total computation time: 0.033s. forbidden. Similarly, no edges from the technological to the demographic variables were allowed. These choices were motivated by learning a network whose connections could have a more natural, causal interpretation. Figure 4 reports the learned BN. The two variables connected with psychological measures are age and gender. 
In particular, given the age of an individual, all other demographic characteristics (except gender) are irrelevant to predict his psychological status. The network, therefore, seems to suggest that age was the main driver for the psychological status of individuals in lockdown. In this second application, we showcase the computation of the admissible regions using the YODO algorithm. Given a low level of the loneliness index, individuals were most likely spending much time interacting remotely for work during the quarantine (TECH\_WORK\_Q). Table 6 reports the limits of the admissible regions and other measures, ordered from the narrowest interval. The admissible region does not have width one in six cases, all coming from the pmf of TECH\_WORK\_Q or OCCUPATION. This suggests that the data strongly supports the hypothesis that individuals who did not feel lonely had many online work connections during the lockdown. As a second illustration, we consider an individual's age, given that he felt very lonely during the lockdown. The BN suggests that the most likely value corresponds to individuals older than 24 years. Table 7 shows the admissible regions for the network parameters and shows that the network is much less robust for this hypothesis. Admissible regions are much narrower, having a width equal to 0.04 for two parameters. It can also be noticed that parameters with narrow admissible regions come from many different PMFs. Therefore, minor variations in the network parameters would make individuals younger than 25 years more likely to have high levels of loneliness.

Figure 4: BN learned over the COVID-19 dataset from Gabbiadini et al. (2020).

## 5 Discussion

We demonstrated the use of automatic differentiation in BNs and, more specifically, in studying how sensitive they are to parameter variations. The novel algorithms are freely available in Python and are planned to be included in the next release of the bnmonitor R package (Leonelli et al., 2021). Their efficiency was demonstrated through a simulation study. Two critical applications in humanitarian crises and studying the psychological effects of isolation during the COVID-19 pandemic illustrate their use in practice. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Parameter & Value & Sens.
value & Proximity & AR (lower) & AR (upper) \\ \hline TECH\_WORK\_Q = 1 | TECH\_WORK\_PQ = 1, OCCUPATION = 1 & 0.96 & 0.21 & \(4.54\cdot 10^{6}\) & 0.56 & 1.0 \\ OCCUPATION = 0 | AGE = 1 & 0.37 & 0.28 & \(9.51\cdot 10^{6}\) & 0 & 0.87 \\ OCCUPATION = 1 & 1 AGE = 0.4 & 0.28 & \(9.78\cdot 10^{6}\) & 0.1 & 1.0 \\ TECH\_WORK\_Q = 1 | TECH\_WORK\_PQ = 0, OCCUPATION = 2 & 0.41 & 0.27 & \(2.02\cdot 10^{7}\) & 0.093 & 1.0 \\ TECH\_WORK\_Q = 0 | TECH\_WORK\_PQ = 0, OCCUPATION = 2 & 0.59 & 0.16 & N/A & 0.068 & 1.0 \\ TECH\_WORK\_Q = 1 | TECH\_WORK\_PQ = 1, OCCUPATION = 2 & 0.75 & 0.12 & \(5.49\cdot 10^{7}\) & 0.041 & 1.0 \\ AGE = 0 & 0.45 & 0.11 & \(4.36\cdot 0\) & 0 & 1.0 \\ TECH\_FUNC\_PQ = 1 | AGE = 1 & 0.26 & \(4.0\cdot 10^{-9}\) & \(2.1\cdot 10^{7}\) & 0 & 1.0 \\ TECH\_FUNC\_PQ = 2 | AGE = 0 & 0.36 & \(2.39\cdot 10^{-8}\) & N/A & 0 & 1.0 \\ TECH\_FUNC\_PQ = 2 | AGE = 1 & 0.51 & \(8.01\cdot 10^{-9}\) & \(1.05\cdot 10^{7}\) & 0 & 1.0 \\ TECH\_FUNC\_Q = 0 | TECH\_FUNC\_PQ = 0 & 0.47 & \(2.18\cdot 10^{-8}\) & \(4.19\cdot 10^{7}\) & 0 & 1.0 \\ TECH\_FUNC\_PQ = 0 & 0 | TECH\_FUNC\_PQ = 2 & 0.12 & \(8.12\cdot 10^{8}\) & \(4.19\cdot 10^{7}\) & 0 & 1.0 \\ TECH\_FUNC\_PQ = 1 & 0.27 & \(1.39\cdot 10^{-8}\) & \(4.19\cdot 10^{7}\) & 0 & 1.0 \\ TECH\_FUNC\_PQ = 0 & 1 | TECH\_FUNC\_PQ = 2 & 0.12 & \(8.12\cdot 10^{8}\) & 0 & 1.0 \\ TECH\_FUNC\_PQ = 0 & 0.34 & \(7.92\cdot 10^{-9}\) & \(2.11\cdot 10^{7}\) & 0 & 1.0 \\ TECH\_FUNC\_Q = 1 | TECH\_FUNC\_PQ = 1 & 0.41 & \(1.39\cdot 10^{-8}\) & \(4.19\cdot 10^{7}\) & 0 & 1.0 \\ \hline \hline \end{tabular} \end{table} Table 6: COVID network for the probability of interest \(P\)(TECH\_WORK\_Q = high \(|\) LONELINESS = low): sensitivity metrics for the 15 parameters with the smallest admissible region. Total computation time: 0.071s. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Parameter & Value & Sens. value & Proximity & AR (lower) & AR (upper) \\ \hline DAYS\_ISOLATION = 0 | OCCUPATION = 3 & 1.0 & 0.028 & 11.13 & 0.96 & 1.0 \\ OUTSIDE = 2 | OCCUPATION = 3 & 1.0 & 0.028 & 11.13 & 0.96 & 1.0 \\ ANG\_IRR = 2 | AGE = 0, GENDER = 0 & 0.27 & 0.047 & 8.4 & 0 & 0.29 \\ BOREDO\_ROM = 0 | ANG\_IRR = 1 & 0.24 & 0.019 & 3.4 & 0 & 0.3 \\ ANG\_IRR = 0 | AGE = 1, GENDER = 1 & 0.3 & 0.13 & 2.43 & 0 & 0.31 \\ LONELINESS = 1 | BOREDO = 0 & 0.33 & 0.014 & 7.4 & 0 & 0.41 \\ LONELINESS = 0 | BOREDOM = 1 & 0.32 & 0.011 & 4.28 & 0 & 0.42 \\ AGE = 1 & 0.55 & 1.01 & 0.023 & 0.54 & 1.0 \\ AGE = 0 & 0.45 & 0.59 & 1.3 & 0 & 0.46 \\ GENDER = 1 & 0.75 & 0.0048 & 4.6 & 0.52 & 1.0 \\ GENDER = 0 & 0.25 & 0.0048 & 4.6 & 0 & 0.48 \\ ANG\_IRR = 1 | AGE = 1, GENDER = 1 & 0.42 & 0.018 & 23.86 & 0 & 0.49 \\ ANG\_IRR = 2 | AGE = 0, GENDER = 1 & 0.49 & 0.13 & 2.42 & 0 & 0.5 \\ LONELINESS = 1 | BOREDOM = 1 & 0.47 & 0.011 & 4.28 & 0 & 0.57 \\ BOREDOM = 1 | ANG\_IRR = 1 & 0.5 & 0.016 & 4.12 & 0 & 0.58 \\ \hline \hline \end{tabular} \end{table} Table 7: COVID network for the probability of interest \(P\)(AGE =\(\geq 25\mid\) LONELINESS = high): sensitivity metrics for the 15 parameters with the smallest admissible region. Total computation time: 0.11s. Although YODO is specifically designed to compute the coefficients of the one-way sensitivity function in Equation 3, we demonstrated in this paper how it could be used to answer a variety of sensitivity queries, for instance, admissible regions and the identification of the parameter sensitivity set. Importantly, YODO also provides the basis for multi-way sensitivity analyses, and we demonstrated their feasibility in practice. 
#### Future Work

The YODO algorithm introduced here is designed explicitly for BN models, but it could also be adapted to work with other graphical models. The study of context-specific independence has often been shown to increase the efficiency of various inferential tasks, and thus we may expect that it could also speed up YODO. Therefore, we plan to adapt it to work over graphical models embedding non-symmetric types of independence, as, for instance, staged trees (Carli et al., 2022; Smith and Anderson, 2008), whose sensitivity functions have also been studied (Leonelli, 2019). Another avenue of research is the adaptation of YODO to work for sum-product networks (Poon and Domingos, 2011; Sanchez-Cauce et al., 2021), a different representation of a factorization of a joint probability distribution, which has become increasingly popular in the past few years. Although YODO makes various types of multi-way sensitivity analysis feasible, these are still local approaches to investigating the combined effect of parameters' variations on probabilities of interest. Recently, it has been shown that the computation of Sobol indices, a global sensitivity measure, is feasible in sensitivity-to-evidence analyses (Ballester-Ripoll and Leonelli, 2022a). We are currently investigating algorithms to globally assess the effect of the various parameters of a BN and consequently compute their associated Sobol indices.
2306.12297
A novel multi-stage concurrent topology optimization for variable-stiffness structures
The concurrent optimization of topology and fibre orientation is a promising approach to pursue higher strength and lighter weight of variable-stiffness structure. This study proposes a novel discrete-continuous scheme for the concurrent optimization. Considering the global convergence, Discrete Material Optimization (DMO) is firstly utilized to select a dominant angle for each element from several predefined candidate angles. However, it is still difficult to obtain excellent fibre angle convergence due to difficulty in selection of candidate materials for some elements. Therefore, the Sequential Binary-Phase Topology Optimization is proposed to guarantee the uniqueness of the element to candidate angle mapping. Moreover, to obtain better mechanical properties, Continuous Fibre Angle Optimization (CFAO) with spatial filter is introduced to optimize fibre continuity. Several classic numerical examples are used to verify the excellent performance of the proposed method in terms of fibre angle convergence and stable optimization ability.
Yaya Zhang, Hu Wang, Jichao Yin, Shuhao Li, Mengzhu Yang
2023-06-21T14:27:02Z
http://arxiv.org/abs/2306.12297v1
# A novel multi-stage concurrent topology optimization ###### Abstract The concurrent optimization of topology and fibre orientation is a promising approach to pursue higher strength and lighter weight of variable-stiffness structure. This study proposes a novel discrete-continuous scheme for the concurrent optimization. Considering the global convergence, Discrete Material Optimization (DMO) is firstly utilized to select a dominant angle for each element from several predefined candidate angles. However, it is still difficult to obtain excellent fibre angle convergence due to difficulty in selection of candidate materials for some elements. Therefore, the Sequential Binary-Phase Topology Optimization is proposed to guarantee the uniqueness of the element to candidate angle mapping. Moreover, to obtain better mechanical properties, Continuous Fibre Angle Optimization (CFAO) with spatial filter is introduced to optimize fibre continuity. Several classic numerical examples are used to verify the excellent performance of the proposed method in terms of fibre angle convergence and stable optimization ability. **Keywords:** Topology optimization; Fibre reinforced composites; Discrete-continuous fibre optimization; Fibre angle convergence; **1. Introduction** Structure lightweight design plays an important role in aerospace [1, 2], shipping [3] and automotive tools[4]. Generally, structure lightweight design includes two main configurations[5]. Fibre-reinforced composites have been widely used in these fields due to the excellent mechanical properties of high strength, stiffness and light weight [6, 7]. The fibre reinforced composites can be divided into Constant Stiffness (CS) composites and variable stiffness (VS) composites. CS composites, in which the fibre angle of each layer is constant, are typically optimized by the laminate stacking sequence, lamination parameters and fibre angle of each layer [8, 9, 10]. Compared with the CS, VS composites with variable fibre angles allows structures to distribute the loads more efficiently [11]. Its optimization objects typically include lamination parameters, fibre path and local fibre angles. Automated fibre placement(AFP) [12], filament winding [13] and automated tape laying (ATL) [14] enable fabrication of curvilinear fibres and that makes it possible to fabricate the VS composites. The VS optimization to improve load carrying capacity as well as light structure weight has gradually attracted the attention of researchers [15]. Topology Optimization (TO) has excellent performance in seeking new structural configurations while reducing weight. TO guides engineers to place materials in the prescribed design domain for the excellent structural performance [16]. Compared to other optimization methods, TO provides a greater degree of freedom in the design space and is an effective design method for structural light-weighting. Topology optimization considering fibre angle can significantly improve the structural stiffness [17, 18], natural frequency [19] and maximum buckling load [20]. Therefore, the combination of structural topology and fibre angle orientation optimization is a very promising approach for structure lightweight design in fibre reinforced composites. In fibre orientation optimization, the continuity of fibre orientation plays a crucial role in ensuring material properties such as strength or stiffness because fibre discontinuity leads to stress concentration [21]. 
The continuous fibre angle optimization faces a challenge that the rotated stiffness tensor is composed of multi-valued functions, such as trigonometric function. The periodicity of trigonometric functions makes the optimization problem a highly non-convex problem with multiple local optimal solutions in given space. To avoid local convergence, non-gradient optimization--Heuristic Optimization Algorithms (HOA) had been widely employed, such as Genetic Algorithms (GA) [22, 23], Artificial Immune Algorithms (AIA) [24], Ant Colonies (AC) [25] and Particle Swarm Optimization (PSO) [26]. Keller [27] adopted the Evolutionary Algorithm with first-order search and the niching strategy to optimize the orientation angles. Other studies [28, 29, 30] used GA to optimize the orientation angles, number of plies, and stacking sequence. In these studies, the ply angles were assumed to be constant. However, Sigmund [31] compared Non-Gradient Topology Optimization (NGTO) [32]with Gradient-based Topology Optimization (GTO) from the perspective of global search ability and computational efficiency. The results shows that NGTO commonly could not find global optima solutions based on the coarse meshes and have high computational cost. Furthermore, NGTO is difficult to handle the problems of high-aspect ratio, complex geometry design domains, complex physics situations and accuracy demands. They proved that GTO performs much better in these areas than NGTO. For the variable stiffness design problem, several gradient-based methods have been proposed to optimize the fibre angle orientation. Based on discrete Kirchhoff theory, Mota Soares et. al [33] carried out sensitivity analysis of fibre orientation as well as the thickness for the optimization. Bruyneel et. al [34] adopt the approximation concepts approach and dual algorithm. They performed sensitivity analysis analytically and used sequential convex programming--Method of Moving Asymptotes (MMA) [35], to optimize the fibre orientation and the thickness efficiently. This method is also called Continuous Fibre Angle Optimization (CFAO). CFAO takes the angle directly as a design variable, which varies in the whole angle orientation range. It tends to fall into local optima and the optimization result is highly dependent on the initial design values [36, 37]. To solve these problems, Stegmann and Lund [38] suggested a method, Discrete Material Optimization (DMO), whose equivalent constitutive matrix is expressed as a weighted sum of candidate constitutive matrixes from prescribed discrete angles. DMO based on gradient and penalty coefficient is also adopted to force each element to correspond to only one candidate angle. Experimental results show that the DMO has good global optimization ability and is not sensitive to the initial design values. Subsequently, Lund [20] studied the problem of making buckling load factor of multi-material composite structures by DMO. Bruyneel [39] proposed a new variant of DMO, the Shape Functions with Penalization scheme (SFP), with fewer design variables required. Yin et al. [40] introduced the peak function into material interpolation for optimal selection of different isotropic materials which advantage is that a variety of materials can be introduced into optimization without increasing design variables. However, artificially high stiffness materials may occur during the process, which may lead to local minimum. Gao et al. 
[41] proposed the Bi-value Coding Parameterization (BCP) which can reduce the number of design variables significantly. Wu et al. [6] combined DMO with commercial software ABAQUS to optimize the ply angles of the laminate vehicle door. Compared with CFAO, DMO has better global optimization ability and low sensitivity to the initial design values. But because of fibre discontinuities, DMO has poor manufacturability and will easily lead to stress concentration. To overcome the shortcomings of CAFO and DMO, Kiyono et al. [42] proposed a novel scheme of discrete-continuous fibre angle optimization. This study used the normal distribution functions as weighting functions in DMO to choose one discrete angle from any number of discrete candidate angles for each element. And then a spatial filter was utilized to ensure the continuity of the fibres, where the filter radius would influence the level of fibre continuity. Luo et al. [43] proposed a concurrent optimization framework for topology optimization and discrete-continuous fibre angle optimization. In the study, the fibre orientation interval is divided into several average subintervals. The optimization problem [10] becomes to select an interval among several discrete subintervals and perform continuous angle optimization within the interval. The DMO and CFAO have been carried out in sequence in each iteration step. Ding. and Xu. introduced the normal distribution functions as the weight functions to the concurrent optimization. Researchers have proposed some good ideas for the concurrent optimization, but there are still several problems to be addressed. Firstly, despite the use of penalty strategies, the elements convergence is still not guaranteed and the convergence process is relatively slow. Secondly, the number of subintervals as well as the number of discrete design variables affects the optimization result. Considering the advantages of DMO and CFAO, this paper seeks a new scheme for the concurrent optimization of structural topology and fibre angle orientation. Due to good global optimization ability, DMO has been utilized to select an optimized angle for each element from several predefined candidate angles firstly. However, some "fuzzy regions" make it difficult to select the best angle for each element. To avoid "fuzzy regions", a Sequential Binary-phase Topology Optimization (SBPTO) is suggested to optimize further based on the optimized structure of the DMO. The SBPTO is a topology optimization method for solving multiphase problem where one phase represents a candidate angle. The essential of the SBPTO is to decompose the multiphase topology optimization problem into a series of two-phase topology optimization problem in a sequential manner. This method can be generally and easily implemented. The integration of DMO and SBPTO not only preserves the optimization ability of DMO, but also solves the problem of fibre convergence for the interpolation penalty model and improves the stability of the solution. Finally, considering mechanical properties, the continuity of the fibre path is optimized by using CFAO with spatial filter. The process of DMO-SBPTO-CFAO is abbreviated as DSCO (See **Fig. 1**). The remainder of this paper is organized as follows: section 2 introduces the discrete-continuously fibre optimization method. In section 3, some numerical examples are shown. Finally, the findings are concluded in section 4. 
The code and data accompanying this study will be made publicly available at [https://github.com/HnuAiSimOpt/DSCO](https://github.com/HnuAiSimOpt/DSCO) after the paper is officially published.

## 2 Discrete-continuous fibre optimization method

The framework of the suggested concurrent optimization DSCO is illustrated in Fig. 1. In the proposed framework, DMO, SBPTO and CFAO are integrated. Considering the global convergence of DMO [37], the DMO is employed to find the "near optimum" solution efficiently. However, there might be some unavoidable "fuzzy regions" which make it difficult to select the best angle. According to Fig. 1, such "fuzzy regions" are usually located at the junction of different angles or near where constraints and forces are applied. The elements located in such regions are called unconvergent elements. In order to remove the unconvergent elements, the SBPTO is introduced to facilitate selecting the best angle from the candidate angles. The SBPTO decomposes the multi-phase topology optimization problem into a series of binary-phase topology optimizations, where each phase represents a candidate fibre angle or the void material. Based on the design space of DMO, the design variables of the elements that satisfy the conditions explained in section 2.2 will be frozen. As the design space is reduced, the contribution of the unconvergent elements becomes more prominent. Binary-phase topology optimization makes it easier for the difficult-to-choose elements to converge. The CFAO is sensitive to the initial values and easily falls into local optima due to the highly non-convex character of the problem, which has multiple local optima [15, 29, 39]. The two-step discrete fibre optimization provides reliable initial design values for CFAO to reduce the risk of falling into local optimality. The flow chart of the concurrent optimization is shown in Fig. 2.

Figure 1: Framework of the DSCO

Figure 2: The flow chart of the concurrent optimization

### Topology optimization formulation

In this study, the objective is to minimize the compliance of the structure by the concurrent optimization of topology and fibre angle orientation. The problem can be formulated as follows:

\[\begin{array}{l}\mbox{\it find}:\left(\mathbf{\chi}_{1},\mathbf{\chi}_{2},\ldots\mathbf{\chi}_{n},\mathbf{\alpha}_{1},\mathbf{\alpha}_{2},\ldots\mathbf{\alpha}_{n+1},\mathbf{\rho},\mathbf{\theta}\right)\\ \mbox{min}:c=\mbox{\bf U}^{T}\mbox{\bf KU}\\ \mbox{\it s.t.}:\mbox{\bf KU}=\mbox{\bf F}\\ \mbox{V}\leq f\times N\\ \alpha_{i,j}\in\left[0,1\right];\ \chi_{i,j}\in\left[0,1\right];\ \rho_{i}\in\left[0,1\right];\\ \theta_{i}\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right]\end{array} \tag{1}\]

where \(\chi_{i,j}\) and \(\alpha_{i,j}\) denote the density of the \(j\)th candidate fibre angle in the \(i\)th element for DMO and SBPTO, respectively. \(\rho_{i}\) is the density of the \(i\)th element in the step of CFAO, and \(\theta_{i}\) is the fibre orientation in the \(i\)th element. The objective, \(c\), stands for the compliance. \(\mathbf{K}\) denotes the stiffness matrix, \(\mathbf{U}\) denotes the displacement vector, and \(\mathbf{F}\) denotes the load vector. \(N\) is the total number of elements, \(V\) is the material volume and \(f\) is the allowed maximum volume fraction.

### Topology optimization with Discrete Angle Optimization

Firstly, the Discrete Material Optimization (DMO) is introduced.
The DMO method is utilized to couple the two geometrical scales and carry out macroscopic topology optimization and microscopic material selection simultaneously. The main idea of DMO is to minimize the objective function by selecting a material among a predefined set of materials. In this study, the predefined set of materials represents a set of fibre angles. In the optimization process, the design variables \(\chi_{i}\), the densities of the corresponding materials, vary between 0 and 1. Therefore, the discrete material problem can be transformed into a continuous variable optimization problem. The \(i\)th element constitutive matrix, \(\mathbf{D}_{i}^{e}\), can be expressed as a weighted sum of candidate constitutive matrices corresponding to the predefined fibre angles, \(\mathbf{D}_{j}\):

\[\mathbf{D}_{i}^{e}=\sum_{j=1}^{n}w_{i,j}\mathbf{D}_{j}^{e}=w_{i,1}\mathbf{D}_{1}+w_{i,2}\mathbf{D}_{2}+\ldots+w_{i,n}\mathbf{D}_{n},\ \ \ 0\leq w_{i,j}\leq 1 \tag{2}\]

where \(n\) denotes the number of candidate fibre angles. The weight functions, \(w_{i,j}\), are between 0 and 1, where 0 implies giving up a material while 1 means choosing a material in a physical sense. In this work, the weight function is defined as:

\[w_{i,j}=(\varepsilon+\chi_{j}^{p})\prod_{k=1,k\neq j}^{n}(\varepsilon+(1-\chi_{k}^{p})) \tag{3}\]

where \(\chi_{i}\,(i=1,\ldots,n)\) denotes the density of the \(i\)th material. In order to be 'fair', the initial design variables, \(\chi_{i}\), should be set uniformly between 0 and 1. \(\varepsilon\) is a small positive number (for example \(1\times 10^{-9}\)). The power, \(p\), is a penalty on the intermediate values of \(\chi_{i}\), aiming at pushing the design variables to 0 or 1. In this study, one candidate material corresponds to a candidate discrete fibre angle. From Eqs. (2-3), increasing the number of candidate materials tends to increase the number of design variables. At the same time, increasing the number of elements or the number of laminates will significantly increase the number of design variables, so the computational cost and storage of the optimization would also increase rapidly. In the DMO, the design variables can be updated by using the MMA solver, which is a gradient-based solver. Combined with Eq. (1), the derivative of the compliance with respect to \(\chi_{i,j}\) can be given by

\[\frac{\partial c_{i}}{\partial\chi_{i,j}}=-\mathbf{u}_{e}^{T}\frac{\partial\mathbf{k}_{e}}{\partial\chi_{i,j}}\mathbf{u}_{e} \tag{4}\]

where \(c_{i}\) is the compliance of the \(i\)th element, \(\mathbf{u}_{e}\) is the displacement vector of the \(i\)th element, and \(\mathbf{k}_{e}\) is the element stiffness matrix. \(\frac{\partial\mathbf{k}_{e}}{\partial\chi_{i,j}}\) is obtained by

\[\frac{\partial\mathbf{k}_{e}}{\partial\chi_{i,j}}=\int_{\Omega_{e}}\mathbf{B}^{T}\frac{\partial\overline{\mathbf{D}_{i}^{e}}}{\partial\chi_{i,j}}\mathbf{B}d\Omega=\sum_{k=1}^{n}\int_{\Omega_{e}}\mathbf{B}^{T}\mathbf{\lambda}_{k}\frac{\partial w_{i,k}}{\partial\chi_{i,j}}\mathbf{D}^{e}\mathbf{\lambda}_{k}^{T}\mathbf{B}d\Omega \tag{5}\]

where

\[\mathbf{D}^{e}=\begin{bmatrix}D_{11}^{e}&D_{12}^{e}&0\\ D_{21}^{e}&D_{22}^{e}&0\\ 0&0&D_{33}^{e}\end{bmatrix} \tag{6}\]

\[\mathbf{\lambda}_{k}=\begin{bmatrix}\cos^{2}\theta_{k}&\sin^{2}\theta_{k}&-\sin 2\theta_{k}\\ \sin^{2}\theta_{k}&\cos^{2}\theta_{k}&\sin 2\theta_{k}\\ \sin\theta_{k}\cos\theta_{k}&-\sin\theta_{k}\cos\theta_{k}&\cos^{2}\theta_{k}-\sin^{2}\theta_{k}\end{bmatrix} \tag{7}\]

Substituting Eqs.
(2) and (3) into \(\dfrac{\partial w_{i,k}}{\partial\chi_{i,j}}\),

\[\dfrac{\partial w_{i,k}}{\partial\chi_{i,j}}=\begin{cases}p\,\chi_{j}^{p-1}\prod\nolimits_{m=1,m\neq j}^{n}\left(\varepsilon+\left(1-\chi_{m}^{p}\right)\right),&\text{if }k=j\\ -p\,\chi_{j}^{p-1}\left(\varepsilon+\chi_{k}^{p}\right)\prod\nolimits_{m=1,m\neq k,m\neq j}^{n}\left(\varepsilon+\left(1-\chi_{m}^{p}\right)\right),&\text{otherwise}\end{cases} \tag{8}\]

For all numerical examples, the convergence criterion for the step of the DMO is as follows,

\[\varepsilon_{i}=\dfrac{\sum\limits_{j=i-4}^{i}\left(c\left(j\right)-\bar{c}\right)^{2}}{5}\leq\varepsilon_{0} \tag{9}\]

where \(\varepsilon_{i}\) refers to the variance of the last five compliance values up to the \(i\)th iteration in DMO, \(\bar{c}\) is their mean, and \(\varepsilon_{0}\) is set as a small constant.

```
Step 1: Set design domain, objective volume fraction, filter radius.
Step 2: Each element is assigned a set of predefined candidate angles, \(\chi_{i}\left(i=1,...n\right)\), which are set uniformly.
Step 3: Calculate element constitutive matrix by Eq. (2) and carry out FEA.
Step 4: Calculate the compliance and the sensitivities using Eq. (5).
Step 5: Apply the MMA to determine the \(n\times N_{e}\) volume fractions (\(N_{e}\) is the number of elements).
Step 6: If the convergence criterion in Eq. (9) is satisfied, continue. If not, return to Step 3.
```
**Algorithm 1** Discrete material optimization (DMO)

In order to elaborate the process of optimization better, it is necessary to define the term "fibre convergence". Ideally, in the process of discrete fibre angle optimization, the artificial density of one material is pushed to 1 while the densities of the other materials are pushed to 0, or all the densities of the materials are pushed to 0, for each element. However, there are some elements that cannot meet the above requirements. Several numerical examples in [39] indicate that the convergence rate in DMO is limited and that excellent fibre convergence cannot be achieved. Therefore, it is required to use fibre convergence to evaluate the proportion of elements that meet the above criteria. The element which satisfies the inequality Eq. (10) can be regarded as a converged one. In other words, the element selects a definite fibre angle from the candidate angles.

\[\chi_{i}\geq\eta\sqrt{\chi_{1}^{2}+\chi_{2}^{2}+\ldots+\chi_{n}^{2}} \tag{10}\]

where \(\eta\) indicates the tolerance level and it is typically in the interval [0.95, 0.995]. Fibre convergence, the ratio of converged elements to total elements, \(h_{\eta}\), is depicted as:

\[h_{\eta}=\frac{N_{c}^{e}}{N^{e}} \tag{11}\]

where \(h_{0.995}=1\) simply means that for every element, there is one single volume fraction contributing more than 99.5% to the Euclidean norm of the volume fractions. Due to the elements located in "fuzzy regions", it is difficult to achieve excellent fibre convergence in DMO. These "fuzzy regions" are usually located at the junction of different angles or near where constraints and forces are applied. In these regions, the force transfer path is commonly complex. Generally, the sensitivities of the objective function to each fibre angle variable of an element located in these "fuzzy regions" are similar. This makes it difficult for those elements to make a choice among the candidate fibre angles in the stage of DMO. To handle the convergence problem, the sequential binary-phase topology optimization (SBPTO), with good selection performance in multi-phase problems, is thus introduced.
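Before turning to the SBPTO, a minimal Python sketch may help make the quantities just introduced concrete: it evaluates the DMO weights of Eq. (3) for one element and the fibre convergence ratio \(h_{\eta}\) of Eqs. (10)-(11) for a set of elements. The array shapes, parameter values and function names are illustrative assumptions and are not taken from the released DSCO code.

```python
# Minimal sketch (illustrative names/values, not from the DSCO repository):
# DMO weights of Eq. (3) for one element and the fibre convergence ratio
# h_eta of Eqs. (10)-(11) over a design.
import numpy as np

def dmo_weights(chi, p=3.0, eps=1e-9):
    """chi: (n,) candidate-angle densities of one element; returns w_(i,j)."""
    chi_p = chi ** p
    w = np.empty_like(chi)
    for j in range(chi.size):
        others = np.delete(chi_p, j)                    # all k != j
        w[j] = (eps + chi_p[j]) * np.prod(eps + (1.0 - others))
    return w

def fibre_convergence(chi_all, eta=0.995):
    """chi_all: (N_e, n) densities; Eq. (10) per element, Eq. (11) overall."""
    norms = np.linalg.norm(chi_all, axis=1)             # Euclidean norm per element
    converged = chi_all.max(axis=1) >= eta * norms      # Eq. (10)
    return converged.mean()                             # Eq. (11)

# Example: an element that has clearly selected the second candidate angle.
chi = np.array([0.02, 0.97, 0.03, 0.01])
print(dmo_weights(chi))                 # the weight of candidate 2 dominates
print(fibre_convergence(chi[None, :]))  # -> 1.0 for this single element
```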
Multi-phase optimization is decomposed into a series of two-phase topology optimizations. The (\(n\)+1) phases represent the \(n\) candidate fibre angles which have been defined in DMO and one void material. The SBPTO decomposes the (\(n\)+1)-phase topology optimization problem into a series of binary-phase topology optimizations. After cyclic calculation, one candidate material is selected while the remaining materials are discarded. For convenience of representation, we represent the aforementioned design variables as \(\alpha_{i}\left(i=1,\ldots,n+1\right)\), which determine the distribution of the material corresponding to the \(i\)th phase. In each element, the design variables should sum to unity.

\[\sum_{i=1}^{n+1}\alpha_{i}=1,\ \ \ 0\leq l_{i}\leq\alpha_{i}\leq u_{i}\leq 1 \tag{12}\]

where \(l_{i}\) and \(u_{i}\) represent the lower and upper bounds, respectively. \(n\)(\(n\)+1) binary-phase topology sub-problems are involved in the SBPTO with (\(n\)+1) phases. During the solution of each binary-phase subproblem, \(n\)-1 phases in the unfrozen elements are fixed while the remaining two phases are the active phases. The two active phases in the unfrozen elements are denoted by \(a\) and \(b\), and their volume fractions are denoted by \(\alpha_{a}^{u}\) and \(\alpha_{b}^{u}\), respectively. The fixed phases are denoted by \(\alpha_{i}^{u}\left(i\neq\left\{a,b\right\}\right)\). The sum of the two active phases, \(r_{ab}^{u}\), can be calculated as:

\[r_{ab}^{u}=1-\sum_{\begin{subarray}{c}i=1\\ i\neq\left\{a,b\right\}\end{subarray}}^{n+1}\alpha_{i}^{u} \tag{13}\]

The elements whose design variables satisfy Eq. (14) are fixed in the next binary-phase topology optimization. The design variables of the unfrozen elements form a reduced design space, which serves as the initial design space for the binary-phase topology optimization.

\[\alpha_{a}>\lambda\ \ \ \text{or}\ \ \ \alpha_{b}>\lambda\ \ \ \text{or}\ \ \ \begin{cases}\alpha_{a}=0\\ \alpha_{b}=0\end{cases} \tag{14}\]

where satisfying the first or second inequality means that the fibre angle of the element is determined, and satisfying the third condition means that neither phase \(a\) nor phase \(b\) is selected in the element. Therefore, the design variables in the elements that satisfy Eq. (14) are fixed in this binary-phase topology optimization. \(\lambda\) is set close to 1 but less than 1; in this work, we set \(\lambda\) as 0.99. For each binary-phase topology sub-problem, \(\alpha_{a}^{u}\) is taken as the design variable. The optimization of the binary-phase topology sub-problem refers to the SIMP method in [44].
After sub-problem optimization, the artificial density of \(b\), \(\alpha_{b}^{u}\), is computed as follows:

\[\alpha_{b}^{u}=r_{ab}^{u}-\alpha_{a}^{u} \tag{15}\]

The upper bound of phase \(a\), \(u_{a,temp}^{u}\), in each iteration should satisfy:

\[u_{a,temp}^{u}=\min\left(u_{a}^{u},r_{ab}^{u}\right) \tag{16}\]

In the sequential binary-phase topology optimization, the derivative of the compliance with respect to \(\alpha_{i,a}\) can be obtained:

\[\frac{\partial c_{i}}{\partial\alpha_{i,a}}=-\mathbf{u}_{e}^{T}\frac{\partial\mathbf{k}_{e}}{\partial\alpha_{i,a}}\mathbf{u}_{e} \tag{17}\]

where

\[\frac{\partial\mathbf{k}_{e}}{\partial\alpha_{i,a}}=\int_{\Omega_{e}}\mathbf{B}^{T}\frac{\partial\overline{D_{e}}}{\partial\alpha_{i,a}}\mathbf{B}d\Omega=\int_{\Omega_{e}}\mathbf{B}^{T}\lambda\frac{\partial D_{e}}{\partial\alpha_{i,a}}\lambda^{T}\mathbf{B}d\Omega \tag{18}\]

The alternating active-phase algorithm can be depicted as:

\begin{tabular}{l} \hline \hline **Algorithm 2:** Sequential Binary-Phase Topology Optimization \\ \hline **Repeat** \\ for a=1 to p do \\ for b=1 to p (except for a) do \\ \(\mathbf{\alpha}^{u}=\) solution of the binary phase subproblem \\ end \\ end \\ **Until** the convergence criterion is satisfied \\ \hline \hline \end{tabular}

The flow chart of the SBPTO is shown in **Fig. 3**.

Figure 3: The flow chart of the SBPTO

### Representation of fibre angle

To improve the fibre continuity, a continuous interpolation function is introduced. It is a fibre continuity filter, which is a spatial filter [42]. In the design domain, the fibre angle at any point \(\mathbf{x}\) can be interpolated from the design angles \(\theta_{i}\), \(i=1,\ldots n\), located at design points \(p_{i}=\left(x_{i},y_{i},z_{i}\right)\) within an "influence area" satisfying the condition \(\left\|\mathbf{x}-\mathbf{p}_{i}\right\|\leq R_{c}\), where \(R_{c}\) is called the cut-off radius. The interpolation function is defined as:

\[\Theta(\mathbf{x})=\sum_{i\in I_{s}}w_{i}(\mathbf{x})\theta_{i},\ \ \ \left\|\mathbf{x}-\mathbf{p}_{i}\right\|\leq R_{c} \tag{19}\]

The fibre angle, \(\Theta(\mathbf{x})\), should be constrained as \(\Theta(\mathbf{x})\in[-90^{\circ},90^{\circ}]\). The initial fibre angle configuration is determined by the results of the discrete optimization above. \(w_{i}(\mathbf{x})\) is the weighting function, which is defined as follows.

\[\begin{split} w_{i}(\mathbf{x})&=\frac{H_{ei}\alpha_{i}}{\sum_{j\in I_{s}}H_{ej}}\\ H_{ei}&=\max(0,r_{\text{min}}-\left\|\mathbf{x}-\mathbf{p}_{i}\right\|)\end{split} \tag{20}\]

where \(\alpha_{i}\) is the volume fraction. In the continuous fibre angle optimization, the derivative of the compliance with respect to the design variable \(\rho_{i}\) can be given by

\[\frac{\partial c_{i}}{\partial\rho_{i}}=-\mathbf{u}_{e}^{T}\frac{\partial\mathbf{k}_{e}}{\partial\rho_{i}}\mathbf{u}_{e} \tag{21}\]

Substituting Eq.
(16) into \(\frac{\partial\mathbf{k}_{e}}{\partial\rho_{i}}\) \[\frac{\partial\mathbf{k}_{e}}{\partial\rho_{i}}=\int_{\Omega_{e}}\mathbf{B}^{ T}\frac{\partial\overline{D_{e}}}{\partial\rho_{i}}\mathbf{B}d\Omega=\int_{ \Omega_{e}}\mathbf{B}^{T}\lambda\frac{\partial D_{e}}{\partial\rho_{i}} \lambda^{T}\mathbf{B}d\Omega \tag{22}\] The derivative of compliance with respect to \(\theta_{i}\) can be developed \[\frac{\partial\mathbf{k}_{e}}{\partial\theta_{i}}=\int_{\Omega_{e}}\mathbf{B }^{T}\frac{\partial\overline{D_{e}}}{\partial\theta_{i}}\mathbf{B}d\Omega= \int_{\Omega_{e}}\mathbf{B}^{T}\frac{\partial\lambda}{\partial\theta_{i}} \mathbf{D}_{e}\lambda^{T}\mathbf{B}d\Omega+\int_{\Omega_{e}}\mathbf{B}^{T} \lambda D_{e}\frac{\partial\lambda^{T}}{\partial\theta_{i}}\mathbf{B}d\Omega \tag{23}\] where \[\frac{\partial\lambda}{\partial\theta_{i}}=\begin{bmatrix}-2\sin\theta_{i} \cos\theta&2\sin\theta_{i}\cos\theta_{i}&-2\cos 2\theta_{i}\\ 2\sin\theta_{i}\cos\theta_{i}&-2\sin\theta_{i}\cos\theta_{i}&2\cos 2\theta_{i} \\ \cos 2\theta_{i}&-2\cos 2\theta_{i}&-4\sin\theta_{i}\cos\theta_{i}\end{bmatrix} \tag{24}\] ## 3 Numerical examples This section demonstrates the validity of the proposed method with four numerical examples: an MBB beam with three in-plane loads, an L-shape beam with one in-plane load, a cantilever beam with one in-plane load and a cantilever beam with multiple loads. For simplicity, all of examples employ uniform meshes which size are \(1\times 1\). And in the step of the SBPTO, the convergence criterion, \(h_{\eta}\), is set as 0.99. ### MBB beam The first numerical example is a 2D MBB beam structure. The boundary condition is shown in Fig. 4. The points at the one-fourth and three-fourth of the top edge are applied external force \(F=1\). The middle point of bottom edge is applied external force \(2F\). The rectangular domain size is \(120\times 40\) and is meshed by 4-nodes regular quadrilateral elements. The material parameters with orthotropic properties are given as \(D_{11}^{e}=0.5448,\ D_{12}^{e}=0.0383,\ D_{22}^{e}=0.1277,\ D_{33}^{e}=0.0456\). Only one layer has been used. The desired volume fraction is 0.5. the minimum filter radius \(\tau_{min}\) is 1.5. For comparison, these cases are investigated by the CFAO and DSCO separately. In the cases(a)-(d) of the CFAO, the initial fibre angles of CFAO are set as case(a)-(d) and candidate angles sets of DSCO are set as cases(e)-(n) in Table 1. The optimized results are shown in Fig. 5 and Fig. 6. Fig. 5 shows that the iteration number of CFAO is generally lower than that of DSCO, but the compliance of CFAO is particularly sensitive to the initial fibre angle setting. The maximum and minimum compliance in cases (a)-(d) are 622.42 and 328.32 respectively. That is consistent with the previous introduction in section 1. The optimized compliances of the DSCO are shown in Fig. 5(e)-(n). It can be seen that compared with CFAO, the optimized optimization results of the DSCO are more stable for different initial fibre configurations especially for the cases(f)-(n). Since there are only 2 angles in the initial candidate angle set, the compliance of the case(e) has the largest fluctuation in DSCO. 
That indicates that the optimization ability of the candidate angle set containing two \begin{table} \begin{tabular}{c l c c c} \hline \hline & case & \(\mathbf{\theta}_{initial}^{e}\) & case & \(\mathbf{\theta}_{initial}^{e}\) \\ \hline \multirow{3}{*}{CFAO} & case(a) & [0’] & case(b) & [90’] \\ & case(c) & [45’] & case(d) & [-45’] \\ & case(e) & [0’,90’] & case(f) & [0’,-30’,30’,90’] \\ & case(g) & [0’,-60’,60’,90’] & case(h) & [0’,-45’,45’,90’] \\ \multirow{3}{*}{DSCO} & case(i) & [0’,-45’,45’,90’,-30’,30’] & case(j) & [0’,-45’,45’,90’,-60’,60’] \\ & case(k) & [0’,-45’,45’,90’,-30’,60’] & case(l) & [0’,-45’,45’,90’,30’,60’] \\ \multirow{3}{*}{DSCO} & case(m) & [0’,-45’,45’,90’,30’,-60’] & case(n) & [0’,-45’,45’,90’,-30’,-60’] \\ \hline \hline \end{tabular} \end{table} Table 1: the angle setting of the CFAO and DSCO Figure 4: The MBB beam. angles is poor. The initial angle sets of the rest cases of DSCO are 4 angles or 6 angles. It can be found that the compliance does not decrease with the increase of the number of angles in the initial angle set and the case(h) with initial angle set \([0^{\circ},-45^{\circ},45^{\circ},90^{\circ}]\) has minimum compliance 302.7. At the same time, we can also see that in cases(e)-(n), the number of iterations increases significantly as the number of angles of the angle set increases. For the case(h), the compliance is 302.7 and the iteration number is 434. Meanwhile for the case(d), which has the minimum compliance in the cases of CFAO, the compliance is 328.32 and the iteration number is 350. Compared with case(d), the compliance of case(h) decreases by 7.8%, which is crucial for the optimization, even with higher computational costs. It is also worth mentioning that since CFAO is very sensitive to initial fibre angle setting, we cannot select an appropriate initial fibre angle setting directly. To obtain good result, we need to conduct several CFAO with different initial fibre angle settings, which would lead to the increase of computing costs substantially. Although we cannot prove that the optimized solution of case(h) is a globally optimal solution, we can conclude that in this MBB beam example, the DSCO does reduce the risk of getting stuck in a local optimum and the initial angle set \([0,-45^{\circ},45^{\circ},90^{\circ}]\) in DSCO provides good result. Figure 5: The compliance and iteration of case(a)–(l) In the optimized fibre layout figures of Fig. 6, the short red lines represent the fibre orientation of each element. As shown in Fig. 6, the fibre continuity in most of design domain is good. The few elements with poor continuity are mainly concentrated around the nodes where loads or constraints are applied or some elements in the area of rapid geometric change. For the overall structure, different initial angle settings always result in distinctively different structures. Due to the symmetry of the structure, half of the optimized fibre layout of case(h) is showed, see Fig. 7. It can be seen that fibre orientation of few elements located near the load varies erratically. Fig. 8 shows the objective function and convergence rate histories of case(h) with initial fibre angle set \(\theta_{initial}^{e}=[0^{\circ},-45^{\circ},45^{\circ},90^{\circ}]\) by using the DSCO for MBB beam. In the optimized structure figures, orange areas represent unconvergent elements and areas with other different colours represent different fibre angles. The total iteration of case(h) is 434 and the optimized compliance reaches a low value 302.70. 
Since the weighting function is not normalized, the compliance would be unrealistically high initially and this phenomenon has no effect on the final result [37]. The first step, DMO, stops when the convergence condition is met at the 60th iteration. The convergence rate, \(h_{0.95}\), reaches 0.96 at the end of DMO. The second step, SBPTO, lasts 305 iterations. The convergence rate, \(h_{0.95}\), reaches 1 at the end of SBPTO. The first two steps determine the initial values \(\theta_{i}\) of the CFAO which realizes the continuous fibre angle optimization. It is worth mentioning that the orientation of the fibre is mostly parallel to the member direction. Fig. 6: The optimized fibre layout of CFAO and DSCO for MBB beam. Fig. 7: Half of the optimized fibre layout of case(h) ### L-shape beam The second numerical example is a 2D L-shape beam. The boundary condition and the size of the 2D L-shape structure are shown in Fig. 9. The top point of the right edge is applied an external force \(F=\)1. The convergence criterion, \(\mathcal{E}_{0}\),is set as \(10^{-2}\). The height and width of the beam are both 100. The beam is meshed by 4-nodes regular quadrilateral elements. The material parameters with orthotropic properties are given same with the case of MBB beam. Only one layer has been used. The desired volume fraction is 0.6 and the minimum filter radius \(r_{\text{min}}\) is 1.5. The initial fibre angles of CFAO are and candidate angles sets of DSCO are shown in Table 2. Figure 8: Histories of objective and convergence rate with the DSCO for MBB beam Figure 9: L-shape beam The optimized results are shown in Fig. 10 and Fig. 11. We can find that the iteration number of CFAO is generally lower than that of DSCO in Fig. 10. The minimum compliance of CFAO and DSAO are 188 and 170.58 respectively. Compared with CFAO, the optimized optimization results of the DSCO are more stable. The compliance of CFAO is particularly sensitive to the initial fibre angle setting. The maximum and minimum compliance in cases (a)-(d) are 326 and 188 respectively. The case(h) with initial angle set \([0^{{}^{\circ}},-45^{{}^{\circ}},45^{{}^{\circ}},90^{{}^{\circ}}]\) has minimum compliance 170.58. Meanwhile, we can also find that in cases(e)-(n), the number of iterations increases significantly as the number of angles of the angle set increases. For the case(h), the compliance is 170.58 and the iteration number is 354. Meanwhile for the case(d), which has the minimum compliance in the cases of CFAO, the compliance is 188 and the iteration number is 110. Compared with case(d), the compliance of case(h) decreases by 9.3%. Although it seems that the computation cost of CFAO is much lower, in order to get good result, the initial fibre angle settings are chosen based on trial and error method which will significantly increase the computation cost. In the L-shape beam example, results show that the DSCO does reduce the risk of getting stuck in a local optimum and the initial angle set \([0^{{}^{\circ}},-45^{{}^{\circ}},45^{{}^{\circ}},90^{{}^{\circ}}]\) in DSCO provides good result. 
\begin{table} \begin{tabular}{c l c c c} \hline \hline & case & \(\mathbf{\theta}_{initial}^{e}\) & case & \(\mathbf{\theta}_{initial}^{e}\) \\ \hline \multirow{3}{*}{CFAO} & case(a) & \([0^{{}^{\circ}}]\) & case(b) & \([90^{{}^{\circ}}]\) \\ & case(c) & \([45^{{}^{\circ}}]\) & case(d) & \([-45^{{}^{\circ}}]\) \\ & case(e) & \([0^{{}^{\circ}},90^{{}^{\circ}}]\) & case(f) & \([0^{{}^{\circ}},-30^{{}^{\circ}},30^{{}^{\circ}},90^{{}^{\circ}}]\) \\ \multirow{3}{*}{DSCO} & case(g) & \([0^{{}^{\circ}},-60^{{}^{\circ}},60^{{}^{\circ}},90^{{}^{\circ}}]\) & case(h) & \([0^{{}^{\circ}},-45^{{}^{\circ}},45^{{}^{\circ}},90^{{}^{\circ}}]\) \\ & case(i) & \([0^{{}^{\circ}},-45^{{}^{\circ}},45^{{}^{\circ}},90^{{}^{\circ}},-30^{{}^{ \circ}},30^{{}^{\circ}}]\) & case(j) & \([0^{{}^{\circ}},-45^{{}^{\circ}},45^{{}^{\circ}},90^{{}^{\circ}},-60^{{}^{ \circ}},60^{{}^{\circ}}]\) \\ \hline \hline \end{tabular} \end{table} Table 2: The allocation of the initial fibre angle As shown in Fig. 11, materials distribution of CFAO is relatively concentrated. The fibre continuity in most of design domain is also good. Few elements with poor continuity are mainly concentrated around the nodes where loads or constraints are applied and in the area of the corner of 'L' or low stress values. In order to show the fibre layout more clearly, optimized fibre layout of case(h) is showed in Fig. 12. ## References Fig. 13 shows the objective function and convergence rate histories of case(h) with initial fibre angle set \(\theta^{\epsilon}_{initial}=[0^{{}^{\circ}},-45^{{}^{\circ}},45^{{}^{\circ}},90^{{}^ {\circ}}]\) by using the DSCO for MBB beam. The total iteration of case(h) is 354 and the optimized compliance reaches a low value 170.58. The first step, DMO, stops when the convergence condition is met at the 50th iteration. The convergence rate, \(h_{0.95}\), reaches 0.95 at the end of DMO. The second step, SBPTO, lasts 246 iterations. The convergence rate, \(h_{0.95}\), reaches 0.999 at the end of SBPTO. The third step, CFAO, lasts 58 iterations. We can find that the unconvergent Figure 11: The optimized results of CFAO and DSCO for L-shape beam including fibre path Figure 12: the optimized fibre layout of case(h) elements in DMO mainly exists at the corner of the 'L' shape. ### Cantilever beam #### 3.3.1 single load case The third numerical example is a 2D cantilever structure. The boundary condition is shown in Fig. 14. The left edge is fixed and the middle point of the right edge is applied an external force \(F=1\). The convergence criterion, \(\varepsilon_{0}\), is set as \(10^{-2}\). The rectangular domain size is \(50\times 40\) and is meshed by 4-nodes regular quadrilateral elements. The material parameters with orthotropic properties are given as \(E_{x}=2\), \(E_{y}=1\), \(G_{xy}=0.25\) and \(\nu_{xy}=0.3\). Only one layer has been used. The desired volume fraction is 0.5. the minimum filter radius \(r_{min}\) is 1.5. The allocation of the initial fibre angle sets of CFAO and DSCO are set as Table 2. Figure 13: Histories of objective functions and convergence rate with the DSCO for L-beam The compliance and iteration number of case(a)-(j) are shown in Fig. 15. The cases(a)-(d) belong to CFAO and cases(e)-(j) belong to DSCO. DSCO's compliances are much more stable to the initial fibre angle settings than DSCO's. The compliance of case(h) with fibre angle setting \(\theta^{c}_{initial}=\)[0', \(-45\)', \(45\)', \(90\)'] is the minimum and the minimum compliance is \(13.24\). 
At the same time, the compliance of case(f) (\(13.25\)) is very close to the compliance of case(h) (\(13.24\)). In terms of the number of iterations, case(f) takes \(189\) while case(h) takes \(224\). Thus both initial fibre settings of case(f) and case(h) are good choices. Although we cannot prove that the optimized solution of case(h) is a globally optimal solution, we can conclude that the proposed method, DSCO, does reduce the risk of getting stuck in a local optimum. The optimized fibre layouts in **Fig. 16** show that the characteristics of the fibre continuity and of the optimized structures with different initial angle settings are similar to those mentioned above. It is worth mentioning that the orientation of the fibre is mostly parallel to the member direction.

Figure 14: The cantilever beam with single load.

Figure 15: The compliance and the iteration number of case(a)–(j).

Fig. 17 shows the iterative process of the compliance and convergence rate of cases(e)-(j). The iterative processes are all stable, and the convergence of each case reaches a very high level.

Fig. 16: The optimized fibre layout of CFAO and DSCO for Cantilever beam cases(a)-(j).

Fig. 18 shows the objective function and convergence rate histories obtained with the initial fibre configuration \(\theta^{e}_{initial}=[0^{\circ},-45^{\circ},45^{\circ},90^{\circ}]\) by using the DSCO for the cantilever beam. The total iteration is 224 and the optimized compliance reaches a low value of 13.24. The first step, DMO, stops when the convergence condition is met at the 29th iteration. The convergence rate, \(h_{0.95}\), reaches 0.962 at the end of DMO. The second step, SBPTO, lasts 143 iterations. The convergence rate, \(h_{0.95}\), reaches 0.9975 at the end of SBPTO.

#### 3.3.2 multi-load case

The fourth numerical example is a 2D cantilever structure with multiple loads.

Figure 17: The iterative histories for Cantilever beam cases(e)–(j) of DSCO

Figure 18: Histories of objective function and convergence rate of case(f) with the DSCO for Cantilever beam with single load.

The boundary condition is shown in Fig. 19. The external force \(F=1\). The convergence criterion, \(\varepsilon_{0}\), is set as \(10^{-2}\). The rectangular domain size is \(60\times 40\) and it is meshed by 4-node regular quadrilateral elements. The other parameters are the same as in the case of section 3.3.1. The compliance and the iteration number of cases(a)-(h) are shown in Fig. 20. The cases(a)-(d) are solved by CFAO while cases(e)-(h) use DSCO. The iteration number of CFAO is generally lower than that of DSCO, but the compliance of CFAO is particularly sensitive to the initial fibre angle setting. Except for case(d), the compliances of CFAO are much higher than those of DSCO. In contrast, the compliances of DSCO are much more stable with respect to the initial fibre angle settings. The compliance of case(f) with fibre angle setting \(\theta^{e}_{initial}=[0^{\circ},-45^{\circ},45^{\circ},90^{\circ}]\) is the minimum (36.91). In the cases(e)-(h) of the DSCO, the number of iterations increases significantly with the number of candidate angles. Although we cannot prove that the optimized solution of case(f) is a globally optimal solution, we can conclude that the proposed method, the DSCO, does reduce the risk of getting stuck in a local optimum.

Fig. 19: The cantilever beam with multi-load.

In the optimized fibre layouts of Fig. 21, the fibre continuity in most of the design domain is good.
The few elements with poor continuity are mainly concentrated around the nodes where loads or constraints are applied or some elements where there is a large variation in size. For the overall structure, different initial angle settings always result in very different structures. Figure 20: The compliance and the iteration number of case(e)–(f). Fig. 22 shows the objective function and convergence rate histories of case(f) with initial fibre configuration \(\theta^{e}_{initial}=[0^{\circ},-45^{\circ},45^{\circ},90^{\circ}]\) by using the DSCO for Cantilever beam. In the optimized structure figures, areas with different colours represent different fibre angles. The total iteration is 255 and the optimized compliance reaches a low value 36.91. The first step, DMO, stops when the convergence condition is met at the 44th iteration. The convergence rate, \(h_{0.95}\), reaches 0.9325 at the end of DMO. The second step, SBPTO, lasts 121 iterations. Figure 21: The optimized fibre layout of CFAO and DSCO for Cantilever beam with multi-load cases(a)-(h). ## 4 Conclusions This paper establishes a new framework for concurrent optimization of topology and fibre orientation. This method aims to solve the problem of fibre convergence and make the solution closer to the optimal solution. The main idea of this method is discrete-continuous fibre optimization. Firstly, DMO is utilized to select one fibre angle for each element from several predefined candidate angles. Secondly, since some elements cannot converge to obtain a definite angle in this process, the Sequential Binary-Phase Topology Optimization (SBPTO) is employed to improve fibre convergence rate. One specific angle orientation is treated as one material phase. The SBPTO decomposes the multiphase material topology optimization problem into a series of binary-phase material topology optimization problem. Eventually, to obtain good mechanical properties, the continuous variable of fibres orientation is designed to give more design space and use spatial filtering to make the fibre change smoothly as much as possible. The method proposed has been verified effectively by several examples. The results of numerical examples show that DSCO with \(\theta_{initial}^{c}=[0^{{}^{\circ}},-45^{{}^{\circ}},45^{{}^{\circ}},90^{{}^{ \circ}}]\) has the best optimization ability in all cases. The stable optimization ability and relatively suitable computational cost make \(\theta_{initial}^{c}=[0^{{}^{\circ}},-45^{{}^{\circ}},45^{{}^{\circ}},90^{{}^{ \circ}}]\) being selected as the set of predefined candidate angles. Although the optimization framework proposed in this paper can obtain fibre angle orientations with good smoothness, the design of fibre infill pattern [21] should be introduced in order to ensure good manufacturability. Therefore, the gap between the numerical calculation results and the actual manufacturing needs to be further studied and reduced. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Figure 22: Histories of objective function and convergence rate of case(f) ## Acknowledgements We acknowledge the support provided by the Project of the National Natural Science Foundation of China (11702090) and Peacock Program for Overseas High-Level Talents Introduction of Shenzhen City (KQTD20200820113110016).
2309.01308
Fermi Constraints on the Ejecta Speed and Prompt Emission Region of the Distant GRB 220101A
At redshift z = 4.618, GRB 220101A is the most distant gamma-ray burst (GRB) detected by Fermi/LAT to date. It is also a very energetic event, with an equivalent isotropic energy of $3.6\times10^{54}$ erg. We jointly analyzed the Fermi/GBM and LAT observations of GRB 220101A with two independent approaches and found a significant spectral break at sub-100 MeV energies during the prompt emission. The fast variability of the emission suggests that this spectral attenuation is caused by internal opacity to pair creation. Regardless of the nature of the emission processes assumed in the spectral analysis, we infer a moderate value for the jet Lorentz factor, $\Gamma\sim110$, and find that all of the high-energy emission was produced above and near the photosphere, at a distance of $\sim10^{14}$ cm from the central engine. We compare these results with the four other LAT-detected GRBs with similar properties.
Lorenzo Scotton, Frédéric Piron, Nicola Omodei, Niccolò Di Lalla, Elisabetta Bissaldi
2023-09-04T01:43:15Z
http://arxiv.org/abs/2309.01308v2
# Fermi Constraints on the Ejecta Speed and Prompt Emission Region of the Distant GRB 220101A ###### Abstract At redshift \(z=4.618\), GRB 220101A is the most distant gamma-ray burst (GRB) detected by Fermi/LAT to date. It is also a very energetic event, with an equivalent isotropic energy of \(3.6\times 10^{54}\) erg. We jointly analyzed the Fermi/GBM and LAT observations of GRB 220101A with two independent approaches and found a significant spectral break at sub-100 MeV energies during the prompt emission. The fast variability of the emission suggests that this spectral attenuation is caused by internal opacity to pair creation. Regardless of the nature of the emission processes assumed in the spectral analysis, we infer a moderate value for the jet Lorentz factor, \(\Gamma\sim 110\), and find that all of the high-energy emission was produced above and near the photosphere, at a distance of \(\sim\)10\({}^{14}\) cm from the central engine. We compare these results with the four other LAT-detected GRBs with similar properties. _Unified Astronomy Thesaurus concepts:_ Gamma-ray bursts (629); High energy astrophysics (739) + Footnote †: slugcomment: Received 2023 May 10; revised 2023 August 9; accepted 2023 August 27; published 2023 October 12 ## 1 Introduction Gamma-ray bursts (GRBs) are extragalactic and extremely energetic transient emissions of gamma rays. Their high luminosities suggest that the central engine of a GRB is a newborn stellar-mass black hole, which emits an ultrarelativistic collimated outflow (jet). At a typical distance from the central engine of \(R\sim 10^{11}\)-\(10^{12}\) cm, the jet becomes transparent to thermal radiation, which is free to travel and possibly observed as a thermal component of the GRB spectrum. At an intermediate distance of \(R\sim 10^{14}\)-\(10^{15}\) cm, still within the jet, either the kinetic energy carried by the jet dissipates via shocks or magnetic reconnection takes place. As a common result, charged particles are accelerated and emit highly variable synchrotron radiation. Both the thermal radiation, possibly reprocessed below the photosphere, and the nonthermal synchrotron radiation emitted at this intermediate region represent the prompt emission of the GRB. At larger radii, \(R\sim 10^{16}\)-\(10^{17}\) cm, the jet collides with the circumburst medium, and the generated external shock accelerates charged particles that emit synchrotron radiation in this so-called afterglow phase. The prompt GRB emission is a short phase of intense and highly variable emission in hard X-rays and gamma rays that lasts from fractions of seconds to hundreds of seconds, while the subsequent afterglow phase is a long-lasting (hours, days) and decaying emission from (very) high energies (GeV-TeV) down to radio frequencies. The first GRB catalog of the Burst and Transient Source Experiment on board the Compton Gamma Ray Observatory revealed a bimodality in the temporal and spectral distribution of GRBs (Kouveliotou et al., 1993); short GRBs have a duration of less than \(\sim\)2 s and are characterized by harder spectra, while long GRBs have a duration greater than \(\sim\)2 s and are typically softer. Short GRBs are believed to be produced by the merger of two neutron stars (Eichler et al., 1989; Narayan et al., 1992; Piran, 2004) or a neutron star and a stellar-mass black hole (Paczynski, 1991; Piran, 2004). 
On 2017 August 17, the direct association of the gravitational wave GW 170817 emitted by the merger of a binary neutron star system and the short GRB 170817A (Abbott et al., 2017) proved that binary neutron star mergers are the progenitors of at least some short GRBs. On the other hand, long GRBs are believed to be produced by the collapse of fast-rotating massive stars (\(>\)30 \(M_{\rm{Sun}}\), Collapsar model; Woosley, 1993; Piran, 2004), as suggested by the association of nearby long GRBs with core-collapsed supernovae of Types Ib/Ic (Galama et al., 1998; Bloom et al., 2002; Hjorth et al., 2003; Piran, 2004). In both scenarios, the merger of two compact objects or the collapse of a massive star result in the formation of a stellar-mass black hole, which acts as the central engine powering the jet. The variable high-energy emission of some bursts, such as GRB 090926A (Yassine et al., 2017), GRB 100724B, GRB 160509A (Vianello et al., 2018), and GRB 170405A (Arimoto et al., 2020), exhibits a cutoff at the high end of its spectrum, which has been interpreted as a flux attenuation caused by the opacity to pair creation. In these rare cases, the theoretical framework developed by Hascoet et al. (2012) and applied by Yassine et al. (2017) on GRB 090926A allows one to directly determine the bulk Lorentz factor \(\Gamma_{\rm{bulk}}\) of the relativistic outflow and to localize the region where the observed variable high-energy emission was produced. This theoretical model assumes that the observed radiation is emitted close to or above the photosphere, and it does not rely on the specific nature of the emission mechanism but rather only on the knowledge of the burst distance, its emission variability, its broadband spectrum, and the cutoff energy. The Fermi Gamma Ray Space Telescope is an observatory sensitive in the energy range from 10 keV to more than 300 GeV. It hosts two instruments: the Large Area Telescope (LAT; Atwood et al., 2009), which is an imaging, wide field-of-view (FOV), high-energy pair conversion telescope that covers the energy range from 20 MeV to more than 300 GeV, and the Gamma-ray Burst Monitor (GBM; Meegan et al., 2009), which comprises 12 sodium iodide (NaI) scintillation detectors and two bismuth germanate (BGO) detectors and covers the energy range from 8 keV to 40 MeV. The LAT standard analyses consider LAT data above 100 MeV and do not overlap with the energy range covered by the GBM, where the bulk of the GRB prompt emission is expected. Pelasa et al. (2010) proposed a nonstandard analysis technique to consider LAT data down to \(\sim\)20 MeV in order to fill this gap, thus providing useful data to better constrain the high-energy part of the GRB prompt spectra. These LAT low-energy (LLE) data are defined by less stringent cuts than LAT standard data, and they provide higher photon statistics above 100 MeV. In this work, we analyze the exceptionally bright and distant GRB 220101A during its prompt emission at high energy using Fermi data, and we provide a physical interpretation of the observed emission. We specify the LAT and GBM data observations of GRB 220101A in Section 2.1, and we present the broadband spectral analysis procedure and results in Sections 2.2 and 3. Finally, we propose the interpretation of our results in Section 4 and compare GRB 220101A with other similar LAT-detected bursts in Section 5. ## 2 Observations and Analysis Procedure ### Observations and Data Sets The long GRB 220101A was detected and observed in a broad multiwavelength range. 
The prompt emission has been observed from hard X-rays to high-energy gamma rays, and the afterglow has been detected from optical (de Ugarte Postigo et al., 2022; Hentunen et al., 2022; Perley, 2022) down to radio wavelengths up to few days after the event (Laskar, 2022). The first detection of GRB 220101A was provided by the BAT instrument on board the Neil Gehrels Swift Observatory (Gehrels et al., 2004) at 05:10:12 UT on 2022 January 1 (first BAT notice).4 This observatory also performed follow-up observations with XRT in the hard X-rays and UVOT in the visible domain (Tohuvavohu et al., 2022). Swift-UVOT localized GRB 220101A at R.A., decl. = \(1^{\circ}\)35340, \(31^{\circ}\)76903 with a 90% confidence error radius of \(0.\!\!^{\prime\prime}61\). Its photometric redshift was first measured by the Xinglong 2.16 m telescope at \(z=4.618\)(Fu et al., 2022) and later confirmed by the Liverpool Telescope (Perley, 2022) and the Nordic Optical Telescope (Fynbo et al., 2022). Footnote 4: [https://heasarc.gsfc.nasa.gov/wsgi-scripts/tach/gen_v2/tach.wsgi/](https://heasarc.gsfc.nasa.gov/wsgi-scripts/tach/gen_v2/tach.wsgi/) The Fermi/GBM triggered on GRB 220101A at \(T_{0}=05\):10:12 UT on 2022 January 1 (Lesage et al., 2022). The burst was also detected by Fermi/LAT at high energies (Arimoto et al., 2022) and occurred \(18^{\circ}\) from the LAT boresight at \(T_{0}\). The LAT on-ground localization of the event is R.A., decl. = \(1^{\circ}\)52, \(31^{\circ}\)75 with an error radius of \(0.\!\!^{\circ}46\), consistent with the Swift/XRT localization. The GBM data used in this work are the time-tagged events recorded by NaI detectors 3, 6, and 7, which observed the burst at an angle smaller than \(60^{\circ}\), and by BGO detector 1, which was closest to the direction of the event at \(T_{0}\). We also used the LAT standard P8R3_TRANSIENT020E_V2 data extracted from a region centered at the localization provided by the XRT with a \(12^{\circ}\) radius. Additionally, we used the LLE data to extend our analysis down to 20 MeV. Figure 1 shows the Fermi multidetector light curve of GRB 220101A during its prompt emission. The red dashed vertical line denotes the time of the trigger \(T_{0}\), and the black dashed lines define the four time bins A, B, C, and D that are used in the time-resolved spectral analysis. The duration of the prompt emission is estimated as \(T_{0}\) = (128 \(\pm\) 16) s (HEASARC GBM Burst Catalog).5 The main emission episode in the GBM energy range (8 keV-40 MeV) was observed in the time interval \(T_{0}+[65,\,134]\) s (time bins A-D), while the largest portion of LAT events is observed in the time interval \(T_{0}+[95,\,107]\) s (time bins B and C). The brightest emission episode around \(T_{0}+100\) s was jointly detected by the GBM detectors and the LAT. Interestingly, the high-energy flux is attenuated above \(\sim\)100 MeV during this episode. The highest-energy photon associated with the burst with a probability greater than 99% was detected at a later time (\(T_{0}+152\) s) with an energy of 927 MeV. In this work, we focus on the brightest emission episode around \(T_{0}+100\) s. The variability of this emission as seen in Figure 1 suggests that it has an internal origin in the jet. Consistently, Mei et al. (2022) interpreted this episode as prompt-dominated, with an afterglow appearing only after \(\sim\)118 s. 
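To make the selection above concrete, the following sketch applies the quoted region-of-interest and time cuts to a generic photon list. It is only an illustration: the column names and the toy event list are made up, and the actual selection was performed with the Fermi tools (gtburst/fermitools) rather than this code; only the GRB position, the \(12^{\circ}\) ROI radius, and the time bins are taken from the text.

```python
import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u

# Values quoted in the text: Swift localization and the 12 deg LAT ROI.
grb = SkyCoord(ra=1.35340 * u.deg, dec=31.76903 * u.deg)
roi_radius = 12.0 * u.deg
time_bins = {"B": (95.0, 100.0), "C": (100.0, 107.0)}   # seconds since T0

def select_events(ra, dec, t, energy, tbin, emin_mev=100.0):
    """Keep events inside the ROI and time bin, above emin_mev (MeV).

    ra, dec, t, energy are plain numpy arrays forming a hypothetical photon
    list; this only mimics the spatial/temporal cuts described in the text.
    """
    coords = SkyCoord(ra=ra * u.deg, dec=dec * u.deg)
    in_roi = coords.separation(grb) < roi_radius
    t0, t1 = time_bins[tbin]
    in_time = (t >= t0) & (t < t1)
    return in_roi & in_time & (energy >= emin_mev)

# Toy photon list, for illustration only.
rng = np.random.default_rng(0)
ra = rng.uniform(0.0, 10.0, 1000)
dec = rng.uniform(22.0, 40.0, 1000)
t = rng.uniform(0.0, 600.0, 1000)
energy = rng.uniform(20.0, 1000.0, 1000)   # MeV
mask = select_events(ra, dec, t, energy, "B")
print(mask.sum(), "events kept in time bin B")
```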
Footnote 5: [https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbst.html](https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbst.html) ### Analysis Procedure First, we performed an LAT-only standard analysis based on the unbinned likelihood method using fermitools.6 We employed the likelihood ratio test (LRT; Neyman & Pearson, 1928) to estimate the significance of the GRB detection with the LAT. In the null hypothesis, the background model is composed of the isotropic emission only, which is typically fitted as a power-law (PL) spectrum. The contribution to the background in the LAT from the galactic diffuse emission was neglected owing to the high latitude of the burst (\(\sim\)\(-30^{\circ}\)). A detection threshold TS\({}_{\rm GRB}\) \(>\) 20 was then used following the first LAT GRB catalog (Ackermann et al., 2013), which corresponds to a one-sided Gaussian probability of 4.1\(\sigma\). We also used the LRT to search for spectral attenuation at high energies using an exponential cutoff multiplicative model. The corresponding test statistic is defined as TS\({}_{\rm cont}\) = \(-2\ln[\mathcal{L}_{\rm max}(M_{0})/\mathcal{L}_{\rm max}(M_{\rm l})]\), where \(M_{0}\) is the spectral model in the null hypothesis, \(M_{\rm l}\) = \(M_{0}\times\exp(-E/E_{\rm cut})\) is the spectral model in the alternate hypothesis, and \(E_{\rm cut}\) is the cutoff energy. In the LAT-only standard analysis, \(M_{0}\) is a PL, and \(M_{1}\) is referred to as CUTPL. Since TS\({}_{\rm cut}\) follows a \(\chi^{2}\) with one degree of freedom (the additional \(E_{\rm cut}\) parameter) in the large sample limit (Wilks, 1938), we estimated the Gaussian significance of the additional cutoff as \(\alpha_{\rm cut}=\sqrt{\rm TS_{\rm cut}}\). Footnote 6: [https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/](https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/) Next, we performed a joint GBM-LAT spectral analysis. We used the gtburst software to bin the event data and produce a total count spectrum in each GBM detector. This tool was also used to create GBM background count spectra during the GRB on-source interval by fitting polynomial functions in each energy channel of two off-source intervals and extrapolating them to the GRB interval. In addition, we used fermitools to create the LAT count spectra from the best-fit model obtained in the LAT-only standard analysis. We jointly analyzed the GBM and LAT count spectra with the pyXSPEC fitting software7(Arnaud, 1996). In order to check the stability of our results, we also performed a joint spectral analysis using the "Multi-Mission Maximum Likelihood" (threeML) software8(Vianello et al., 2015), which allows us to combine the native likelihoods of different instruments simultaneously. In the current analysis, threeML offered the full accuracy of the LAT unbinned likelihood technique, which is lost during the binning in space and energy that is required by pyXSPEC. We considered the following spectral models, which are differential photon energy spectra in units of \(\rm{cm}^{-2}\,s^{-1}\,keV^{-1}\). Figure 1: Fermi multidetector light curve of GRB 220101A prompt emission in increasing energy bands from top to bottom. The first four panels present count rates, while the last panel presents the energy of the LAT-observed events. 
The red dashed vertical line denotes the time of the trigger \(T_{0}\), while the black dashed vertical lines indicate the time intervals chosen for the time-resolved spectral analysis, covering the main emission episode observed by the LAT. 1. Band (four parameters). Introduced by Band et al. (1993), it reads \[f_{\rm Band}\left(E\right)=A\times\begin{cases}\left(\frac{E}{E_{\rm piv}}\right)^{\alpha}\exp\left[-\frac{E(2+\alpha)}{E_{\rm p}}\right],&E<E_{b}\\ \left(\frac{E}{E_{\rm piv}}\right)^{\beta}\exp\left(\beta-\alpha\right)\left[\frac{(\alpha-\beta)E_{\rm p}}{(2+\alpha)E_{\rm piv}}\right]^{\alpha-\beta},&E\geq E_{b}\end{cases} \tag{1}\] where \(\alpha\) is the low-energy spectral index, \(\beta\) is the high-energy spectral index, \(E_{\rm p}\) is the peak energy of the spectral energy distribution (SED), \(E_{b}=(\alpha-\beta)E_{\rm p}/(2+\alpha)\) is the break energy, and \(E_{\rm piv}\) is the reference energy fixed to 100 keV. 2. Internal shock synchrotron model (ISSM; four parameters). It was introduced by Yassine et al. (2020) and further investigated by L. Scotton et al. (2023, in preparation) as a proxy function of the GRB internal shock model developed by Bosnjak & Daigne (2014), \[f_{\rm ISSM}(E)=\frac{A}{\left[1-\frac{E_{\rm p}}{E_{r}}\left(\frac{2+\beta}{2+\alpha}\right)\right]^{\beta-\alpha}}\times\left(\frac{E}{E_{r}}\right)^{\alpha}\left[\frac{E}{E_{r}}-\frac{E_{\rm p}}{E_{r}}\left(\frac{2+\beta}{2+\alpha}\right)\right]^{\beta-\alpha}, \tag{2}\] where \(\alpha\), \(\beta\), and \(E_{p}\) have the same meaning as in the Band model, and \(E_{r}\) is the reference energy fixed to 10 keV. We implemented both functions as local models in pyXSPEC and threeML. We also considered the models obtained by multiplying these functions by an exponential cutoff at high energies (\(\propto e^{-E/E_{\rm cut}}\)). We called the resulting models BandExpCut and ISSMExpCut, respectively. Similar to the LAT-only standard analysis, we estimated the spectral cutoff significance as \(\sigma_{\rm cut}=\sqrt{\rm TS_{\rm cut}}\), using Band or ISSM for the \(M_{0}\) spectral model in the null hypothesis and BandExpCut or ISSMExpCut for the \(M_{1}\) spectral model in the alternate hypothesis.

## 3 Broadband Spectral Analysis Results

### High-energy Spectral Evolution

We analyzed LAT standard data at energies greater than 100 MeV to search for a GRB detection and test whether a spectral cutoff is statistically required. We considered \(T_{0}+[0,600]\) s as the time interval in which the burst position was in the LAT FOV. In particular, we focused on the main emission interval \(T_{0}+[65,\,134]\) s and the time intervals \(T_{0}+[0,\,65]\), [134, 300], and [300, 600] s. Table 1 presents the analysis results. High-energy emission from the point source is detected (\(\rm TS>20\)) over the whole time interval \(T_{0}+[0,\,600]\) s and, more specifically, in the main emission episode \(T_{0}+[65,\,134]\) s and in \(T_{0}+[134,\,300]\) s. No high-energy emission is detected before \(T_{0}+65\) s and after \(T_{0}+300\) s, or in the time window when the burst reentered the LAT FOV, i.e., \(4500\) s \(<T-T_{0}<6000\) s. In the main emission interval \(T_{0}+[65,\,134]\) s, its spectral index is very steep and significantly softer than \(-3\). This is consistent with the depleted flux seen at \(\sim T_{0}+100\) s in Figure 1. However, no cutoff is required by the data in any time interval.
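For reference, the two nonthermal photon models introduced in Section 2.2 are simple enough to transcribe directly. The sketch below implements the Band function of Eq. (1) and the multiplicative exponential cutoff used to build BandExpCut (ISSMExpCut is obtained in the same way from Eq. (2)). It is a plain Python illustration, not the pyXSPEC/threeML local-model code used for the fits, and the example parameter values are merely of the order of those reported in the tables below.

```python
import numpy as np

E_PIV = 100.0  # keV, reference (pivot) energy of the Band function

def band(E, A, alpha, beta, Ep):
    """Band photon spectrum of Eq. (1) in ph cm^-2 s^-1 keV^-1; E in keV."""
    E = np.asarray(E, dtype=float)
    Eb = (alpha - beta) * Ep / (2.0 + alpha)          # break energy
    low = A * (E / E_PIV) ** alpha * np.exp(-E * (2.0 + alpha) / Ep)
    high = (A * (E / E_PIV) ** beta * np.exp(beta - alpha)
            * ((alpha - beta) * Ep / ((2.0 + alpha) * E_PIV)) ** (alpha - beta))
    return np.where(E < Eb, low, high)

def with_exp_cutoff(model, E, *pars, E_cut):
    """Multiply any photon model by exp(-E/E_cut), as in BandExpCut/ISSMExpCut."""
    return model(E, *pars) * np.exp(-np.asarray(E, dtype=float) / E_cut)

# Example with arbitrary parameters of the same order as those in the tables.
E = np.logspace(1, 5, 5)                              # 10 keV ... 100 MeV
print(band(E, 2.6e-2, -0.85, -2.3, 400.0))
print(with_exp_cutoff(band, E, 2.6e-2, -0.85, -2.3, 400.0, E_cut=4.2e4))  # 42 MeV
```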
We thus increased the spectral coverage and sensitivity to a possible cutoff by including LAT data down to 30 MeV, ignoring the energy dispersion effects that are not implemented in the unbinned likelihood analysis. As expected, the PL index was better constrained, but no spectral cutoff was detected. ### Time-resolved Prompt Emission Spectra Since no spectral cutoff was detected in the LAT-only spectral analysis, we extended the energy lever arm to lower energy by including GBM data in the fits. Table 2 shows the best-fit parameters, fit statistics, and significance of the additional spectral cutoff of BandExpCut on GBM\(+\)LAT data in the four time intervals A, B, C, and D. The \(E_{\rm cut}\) is significantly detected in time bins B and C at \(26\!\pm\!13\) and \(45\pm 13\) MeV, respectively. Moreover, we considered LLE data down to 20 MeV instead of the LAT standard data to properly \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \(T-T_{0}\) (s) & Range (MeV) & \multicolumn{2}{c}{PL} & \multicolumn{3}{c}{CUPPL} \\ \cline{3-8} & & Index & TS & Index & \(E_{\rm cut}\) (MeV) & TS & \(\sigma_{\rm cut}\) \\ \hline 0–600 & \(>\)100 & \(-2.48\pm 0.23\) & **104.1** & \(-1.97\pm 0.58\) & \(939\pm 1129\) & **105.3** & 1.1 \\ & \(>\)30 & \(-2.93\pm 0.13\) & **170.0** & \(-2.93\pm 0.13\) & \((2.9\pm 7.9)\times 10^{5}\) & **170.0** & 0 \\ \hline 0–65 & \(>\)100 & \(-2.33\pm 0.74\) & 10.6 & \(-1.01\pm 0.37\) & \(321\pm 321\) & 1.1 & 0.7 \\ & \(>\)30 & \(-1.73\pm 0.39\) & 12.7 & \(-1.00\pm 0.02\) & \(458\pm 434\) & 14.6 & 1.4 \\ 65–134 & \(>\)100 & \(-3.41\pm 0.52\) & **45.7** & \(-2.97\pm 1.32\) & \(607\pm 1816\) & **45.8** & 0.3 \\ & \(>\)30 & \(-3.48\pm 0.17\) & **129.1** & \(-3.45\pm 0.28\) & \(3167\pm 21570\) & **129.1** & 0 \\ \hline 134–300 & \(>\)100 & \(-2.18\pm 0.31\) & **47.3** & \(-1.00\pm 0.08\) & \(427\pm 193\) & **50.0** & 1.6 \\ & \(>\)30 & \(-1.98\pm 0.21\) & **56.8** & \(-1.0\pm 2.3\) & \(439\pm 1568\) & **60.7** & 2.0 \\ \hline 300–600 & \(>\)100 & \(-1.81\pm 0.51\) & 11.1 & \(-1.00\pm 0.08\) & \(945\pm 931\) & 12.1 & 1.0 \\ & \(>\)30 & \(-1.76\pm 0.50\) & 9.9 & \(-1.00\pm 0.01\) & \(1045\pm 1143\) & 10.7 & 0.9 \\ \hline \hline \end{tabular} 1 \end{table} Table 1Results of the LAT-only Spectral Analysis of PL and CUTPL in Different Time Windows account for the energy dispersion and benefit from the greater photon statistics. Table 3 shows the results of the Band fits with and without the spectral cutoff to time bins B, C, and \(\rm B+C\). We further checked that the results do not depend strongly on the specific choice of the Band model and also used the ISSM model to describe the nonthermal spectrum. Table 4 shows the corresponding results for ISSM, and Table 5 summarizes the overall results. 
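As a cross-check, the quoted significances can be recovered directly from the fit statistics reported in the tables, since \(\sigma_{\rm cut}=\sqrt{\rm TS_{\rm cut}}\): for the pyXSPEC fits, TS\({}_{\rm cut}\) equals the decrease in PGSTAT when the cutoff is added, while for the threeML fits it equals twice the decrease in \(-\log\mathcal{L}\). The small helper below is ours, not part of either package; it reproduces, for example, the Band GBM\(+\)LLE values.

```python
import math

def sigma_cut_from_stat(stat_null, stat_alt, is_neg_loglike=False):
    """Gaussian significance of the extra cutoff parameter, sigma_cut = sqrt(TS_cut).

    stat_null / stat_alt: fit statistic without / with the cutoff.
    If the statistic is -log(L) (threeML), TS = 2 * (stat_null - stat_alt);
    if it is a chi2-like statistic (PGSTAT in pyXSPEC), TS = stat_null - stat_alt.
    """
    ts = (2.0 if is_neg_loglike else 1.0) * (stat_null - stat_alt)
    return math.sqrt(max(ts, 0.0))

# pyXSPEC Band fits to GBM+LLE data (PGSTAT), time bins B, C and B+C:
for label, (s0, s1) in {"B": (609, 570), "C": (536, 510), "B+C": (672, 612)}.items():
    print(label, round(sigma_cut_from_stat(s0, s1), 1))          # ~6.2, 5.1, 7.7

# threeML Band fit to GBM+LLE data (-log L), time bin B: 2129 -> 2112
print(round(sigma_cut_from_stat(2129, 2112, is_neg_loglike=True), 1))   # ~5.8
```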
A spectral cutoff is detected in time bins B and \(\rm B+C\) for both BandExpCut and ISSMExpCut, while it is detected in \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Parameter & \multicolumn{2}{c}{B: \(T_{0}+[95, 100]\) s} & \multicolumn{2}{c}{C: \(T_{0}+[100, 107]\) s} & \multicolumn{2}{c}{B \(+\) C: \(T_{0}+[95, 107]\) s} \\ \hline & \multicolumn{2}{c}{ISSMExpCut} & \multicolumn{2}{c}{ISSM} & \multicolumn{2}{c}{ISSMExpCut} & \multicolumn{2}{c}{ISSM} & \multicolumn{2}{c}{ISSMExpCut} \\ \(\alpha\) & \(-0.77\pm 0.06\) & \(-0.66\pm 0.09\) & \(-0.74\pm 0.07\) & \(-0.67\pm 0.10\) & \(-0.75\pm 0.05\) & \(-0.67\pm 0.09\) \\ \(\beta\) & \(-2.52\pm 0.06\) & \(-2.17\pm 0.05\) & \(-2.50\pm 0.05\) & \(-2.28\pm 0.05\) & \(-2.50\pm 0.03\) & \(-2.24\pm 0.07\) \\ \(E_{p}\) [keV] & 701 \(\pm\) 60 & 1263 \(\pm\) 332 & 776 \(\pm\) 69 & 996 \(\pm\) 130 & 751 \(\pm\) 46 & 1066 \(\pm\) 236 \\ \(E_{\rm cut}\) [MeV] &... & **41 \(\pm\) 10** &... & 88 \(\pm\) 27 &... & **64 \(\pm\) 22** \\ \hline Norm. (10\({}^{-2}\)) & 18 \(\pm\) 1 & 17 \(\pm\) 1 & 17 \(\pm\) 1 & 16 \(\pm\) 2 & 17.2 \(\pm\) 0.9 & 16 \(\pm\) 1 \\ PGSTAR/dof & 605/519 & 582/518 & 530/519 & 517/518 & 661/519 & 629/518 \\ \(\sigma_{\rm cut}\) &... & **4.8** &... & 3.6 &... & **5.7** \\ \hline \end{tabular} 1 \end{table} Table 2pyXSPEC Spectral Fits of Band with and without a Cutoff on GBM\(+\)LAT Data in Time Intervals A, B, C, and D \begin{table} \begin{tabular}{l c c c c} \hline \hline Parameter & \multicolumn{2}{c}{B: \(T_{0}+[95, 100]\) s} & \multicolumn{2}{c}{C: \(T_{0}+[100, 107]\) s} & \multicolumn{2}{c}{B: \(T_{0}+[95, 100]\) s} \\ \hline & Band & BandExpCut & Band & BandExpCut \\ \(\alpha\) & \(-0.77\pm 0.03\) & \(-0.73\pm 0.05\) & \(-0.85\pm 0.05\) & \(-0.82\pm 0.06\) \\ \(\beta\) & \(-2.71\pm 0.06\) & \(-2.21\pm 0.33\) & \(-2.46\pm 0.05\) & \(-1.9\pm 0.2\) \\ \(E_{p}\) [keV] & 248 \(\pm\) 12 & 230 \(\pm\) 23 & 384 \(\pm\) 39 & 339 \(\pm\) 46 \\ \(E_{\rm cut}\) [MeV] &... & 24 \(\pm\) 23 &... & **26 \(\pm\) 13** \\ Norm. (10\({}^{-2}\)) & 2.09 \(\pm\) 0.10 & 2.22 \(\pm\) 0.19 & 2.62 \(\pm\) 0.17 & 2.76 \(\pm\) 0.22 \\ PGSTAR/dof & 638/524 & 627/523 & 593/524 & 569/523 \\ \(\sigma_{\rm cut}\) &... & 3.3 &... & **4.9** \\ \hline \end{tabular} 1 \end{table} Table 2pyXSPEC Spectral Fits of Band with and without a Cutoff on GBM\(+\)LAT Data in Time Intervals A, B, C, and D \begin{table} \begin{tabular}{l c c c c} \hline \hline Parameter & \multicolumn{2}{c}{B: \(T_{0}+[95, 100]\) s} & \multicolumn{2}{c}{C: \(T_{0}+[100, 107]\) s} & \multicolumn{2}{c}{B \(+\) C: \(T_{0}+[95, 107]\) s} \\ \hline & \multicolumn{2}{c}{Band} & BandExpCut & Band & BandExpCut \\ \(\alpha\) & \(-0.85\pm 0.05\) & \(-0.82\pm 0.05\) & \(-0.85\pm 0.04\) & \(-0.82\pm 0.04\) & \(-0.85\pm 0.03\) & \(-0.82\pm 0.03\) \\ \(\beta\) & \(-2.32\pm 0.04\) & \(-1.87\pm 0.11\) & \(-2.31\pm 0.03\) & \(-2.09\pm 0.05\) & \(-2.31\pm 0.02\) & \(-2.01\pm 0.06\) \\ \(E_{p}\) [keV] & 387 \(\pm\) 41 & 337 \(\pm\) 39 & 437 \(\pm\) 39 & 397 \(\pm\) 35 & 416 \(\pm\) 29 & 373 \(\pm\) 26 \\ \(E_{\rm cut}\) [MeV] &... & **23 \(\pm\) 8** &... & **69 \(\pm\) 20** &... & **42 \(\pm\) 11** \\ Norm. (10\({}^{-2}\)) & 2.60 \(\pm\) 0.17 & 2.8 \(\pm\) 0.2 & 2.59 \(\pm\) 0.13 & 2.70 \(\pm\) 0.14 & 2.59 \(\pm\) 0.10 & 2.73 \(\pm\) 0.12 \\ PGSTAR/dof & 609/519 & 570/518 & 536/519 & 510/518 & 672/519 & 612/518 \\ \(\sigma_{\rm cut}\) &... & **6.2** &... & **5.1** &... 
& **7.7** \\ \hline \end{tabular} 1 \end{table} Table 3pyXSPEC Spectral Fits of Band with and without the Cutoff on GBM\(+\)LLE Data in Time Intervals B, C, and B \(+\) C \begin{table} \begin{tabular}{l c c c c c} \hline \hline Parameter & \multicolumn{2}{c}{B: \(T_{0}+[95, 100]\) s} & \multicolumn{2}{c}{C: \(T_{0}+[100, 107]\) s} & \multicolumn{2}{c}{B \(+\) C: \(T_{0}+[95, 107]\) s} \\ \hline & \multicolumn{2}{c}{ISSM} & \multicolumn{2}{c}{ISSMExpCut} & \multicolumn{2}{c}{ISSMExpCut} & \multicolumn{2}{c}{ISSM} & \multicolumn{2}{c}{ISSMExpCut} \\ \(\alpha\) & \(-0.77\pm 0.06\) & \(-0.66\pm 0.09\) & \(-0.74\pm 0.07\) & \(-0.67\pm 0.10\) & \(-0.75\pm 0.05\) & \(-0.67\pm 0.09\) \\ \(\beta\) & \(-2.52\pm 0.06\) & \(-2.17\pm 0.05\) & \(-2.50\pm 0.05\) & \(-2.28\pm 0.05\) & \(-2.50\pm 0.03\) & \(-2.24\pm 0.07\) \\ \(E_{p}\) [keV] & 701 \(\pm\) 60 time bin C only when considering BandExpCut. We note that the significance of the spectral cutoff is systematically smaller when employing ISSMExpCut; the continuous curvature of ISSM, which reflects the natural shape of GRB synchrotron spectra, accounts for part of the softening of the spectra at high energies and thus reduces the significance of the additional cutoff. Figure 2 shows the GRB count spectra and residuals (upper panels) and SEDs (lower panels) when fitting Band (left panels) and BandExpCut (right panels) in time bin B \(+\) C. Figure 3 shows the same quantities for ISSM and ISSMExpCut. The residuals in the LLE energy range improve when adding the high-energy spectral cutoff to both Band and ISSM, and this is consistent with the significant detection of the spectral cutoff. In order to assess possible systematic effects due to the specific software used for the spectral data preparation and fit, we performed the same analysis within the framework of threeML. Tables 6 and 7 show the threeML spectral results when fitting Band and ISSM, respectively, with and without the spectral cutoff to GBM\(+\)LLE data. The spectral results are fully consistent between the two different approaches; the high-energy cutoff is required in time bins B (5.8\(\sigma\)), C (4.7\(\sigma\)), and B \(+\) C (7.1\(\sigma\)) when fitting BandExpCut, while it is required in time bins B (4.7\(\sigma\)), C (3.2\(\sigma\)), and B \(+\) C (5.3\(\sigma\)) when fitting ISSMExpCut. The results from pyXSPEC and threeML are in excellent agreement, confirming the cutoff detection already found with pyXSPEC. As mentioned in Section 2.2, threeML makes use of the native LAT likelihood; therefore, we performed the same fits on GBM\(+\)LLE\(+\)LAT data, limiting the LLE data below 100 MeV and considering the LAT standard data above 100 MeV. The corresponding results are reported in Tables 8 \begin{table} \begin{tabular}{c c c c c} \hline & & B: \(T_{0}+\) [95, 100] s & C: \(T_{0}+\) [100, 107] s & B \(+\) C: \(T_{0}+\) [95, 107] s \\ \hline BandExpCut & \(E_{\rm cut}\) [MeV] & 23 \(\pm\) 8 & 69 \(\pm\) 20 & 42 \(\pm\) 11 \\ & \(\sigma_{\rm cut}\) & 6.2 & 5.1 & 7.7 \\ ISSMExpCut & \(E_{\rm cut}\) [MeV] & 41 \(\pm\) 10 & 88 \(\pm\) 27 & 64 \(\pm\) 22 \\ & \(\sigma_{\rm cut}\) & 4.8 & 3.6 & 5.7 \\ \hline \end{tabular} \end{table} Table 5Summary of the pyXSPEC Spectral Analysis Figure 2.— Left: GRB count spectra and residuals (upper panel) and SED (lower panel) from Band fits to GBM\(+\)LLE data in time bin B \(+\) C with pyXSPEC. Right: same for BandExpCut. and 9. 
The likelihoods of the Band and ISSM fits are remarkably similar within the analyses of both GBM\(+\)LLE and GBM\(+\)LLE\(+\)LAT data. Table 10 resumes the results of the threeML spectral analysis: a spectral cutoff is detected at \(32\pm 9\) (6.2\(\sigma\)), \(67\pm 16\) (5.8\(\sigma\)), and \(51\pm 10\) (8.4\(\sigma\)) MeV when fitting BandExpCut in time bins B, C, and B + C, respectively. A high-energy cut is detected at \(41\pm 13\) (4.9\(\sigma\)), \(88\pm 28\) (4.5\(\sigma\)), and \(66\pm 15\) (6.3\(\sigma\)) MeV when fitting ISSMExpCut in the same time bins. As already observed, the significances of the spectral cutoffs in the case of ISSMExpCut are smaller due to the continuous curvature of ISSM. Moreover, the cutoff significance is larger than in the previous GBM\(+\)LLE data analysis, especially in time bins C and B + C. This can be explained by the better sensitivity of the native LAT likelihood, which manifests particularly in these time bins where the spectral cutoff is close to 100 MeV, the lower bound of LAT standard data. The fits of BandExpCut and ISSMExpCut yield similar results within the errors but with different fitted spectral cutoff values. We used these differences to assess the systematic \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Parameter & \multicolumn{2}{c}{B: \(T_{0}+\) [95, 100] s} & \multicolumn{2}{c}{C: \(T_{0}+\) [100, 107] s} & \multicolumn{2}{c}{B + C: \(T_{0}+\) [95, 107] s} \\ \hline & Band & BandExpCut & Band & BandExpCut & Band & BandExpCut \\ \(\alpha\) & \(-0.84\pm 0.05\) & \(-0.80\pm 0.06\) & \(-0.85\pm 0.04\) & \(-0.82\pm 0.04\) & \(-0.84\pm 0.03\) & \(-0.81\pm 0.04\) \\ \(\beta\) & \(-2.31\pm 0.04\) & \(-1.88\pm 0.08\) & \(-2.29\pm 0.03\) & \(-2.09\pm 0.06\) & \(-2.30\pm 0.02\) & \(-2.01\pm 0.05\) \\ \(E_{\rm F}\) [keV] & \(370\pm 40\) & \(320\pm 40\) & \(420\pm 40\) & \(390\pm 40\) & \(401\pm 27\) & \(359\pm 26\) \\ \(E_{\rm F}\) [MeV] &... & \(\mathbf{23\pm 6}\) &... & \(\mathbf{71\pm 25}\) &... & \(\mathbf{43\pm 11}\) \\ Norm. (\(10^{-2}\)) & 2.60 \(\pm\) 0.17 & 2.77 \(\pm\) 0.22 & 2.60 \(\pm\) 0.13 & 2.71 \(\pm\) 0.15 & 2.60 \(\pm\) 0.10 & 2.74 \(\pm\) 0.13 \\ \(-\)log(\(\mathcal{L}\)) & 2129 & 2112 & 2352 & 2341 & 2792 & 2767 \\ \(\sigma_{\rm cut}\) &... & \(\mathbf{5.8}\) &... & \(\mathbf{4.7}\) &... & \(\mathbf{7.1}\) \\ \hline \end{tabular} 1 \end{table} Table 6threeML Spectral Fits of Band with and without the Cutoff on GBM\(+\)LLE Data in Time Intervals B, C, and B + C Figure 3.— Left: GRB count spectra and residuals (upper panel) and SED (lower panel) from ISSM fits to GBM\(+\)LLE data in time bin B + C with pyxSPEC. Right: same for ISSMExpCut. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Parameter & \multicolumn{2}{c}{B: \(T_{0}+[95, 100]\) s} & \multicolumn{2}{c}{C: \(T_{0}+[100, 107]\) s} & \multicolumn{2}{c}{B \(+\) C: \(T_{0}+[95, 107]\) s} \\ \hline & \multicolumn{2}{c}{B: \(T_{0}+[95, 100]\) s} & \multicolumn{2}{c}{C: \(T_{0}+[100, 107]\) s} & \multicolumn{2}{c}{B \(+\) C: \(T_{0}+[95, 107]\) s} \\ \hline & \multicolumn{2}{c}{BAM} & \multicolumn{2}{c}{BAMExpCut} & \multicolumn{2}{c}{BAM} & \multicolumn{2}{c}{BAMExpCut} & \multicolumn{2}{c}{BAM} & \multicolumn{2}{c}{BAMExpCut} \\ \(\alpha\) & \(-0.73\pm 0.08\) & \(-0.61\pm 0.12\) & \(-0.71\pm 0.07\) & \(-0.65\pm 0.08\) & \(-0.72\pm 0.05\) & \(-0.64\pm 0.07\) \\ \(\beta\) & \(-2.50\pm 0.06\) & \(-2.10\pm 0.09\) & \(-2.47\pm 0.04\) & \(-2.28\pm 0.07\) & \(-2.48\pm 0.04\) & \(-2.21\pm 0.07\) \\ \(E_{p}\) [keV] & \(680\pm 70\) & \(1900\pm 1400\) & \(760\pm 60\) & \(950\pm 140\) & \(730\pm 50\) & \(1100\pm 230\) \\ \(E_{\rm out}\) [MeV] &... & \(\mathbf{34\pm 11}\) &... & \(100\pm 40\) &... & \(\mathbf{61\pm 19}\) \\ Norm. (\(10^{-2}\)) & \(1.87\pm 0.04\) & \(1.87\pm 0.04\) & \(1.98\pm 0.04\) & \(1.99\pm 0.04\) & \(1.94\pm 0.03\) & \(1.94\pm 0.03\) \\ \(-\)log(\(\mathcal{L}\)) & \(2128\) & \(2117\) & \(2349\) & \(2344\) & \(2788\) & \(2774\) \\ \(\sigma_{\rm out}\) &... & \(\mathbf{4.7}\) &... & \(3.2\) &... & \(\mathbf{5.3}\) \\ \hline \end{tabular} 1 \end{table} Table 7threeML Spectral Fits of ISSM with and without the Cutoff on GBM\(+\)LLE Data in Time Intervals B, C, and B + C \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Parameter & \multicolumn{2}{c}{B: \(T_{0}+[95, 100]\) s} & \multicolumn{2}{c}{C: \(T_{0}+[100, 107]\) s} & \multicolumn{2}{c}{B \(+\) C: \(T_{0}+[95, 107]\) s} \\ \hline & \multicolumn{2}{c}{BAM} & \multicolumn{2}{c}{BAMExpCut} & \multicolumn{2}{c}{BAM} & \multicolumn{2}{c}{BAMExpCut} & \multicolumn{2}{c}{BAM} \\ \(\alpha\) & \(-0.73\pm 0.08\) & \(-0.62\pm 0.12\) & \(-0.73\pm 0.06\) & \(-0.64\pm 0.08\) & \(-0.74\pm 0.05\) & \(-0.64\pm 0.07\) \\ \(\beta\) & \(-2.53\pm 0.06\) & \(-2.12\pm 0.09\) & \(-2.51\pm 0.04\) & \(-2.27\pm 0.07\) & \(-2.52\pm 0.04\) & \(-2.22\pm 0.05\) \\ \(E_{p}\) [keV] & \(670\pm 70\) & \(1500\pm 900\) & \(750\pm 60\) & \(980\pm 140\) & \(720\pm 50\) & \(1060\pm 160\) \\ \(E_{\rm out}\) [MeV] &... & \(\mathbf{41\pm 13}\) &... & \(\mathbf{88\pm 28}\) &... & \(\mathbf{66\pm 15}\) \\ Norm. (\(10^{-3}\)) & \(1.87\pm 0.04\) & \(1.87\pm 0.04\) & \(1.98\pm 0.04\) & \(1.99\pm 0.04\) & \(1.93\pm 0.03\) & \(1.94\pm 0.03\) \\ \(-\)log(\(\mathcal{L}\)) & \(2147\) & \(2135\) & \(2383\) & \(2373\) & \(2832\) & \(2812\) \\ \(\sigma_{\rm out}\) &... & \(\mathbf{4.9}\) &... & \(\mathbf{5.8}\) &... 
& \(\mathbf{6.3}\) \\ \hline \end{tabular} 1 \end{table} Table 8threeML Spectral Fits of Band with and without the Cutoff on GBM\(+\)LLE\(+\)LAT Data in Time Intervals B, C, and B + C \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Parameter & \multicolumn{2}{c}{B: \(T_{0}+[95, 100]\) s} & \multicolumn{2}{c}{C: \(T_{0}+[100, 107]\) s} & \multicolumn{2}{c}{B \(+\) C: \(T_{0}+[95, 107]\) s} \\ \hline & \multicolumn{2}{c}{BAM} & \multicolumn{2}{c}{BAM} & \multicolumn{2}{c}{BAM} \\ \(\alpha\) & \(-0.75\pm 0.08\) & \(-0.62\pm 0.12\) & \(-0.73\pm 0.06\) & \(-0.64\pm 0.08\) & \(-0.74\pm 0.05\) & \(-0.64\pm 0.07\) \\ \(\beta\) & \(-2.53\pm 0.06\) & \(-2.12\pm 0.09\) & \(-2.51\pm 0.04\) & \(-2.27\pm 0.07\) & \(-2.52\pm 0.04\) & \(-2.22\pm 0.05\) \\ \(E_{p}\) [keV] & \(670\pm 70\) & \(1500\pm 900\) & \(750\pm 60\) & \(980\pm 140\) & \(720\pm 50\) & \(1060\pm 160\) \\ \(E_{\rm out}\) [MeV] &... & \(\mathbf{41\pm 13}\) &... & \(\mathbf{88\pm 28}\) &... & \(\mathbf{66\pm 15}\) \\ Norm. (\(10^{-3}\)) & \(1.87\pm 0.04\) & \(1.87\pm 0.04\) & \(1.98\pm 0.04\) & \(1.99\pm 0.04\) & \(1.93\pm 0.03\) & \(1.94\pm 0.03\) \\ \(-\)log(\(\mathcal{L}\)) & \(2147\) & \(2135\) & \(2383\) & \(2373\) & \(2832\) & \(2812\) \\ \(\sigma_{\rm out}\) &... & \(\mathbf{4.9}\) &... & \(\mathbf{5.8}\) &... & \(\mathbf{6.3}\) \\ \hline \end{tabular} 1 \end{table} Table 9threeML Spectral Fits of ISSM with and without the Cutoff on GBM\(+\)LLE\(+\)LAT Data in Time Intervals B, C, and B + C \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Parameter & \multicolumn{2}{c}{B: \(T_{0}+[95, 100]\) s} & \multicolumn{2}{c}{C uncertainty in our analysis. Specifically, we considered solid detections (\(\sigma_{\rm cut}>4\sigma\)), and we discarded the pyXSPEC ISSM fits that are slightly worse than the Band fits. We chose as final values the spectral cutoff energies obtained with the pyXSPEC fit of BandExpCut on GBM+LLE data, and we estimated the systematics from the absolute variations of the other analyses around these results, ignoring the statistical errors. 1. For time bin B, we consider the result of the pyXSPEC analysis \(E_{\rm cut,Band}=23\pm 8\) MeV in the opacity computation (see Section 4.2). The lowest value for such a cutoff is 23 MeV when fitting BandExpCut to GBM+LLE data with both pyXSPEC and threeML. The highest value is 41 MeV when fitting ISSMExpCut to GBM+LLE+LAT data. We thus estimate the cutoff value as \(E_{\rm cut}=23\pm 8\) (stat ) +18\(/-\)0 (syst). 2. For time bin C, we consider \(E_{\rm cut,Band}=69\pm 20\) MeV. The lowest value of \(E_{\rm cut}\) is 67 MeV in the fit of BandExpCut to GBM+LLE+LAT data, and the highest value is 88 MeV in the fit of ISSMExpCut to the same data. We thus estimate the cutoff value as \(E_{\rm cut}=69\pm 20\) (stat) +19\(/-\)2 (syst). 3. In time bin B + C, \(E_{\rm cut,Band}=42\pm 11\) MeV. The lowest and highest values of \(E_{\rm cut}\) are 42 MeV in the pyXSPEC fit of BandExpCut and 66 MeV in the threeML fit of ISSMExpCut to GBM+LLE+LAT data. We thus estimate the cutoff value as \(E_{\rm cut}=42\pm 11\) (stat) +24\(/-\)0 (syst). ## 4 Interpretation The temporal variability observed at high energy suggests that the detected spectral cutoffs are due to gamma opacity to an electron-positron pair creation. In the same spirit as Yassine et al. (2017) and the theoretical framework developed by Hascoet et al. (2012), we estimated the minimum variability timescale of the observed high-energy emission. 
Coupling the minimum variability timescale with the detected cutoffs, we determined the speed of the jet and localized the region in which all of the high-energy emission was produced. ### Estimate of the Variability Timescale In order to estimate the minimum variability timescale, we considered the fast rise exponential decay (FRED) function (Norris et al., 2005; Yassine et al., 2017), and we modified it to simultaneously fit the two main LLE observed peaks. This modified FRED function, which we call FRED2P, reads \[I(t)=\begin{cases}\mathrm{B}_{x},&\text{if }t\leqslant t_{\rm start,x}\\ \mathrm{A}_{x}\times\exp\left\{-\frac{1}{\tau_{2,x}}\left[\frac{(t_{\rm peak,x}-t_{\rm start,x})^{2}}{t-t_{\rm start,x}}+(t-t_{\rm start,x})\right]\right\}+\mathrm{B}_{x},&\text{if }t_{\rm start,x}<t\leqslant t_{\rm start,y}\\ \mathrm{A}_{y}\times\exp\left\{-\frac{1}{\tau_{2,y}}\left[\frac{(t_{\rm peak,y}-t_{\rm start,y})^{2}}{t-t_{\rm start,y}}+(t-t_{\rm start,y})\right]\right\}+\mathrm{B}_{y},&\text{if }t>t_{\rm start,y}\end{cases}, \tag{3}\] with \[\mathrm{B}_{y}=I(t_{\mathrm{start,y}}). \tag{4}\] Figure 4: Left: light curve showing the two LLE peaks with the best-fit FRED2P function superimposed. Right: \(\Gamma_{\gamma\gamma}\) and \(\Gamma_{\rm Tr}\) as a function of the ratio of the radii at which the high- and low-energy emissions were produced in time bins B and C. The labels \(x\) and \(y\) refer to the first and second LLE peak, respectively. FRED2P is parameterized on each peak as the normalization A, offset B, start time of the pulse \(t_{\mathrm{start}}\), peak time of the pulse \(t_{\mathrm{peak}}\), and decay index \(\tau_{2}\), which characterizes the decrease of the pulse. The left panel of Figure 4 shows the two LLE peaks superimposed on the best-fit FRED2P function. The minimum variability timescale of each pulse is estimated as the half-width at half-maximum and reads \[t_{\mathrm{var}}=\frac{\tau_{2}}{2}\times\sqrt{\left[\log(2)+2\frac{t_{\mathrm{peak}}-t_{\mathrm{start}}}{\tau_{2}}\right]^{2}-4\left(\frac{t_{\mathrm{peak}}-t_{\mathrm{start}}}{\tau_{2}}\right)^{2}}. \tag{5}\] The minimum variability timescale of the first peak is \(t_{\mathrm{var,x}}=0.88\pm 0.13\) s, and \(t_{\mathrm{var,y}}=2.1\pm 0.4\) s for the second peak. ### Bulk Lorentz Factor and Localization of the Prompt Emission Region The bulk Lorentz factor \(\Gamma_{\mathrm{bulk}}\) is obtained as in Yassine et al. (2017), assuming that the observed spectral cutoff is due to opacity to gamma-gamma annihilation in the GRB jet and that the prompt emission is produced near or above the photosphere at a radius \(R_{\mathrm{LE}}\) for the low-energy (MeV) emission and \(R_{\mathrm{HE}}\) for the high-energy (tens of MeV) emission. This opacity model was proposed by Hascoet et al. (2012) and applied by Yassine et al. (2017) to determine \(\Gamma_{\mathrm{bulk}}\) and the emission radii of GRB 090926A. The radius at which the low-energy emission is produced is obtained from the estimated variability as \[R_{\mathrm{LE}}=2c\Gamma^{2}\frac{t_{\mathrm{var}}}{1+z}.
\tag{6}\] The \(\Gamma_{\mathrm{bulk}}\) of the jet is estimated directly as \[\Gamma_{\gamma\gamma} =\frac{K\Phi(s)}{\left[\frac{1}{2}(1+\frac{R_{\mathrm{HE}}}{R_{ \mathrm{LE}}})\left(\frac{R_{\mathrm{HE}}}{R_{\mathrm{HE}}}\right)\right]^{1/ 2}}(1+z)^{-(1+s)/(1-s)}\] \[\times\left\{\sigma_{\mathrm{T}}\left[\frac{D_{L}(z)}{ct_{\mathrm{ var}}}\right]^{2}E_{\mathrm{s}}F(E_{\mathrm{s}})\right\}^{1/2(1-s)}\] \[\times\left[\frac{E_{\mathrm{s}}E_{\mathrm{cut}}}{(m_{e}c^{2})^{ 2}}\right]^{(s+1)/2(s-1)}, \tag{7}\] where \(t_{\mathrm{var}}\) is the estimated variability timescale in the considered time interval, \(E_{\mathrm{cut}}\) is the energy of the detected cutoff, \(E_{\mathrm{s}}\) is the typical energy of the photons interacting with those at the cutoff energy, \(s\) is the photon index of the seed spectrum close to \(E_{\mathrm{s}}\), and \(F(E_{\mathrm{s}})\) is the photon fluence at \(E_{\mathrm{s}}\) integrated over \(t_{\mathrm{var}}\). The values employed to compute \(\Gamma_{\gamma\gamma}\) are reported in Table 11. In the error propagation, we also considered the systematic uncertainties of \(E_{\mathrm{cut}}\) reported at the end of Section 3.2, and we added them in quadrature to the statistical uncertainties. The photospheric radius \(R_{\mathrm{ph}}\) at which the jet becomes transparent to Thomson scattering and the minimal bulk Lorentz factor \(\Gamma_{\mathrm{Tr}}\) defining this transparency condition are computed as in Yassine et al. (2017), \[R_{\mathrm{ph}}\simeq\frac{\sigma_{\mathrm{T}}\hat{E}}{8\pi c^{3}m_{p}\Gamma^ {3}}, \tag{8}\] where \(\sigma_{\mathrm{T}}=6.65\times 10^{-29}\) m\({}^{2}\) is the Thomson cross section, \(\hat{E}\) is the total power injected in the flow, \(m_{p}=1.67\times 10^{-27}\) kg is the proton mass, and \(\hat{\Gamma}=\frac{1+\varepsilon}{2}\Gamma_{\gamma\gamma}\) is the average Lorentz factor in the flow, where \(\kappa\) is the ratio between the highest and lowest values of \(\Gamma_{\mathrm{bulk}}\). The transparency condition \(R_{\mathrm{LE}}\geq\)\(R_{\mathrm{ph}}\) translates to \[\Gamma_{\gamma\gamma}>\Gamma_{\mathrm{Tr}}\simeq\left[\frac{\sigma_{\mathrm{T} }\hat{E}}{8\pi c^{4}m_{p}t_{\mathrm{var}}}\right]^{1/5}. \tag{9}\] The values of the mentioned quantities are reported in Table 12. It is worth noting that the photospheric radii are of the order of \(10^{14}\) cm, well above the typical range of \(10^{10}\)-\(10^{11}\) cm. In fact, the high values for the luminosity and the moderate values of \(\Gamma_{\mathrm{bulk}}\) presented in Table 11 induce large photospheric radii, as shown by Equation (8). The right panel of Figure 4 shows the value of \(\Gamma_{\gamma\gamma}\) and \(\Gamma_{\mathrm{Tr}}\) as a function of the radii at which the high- and low-energy emissions were produced. The contours of \(\Gamma_{\gamma\gamma}\) have been computed including the systematic errors estimated at the end of Section 3.2. We note that when the high- and low-energy emissions are cospatial, \(\Gamma_{\gamma\gamma}\) and its contour are comparable to or greater than \(\Gamma_{\mathrm{Tr}}\) in time bins B and C. The transparency condition is thus fulfilled. 
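To make the chain of estimates in Equations (6), (8), and (9) easier to follow, the short sketch below evaluates the low-energy emission radius, the photospheric radius, and the transparency Lorentz factor for a given variability timescale. It only illustrates the formulas quoted above; the numerical inputs (redshift, injected power, and the use of \(\Gamma_{\gamma\gamma}\) in place of the mean Lorentz factor) are placeholders rather than the fitted values of GRB 220101A.

```python
import numpy as np

# Physical constants in SI units
SIGMA_T = 6.65e-29   # Thomson cross section [m^2]
M_P = 1.67e-27       # proton mass [kg]
C = 2.998e8          # speed of light [m/s]

def r_le(gamma, t_var, z):
    """Low-energy emission radius, Eq. (6): R_LE = 2 c Gamma^2 t_var / (1 + z)."""
    return 2.0 * C * gamma**2 * t_var / (1.0 + z)

def r_ph(e_dot, gamma_mean):
    """Photospheric radius, Eq. (8): R_ph ~ sigma_T Edot / (8 pi c^3 m_p Gamma^3)."""
    return SIGMA_T * e_dot / (8.0 * np.pi * C**3 * M_P * gamma_mean**3)

def gamma_tr(e_dot, t_var):
    """Minimal Lorentz factor for transparency, Eq. (9)."""
    return (SIGMA_T * e_dot / (8.0 * np.pi * C**4 * M_P * t_var)) ** 0.2

# Placeholder inputs (illustrative only, not the measured values of this burst)
t_var, z = 0.9, 4.6          # variability timescale [s], redshift
e_dot = 1.0e47               # total power injected in the flow [W]
gamma_gg = 110.0             # bulk Lorentz factor from the gamma-gamma opacity argument

print(f"R_LE     = {r_le(gamma_gg, t_var, z):.2e} m")
print(f"R_ph     = {r_ph(e_dot, gamma_gg):.2e} m")   # Gamma_gg used as a stand-in for the mean Lorentz factor
print(f"Gamma_Tr = {gamma_tr(e_dot, t_var):.0f}")
print("transparent (Gamma_gg > Gamma_Tr):", gamma_gg > gamma_tr(e_dot, t_var))
```

With placeholder numbers of this order, the sketch returns emission and photospheric radii of a few \(10^{14}\) cm and \(\Gamma_{\rm Tr}\) close to \(\Gamma_{\gamma\gamma}\), which is the qualitative behavior summarized in Tables 11 and 12.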
We conclude that the bulk Lorentz factor of the jet in the prompt phase of GRB 220101A is \(\Gamma_{\mathrm{bulk}}\sim 110\) and that all of the high-energy emission took place near \begin{table} \begin{tabular}{l c c c} \hline \hline & B: \(T_{0}+\) [95, & C: \(T_{0}+\) [100, & B + C: \(T_{0}+\) [95, \\ Time Bin & 100] s & 107] s & 107] s \\ \hline \(t_{\mathrm{var}}\) [s] & \(0.88\pm 0.13\) & \(2.1\pm 0.4\) & \(1.5\pm 0.5\) \\ \(s\) & \(-1.92\pm 0.10\) & \(-2.11\pm 0.06\) & \(-2.03\pm 0.05\) \\ \(\dot{\Phi}\) (s) & \(0.48\pm 0.01\) & \(0.47\pm 0.01\) & \(0.47\pm 0.01\) \\ \(E_{\mathrm{cut}}\) [MeV] & \(23\pm 8\) & \(69\pm 20\) & \(42\pm 11\) \\ \(E_{\mathrm{s}}\) [MeV] & \(1\) & \(1\) & \(1\) \\ \(F(E_{\mathrm{s}})\) [\(\Gamma_{\mathrm{max}}\)] & \(0.34\pm 0.03\) & \(0.81\pm 0.04\) & \(0.57\pm 0.02\) \\ \(L\) [\(10^{5}\) erg s\({}^{-1}\)] & \(7.6\pm 0.6\) & \(7.5\pm 0.3\) & \(7.6\pm 0.3\) \\ \hline \(R_{\mathrm{LE}}\) [\(10^{14}\) cm] & \(1.2\pm 0.3\) & \(2.4\pm 0.6\) & \(1.8\pm 0.7\) \\ \(\Gamma_{\gamma\gamma}\) (\(R_{\mathrm{LE}}=\)\(R_{\mathrm{HE}}\)) & \(115\pm 10\) & \(103\pm 8\) & \(105\pm 13\) \\ \hline \end{tabular} Note. The luminosity is computed in the observer frame energy range 10 keV–1 GeV. \end{table} Table 11: Summary of the Parameters Employed in the Computation of \(\Gamma_{\mathrm{bulk}}\) and the Observed Energy Emission Radius in Time bins B, C, and B + C \begin{table} \begin{tabular}{l c c c} \hline \hline & B: \(T_{0}+\) [95, & C: \(T_{0}+\) [100, & B + C: \(T_{0}+\) [95, \\ Time Bin & 100] s & 107] s & 107] s \\ \hline \(R_{\mathrm{LE}}\) [\(10^{14}\) cm] & \(1.2\pm 0.3\) & \(2.4\pm 0.6\) & \(1.8\pm 0.7\) \\ \(R_{\mathrm{ph}}\) [\(10^{14}\) cm] & \(1.9\pm 0.5\) & \(2.6\pm 0.6\) & \(2.4\pm 0.9\) \\ \(\Gamma_{\gamma\gamma}\) & \(115\pm 10\) & \(103\pm 8\) & \(105\pm 13\) \\ \(\Gamma_{\mathrm{Tr}}\) & \(125\pm 4\) & \(105\pm 4\) & \(112\pm 8\) \\ \hline \end{tabular} \end{table} Table 12: Summary of the Radius at Which the Low-Energy Emission Took Place \(R_{\mathrm{LE}}\), the Photospheric Radius \(R_{\mathrm{ph}}\), \(\Gamma_{\gamma\gamma}\), and \(\Gamma_{\mathrm{Tr}}\) in Time bins B, C, and B + C or above the photosphere at a radius of a few \(10^{14}\) cm, typical of internal shocks. ## 5 Discussion and Conclusions In this work, we assume that the observed variable emission is prompt emission. We note that Bianco et al. (2023) interpreted such early variable emission as afterglow and assumed that it is synchrotron radiation produced by a fast-spinning newborn neutron star that injects energy into the expanding supernova ejecta (Rueda et al., 2022). The authors considered the rest-frame temporal delay of the observed radiation and characterized the transition in the structure of the central neutron star in its first instants. Their analysis relies on the temporal delay of the radiation emitted, and it is an alternative to the interpretation we present in this paper. The work of Moradi et al. (2021) on GRB 190114C pointed to a precise quantum electrodynamics model to explain the ultrarelativistic prompt emission of such a bright burst. The high energy budget of GRB 220101A makes it a companion to GRB 190114C, and a similar analysis would also be interesting in this case. However, the required detailed time-resolved analysis is beyond the scope of this work. Table 13 lists the LAT-detected bursts that we found in the literature and presents a spectral cutoff at high energies. 
We stress that we did not consider the totality of the LAT-detected bursts, and we did not search systematically for the presence of an exponential spectral high-energy cutoff. This analysis shall be done in the future, and it is beyond the scope of this paper. For each of the mentioned bursts, we report the estimated values of \(\Gamma_{\rm bulk}\), the spectral cutoff in the observer frame, and the spectral cutoff in the reference frame for the bursts with a redshift measurement. The value of \(\Gamma_{\rm bulk}\) is 100-400. In the cases of GRB 090926A and GRB 220101A, \(\Gamma_{\rm bulk}\) was determined following the procedure presented in the previous section. This estimation is based on the work of Hascoet et al. (2012), who accounted for the geometry of the GRB jet, and thus provides a realistic description of the jet dynamics. For GRB 100724B and GRB 160509A, Vianello et al. (2018) adopted two physical models: the semiphenomenological internal shock model developed by Granot et al. (2008), which provides a temporal, spatial, and directional dependence of the pair-production interaction and a conservative lower limit of \(\Gamma_{\rm bulk}\), and the photospheric model by Gill & Thompson (2014). Vianello et al. (2018) estimated \(\Gamma_{\rm bulk}\) in the interval 100-400 for these two bursts. In the case of GRB 170405A, Arimoto et al. (2020) estimated a lower limit of \(\Gamma_{\rm bulk}=170\) by applying the mentioned method of Granot et al. (2008) and provided an upper limit of \(\Gamma_{\rm bulk}=420\), which required the cutoff energy in the comoving frame to be \(m_{e}c^{2}\): \(\Gamma_{\rm bulk,max}=(1+z)\frac{E_{\rm bulk}}{m_{e}c^{2}}\)(Gill & Granot, 2018). In this work, we adopted the approach developed by Hascoet et al. (2012) and previously applied by Yassine et al. (2017) on GRB 090926A to directly estimate the bulk Lorentz factor and localize the region at which all of the high-energy emission of GRB 220101A took place. We stress that this approach does not rely on the specific emission process responsible for the detected emission and that the estimated \(\Gamma_{\rm bulk}\) is comparable with the corresponding value of four other LAT-detected bursts that are well known for presenting a spectral cutoff at high energies. These bursts represent a precious set in which a direct estimation of \(\Gamma_{\rm bulk}\) can be performed. ## Acknowledgments The Fermi LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT, as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States; the Commissariat a l'Energie Atomique and the Centre National de la Recherche Scientifique/Institut National de Physique Nucleaire et de Physique des Particules in France; the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy; the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK), and Japan Aerospace Exploration Agency (JAXA) in Japan; and the K. A. Wallenberg Foundation, the Swedish Research Council, and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'Etudes Spatiales in France. 
This work was performed in part under DOE contract DE-AC02-76SF00515. ## ORCID iDs Lorenzo Scotton [https://orcid.org/0000-0002-0602-0235](https://orcid.org/0000-0002-0602-0235) Frederic Piron [https://orcid.org/0000-0001-6885-7156](https://orcid.org/0000-0001-6885-7156) Nicola Omodei [https://orcid.org/0000-0002-5448-7577](https://orcid.org/0000-0002-5448-7577) Niccolo Di Lalla [https://orcid.org/0000-0002-7574-1298](https://orcid.org/0000-0002-7574-1298) Elisabetta Bissaldi [https://orcid.org/0000-0001-9935-8106](https://orcid.org/0000-0001-9935-8106)
2304.09970
Learning policies for resource allocation in business processes
Efficient allocation of resources to activities is pivotal in executing business processes but remains challenging. While resource allocation methodologies are well-established in domains like manufacturing, their application within business process management remains limited. Existing methods often do not scale well to large processes with numerous activities or optimize across multiple cases. This paper aims to address this gap by proposing two learning-based methods for resource allocation in business processes. The first method leverages Deep Reinforcement Learning (DRL) to learn near-optimal policies by taking action in the business process. The second method is a score-based value function approximation approach, which learns the weights of a set of curated features to prioritize resource assignments. To evaluate the proposed approaches, we first designed six distinct business processes with archetypal process flows and characteristics. These business processes were then connected to form three realistically sized business processes. We benchmarked our methods against traditional heuristics and existing resource allocation methods. The results show that our methods learn adaptive resource allocation policies that outperform or are competitive with the benchmarks in five out of six individual business processes. The DRL approach outperforms all benchmarks in all three composite business processes and finds a policy that is, on average, 13.1% better than the best-performing benchmark.
J. Middelhuis, R. Lo Bianco, E. Scherzer, Z. A. Bukhsh, I. J. B. F. Adan, R. M. Dijkman
2023-04-19T21:05:38Z
http://arxiv.org/abs/2304.09970v2
# Learning policies for resource allocation in business processes ###### Abstract Resource allocation is the assignment of resources to activities that must be executed in a business process at a particular moment at run-time. While resource allocation is well-studied in other fields, such as manufacturing, there exist only a few methods in business process management. Existing methods are not suited for application in large business processes or focus on optimizing resource allocation for a single case rather than for all cases combined. To fill this gap, this paper proposes two learning-based methods for resource allocation in business processes: a deep reinforcement learning-based approach and a score-based value function approximation approach. The two methods are compared against existing heuristics in a set of scenarios that represent typical business process structures and on a complete network that represents a realistic business process. The results show that our learning-based methods outperform or are competitive with common heuristics in most scenarios and outperform heuristics in the complete network. Keywords: Resource allocation, business process optimization, deep reinforcement learning, Bayesian optimization ## 1 Introduction Efficiently allocating resources to activities that must be executed in a business process at a particular moment at run-time is crucial for organizations to achieve operational outcomes effectively. Optimally utilizing available resources helps an organization operate at full capacity, which maximizes its output and can improve its profitability and competitiveness. Prescriptive process monitoring (PPrM) techniques [8] have recently seen a strong increase in attention and can be used to recommend the resource that is best allocated to an activity. PPrM techniques base these recommendations on the sequence of events and the event attributes of the running case. For example, [14, 19] recommend the best resource to perform a specific activity for running cases to minimize the case's completion time. [18] propose a method that recommends not only the best resource but also the best next activity to complete to minimize the cycle time in an environmental permit application process. While PPrM methods can achieve their respective goals for individual cases, they do not consider the effect of an intervention on other cases in the process [8]. For example, PPrM techniques favor cost-efficient resources but insufficiently consider the resources' capacity constraints. Business process optimization (BPO) aims to solve this problem by optimizing the performance of the process as a whole and not just individual cases. The resource allocation problem is a dynamic task assignment problem [16], where new cases continuously enter the business process and must be assigned in an online manner. We distinguish between methods that use static and dynamic algorithms. Several rule-based algorithms have been proposed that create a static ranking of assignments based on process-mined dimensions [20, 22, 2]. However, these methods lack the ability to adapt to the current state of the process. One dynamic method is proposed by [11], who first predicts the next activity and activity time of all possible assignments and then schedules resources based on this prediction. However, this approach is very reliant on the performance of the prediction model.
Reinforcement learning (RL) has been used to directly learn the goodness of assignments for resource allocation in business processes. [7, 5] propose a tabular RL approach, Q-learning, to minimize the resource allocation cost. The main limitation of Q-learning is that it requires that each state in the process model has been observed, which becomes infeasible with large state spaces and makes it hard to incorporate continuous case attributes, or case attributes in general, as part of the decision criteria. To deal with these problems, a double DQN [23] is proposed that learns close-to-optimal decisions based on observed states and can also predict the best decision for states it has not observed. One limitation is that the method has to learn which actions are feasible, which quickly becomes intractable for problems with large state and action spaces. Furthermore, the method optimizes the number of completed cases but is not suitable for optimizing more complex KPIs, such as minimizing the waiting time or cycle time of cases, which leaves room for improvement in the method. For these reasons, these methods cannot be expected to perform well on realistic-sized business processes and, indeed, are tested only on simple models. To lift BPO approaches to the level at which they can be applied to realistic-sized business processes, we present two learning methods for resource allocation that can handle complex business process state spaces, including continuous case attributes and time, and that can also optimize for time-related KPIs such as the cycle time. The first method is a DRL-based method, and the second is a score-based value function approximation method. We evaluate the two methods in two ways. First, we apply the methods to five small business process models with common process structures, such as different utilization rates, process times, and competing resources. We show that our methods outperform, or are competitive with, the benchmark heuristics in all scenarios. Second, we connect the scenarios to create a large business process model and demonstrate that our methods outperform the heuristics for this problem. Existing learning methods [7, 5, 23] are unsuitable for large process models due to their limitations in dealing with continuous variables and in optimizing for time. The outline of this paper is as follows. Section 2 introduces concepts related to BPO. Section 3 models the resource allocation problem as an optimization problem. Section 4 presents the two learning methods to solve the resource allocation problem. Section 5 evaluates the proposed methods in different scenarios and benchmarks them with heuristics. Section 6 gives an overview of the related work, and Section 7 concludes our work. ## 2 Preliminaries This paper considers business process models describing the activities that can be performed in an organization and the resources that can perform them, including the arrival process of new cases and the completion rate of activities. For example, Fig. 1 shows a business process in which loan applications arrive at a rate of 2 per hour. There are two resources (bank employees, in this case) who can execute both activities in the process. First, applications are validated, which takes a certain time according to some distribution with a mean of \(\mathbb{E}(X_{1})=20\) minutes for resource 1 and a mean of \(\mathbb{E}(X_{2})=30\) minutes for resource 2. 
After the validation, an application is completed, which takes on average 30 or 20 minutes for resources 1 and 2, respectively. This paper does not commit to any particular business process modeling notation. We assume that the business process is there and can be simulated or executed according to some execution semantics, which makes the business process observable. Specifically, at any point in time, we can observe the execution state of the business process as well as events that change the state of the process, such as the completion of activities or the arrival of new cases. This requires the following definitions. Definition 1 (Event, Trace): Let \(\mathcal{A}\) be the set of activities, \(\mathcal{C}\) the set of cases, \(\mathcal{R}\) the set of resources, and \(\mathcal{T}\) the time domain. An event \(e=(a,c,t,r,l)\) is mapped to an activity \(a\in\mathcal{A}\), has a unique case identifier \(c\in\mathcal{C}\), is executed at timestamp \(t\in\mathcal{T}\) by resource \(r\in\mathcal{R}\), and has a life cycle stage \(l\in\{start,complete\}\) to indicate if the event represents the start or completion of executing an activity. Figure 1: An example of a business process model. In line with [1], we refer to the different elements \(a\), \(c\), \(t\), \(r\) and \(l\) of an event \(e\) as \(\#_{a}(e)\), \(\#_{c}(e)\), \(\#_{t}(e)\), \(\#_{r}(e)\) and \(\#_{l}(e)\). A trace is a finite non-empty sequence of events that describes everything that happened in a business case. We define it as \(\sigma=\langle e_{1},e_{2},\ldots,e_{n}\rangle\), where \(n\) is the length of the trace \(|\sigma|\). Events within a trace are ordered and non-decreasing in time, i.e., \(\#_{t}(e_{i})\leq\#_{t}(e_{j})\) for all \(1\leq i\leq j\leq n\). The process model of Fig. 1 consists of two activities, which can be executed once for each case in the process. We distinguish executions of the same activity, both within a case and across cases, using the term activity instance. Definition 2 (Activity instance): An activity instance is an occurrence of an activity. An activity instance belongs to a specific case and one case may have multiple occurrences of the same activity. We define the set of activity instances as \(\mathcal{K}\), and each activity instance \(k\in\mathcal{K}\) belongs to a case \(c\). We define \(\mathcal{K}_{c}\) as the activity instances belonging to case \(c\in\mathcal{C}\) and \(\mathcal{K}_{a}\) as the activity instances belonging to activity \(a\in\mathcal{A}\). To model the state of a business process for the purpose of learning what the best resource allocation is in a particular state, we must be able to observe unassigned activity instances and available resources that can be assigned to them. Definition 3 (Unassigned Activity Instance): An unassigned activity instance is an activity instance waiting to be assigned. We define the set of unassigned activity instances as \(K\subseteq\mathcal{K}\). We use subscripts to indicate to what case \(K_{c}\) and activity \(K_{a}\) an unassigned activity instance belongs. Definition 4 (Available Resource, Unavailable Resource): An available resource is currently not busy executing an activity instance and can be assigned. We define the set of available resources as \(R^{+}\subseteq\mathcal{R}\) and the set of unavailable resources as \(R^{-}\subseteq\mathcal{R}\) such that \(R^{+}\cup R^{-}=\mathcal{R}\) and \(R^{+}\cap R^{-}=\emptyset\).
A resource is removed from the available resource set \(R^{+}\) and added to the unavailable resource set \(R^{-}\) when working on an activity instance. The resource is added to the set of available resources \(R^{+}\) when it finishes the activity instance. The business process execution state contains all information about the condition of the process at a given point in time, such as what resources and activity instances are available, as well as what is currently being processed. From a business process execution state, the process may transition into another state after some time \(t\) has elapsed as a consequence of an event happening. These events can be the arrival of a new case, the activation of an activity instance, the completion of an activity instance, or the completion of a case. Events happen according to the process model. An activity can only be performed and completed if assigned to a resource. Based on the business process execution state, a decision-maker can make an informed decision. Definition 5 (Business Process Execution State): A business process execution state is the set of active cases \(C\subseteq\mathcal{C}\), unassigned activity instances \(K\subseteq\mathcal{K}\), available and unavailable resources \(R^{+},R^{-}\subseteq\mathcal{R}\), and current assignments \(B\subseteq K\times R^{-}\) that exist at a particular moment in time \(t\). ## 3 Problem Description The problem we aim to solve in this paper is a dynamic task assignment problem [16]. In a given state, we assign a resource to an unassigned activity instance with the objective of optimizing some KPI under specific constraints. We define the specific parts of the optimization problem, which are the decision variables, objective, and constraints, in the context of the dynamic task assignment below. One explicit constraint in a business process is resource eligibility. Not every resource can perform all activities due to, for example, a lack of a specific skill set or level of authorization. The process model in Fig. 1 defines that both resources can perform both activities. Therefore, the eligible resource set for the activity "Validate application" (VA) is then \(R_{VA}=\{1,2\}\). Definition 6 (Resource Eligibility): The process model defines which describes what resources are eligible to perform what activity. We refer to the set of resources that can execute activity \(a\in\mathcal{A}\) as \(\mathcal{R}_{a}\subseteq\mathcal{R}\), which implies that \(r\in\mathcal{R}_{a}\) can also execute each unassigned activity instance \(k\in K_{a}\). The decision variables are assignments, which are the allocation of a resource to an unassigned activity instance. The set of assignments only includes assignments that satisfy the resource eligibility constraint (Def. 6). However, not all assignments are possible in any given business process execution state (Def 5) because there must be an unassigned activity instance (Def. 3), and an available resource (Def. 4). Under these constraints, we define the set of possible assignments as a subset of the set of assignments. Definition 7 (Assignment, Possible Assignment): An assignment is a tuple \((r,k)\) of a resource \(r\in\mathcal{R}\) and an activity instance \(k\in\mathcal{K}\). Considering resource eligibility, we define the set of assignments as \(\mathcal{D}=\{(r,k)|r\in\mathcal{R}_{a},k\in K_{a}\}\). 
However, not every assignment is possible in all business process execution states, as there must be unassigned activity instances, and resources must be available and eligible to perform them. We define the set of possible assignments in a specific business process execution state as \(D=\{r,k)|(r,k)\in\mathcal{D},r\in R^{+},k\in K\}\). After making an assignment, completing an unassigned activity instance, or upon the arrival of a new case, the business process transitions to a new business process execution state. Definition 8 (State Transition): A state transition happens each time an action is taken to assign resource \(r\) to unassigned activity instance \(k\). First, the business process execution state (Def. 5) is adapted by changing the set of unassigned activity instances to \(K-\{k\}\), the set of available resources to \(R^{+}-\{r\}\), the set of unavailable resources to \(R^{-}\cup\{r\}\), and the set of current assignments to \(B\cup\{(r,k)\}\). This also changes the set of possible assignments \(D\), depending on \(R^{+}\) and \(K\). The state also changes when an activity instance completes, which releases the resource of assignment \((r,k)\) such that \(R^{+}\cup\{r\}\), \(R^{-}-\{r\}\) and \(B-\{(r,k)\}\). Furthermore, if the case related to the completed activity instance is not completed, a new unassigned activity instance \(k\) is generated, i.e., \(K\cup\{k\}\). Upon a new case arrival, an unassigned activity instance is also generated._ The last part of the optimization problem is the objective. In this paper, we minimize the average cycle time of completed cases, which we define below. Definition 9 (Completed Case, Cycle Time): For a sequence of events \(\sigma=\langle e_{1},e_{2},\ldots,e_{n}\rangle\) of a case, the cycle time \(c_{CT}\) is the difference between the time of occurrence of the first and the last event, i.e. \(c_{CT}=\#_{t}(e_{n})-\#_{t}(e_{1})\). We can determine the cycle time for an ongoing case \(\#_{c}(e_{i})\in C\) or a completed case \(\#_{c}(e_{i})\in\mathcal{C}-C\). Thus, the objective function can be written as: \[\text{minimize}\,\frac{1}{|\mathcal{C}|}\sum_{c\in\mathcal{C}-C}c_{CT} \tag{1}\] Fig. 2 shows an overview of the interaction between the environment and the agent, which is the decision maker. In any given state (Def. 5), we check if there are any possible assignments (Def. 7). If there are, the agent makes a single assignment, and the environment evolves to the next state by removing the respective available resource (Def. 4) and unassigned activity instance (Def. 3). This process continues until there are no more possible assignments (i.e., \(D=\emptyset\)). When there are no more possible assignments, we simulate the business process such that new events happen, which modify the state according to Def. 8. After the state changes, we check again if there are possible assignments. Figure 2: Environment’s perspective of the agent-environment interaction. ## 4 Method In this section, we present two learning-based methods for resource allocation in business processes to minimize the average cycle time of cases. Section 4.1 presents a Deep Reinforcement Learning (DRL) approach, and Section 4.2 presents a Score-based Value Function Approximation (SVFA) approach. The main difference between the two methods is that the DRL approach learns through directly interacting with the environment, while the SVFA approach only learns after observing the outcome of a complete simulation run. 
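Before detailing the two methods, the interaction pattern of Fig. 2 can be summarized in a few lines of Python. This is only a sketch of the control flow; the environment and policy interfaces (possible_assignments, apply, advance, choose, completed_cases) are hypothetical names, not the implementation used in the experiments.

```python
# Minimal sketch of the agent-environment loop of Fig. 2 (hypothetical interfaces).

POSTPONE = "postpone"

def run_episode(env, policy, horizon=5000):
    """Run one simulation episode and return the average cycle time of completed cases."""
    state = env.reset()
    while env.now < horizon:
        assignments = env.possible_assignments(state)     # the set D of Def. 7
        if assignments:
            action = policy.choose(state, assignments + [POSTPONE])
            if action == POSTPONE:
                state = env.advance(state)                 # evolve until D changes
            else:
                state = env.apply(state, action)           # assign one (resource, instance) pair
        else:
            state = env.advance(state)                     # next arrival or activity completion
    completed = env.completed_cases()
    return sum(c.cycle_time for c in completed) / len(completed)
```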
### Deep Reinforcement Learning-based method The reinforcement learning cycle [17] depicts the interaction between an agent and the environment. Fig. 2 shows the environment, which is a discrete-event simulation in our problem. A discrete decision step is one iteration of the loop, where an action is taken, the environment transitions to the next state, and a reward is returned to the agent. One run of the simulation is called an episode that may contain a varying number of decision steps, depending on what happens in the simulation. The agent follows a policy \(\pi\), which is a function that maps the state space to the action space and defines which action should be taken in a given state. In reinforcement learning, the agent learns the policy function that maximizes the cumulative reward over an episode. The term Deep Reinforcement Learning (DRL) is used to identify a subset of RL algorithms that employ neural networks as approximations. This paper uses the DRL algorithm Proximal Policy Optimization (PPO) [12]. PPO clips updates to the policy function, approximated using a neural network, to ensure that the new policy is not too different from the old one, which helps stabilize the Figure 3: Agent’s perspective of the agent-environment interaction, adapted from [17]. optimization process. To speed up the learning process, we couple PPO with a masking function that, given a state, excludes infeasible actions. The remainder of this section defines the elements of the RL cycle, as shown in Fig 3. #### 3.2.2 State. The business process execution state (Def. 5) contains all information related to the current condition of the process. The observation is a vector of the relevant information the agent needs to take the best action (i.e., assignment). We have experimented with several representations of the observation and found the following leads to the best results: * A mapping \(\mathcal{R}\to\{0,1\}\), indicating for each resource whether it is assigned or not as defined in Def. 4. * A mapping \(\mathcal{R}\to T\), indicating the time a resource has been busy or \(\bot\) if the resource is not assigned. The time a resource \(r\) has been busy is calculated as \(t_{last}-t\), where \(t\) is the current time in the business process execution state, and \(t_{last}\) is the time \(\#_{t}(e)\) of the last event \(e\) for this resource \(r\), i.e., \(\#_{r}(e)=r\). * A mapping \(\mathcal{R}\to\mathcal{A}\), indicating the activity a resource is assigned to as defined in Def. 7, i.e., for resource \(r\): \(k\), if \((r,k)\in B\), or \(\bot\) otherwise. * A mapping \(\mathcal{A}\to\mathbb{R}\), indicating the proportion of unassigned instances of each activity compared to the total, i.e., for activity \(a\): \(\frac{|K_{a}|}{|K|}\). * The total number of unassigned activity instances, i.e., \(|K|\). Given an observation, the agent knows which resources are available and the number of activity instances of each activity. Furthermore, the observation provides information about current assignments, such as what activity a resource is allocated to and for how long it has been working. Using this information, the agent learns when other resources are likely available again, and it can, for example, wait for a better assignment. The size of the observation vector is \(3*|\mathcal{R}|+|\mathcal{A}|+1\). #### 3.2.3 Action. The actions are all assignments (Def. 7) and a postpone action, which delays the current decision moment and evolves the environment to the next state. 
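To make the encoding above concrete, the following sketch assembles the observation vector of size \(3\cdot|\mathcal{R}|+|\mathcal{A}|+1\) and enumerates the action space of size \(|\mathcal{D}|+1\) (all eligible assignments plus a postpone action). The helper structures (the state dictionary, eligible_pairs) are hypothetical and only meant to mirror the mappings listed above; idle resources are encoded as zeros here, a simplification of the \(\bot\) entries in the text.

```python
import numpy as np

def build_observation(state, resources, activities):
    """Observation of Section 4.1: 3*|R| resource entries, |A| queue proportions, plus |K|."""
    assigned  = [1.0 if r in state["busy"] else 0.0 for r in resources]               # R -> {0, 1}
    busy_time = [state["now"] - state["busy"][r]["since"] if r in state["busy"] else 0.0
                 for r in resources]                                                   # R -> T (0 if idle)
    busy_act  = [float(activities.index(state["busy"][r]["activity"]) + 1) if r in state["busy"] else 0.0
                 for r in resources]                                                   # R -> A (0 if idle)
    n_unassigned = sum(state["queue"].values())
    proportions = [state["queue"].get(a, 0) / n_unassigned if n_unassigned else 0.0
                   for a in activities]                                                # A -> |K_a| / |K|
    return np.array(assigned + busy_time + busy_act + proportions + [float(n_unassigned)],
                    dtype=np.float32)

def action_space(eligible_pairs):
    """All eligible (resource, activity) assignments plus one postpone action."""
    return list(eligible_pairs) + ["postpone"]

# Toy example with two resources and two activities
state = {"now": 10.0,
         "busy": {"r1": {"activity": "validate", "since": 7.5}},
         "queue": {"validate": 2, "complete": 1}}
obs = build_observation(state, resources=["r1", "r2"], activities=["validate", "complete"])
print(obs.shape)   # (3*2 + 2 + 1,) = (9,)
```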
To prevent the agent from taking infeasible actions, we use a masking function that sets the probability of choosing an impossible assignment (\(\mathcal{D}-D\) according to Def. 7) to zero. The masking function is a vector that maps \(\mathcal{D}\to\{0,1\}\), indicating if an assignment is possible or not, i.e., \(1\) if \((r,k)\in D\) and \(0\) otherwise. The size of the action vector is \(|\mathcal{D}|+1\). #### 3.2.4 State transition. Given a specific state and action, the business process transitions to a new state, according to Def. 8. Fig. 2 shows how the environment evolves to the next state based on the type of transition. If another assignment is possible, we take another action, which is at the same moment in (simulation) time. If there are no possible assignments, the environment evolves until an assignment is possible again. When the agent postpones, the environment is evolved until there is at least one different unassigned activity instance or available resource. **Reward.** The optimization objective is to minimize the average cycle time of cases (Def. 9), and we should reward the agent for actions that contribute to this objective. We give a reward of \(\frac{1}{1+c_{GT}}\) when a case \(c\) completes. We use an inverse function, such that a low cycle time results in a high reward. We add \(+1\) to avoid returning very high rewards. After experimenting with several reward functions, we found this function yields the best results. Additionally, the agent receives a small penalty for postponing to discourage the agent from postponing indefinitely. This penalty is dependent on the process and is defined in Section 5.2. Having defined these elements, we can train an agent that learns a policy function for resource allocation that minimizes the average cycle time of completed cases. The specifications of the algorithm are clarified in Section 5.2. ### Score-based value function approximation method The second method we propose is a score-based value function approximation (SVFA) approach that, based on a set of features, learns to make assignments that minimize the average cycle time. We consider all possible assignments \((r,k)\in D\) (Def. 7) in a given business process execution state and compute a score for each of them. The score is a function of a set of features related to assignment \((r,k)\) and learned weights. The features are: * \(MeanAssignment(r,k)\): the mean processing time of the assignment, estimated based on historical data. * \(VarAssignment(r,k)\): the variance of the processing time of the assignment, estimated based on historical data. * \(ActivityRank(r,k)\): we evaluate \(MeanAssignment(r,k^{\prime})\) of resource \(r\) for all other unassigned activity instances \(k^{\prime}\in K\) and rank them on the mean processing time, where the lowest mean processing time is ranked the highest. \(ActivityRank(r,k)\rightarrow\mathbb{N}\) is the rank of unassigned activity instance \(k\). * \(ResourceRank(r,k)\): we evaluate \(MeanAssignment(r^{\prime},k)\), for all other available resources \(r^{\prime}\in R^{+}\) and rank them on the mean processing time, where the lowest mean processing time is ranked the highest. \(ResourceRank(r,k\rightarrow\mathbb{N}\) is the rank of resource \(r\). * \(ProbFin(r,k)\): the probability that after an assignment \((r,k)\) case \(c\) is completed, where \(k\in K_{c}\). * \(QueueLength(k)\), the number of unassigned activity instances for a specific activity \(|K_{a}|\), given that \(k\in K_{a}\). 
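A compact sketch of how these features could be turned into a single priority value per possible assignment follows; the exact linear combination and the learned postpone threshold \(w_{7}\) are specified in Eq. (2) and the text below, and the feature values and dictionary keys used here are purely illustrative.

```python
def assignment_score(f, w):
    """Linear priority score of one possible assignment (lower is better); cf. Eq. (2) below."""
    return (w[0] * f["mean_assignment"] + w[1] * f["var_assignment"]
            + w[2] * f["activity_rank"] + w[3] * f["resource_rank"]
            - w[4] * f["prob_fin"] - w[5] * f["queue_length"])

def select_assignment(candidates, w, postpone_threshold):
    """Pick the lowest-scoring candidate, or postpone when even the best score is too high."""
    best = min(candidates, key=lambda c: assignment_score(c["features"], w))
    if assignment_score(best["features"], w) > postpone_threshold:
        return None                          # postpone (the threshold plays the role of w_7)
    return best["assignment"]

# Illustrative call with made-up feature values for two candidate assignments
candidates = [
    {"assignment": ("r1", "k3"), "features": {"mean_assignment": 20.0, "var_assignment": 25.0,
                                              "activity_rank": 1, "resource_rank": 2,
                                              "prob_fin": 0.0, "queue_length": 3}},
    {"assignment": ("r2", "k5"), "features": {"mean_assignment": 30.0, "var_assignment": 56.0,
                                              "activity_rank": 2, "resource_rank": 1,
                                              "prob_fin": 1.0, "queue_length": 1}},
]
weights = [0.5, 0.1, 1.0, 1.0, 5.0, 0.5]     # in the method, such weights are learned by Bayesian optimization
print(select_assignment(candidates, weights, postpone_threshold=50.0))
```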
The score function maps each state-action pair to a real value which represents the goodness of making a specific assignment in a specific state: \[Score(r,k)=w_{1}MeanAssignment(r,k)+w_{2}VarAssignment(r,k) \tag{2}\] \[+w_{3}ActivityRank(r,k)+w_{4}ResourceRank(r,k)-w_{5}ProbFin(r,k)\] \[-w_{6}QueueLength(k),\text{where }w_{i}\geq 0,\text{for }i\leq 6.\] We add the first four features in Equation 2 because a lower value is better for them. For the other two features, a high value is desirable, and thus we subtract them when computing the score. \(ProbFin(r,k)\) represents the probability of completing a case and ending the cycle time. The feature \(QueueLength(k)\) represents the number of unassigned activity instances of the same activity as \(k\in K_{a}\). A high value prioritizes the agent to allocate resources to this activity. At each decision moment, the score for all possible assignments is calculated. The agent considers the assignment with the lowest score the best and will make it if its score does not exceed a learned threshold \(w_{7}\). This seventh weight allows the agent to postpone. Similarly to the DRL approach, the agent makes assignments one by one. The agent then reevaluates all scores based on the new state and makes the next assignment. We use Bayesian optimization to find weights that minimize the average cycle time. Bayesian optimization is a global optimization technique that uses a probabilistic model to predict the relationship between the input parameters and the output of a function. The algorithm learns as new data is obtained and focuses on selecting parameters in promising areas of the search space. For more information on the Bayesian optimization algorithm, we refer to [6]. ## 5 Evaluation Section 5.1 introduces five scenarios and a complete network which is the combination of these scenarios. We use these business processes to train and evaluate our methods. Section 5.2 details the training setup. Finally, Section 5.3 shows the results of our methods benchmarked with three heuristics. ### Experimental setup In this paper, we study five small business processes, which we refer to as scenarios. We design these scenarios to highlight the strengths and weaknesses of three heuristics: Shortest Processing Time (SPT), which will always assign an activity to the resource with the shortest processing time for that activity; First-In-First-Out (FIFO), which will assign activities on a FIFO basis; and random, which assigns activities to resources randomly and therefore can be considered a good worst-case benchmark. These heuristics will work well in some scenarios, but there is no single rule-based algorithm that works well for all scenarios [10]. On the other hand, our proposed methods learn policies from problem-agnostic representations irrespective of problem size and characteristics. All scenarios consist of two activities and two resources but are different in process characteristics, such as processing time, process flow, and resource eligibility. For all scenarios, we assume that cases arrive according to a Poisson process with a rate of \(\lambda=\frac{1}{2}\). The processing time \(X\) is Gamma distributed with a mean of \(X_{r}\) and a standard deviation of \(0.25*X_{r}\), where \(r\in\mathcal{R}\). The first two scenarios are shown in Fig. 4 and are characterized by the utilization rate. In Fig. 
4a, the utilization rate is relatively low, which means that the number of unassigned activity instances, \(|K_{a}|\), at an activity, \(a\), will be lower on average. Therefore, an agent should mainly minimize the processing time to minimize cycle time. Oppositely, the scenario in Fig. 4b has a high utilization rate, which means that the number of unassigned activity instances is high, on average. Therefore, cases will spend most of their cycle time waiting, which makes minimizing the waiting time important in this scenario. The next two scenarios are shown in Fig. 5. Fig. 4(a) shows a business process with one fast and one slow server. This scenario reflects a situation where there is one experienced employee who works faster than an inexperienced employee. More specifically, the slow server (i.e., resource \(r=6\)) is especially slow at performing activity F, and an agent should avoid this assignment. Fig. 4(b) depicts a business process where the processing time of the second activity is slower than the first. The processing speed of both resources on both activities is the same, which makes the prioritization of activities important. The optimal strategy in this scenario is always to process the downstream activity first (i.e., activity H), as this completes the case and finishes the cycle time. The fifth and last scenario is shown in Fig. 6 and models a process with two activities with different resource eligibility. This problem is a variant of the well-studied bipartite matching problem (e.g., [3]) and is known as an 'N-network', referring to the 'N' shape of the bipartite graph with resources and activities as nodes, and edges to represent resource eligibility (Def 6). This process structure is also common in business processes, where a senior employee can execute all activities, but a junior employee can only execute simple activities. In this scenario, the agent should learn how to divide the capacity of resource \(r=10\). Finally, we evaluate our methods on a network created by connecting all previously presented scenarios such that the departures of one segment in the network (i.e., a scenario) are the arrivals of the next segment. Note that the Figure 4: Two business processes characterized by their utilization rate. Figure 5: One business process with a slow server and one with slow downstream processing arrival process of the subsequent segments in the network is no longer Poisson distributed, as arrivals depend on the departure process of the previous segment. ### Training setup The source code required to reproduce the experiments can be found in our Github repository 3. We train and evaluate the methods for each scenario with a simulation with a duration of 5000 time units, in which, on average, \(5000\cdot\frac{1}{\lambda}=5000\cdot\frac{1}{2}=2500\) cases arrive. The main difference between the two methods in terms of training is that the DRL agent receives rewards continuously throughout an episode, whereas the score-based method only observes the average cycle time at the end of an episode. Footnote 3: [https://github.com/jeroenmiddelhuis/LearningResourceAllocation](https://github.com/jeroenmiddelhuis/LearningResourceAllocation) #### 5.2.1 Deep Reinforcement Learning-based method We use the maskable PPO implementation of the contributory version of the popular DRL Python package Stable-Baselines34 and create our environment using the Gym RL API 5. 
We use the default hyperparameter settings except for PPO's clipping parameter \(\epsilon\) and learning rate \(\alpha\), which we set to 0.1 and 0.0001, respectively, to prevent large policy updates due to the high stochasticity of the problem. This makes the training process more stable, but the algorithm takes longer to converge to good policy. We decrease the learning rate linearly with the remaining time to encourage exploitation. The discount factor determines to what extent future rewards are credited to the current action, which we set \(\gamma=0.999\) to account for the long-term effects of actions on the cycle time. For example, making good decisions at a given point in time can prevent congestion in the process later. Footnote 4: [https://github.com/Stable-Baselines-Team/stable-baselines3-contrib](https://github.com/Stable-Baselines-Team/stable-baselines3-contrib) The value of the penalty for postponing should be relative to other rewards. A high average cycle time leads to low rewards for completing cases. The complete network is a combination of the five scenarios, and therefore, the cycle time will be higher. For this reason, We penalize the agent with -0.1 in all scenarios and -0.01 in the complete network. During training, we save the model with the highest average cumulative reward over the previous five episodes, and we train the model until the policy does not improve anymore. Figure 6: Scenario 5: N-network #### 5.2.2 Score-based value function approximation method. To train the weights of the SVFA method, we use Bayesian optimization. We specify all parameters we want to optimize, which are all the weights, and define a search space for all of them. In our setup, we use \(0\leq w_{i}\leq 20\), for \(i\leq 7\). The optimization is an iterative procedure that we run for a maximum of 100 trials for all process models, and the method converges before using all trials. The weights are learned for each scenario and the complete network independently. The method was implemented in Python using the scikit-optimize package 6. Footnote 6: [https://scikit-optimize.github.io/stable/](https://scikit-optimize.github.io/stable/) ### Results Table 1 shows the results of evaluating the trained DRL and SVFA methods compared to three well-known heuristics on 100 replications (of 5000 time units) of each scenario. The table reports the mean cycle times the trained methods achieve in the scenarios. The best results are in boldface. Note that if differences are not significant, multiple methods can be best. As can be seen in the table, DRL and SVFA outperform all heuristics in all scenarios except for SPT in the high utilization scenario. In this scenario, a good strategy is evidently to continuously make efficient assignments, which is what the SPT heuristic does. Our methods could learn how to do this to some extent, but not as effectively as SPT. SPT also performs well on the low utilization network, which is expected, as this scenario is about minimizing the processing time. Our methods are also able to learn good policies in this scenario. However, the SPT heuristic lacks performance in the other three scenarios as its decision rules force it to make poor assignments (e.g., slow server). FIFO performs well (comparable to SCFA) on the downstream network, which makes sense as the resources are identical in terms of processing time. Therefore, completing cases results in the lowest cycle time, which is what the FIFO heuristic does. 
Our learning methods are also able to learn good policies for this problem. \begin{table} \begin{tabular}{l c c c c} \hline \hline Business process & DRL & SVFA & SPT & FIFO & Random \\ \hline N-network & **4.4 (0.04)** & **4.4 (0.04)** & 5.0 (0.10) & 4.6 (0.04) & 4.6 (0.04) \\ Downstream & **8.0 (0.17)** & 8.1 (0.21) & 11.5 (0.36) & 8.1 (0.28) & 9.3 (0.24) \\ High utilization & 14.2 (0.86) & 13.9 (0.58) & **12.8 (0.47)** & 20.5 (1.41) & 25.3 (1.90) \\ Low utilization & **4.8 (0.05)** & **4.8 (0.04)** & **4.8 (0.04)** & 5.0 (0.05) & 5.6 (0.07) \\ Slow server & 8.8 (0.19) & **8.5 (0.15)** & 21.5 (1.17) & 14.7 (0.98) & 17.5 (0.96) \\ Complete network & **33.2 (0.97)** & **32.2 (0.70)** & 43.8 (1.85) & 36.3 (1.48) & 44.5 (2.00) \\ \hline \hline \end{tabular} \end{table} Table 1: Mean cycle time and half-width of the 95%-confidence interval for the six business process models for DRL, SVFA, and three heuristics. Results were obtained over 100 simulations. In the complete network, SVFA achieves the lowest average cycle time, followed by DRL, although the difference is not significant. A challenge with our DRL reward function is delayed and sparse rewards because the agent only receives a reward when a case completes. Credit assignment is also an issue since it is unclear which action relates to which case, making it hard for the agent to learn which actions lead to low cycle times. The number of activities in the complete network is much higher compared to the small scenarios, which is likely why DRL performs slightly worse than SVFA. SVFA does not learn based on intermediate rewards but only on the final average cycle time of an episode. Lastly, the confidence intervals of our learning methods are smaller compared to the heuristics, which indicates they learn more stable policies. Our methods make assignments based on a specific process state, which allows the methods to adapt to different situations, such as different loads. On the contrary, the heuristics always follow the same policy, which can lead to sub-optimal assignments. ## 6 Related work In business process optimization, several resource allocation methods have been proposed. Prescriptive process monitoring methods optimize for a single case, but not the complete process [8, 14, 18, 19]. Three rule-based methods that do optimize the complete process use process mining techniques to determine static resource allocation strategies. [20] assigns resources to activities based on the workflow of the process model. [2, 21] mine resource-related information in various dimensions to create a ranking of assignments. There are also more flexible methods that make assignments based on the current state of the process. [11] combine an offline prediction model with a scheduling algorithm. However, this approach relies heavily on the accuracy of the predictions. Other methods learn what assignments lead to desirable results using RL [7, 5, 23]. The advantage of these methods is that no specific information about the process is required, such as resource behavior, to design or select assignment rules. However, the limitation of the current RL approaches is that they are not suitable for realistic-sized problems and do not optimize more complex business objectives, such as the average cycle time or waiting time. Manufacturing processes are closely related to business processes, and numerous methods have been proposed for resource allocation in manufacturing. First, there are a large number of heuristics, such as earliest due date or FIFO [13]. 
While these heuristics are robust, explainable, and computationally efficient, they are less adaptable in situations for which they are not explicitly developed [10]. DRL has also been applied in job shop scheduling to find solutions in tractable time [4]. For example, [9] train an agent to select dispatching rules (i.e., heuristics). [15] solve a dynamic assignment problem by employing graph neural networks and DRL to capture complex structural relationships between different resources and jobs. However, these methods assume the sequence of activities is known, which is not the case in business processes. ## 7 Conclusion In this paper, we presented two learning methods for resource allocation in business processes to minimize the average cycle time of cases. The first approach is based on deep reinforcement learning, where an agent learns what assignments lead to a low cycle time. We define the observation space, action space, and reward function that can be used to solve the resource allocation problem. The second approach uses a score-based value function approximation approach, where a function is learned that scores possible assignments and then chooses the assignment with the best value. We benchmark our proposed methods against well-known heuristics in five scenarios that represent business processes with specific properties, as well as one large scenario that represents a realistic-sized business process. The results show that our proposed methods can learn policies that are competitive with or outperform the heuristics independently of process characteristics and outperforms the heuristics for a realistic-sized business process. In future work, we aim to apply our methods to real-world business processes. These processes are more complex and stochastic compared to the processes currently studied in BPO. While our methods work for large synthetic processes and are suitable for scaling, validation on real-world processes is necessary. Additionally, a limitation of current learning methods is that they are tested in the same environment (e.g., simulation model) as they are trained. In a real-world application, however, it is unlikely that the process is stationary and the test environment is identical. Furthermore, time is a challenging factor in RL, as the RL cycle only considers discrete decision steps and not the simulation time between those steps. A time-based objective, such as the cycle time, in addition to sparse and delayed rewards, adds much complexity to the learning process. In future work, we will investigate how we can overcome these challenges. **Acknowledgements.** The research leading up to this paper is supported by the Dutch foundation for scientific research (NWO) under the CERTIF-AI project (grant nr. 17998).
2302.04102
WF-UNet: Weather Fusion UNet for Precipitation Nowcasting
Designing early warning systems for harsh weather and its effects, such as urban flooding or landslides, requires accurate short-term forecasts (nowcasts) of precipitation. Nowcasting is a significant task with several environmental applications, such as agricultural management or increasing flight safety. In this study, we investigate the use of a UNet core-model and its extension for precipitation nowcasting in western Europe for up to 3 hours ahead. In particular, we propose the Weather Fusion UNet (WF-UNet) model, which utilizes the Core 3D-UNet model and integrates precipitation and wind speed variables as input in the learning process and analyze its influences on the precipitation target task. We have collected six years of precipitation and wind radar images from Jan 2016 to Dec 2021 of 14 European countries, with 1-hour temporal resolution and 31 square km spatial resolution based on the ERA5 dataset, provided by Copernicus, the European Union's Earth observation programme. We compare the proposed WF-UNet model to persistence model as well as other UNet based architectures that are trained only using precipitation radar input data. The obtained results show that WF-UNet outperforms the other examined best-performing architectures by 22%, 8% and 6% lower MSE at a horizon of 1, 2 and 3 hours respectively.
Christos Kaparakis, Siamak Mehrkanoon
2023-02-08T14:50:52Z
http://arxiv.org/abs/2302.04102v2
# WF-UNet: Weather Fusion UNet for Precipitation Nowcasting ###### Abstract Designing early warning systems for harsh weather and its effects, such as urban flooding or landslides, requires accurate short-term forecasts (nowcasts) of precipitation. Nowcasting is a significant task with several environmental applications, such as agricultural management or increasing flight safety. In this study, we investigate the use of a UNet core-model and its extension for precipitation nowcasting in western Europe for up to 3 hours ahead. In particular, we propose the Weather Fusion UNet (WF-UNet) model, which utilizes the Core 3D-UNet model and integrates precipitation and wind speed variables as input in the learning process and analyze its influences on the precipitation target task. We have collected six years of precipitation and wind radar images from Jan 2016 to Dec 2021 of 14 European countries, with 1-hour temporal resolution and 31 square km spatial resolution based on the ERA5 dataset, provided by Copernicus, the European Union's Earth observation programme. We compare the proposed WF-UNet model to persistence model as well as other UNet based architectures that are trained only using precipitation radar input data. The obtained results show that WF-UNet outperforms the other examined best-performing architectures by 22%, 8% and 6% lower MSE at a horizon of 1, 2 and 3 hours respectively. UNet, Precipitation Nowcasting, Cloud Cover Nowcasting, Deep Learning ## I Introduction Heavy rainstorms can impact people's daily lives negatively. In 2017, the WMO (World Meteorological Organization)investigated the impact of precipitation extremes, in terms of both excess and deficient rainfall 1. Thousands of people died and were displaced due to flooding and landslides in many countries worldwide. These disruptions had widespread socio-economic impacts. Therefore, it would be advantageous to forecast these weather events in advance so that decision-makers can take steps to safeguard lives, properties and wealth. Footnote 1: [https://public.wmo.int/en/media/news/rainfall-extremes-cause-widespread-socio-economic-impacts](https://public.wmo.int/en/media/news/rainfall-extremes-cause-widespread-socio-economic-impacts) Computational weather forecasting is an integral feature of modern, industrialized societies. It is used for planning, organization and managing many personal and economic aspects of life. Many industries such as agriculture [1], mining [2] and construction [3] rely on weather forecasts to make decisions, and if climatological events occur that are unexpected, this can lead to significant economic losses. Similarly, accurate weather forecasts improve flight and road safety and help foresee potential natural disasters. In precipitation nowcasting, one attempts to provide a short-range forecast of rainfall intensity based on radar echo maps, rain gauges and other observation data [4]. Precipitation nowcasting is an important task because of its immense impact, for instance, in predicting road conditions [5] and enhancing flight safety by providing weather guidance for regional aviation [6]. Furthermore, it can help with flood mitigation, water resource management [7] and avoiding casualties by issuing citywide rainfall alerts [8]. Forecasting weather generally relies on Numerical Weather Prediction (NWP) [9], methods based on mathematical formulations using several atmospheric features. 
Although NWP is a powerful method for such forecasting tasks, it also has drawbacks as it requires too much computational power [10]. In addition, forecasting with NWP may be sensitive to noise present in the measurements of weather variables [11, 12]. NWP models may take hours to run and are also less accurate than persistence-based forecasts on less than 4 hour predictions [13, 14]. In recent years, the enormous amount of ever-increasing weather data has stimulated research interest in data-driven machine learning techniques for nowcasting tasks [15, 16, 17, 18, 19, 20]. Unlike the model-driven methods, data-driven models do not base their prediction on the calculations of the underlying physics of the atmosphere. Instead, they analyze and learn from historical weather data such as past wind speed and precipitation maps to predict the future. By taking advantage of available historical data, data-driven approaches have shown better performance than classical ones in many forecasting tasks [13, 14]. Recent advances in Artificial Neural Network architectures (ANNs) have demonstrated great potential in precipitation nowcasting tasks [21, 22, 23, 24]. The critical difference between NWP and ANNs is that the former is model-driven, and the latter is a data-driven approach [25, 26, 20]. While classical machine learning techniques rely on handcrafted features and domain knowledge, deep learning techniques automatize the extraction of those features [27]. Deep learning techniques have demonstrated remarkable results in several domains such as biomedical signal analysis, healthcare, neuroscience and dynamical systems, among others [18, 28, 29, 30, 31]. Due to underlying complex dynamics of weather data, ensuring accurate nowcasting at several temporal-spatial levels is a challenging task. Deep learning based models have recently received much attention in this area because of their powerful abilities to learn spatiotemporal feature representations in an end-to end fashion [20, 21, 32, 33]. UNet, one of the successful deep learning architectures, which was originally proposed for image segmentation task, have shown promising results in various domains such as precipitation nowcasting, background detection in video analysis and image super-resolution [20, 32, 33, 34, 35, 36]. It consists of a contracting path to extract features, and an expanding path, to reconstruct a segmented image, with a set of residual connections between them to enable precise localization. Most of the current state-of-the-art deep learning based models use only the past precipitation radar images to predict the future precipitations [20, 21, 25, 32]. However, classical weather forecasting which generally rely on Numerical Weather Prediction (NWP), uses several atmospheric features (weather variables). Generally, it is common for measurements from different modalities to carry complementary information about various aspects of a particular activity [37]. Therefore, data-driven models that combine data from multiple sources can potentially provide more accurate inferences. This research aims to develop a novel architecture that can incorporate multiple weather variables such as past precipitation together with wind speed, as input to more accurately forecast the precipitation. This paper is organized as follows. A brief overview of the related research works is given in Section 2. Section 3 introduces the proposed WF-UNet model. The data description and used preprocessing steps are explained in Section 4. 
The experimental setup and and evaluation of the models are given in Section 5. The obtained results are discussed in Section 6 and the conclusion is drawn in Section 7. ## II Related Work In recent years, data-driven models have gained much attention for predicting weather elements such as temperature, wind speed and precipitation [12, 28, 32, 38]. Due to the vast amount of available weather data and the fact that weather element forecasting can be formulated as a sequence prediction problem, deep neural networks architectures such as Recurrent Neural Network (RNN) [17], Long Short-Term Memory (LSTM) [21] and Convolutional Neural Network (CNN) [39] among others are suitable candidates to address various problems in this field. In particular, CNN architectures have shown their excellent ability to handle 2D and 3D images. Moreover, thanks to the versatility of CNNs, nowcasting problems can be tackled in different fashions. For instance, the authors in [40] and [15] treated the multiple timesteps as multiple channels in the network. This way, they could apply a 2D-CNN to perform the predictions. The authors in [25] also treated the multiple timesteps as depth in the samples and applied a 3D-CNN and approximate more complex functions. The authors in [21], introduced an architecture that combines both CNN's and RNN's strengths to be able to more efficiently work with image data as well as capturing long-range dependencies for the precipitation nowcasting task. Hybrid models like in [41] replace the fully-connected LSTM with ConvLSTM to better capture the spatiotemporal correlations. In [21], the network predicted raw pixels directly rather than predicting the transformation of the input frame whereas [42] predicted transformed pixels instead. In [4, 15, 20], the authors show the successful application of UNet model in precipitation task due to its autoencoder-like architecture and ability to tackle image-to-image translation problems. The SmaAt-UNet model, an extension of the UNet model, which significantly reduces the UNet parameters without compromising its performance, is introduced in [20]. Finally, the authors in [18] introduced BroadUNet by equipping the UNet model with asymmetric parallel convolutions and the Atrous Spatial Pyramid Pooling (ASPP) module. Despite its many advantages, UNet can still result in limited abilities when modelling long-range dependencies due to the intrinsic locality of the convolution-based operations. In [19], the researchers propose another variation of the UNet model, the TransUNet. CNN-Transformer is used as an encoder in TransUNet, whereas the original UNet decoder is used as a decoder. There has been success in applying this model to medical image segmentation tasks. Finally, in [33], the researchers propose another variation of the UNet model, the AA-TransUNet. A pair of key elements are added to extend TransUNet: Convolutional Block Attention Modules (CBAM) and Depthwise-separable Convolutions (DSC). Therefore, the model can explore the inter-channel and inter-spatial relationships of features by performing both channel-wise and spatial-wise attention. ## III Proposed Model This section introduces our proposed **WF-UNet** model which adopts a variation of the original UNet architecture as its core building block and combines it with a data fusion strategy in order to incorporate two different weather variables. ### _Core UNet model_ Compared to the original UNet [43], here 3D Convolution layers are used. 
These convolutions can not only capture the spatial information contained within one radar image, but the spatial information of an image with its previous timesteps, i.e., the _temporal_ information. Fig. 1 outlines the architecture of the used Core UNet model which consists of the encoder and an decoder paths. As shown in Fig. 1, the model includes five consecutive levels in each path with the following operations: double 3D convolution (blue arrows), dropout and spatial pooling (pink arrows), upsampling (yellow arrows) and finally feature concatenation (dotted rectangles). The last layer of the model is a \(1\times 1\) 3D convolution (green arrow), which produces a single feature map that represents the prediction results. ### _Weather Fusion UNet (WF-UNet)_ We propose the Weather Fusion UNet (WF-UNet) model for the precipitation nowcasting task. In contrast to other UNet based model which use only precipitation maps as input for precipitation nowcasting, here the proposed model utilizes past precipitation as well as wind speed radar images as input. In particular, the proposed WF-UNet architecture, shown in Fig. 2, is composed of two separate core UNet streams. Two different weather variable inputs, i.e., precipitation and wind speed radar images, pass through the two streams. Here, the network processes the precipitation and wind speed images separately and then fuse the extracted features from both streams at the later decision-making stage. This architectural design is intended to address intrinsic differences between information captured in precipitation and additional radar data by encouraging each stream to learn informative features from the respective sensor separately before combining the features. The fusion is implemented via concatenating along the depth axis of the Core UNet outputs, followed by a convolutional layer. Similar to UNet, WF-UNet outputs values between 0 and 1 by applying a 3D convolution with a 1x1 kernel and a subsequent linear activation function in the final layer. ## IV Data description and preprocessing Here, we provide an overview of the dataset adopted for this research. we have selected a subset of the ERA5 dataset [44], which includes observations of total precipitation and wind speed at the same resolution, coordinates and measurement interval. The ERA5 dataset is provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) through the Copernicus Climate Change Service (C3S). Through C3S, scientists combine observations of the climate system with the latest scientific findings to produce authoritative, quality-assured information about Europe's past, present, and future meteorologic conditions. The measurements of the selected weather variables come in grid radar images with 31 square kilometres per pixel resolution. Images from this region cover much of the western part of Europe within the given latitude and longitude coordinates. More specifically the study area includes all or part of 14 countries: Andorra, Belgium, Denmark, France, Germany, Ireland, Italy, Monaco, the Netherlands, Portugal, Spain, Switzerland, Luxembourg and the United Kingdom. The collected dataset covers a six-year period, from 2016 to 2021, with hourly measurements. Fig. 3 and 4 display the geographical area we are considering and image examples of the two selected variables respectively. The _Total Precipitation_ is the accumulated liquid and frozen water that falls to the Earth's surface, comprising rain and snow. 
The values of each pixel represent the accumulated amount of rainfall in the last hour one 31 square kilometer. The ERA5 dataset provides the \(u\) and \(v\) components of wind, i.e., eastward and northward components, measured at a height of 100 metres above the surface of the Earth, in metres per second. Given the \(u\) and \(v\) components, the wind speed of the wind vector \(V\) is determined by \(\sqrt{u^{2}+v^{2}}\). The collected raw radar maps have a dimension of \(105\times 173\) and one pixel corresponds to the accumulated rainfall or mean wind speed in the last hour on 31 square kilometers. As a data preparation step, we divided the values (separately for each weather variable) of both the training and testing set by the highest occurring value in the training set to normalize the data. Furthermore, we cropped the images to \(96\times 96\). There is a contrast in the number of pixels showing rainfall compared to those not. Therefore, following the lines of [20] and [18], we created two additional datasets, i.e., EU-50 and EU-20, whose images have at least 50% and 20% of rain in the pixels Fig. 1: The Core UNet encoder-decoder network. A multi-channel feature map is represented by each cube in the figure. Numbers on the top-right of a cube indicate the number of channels, while numbers on the left of a cube indicate the resolution. Note that at each level of the architecture, the feature maps have the same resolution. The input is the ground truth of a sequence of precipitation maps, whereas the output is the corresponding prediction map produced by the model. respectively. We split the dataset into a training set (years 2016-2020) and a testing set (year 2021). Additionally, for every training iteration, a validation set is created by randomly selecting 10% of the training set. A comparison of different sample sizes of the three datasets can be found in Table I. The EU-50 dataset is used to train and test the models. Additionally we used the test set of EU-20 to compare the generalization performance of the trained models. Fig. 5 shows an example of EU-50 dataset over 6 hours. ## V Experimental setup and evaluation The data is arranged so that the resulting inputs of total precipitation \(I_{tp}\) and wind speed \(I_{ws}\) are three dimensional array, i.e., \(I_{tp},I_{ws}\in R^{TxHxW}\). Here, \(T\) is the number of lags or previous time steps corresponding to the time dimension. \(H\) and \(W\) refer to the size of the image and make up the spatial dimensions. TensorFlow is used to implement the models and train and evaluate them on the given dataset. We aim at nowcasting precipitation map for one to three hours ahead. The number of previous time-steps (lag) is set to 12, which is empirically found to be the best among the tested lag values. The height and width of the images are 96 and 96. The model receives \(I_{tp}\) and \(I_{ws}\) inputs of shape \((12,96,96)\) and output of shape \((1,96,96)\). The Mean Squared Error (MSE) is used as the loss function and is optimized using Adam optimizer with an initial learning rate of 0.0001. In addition, the batch size and the dropout rate are set to 2 and 0.5, respectively. All previously described models were trained for a maximum of 200 epochs. We employed an early stopping criterion which stopped the training process when the validation loss did not increase in the last 15 epochs. 
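A minimal TensorFlow/Keras sketch of the model and training setup just described is given below. The `build_core_stream` function is only a tiny placeholder for the Core 3D-UNet of Fig. 1 so that the sketch is self-contained; input shapes and hyper-parameters follow the text, everything else is an illustrative assumption rather than the exact configuration used by the authors.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_core_stream():
    """Placeholder for the Core 3D-UNet stream of Fig. 1 (a single double-conv
    block here, only to keep the sketch runnable; the real stream has five levels)."""
    inp = layers.Input(shape=(12, 96, 96, 1))
    x = layers.Conv3D(16, kernel_size=3, padding="same", activation="relu")(inp)
    x = layers.Conv3D(16, kernel_size=3, padding="same", activation="relu")(x)
    return Model(inp, x)

def build_wf_unet():
    """Two separate but identical core streams fused at decision level, as in Fig. 2."""
    precip_in = layers.Input(shape=(12, 96, 96, 1), name="precipitation")
    wind_in = layers.Input(shape=(12, 96, 96, 1), name="wind_speed")
    precip_feat = build_core_stream()(precip_in)   # each stream has its own weights
    wind_feat = build_core_stream()(wind_in)
    # Late (decision-level) fusion: concatenate the two stream outputs, then a
    # 1x1 3D convolution with a linear activation produces the nowcast.
    fused = layers.Concatenate(axis=-1)([precip_feat, wind_feat])
    out = layers.Conv3D(1, kernel_size=1, activation="linear")(fused)
    return Model([precip_in, wind_in], out)

model = build_wf_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
# Stop when the validation loss has not improved for 15 epochs, as in the text.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=15)
# model.fit([x_precip, x_wind], y_precip, validation_split=0.1,
#           batch_size=2, epochs=200, callbacks=[early_stop])
```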
Additionally, we used a learning rate scheduler that reduced the learning rate to a half of the previous learning rate when the validation loss did not increase for four epochs. Here, we use MSE as the primary metric to assess the model performance. Furthermore, we also include additional metrics such as accuracy, precision and recall. Following the lines of [10], we first create a binarized mask of the image according to a threshold to calculate these new metrics. This threshold is the mean value of the training set from the EU-50 dataset. Hence, any value equal to or above the threshold (0.0047) is replaced by 1, and any value below it is replaced by 0. The models are trained on a single NVIDIA Tesla V100 with 32Gb of VRAM. ## VI Results and Discussion We compare the performance of our WF-UNet model with four other models, i.e., the persistence model, core UNet Fig. 4: The dataset radar images for the different weather variable. The countries and coast outlines are included for visualization purposes only. Fig. 3: The study area framed in red. Fig. 2: The architecture of WF-UNet. Two different inputs (i.e. precipitation and wind speed radar images) pass through two separate but identical Core UNet streams. The outputs of those streams are then concatenated (late/decision-level fusion). Lastly, a final 3D convolution with a 1x1 kernel and a linear activation function are applied to produce the final output. [43], AsymmetricInceptionRes3DDR-UNet [32] and Broad-UNet [20]. The models all trained using training set of EU-50 data and tested over the test sets of both EU-50 and EU-20 datasets. The MSE is the main metric used for this comparison and is calculated over the denormalized data. Three additional metrics, i.e., accuracy, precision and recall are also computed over the binarized data, as described in section 5. The performance of different models on the EU-50 and EU-20 test datasets are shown in Table II and III respectively. From the obtained results, one can observe that our proposed WF-UNet model performs consistently better than all the other models for both EU-50 and EU-20 datasets and over different timestep ahead. From Table II, one can notice that the largest difference in performance of WF-UNet compared to the next best performing model is for the 1-hour ahead prediction. As the number of step-ahead increases, the gap between the performance of the proposed WF-UNet and the other examined models decreases. Fig. 6, shows the average MSE obtained using the ground truth and the nowcasts for all three timesteps ahead. One can note that the proposed WF-UNet outperforms the other tested models. Two precipitation nowcasting examples are shown in Fig. 7. The example on the left side is taken from the test set of the EU-20 dataset. The example on the right side is taken from the test set of the EU-50 dataset. As can be seen, the sample image of EU-50 has significantly more rain pixels than that of EU-20. Following the results from Tables II and III, the 1-hour ahead predictions in Fig. 7 appear to be the most similar to the ground truth image and as the hours ahead prediction increases Fig. 5: Example of the EU-50 dataset over 6 timesteps. The first row displays Total Precipitation radar images while the second row Wind Speed radar images. Fig. 6: Average test MSE over all three timesteps ahead. the similarity between the nowcasts and the ground truth image decreases. Fig. 8 shows an example of the learned feature maps for the two streams of WF-UNet model. 
The image fed to the network is the ground truth image previously shown in Figure 7. We can observe that different features have been extracted, some with more details than others. The obtained results show that the inclusion of multiple features such as precipitation and wind speed maps allows the WF-UNet to obtain more precise nowcasts. Furthermore, the decision-level fusion approach allows the network to process the two images separately and then fuse the extracted features from both streams at the later decision-making stage. With this approach, we can observe a 22% and an 11% improvement to the core UNet on both datasets for the 1-hour ahead predictions. The binarized predictions of the WF-UNet are 6% more accurate than the core UNet for 1-hour and 2-hour ahead predictions and 5% more accurate for 3-hours ahead predictions. This means that WF-UNet is better than current state-of-the-art models at predicting the position of precipitation pixels in the image. Since the model aims to perform a regression of each pixel with a wide range of values, achieving accurate forecasting or equivalently lower MSE values is more desirable than having good binary accuracy only. That is where our approach shows superior performance to the previous state-of-the-art models. Fig. 8: Feature maps outputted from the two streams before they are concatenated in the Decision-level Fusion stage. The first row represents the output of the Precipitation stream, while the second row the output of the Wind Speed stream. Fig. 7: WF-UNet precipitation nowcasts examples. The images in the left side are generated with the test set from the dataset containing at least 20% of rain pixels (EU-20). The images in the right side are generated with the test set from the dataset containing at least 50% of rain pixels (EU-50). ## VII Conclusion The WF-UNet, an extension of the UNet architecture, is introduced for precipitation nowcasting up to three hours ahead. The model incorporates and learns from wind speed and precipitation variables using two streams. The learned features of both streams are then fused before reaching to the output of the model. The use of decision-level fusion helps to capture the spatiotemporal information of past radar images better than the classical UNet model. Compared to other tested UNet-based models, WF-UNet extracts features more efficiently, making its predictions more accurate in short-term nowcasting.
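As a footnote to the evaluation protocol of Section V, the binarised accuracy, precision and recall amount to thresholding both the prediction and the ground truth before comparing them. A minimal sketch follows; the threshold value is the one quoted in the text, the rest is an illustrative implementation.

```python
import numpy as np

def binarized_metrics(y_true, y_pred, threshold=0.0047):
    """Accuracy, precision and recall on rain/no-rain masks (sketch).

    The threshold is the mean pixel value of the EU-50 training set, as in Section V.
    """
    t = (y_true >= threshold).astype(int)
    p = (y_pred >= threshold).astype(int)

    tp = np.sum((p == 1) & (t == 1))
    fp = np.sum((p == 1) & (t == 0))
    fn = np.sum((p == 0) & (t == 1))
    tn = np.sum((p == 0) & (t == 0))

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return accuracy, precision, recall
```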
2308.11232
Branched flows in active random walks and the formation of ant trail patterns
Branched flow governs the transition from ballistic to diffusive motion of waves and conservative particle flows in spatially correlated random or complex environments. It occurs in many physical systems from micrometer to interstellar scales. In living matter systems, however, this transport regime is usually suppressed by dissipation and noise. In this article we demonstrate that, nonetheless, noisy active random walks, characterizing many living systems like foraging animals, and chemotactic bacteria, can show a regime of branched flow. To this aim we model the dynamics of trail forming ants and use it to derive a scaling theory of branched flows in active random walks in random bias fields in the presence of noise. We also show how trail patterns, formed by the interaction of ants by depositing pheromones along their trajectories, can be understood as a consequence of branched flow.
King Hang Mok, Ragnar Fleischmann
2023-08-22T07:14:27Z
http://arxiv.org/abs/2308.11232v2
# Branched flows in active random walks and the formation of ant trail patterns ###### Abstract _Branched flow_ governs the transition from ballistic to diffusive motion of waves and conservative particle flows in spatially correlated random or complex environments. It occurs in many physical systems from micrometer to interstellar scales. In living matter systems, however, this transport regime is usually suppressed by dissipation and noise. In this article we demonstrate that, nonetheless, noisy active random walks, characterizing many living systems like foraging animals, and chemotactic bacteria, can show a regime of branched flow. To this aim we model the dynamics of trail forming ants and use it to derive a scaling theory of branched flows in active random walks in random bias fields in the presence of noise. We also show how trail patterns, formed by the interaction of ants by depositing pheromones along their trajectories, can be understood as a consequence of branched flow. ## I Introduction High intensity fluctuation patterns and extreme events are hallmarks of branched flow, which very generically occurs in the propagation of waves, rays or particles in weakly refracting correlated random (or even periodic) media [1, 2, 3, 4, 5]. It is a ubiquitous phenomenon and has been observed in many physical systems, e.g. in electronic currents refracted by weak impurities in high mobility semiconductors [6, 7], light diverted by slight variations of the refractive index [8, 9], microwaves propagating in disordered cavities [10, 11], sound waves deflected in the turbulent atmosphere [12, 13] or by density fluctuations in the oceans [14]. Wind driven sea waves are piled up by eddies in the ocean currents to form rogue waves [15, 16, 17, 18, 19] and tsunamis are focused to a multiple of their intensity even by minute changes in the ocean depth [20]. Branched flow dominates propagation on length scales between the correlation length of the medium and the mean free path of the flow that is traversing it, i.e. a regime between ballistic and diffusive transport in an environment with _frozen_ or _quenched_ disorder. An example of a branched flow is shown in Fig. 1a. Experiments that have analysed the laws of motion of individual Argentine ants (Linepithema humile) during trail formation [21] have inspired us to study if branched flow can also occur in random walks in biology. In living systems, motion in general is overdamped and the phase space structures responsible for branched flows can not form. However, frequently the inevitable input of energy happens in form of self-propulsion and motion is best described as a _active random walk_ (often referred to as _active brownian particles_) [22, 23, 24, 25]. And in many situations this biological or biologically inspired active random walks are subject to _bias fields_, like for instance the distribution of food in animal foraging or bacterial chemotaxis but also of chemicals acting as fuel for self-propelled colloids [26, 27, 28, 29]. In the dynamics of ants, the bias field is representing pheromones left by other ants along their paths. Using this example we show that in Figure 1: **Branched flow of ant trajectories.** Example densities of model trajectories of ants exiting from a hole in the center of a disc with a random pheromone field and leaving the disc at the edge. Comparison of the dynamics according to (**a**) the noise free (\(\gamma_{l}=0\)) differential equations of motion Eq. 
**c** with (**b**) the integro-differential dynamics of Eq. 5 with a detection radius of \(30\%\) of the correlation length of the pheromone field. (**c**) Example of a random pheromone field model with density \(n=1.0\). active random walks in correlated random bias fields one can observe the same phenomenology of branched flow as in conservative flows, and that due to the heavy-tailed density fluctuations associated with branched flow, this can have severe implications on pattern-formation. The agents in active random walks, however, are usually not only influenced by the bias fields but their directionality and/or position are also subject to temporal fluctuations (which we will assume to be uncorrelated in the following), leading also to diffusion. We study how this _stochastic diffusion_ destroys branched flow, by deriving a universal scaling theory as the main result of this paper. Finally, we simulate trail formation by explicitly modelling the pheromone deposition along ant trajectories and its feedback on the trajectories of following ants, resembling the phenomenology of the experimental observations of Ref. [21] and demonstrate its connection to the phase space structures of branched flows. ## II Ant dynamics The complex collective and social behaviour of ants is of course governed by many ways of interaction: visual, tactile and chemical. Here, following the lead of Ref. [21], we want to concentrate on a simple model of the dynamics of Argentinian ants as they are influenced by the pheromones deposited by other ants. Please note that, due to several uncertainties in the experimental knowledge that would require us to make assumptions in the model and fit many parameters, our aim will not be to quantitatively reproduce experimental findings but to formulate a clean, simplified model that captures the main aspects observed qualitatively and phenomenologically and which allows us to study the fundamental implications of correlated bias fields. The experiments reveal that the (mean) directional change of the ants' trajectories due to their perception of spatial variations in the pheromone concentrations can well be described by a (generalized) Weber's law [30]: \[\langle\Delta\varphi\rangle=A_{\Delta t}\,\frac{L-R}{L+R+T_{0}}, \tag{1}\] where \(\Delta\varphi\) is the change in direction of the ant's velocity \(\mathbf{v}=(v\cos\varphi,v\sin\varphi)^{T}\) in a given time interval \(\Delta t\), and \(\langle.\rangle\) denotes an appropriate ensemble average (because \(\Delta\varphi\) itself is a stochastic quantity, as will be discussed later). The quantities \(L\) and \(R\) are measuring the concentration \(c(\mathbf{r})\) of pheromones that the ant detects, integrated over certain domains to the left and right of its projected path, respectively (\(\mathbf{r}=(x,y)^{T}\) is the two dimensional position vector). At very low pheromone concentrations the detection threshold of the ants' sensors will eventually be reached and therefore Weber's law was adjusted by a threshold parameter \(T_{0}\). The proportionally constant \(A_{\Delta t}\) depends on the time interval. We have generalized the model of Ref. [21] slightly by introducing a smooth kernel function \(K(r^{\prime})\) instead of sharp domains. 
It is defined such that \[L-R =\iint_{\mathbb{R}^{2}}K\left(\mathbf{D}_{-\varphi}(\mathbf{r}^{ \prime}-\mathbf{r})\right)\,c(\mathbf{r}^{\prime})\,dx^{\prime}\,dy^{\prime} \tag{2}\] \[L+R =\iint_{\mathbb{R}^{2}}\left|K\left(\mathbf{D}_{-\varphi}( \mathbf{r}^{\prime}-\mathbf{r})\right)\right|c(\mathbf{r}^{\prime})\,dx^{ \prime}\,dy^{\prime}, \tag{3}\] when the ant is positioned in \(\mathbf{r}\) and running in direction \(\varphi\). \(D_{\theta}=\left(\begin{array}{cc}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{array}\right)\) denotes the rotation matrix. The integral of \(|K|\) is normalized to \(1\). Even though a smooth kernel is more realistic than sharp domains, we don't know its actual shape and thus have to assume some form. In the following for simplicity we chose a kernel based on a two times differentiated Gaussian, i.e. \[K(x,y)=2\,x\,y\,e^{-(x^{2}+y^{2})/\mathcal{R}^{2}}\Theta(x)/\mathcal{R}^{4}, \tag{4}\] where \(\Theta(x)\) is the Heaviside step function. The extent of the area over which the ant can detect the pheromone is parameterized by \(\mathcal{R}\). The kernel is illustrated in Fig. 2. We can now write the equations of motion as integro-differential equations (taking the limit \(\Delta t\to 0\)) as \[\begin{split}\dot{x}&=\cos\varphi\qquad\qquad\dot{y} =\sin\varphi\\ \dot{\varphi}&=A_{0}\,\frac{L-R}{L+R+T_{0}}.\end{split} \tag{5}\] Here we assumed (again following [21]) that the ants' speed does not vary strongly and that we can choose the magnitude of the velocity to be constant, \(v\equiv 1\)[31]. Figure 2: **Pheromone detection.** An ant running in y-direction, i.e. \(\varphi=\pi/2\). The kernel function of Eq. 4 is shown in colour code (blue to red). The solid black curve shows the value of the kernel along the dashed dotted diagonal cut. The maximum is reached at a distance \(\mathcal{R}\) from the origin, i.e. the position of the ant. To facilitate simulating many realizations of random bias fields and large ensembles of random walkers, we simplify these integro-differential equations into ordinary differential equations by Taylor-expaning the concentration field and taking the limit \(\mathcal{R}\to 0\) (see appendix VII.1). Furthermore, as hinted earlier, the changes of the direction of the ants are not deterministic, but stochastic quantities. Therefore we are also introducing noise terms to the equation of motion, finally yielding stochastic differential equations: \[\begin{split} dx&=\cos\varphi\,dt+\gamma_{1}\,dW_{x} \\ dy&=\sin\varphi\,dt+\gamma_{1}\,dW_{y}\\ d\varphi&=\alpha\,\frac{\mathbf{\nabla}c\cdot\mathbf{n }}{c+T_{0}}\,dt+\gamma_{2}\,dW_{\varphi}.\end{split} \tag{6}\] Here \(\mathbf{n}=(-\sin\varphi,\cos\varphi)^{T}\) is the unit normal vector of the ant's velocity and the proportionality constant \(\alpha\) is a measure of the sensitivity of the ant's response to spatial variations in the pheromone field. The quantities \(dW_{i}\) are independent white noise (Wiener) processes and the parameters \(\gamma_{1}=\sqrt{2\,D_{1}}\) and \(\gamma_{2}=\sqrt{2\,D_{2}}\) quantify the strength of translational and rotational diffusion, respectively [32]. In the case of the ant dynamics we will always assume \(\gamma_{1}=0\), but since our results will also be interesting and valid for other active random walks like bacterial chemotaxis, we will later also let \(\gamma_{1}>0\). 
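For illustration, the kernel of Eq. 4 and the resulting turning response of Eq. 1 can be evaluated numerically by approximating the integrals of Eqs. 2 and 3 with sums over a grid. The grid-based discretisation below is only a sketch; the concentration field `c`, the grid arrays and the parameter values are placeholders, not the authors' implementation.

```python
import numpy as np

def kernel(x, y, R=1.0):
    """Detection kernel of Eq. 4: a twice-differentiated Gaussian restricted to the
    forward half-plane of the ant (Heaviside step in the x direction)."""
    return 2.0 * x * y * np.exp(-(x**2 + y**2) / R**2) * (x > 0) / R**4

def turning_response(c, X, Y, pos, phi, A0=1.0, T0=0.0, R=1.0):
    """Mean turning response of Eq. 1 for an ant at `pos` heading in direction `phi`.

    c is the pheromone concentration sampled on the grid (X, Y); the integrals of
    Eqs. 2 and 3 are approximated by Riemann sums over grid cells of area dA.
    """
    dA = (X[0, 1] - X[0, 0]) * (Y[1, 0] - Y[0, 0])
    dx, dy = X - pos[0], Y - pos[1]
    # Rotate relative coordinates by -phi, so the ant's heading maps onto the x axis.
    xr = np.cos(phi) * dx + np.sin(phi) * dy
    yr = -np.sin(phi) * dx + np.cos(phi) * dy
    K = kernel(xr, yr, R)
    L_minus_R = np.sum(K * c) * dA
    L_plus_R = np.sum(np.abs(K) * c) * dA
    return A0 * L_minus_R / (L_plus_R + T0)
```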
Figure 1 illustrates that the (noise free) dynamics of these equations well captures the essentials of the dynamics obtained using the kernel method above, here shown in a random pheromone field model which will be introduced in the next section. ## III Dynamics in random fields and typical length scales In many situations the environment of an active random walk is complex and the bias field will be best described as a _correlated random field_. In the ant dynamics this might e.g. be in a crowded environment, where many ant trajectories cross paths. In the experiments of Ref. [21], where ants exit a hole in the centre of a disc in random directions, the estimated pheromone field does not show signs of trail formation in the first 10 minutes, but appears to be random. In branched flows even minute (but correlated) random variations in the environment lead to heavy-tailed, branch-like density fluctuations in the flow. We can therefore hypothesize that these structures can be the seeds of emerging patterns like the formation of ant-trails. We will therefore begin our analysis of ant dynamics by studying motion in random fields. We will use a very simple random field model with only few parameters: imagine a concentration field that is created by randomly placed Gaussian-shaped mono-disperse pheromone "droplets" (see appendix VII.2). It can be characterized by a correlation length \(\ell_{c}\) and the density \(n\) of droplets per area \(\ell_{c}^{2}\). Its mean and variance are then given by \(\langle c\rangle=n/\ell_{c}\) and \(\sigma_{c}^{2}=n/(\pi\ell_{c}^{4})\). An example for \(n=1\) is shown in Fig. 1c. For analysing the interplay of branched flow and stochastic diffusion we will need to know the scaling of the typical length scale of branched flows governed by the equations of motion 6. One prominent characteristic of branched flows is the occurrences of _random caustics_[33, 34, 1, 3, 2, 35] and for dynamics in correlated random potentials it is well established that the typical length scale of branched flow is given by the average distance (\(d_{c}\)) a ray or trajectory has to propagate until it reaches the first caustic [11, 2, 2, 3]. More details can be found in Appendix VII.3, where we transfer the scaling arguments derived earlier to the ant dynamics. The essence of these arguments is that in a paraxial approximation \(d_{c}\) is reached if the diffusion due to the random bias field in direction perpendicular to the initial propagation direction covers a correlation length of the random field, i.e. (assuming initial motion in \(x\) direction) \[\left\langle(y-y_{0})^{2}\right\rangle\propto\;\ell_{c}^{2},\] where \(<\cdot>\) is the average over many random fields of equal characteristics or, due to self-averaging, the average over initial conditions in sufficiently large systems. From this we find that \[d_{c}\propto\left(\frac{\ell_{c}}{\alpha r_{c}}\right)^{2/3}, \tag{7}\] with the _correlation radius_ of the random force \[r_{c}=\left(\int_{-\infty}^{\infty}C_{F}(x,0)\;dx\right)^{1/2} \tag{8}\] Figure 3: **Characteristic length scale.** The numerically obtained _mean distances to the first caustic_\(d_{c}\) for random pheromone fields for 441 combinations of different parameters \(n\), \(\alpha=[0.1,0.3,0.5]\) and \(T_{0}=[0.0,0.032,0.064,0.32,0.64,3.2,6.4]/\ell_{c}^{2}\) plotted versus \(n\) (the _trivial_ parameters are fixed to the values \(\ell_{c}=0.014\) and \(w=1\).). 
Each datapoint is averaged over 10 realizations of the pheromone field and \(10\,000\) initial conditions for each field. The simulations clearly confirm the scaling derived in Eq. 7: **(Left)**\(d_{c}\) in units of \(\ell_{c}\). **(Right)** The data of the left panel scaled by the right hand side of Eq. 7. where \(C_{F}(x,y)=\left\langle F_{y}(x,y)\,F_{y}(0,0)\right\rangle/\alpha^{2}\) is the correlation function of the force \(F_{y}=\left(\alpha\partial_{y}c\right)/\left(c+T_{0}\right)\). In the following we evaluated \(r_{c}\) numerically. To confirm the scaling behaviour of Eq. 7 we obtained the _first caustic statistics_ for a range of different parameters of the random pheromone fields by numerically integrating the stability matrix along trajectories as described in Appendix VII.4. The results are shown in Fig. 3. ## IV Diffusion and branched flow We are now equipped to study how the stochastic diffusion terms in Eq. 6 will suppress branched flows, which is illustrated in Fig. 4. To quantify the suppression we use the so-called _scintillation index_ of the trajectory density \[S(x)=\frac{\left\langle\rho^{2}(x,y)\right\rangle-\left\langle\rho(x,y) \right\rangle^{2}}{\left\langle\rho(x,y)\right\rangle^{2}}, \tag{9}\] where \(x\) is the main propagation direction of the flow and the average \(\left\langle\cdot\right\rangle\) is taken over realizations of random fields (and in practice, to save computation time, also over the direction perpendicular to the propagation direction). For simplicity we will use initially parallel flows in \(x\) direction in the following (instead of point sources where the main propagation direction would be radial). The region of the strongest branches in a branched flow is visible as a pronounced peak in the scintillation index (cf. Refs. [4; 11]). We will use the value of the scintillation index at the propagation distance where the branched flow is most pronounced in the absence of stochastic diffusion, i.e. at the peak position \(x_{p,0}\) for \(\gamma_{i}=0\) (as indicated in the lower panel of Fig. 4). Please note, that to be able to compare the trajectory densities (and thus the scintillation index) for different values of the stochastic diffusion terms, we need to make sure that a well defined state, i.e. a _non-equilibrium steady state_ (NESS), has been reached. Particles (= ants) are entering from the left and exit to the right (and on the left) of the integration region (for practical reasons we are using periodic boundaries in \(y\) direction). With increasing stochastic terms the total integration time until all particle have left the integration region is increasing. Figure 5 shows \(S(x_{p,0})\) normalized to the peak height \(S_{0}(x_{p,0})\) of the scintillation index in the absence of stochastic terms. We argue that branched flow will be suppressed when the stochastic terms are interfering with the basic mechanism of caustic creation. That means, when, at the mean time to the first caustic, the stochastic terms cause a diffusive standard deviation in \(y\) of the same order of magnitude as the diffusion due to the (static) pheromone field, i.e. proportional to the correlation length \(\ell_{c}\) of the random field. For translational diffusion this scaling argument for the characteristic fluctuation strength \(\gamma_{1}^{*}\) reads \[\left\langle(y-y_{0})^{2}\right\rangle={\gamma_{1}^{*}}^{2}\,d_{c}\propto\ell _{c}^{2}\] and thus by using Eq. 
7 we can define \[\gamma_{1}^{*}=\left(\alpha\,\ell_{c}^{2}\,r_{c}\right)^{1/3} \tag{10}\] Figure 4: **Impact of stochastic diffusion.** Illustration of the suppression of branched flow by increasing stochastic diffusion. **(Upper panel)** Flow from a point source with increasing stochastic rotational diffusion (growing \(\gamma_{2}\)). The other parameters are \(\alpha=0.2\), \(T_{0}=0.0\), \(\ell_{c}=0.005\), and \(n=1.0\). The system size is set to unity \(\mathcal{L}=1\). See also _Video 1_ in the supplementary information. **(Lower panel)** Scintillation index for initially parallel branched flows for increasing translational stochastic diffusion (\(\gamma_{1}=0.0,\cdots,0.15\)). The scintillation index is averaged over \(y\) and \(60\) realisations of the fields. The three insets on the right illustrate single realizations corresponding to three of the scintillation index curves (as indicated by color). The other parameters are \(\alpha=0.025\), \(T_{0}=0.0\), \(\ell_{c}=0.01\), and \(n=1.0\). Figure 5: **Suppression of branched flow by diffusion**. The scintillation index \(S(x_{p,0})\) at the position \(x_{p,0}\) of the peak (of height \(S_{0}(x_{p,0})\) ) in the noise free branched flow as a function of the fluctuation strength (\(\gamma_{i}\)) for **(left panel)** translational diffusion and **(right panel)** rotational diffusion for 160 combinations of the other parameters: \(\alpha=[0.01,0.02,0.04,0.08,0.16]\), \(n=[1,2,4,8]\), \(T_{0}=[0,0.1,1.0,10.0]/\ell_{c}^{2}\), and \(\ell_{c}=[0.005,0.01]\) (shown are \(126\) parameter combinations for which \(x_{p,0}\) was within the range \([0.05,0.8]\). In each simulation the density is estimated using \(300\,000\) trajectories, and the scintillation index is averaged over \(60\) realizations of the random pheromone field. For rotational diffusion finding \(\gamma_{2}^{*}\) is easy since in the paraxial approximation the pheromone field and the stochastic term enter the equations of motion on the same footing, i.e. without further calculation we can simply assume \(\gamma_{2}^{*}\propto\gamma_{p}\) (cf. appendix VII.3) and we thus define \[\gamma_{2}^{*}=\alpha\:r_{c}. \tag{11}\] Using Eqs. 10 and 11 to rescale the abscissas of the data from Fig. 5, we find excellent data collapse in the upper panels of Fig. 6, confirming our scaling argument for the suppression of branched flows. The functional form of this suppression of branched flow, however, is surprising. One might naively expect that the Gaussian propagator of diffusion also leads to a Gaussian suppression of the scintillation index (this is e.g. what we would observe for the scintillation index of a periodic stripe pattern if we'd convolve it with a Gaussian kernel). In contrast we observe that the suppression is exponential in \(\gamma_{i}\), and not in \(\gamma_{i}^{2}\). This can be seen in the lower panels of Fig. 6. We find that for two orders of magnitude in the scintillation index we can write \[S(x_{p,0})=S_{0}(x_{p,0})\exp\left(-\eta_{i}\frac{\gamma_{i}}{\gamma_{i}^{*}} \right), \tag{12}\] with constants \(\eta_{1}\approx 3.70\) and \(\eta_{2}\approx 1.32\). ## V Trail formation Finally, we are going to study the dynamics of our model ants interacting with each other by depositing pheromones along their trajectories. As motivated earlier, we will not be aiming at a quantitative comparison with the experiment. Instead we want to restrict ourselves to observing the basic phenomenology and to illuminate the phase space structures connected with trail formation. 
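Both the branched-flow ensembles above and the trail-formation runs described next propagate individual ants according to the stochastic equations of motion, Eq. 6. A minimal Euler-Maruyama sketch of this propagation is given below; the concentration field and its gradient are assumed to be supplied as callables by the field model, and the default parameter values follow the trail-formation example in the text.

```python
import numpy as np

def simulate_ant(c, grad_c, x0, y0, phi0, alpha=0.2, T0=0.005,
                 gamma1=0.0, gamma2=0.2, dt=1e-3, n_steps=10_000, rng=None):
    """Euler-Maruyama integration of the active random walk of Eq. 6 (sketch).

    c(x, y) returns the pheromone concentration and grad_c(x, y) its gradient
    (c_x, c_y); both are assumed to be provided by the pheromone-field model.
    """
    if rng is None:
        rng = np.random.default_rng()
    x, y, phi = x0, y0, phi0
    path = np.empty((n_steps, 3))
    sqdt = np.sqrt(dt)
    for i in range(n_steps):
        cx, cy = grad_c(x, y)
        # n = (-sin(phi), cos(phi)) is the unit normal to the velocity.
        drift_phi = alpha * (-cx * np.sin(phi) + cy * np.cos(phi)) / (c(x, y) + T0)
        x += np.cos(phi) * dt + gamma1 * sqdt * rng.standard_normal()
        y += np.sin(phi) * dt + gamma1 * sqdt * rng.standard_normal()
        phi += drift_phi * dt + gamma2 * sqdt * rng.standard_normal()
        path[i] = (x, y, phi)
    return path
```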
As before, in our model, ants are entering from a point source in the centre of a circular environment and are taken out of the system, when they reach the boundary. Since in the experiment of Ref. [21] the ants tend to remain at the boundary of the circular arena once they have reached it, this appears to be an adequate approximation that allows for a clear model setup and well defined states. The model ants are simulated one by one, each entering the arena in a random initial direction, following Eq. 6 in the pheromone field deposited by their predecessors, and depositing pheromones themselves along their path. After deposition, pheromones will diffuse and evaporate on slower time scales. Again, following Ref. [21], we will assume that these time scales are long enough so that we can neglect these effects. In the following we model the deposition to be in droplets of fluctuating quantity along the path (fluctuating uniformly in an interval \([0,2\,\overline{c}]\)) at times \(t_{j}=j\Delta t_{d}\) which get smeared out by a Gaussian \(\exp[-(\mathbf{r}-\mathbf{r(t_{j})})^{2}/\sigma_{0}^{2}]/(\pi\sigma_{0}^{2})\), with arbitrary mean \(\overline{c}\), \(\sigma_{0}=0.005L\) (where \(L\) is the diameter of the arena), and \(\Delta t_{d}\) such that the concentration fluctuates by approximately \(10\%\) along the path, as illustrated in the upper leftmost panel of Fig. 7. The remaining parameters are chosen to be \(\alpha=0.2\), \(T_{0}=0.005\), \(\gamma_{1}=0\) and \(\gamma_{2}=0.2\). Figure 7 exemplifies the evolution of the pheromone field and the trail pattern in our model setup. The phenomenology is similar to that observed in the experiment. Trails start to form but might shift or depopulate over time while new trails are forming. Figure 8 and Video 2 in the supplemental material VIII illustrate the connection of the observed trail patterns to branched flows by showing the densities of _potential trajectories_: after \(N\) ants have deposited pheromones along their trajectories the possible trajectories of ant No. \(N+1\) in the current pheromone field have been calculated and their density plotted (for each denstiy shown we simulated \(200000\) trajectories). To make the phase space structures that are developing clearer, in addition we have plotted the potential trajectory density in the absence of stochastic diffusion, i.e. for \(\gamma_{1}=\gamma_{2}=0\). We clearly see, that after the initial approximately one hundred trajectories the phase space structures resemble those of a branched flow in a random environment and the most pronounced branches with their caustics correspond to the trails that have developed and are continuing to form. Figure 6: **Scaling theory.** The upper panels show the same data as in Fig. 5 but with abscissas scaled by \(\gamma_{1}^{*}\)**(upper left)** and \(\gamma_{2}^{*}\)**(upper right)**, respectively. The lower panels show the same data plotted together scaled according to Eq. 12 on a double logarithmic scale **(lower left)** and on a semilogarithmic scale **(lower right).** The red curves in the lower panels are the exponential function \(y=\exp(x)\) demonstrating that the inital suppression of the scintillation index is exponential. The red shaded regions in the upper panels illustrate that the further progression of the curves can be understood by the residual scintillation in the trajectory density due to the finite number of trajectories used in the simulation. 
The shaded areas are defined by \(S(x_{p,0})/S_{0}(x_{p,0})=\exp\left(-\eta_{i}\gamma_{i}/\gamma_{i}^{*}\right)+ S_{\rm sim}/S_{0}(x_{p,0})\), where \(S_{\rm sim}\approx 2.9\cdot 10^{-3}\) is the numerically observed average residual scintillation index (at large \(\gamma_{i}\)). (Note that the width of the shaded area is thus caused by the spread of \(S_{0}(x_{p,0})\) for the different parameters). ## VI Conclusion We have demonstrated that active random walks in correlated bias fields, even in the presence of dissipation and stochastic forces, can show a regime of branched flow. We have derived a scaling theory to estimate the strength of the stochastic forces that still allow for branched flows to form. Since it is a very robust mechanism creating density fluctuations with heavy-tailed distributions and extreme events, branched flow can be crucial in the selection of random dynamical patterns forming in transport of active matter on length scales between ballistic and diffusive spread. We have exemplified this in a simple model of the trail formation of pheromone depositing ants. ## VII Appendices ### Equations of motion To transform the integro-differential equations of motion (Eq. 5) into ordinary differential equations, we Taylor-expand the pheromone density to first order \[c(x+\Delta x,y+\Delta y)=c(x,y)+\partial_{x}c(x,y)\Delta x+\partial_{y}c(x,y) \Delta y,\] and insert this into Eqs. 2 and 3. In a coordinate system where \(x\) is aligned to the velocity of the ant we find \[L-R = \frac{\sqrt{\pi}}{2}\ \mathcal{R}\ \partial_{y}c(x,y)\] \[L+R = c(x,y)+\frac{\sqrt{\pi}}{2}\ \mathcal{R}\ \partial_{x}c(x,y).\] We see that the sensitivity of the ants rotation \(A_{0}=A_{0}(\mathcal{R})\) in reaction to variation in the pheromones field needs to be inversely proportional to the size \(\mathcal{R}\) of the detection domain, in order to keep the responses comparable. Writing \(A(\mathcal{R})=A_{0}(\mathcal{R}_{0})\mathcal{R}_{0}/\mathcal{R}\), with a arbitrarily chosen but sufficiently small \(\mathcal{R}_{0}\), we define \[\alpha=\lim_{\mathcal{R}\to 0}\frac{\sqrt{\pi}}{2}\ \mathcal{R}\ A_{0}( \mathcal{R})=\frac{\sqrt{\pi}}{2}A_{0}(\mathcal{R}_{0})\mathcal{R}_{0}, \tag{13}\] and after transforming back into the original (rotated) coordinate system, we find the equations of motion Eq. 6. ### Random pheromone field model We generate a random (non-negative) concentration field \(c(\mathbf{r})\) with prescribed correlation length \(\ell_{c}\) by convolving \(N\) randomly placed \(\delta\)-functions (in an area \(\mathcal{L}\times\mathcal{L}\)) with a Gaussian of width \(\ell_{g}=\ell_{c}/\sqrt{2}\): \[c(\mathbf{r})=w\ g(\mathbf{r})*\sum_{i=1}^{N}\delta(\mathbf{r}_{i})\] with \(g(\mathbf{r})=2e^{-2(x^{2}+y^{2})/\ell_{c}^{2}}/\left(\pi\ell_{c}^{2}\right)\) and a global weight prefactor \(w\). The mean density of \(c(\mathbf{r})\) is \(<c>=c_{o}=w\ N/\mathcal{L}^{2}=w\ n/\ell_{c}^{2}\) where \(n\) is the number of delta-functions per area \(\ell_{c}^{2}\), i.e. \(n=N\ell_{c}^{2}/\mathcal{L}^{2}\). For the correlation function we find \[C(\mathbf{r}) = <c(\mathbf{r}^{\prime}+\mathbf{r})c(\mathbf{r}^{\prime})>-c_{0}^ {2}\] \[= \frac{w^{2}\ n}{\pi\ell_{c}^{4}}e^{-r^{2}/\ell_{c}^{2}}.\] Throughout this article we will assume \(w=1\). Figure 7: **Trail pattern formation.** In the **upper row** the cumulated pheromone field (in an initially pheromone free circular arena) after a number of \(N\) ants have transversed the arena form the centre to the boundary with random initial direction. 
In the **lower row** the trajectories of the first ant entering the arena (leftmost panel) and 10, respectively 100, trajectories of consecutive ants at later times. (A more detailed description of the model and the parameters used are given in the text.) ### Scaling of the characteristic length To assess the characteristic length scale of branched flows of ant trajectories, we will follow Refs. [20; 37; 2] in using a simple scaling argument to find the parameter dependence of the _mean distance to the first caustic_ (\(d_{c}\)). We recapitulate the scaling argument for initially parallel trajectory-bundles, i.e. the ant trajectory equivalent of an initially plane wave propagating through a correlated random, weakly refractive medium. This case is easier to understand and numerically simpler to test than the case of a point source, but (except for a different constant prefactor) yields the same results. Caustics, first studied in ray optics, are contour-lines or -surfaces in coordinate space on which the number of solutions passing through each point in space changes abruptly. They are also singularities in the ray (or trajectory) density. Figure 9 illustrates the connection of branched flows and caustics. In the following we will assume that the sensitivity in the change of direction is sufficiently weak such that \(d_{c}\gg\ell_{c}\) and the mean free path (\(\ell_{\rm mfp}\)), i.e. the distance at which trajectories start to turn around, is much larger, \(\ell_{\rm mfp}\gg d_{c}\). We can then neglect the change in velocity in propagation direction (which we choose to be the \(x\)-direction), i.e. do a _paraxial approximation_. If we then follow a single trajectory in the random pheromone field its perpendicular velocity (\(v_{y}\)) will grow diffusively on timescales greater than \(\ell_{c}/v_{x}\approx\ell_{c}/1\). We can thus approximate its dynamics by \[\dot{y} =v_{y} \tag{14}\] \[\dot{v}_{y} =\frac{\alpha\partial_{y}c}{c+T_{0}}\approx\gamma_{p}\;\Gamma(t), \tag{15}\] with \(\langle\Gamma(t^{\prime})\Gamma(t)\rangle=\delta(t^{\prime}-t)\). The prefactor \(\gamma_{p}=\sqrt{2D_{p}}\) determines the diffusive growth : \(v_{y}=2D_{p}t\) caused by the random (but static) pheromone field. It can by found by directly integrating Eq. 15 to be \(\gamma_{p}=\alpha\;r_{c}\), with the _correlation radius_\(r_{c}\) of the fluctuating force Eq. 8. The variance in \(y\) can be found to be (see e.g. [37]) \[\big{\langle}(y-y_{0})^{2}\big{\rangle}=\frac{2}{3}D_{p}t^{3}, \tag{16}\] from which Eq. 7 easily follows (since \(x\approx t\)). Figure 8: **Phase space analysis** of the evolving trail pattern (the time evolution is shown in Video 2 in the supplemental material VIII). In the **upper row**, panel **(a)** shows the accumulated pheromone field deposited by 1200 ant trajectories. Panel **(b)** shows the latest 100 ant trajectories. Panel **(c)** reveals the density (on a logarithmic scale) of _potential trajectories_ the next ant could follow in the current pheromone field. Panel **(d)** in the **lower row** shows the density (on a logarithmic scale) of _potential trajectories_ in the absence of stochastic diffusion terms. Panel **(e)** shows Poincaré surfaces of section (PSSs) along the circles of corresponding colour indicated in the right panel. In each PSS the angular velocity \(v_{\varphi}=\dot{\varphi}\) is plotted against the polar coordinate angle \(\varphi\) at which the trajectory is intersecting the corresponding circle. 
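A numerical realisation of the random pheromone field model of appendix VII.2 can be sketched as follows: randomly placed delta peaks are smeared with the Gaussian g(r) by an FFT convolution on a periodic grid. The grid size and the use of periodic boundaries are assumptions made for this illustration.

```python
import numpy as np

def random_pheromone_field(n=1.0, ell_c=0.01, L=1.0, grid=1024, w=1.0, rng=None):
    """Correlated random concentration field of appendix VII.2 (sketch).

    N = n * L^2 / ell_c^2 delta peaks are dropped at random positions and
    convolved with g(r) = 2 exp(-2 r^2 / ell_c^2) / (pi ell_c^2).
    """
    if rng is None:
        rng = np.random.default_rng()
    N = int(round(n * L**2 / ell_c**2))
    dx = L / grid

    # Delta peaks binned onto the grid (1/dx^2 so that each peak integrates to 1).
    field = np.zeros((grid, grid))
    ix = rng.integers(0, grid, size=N)
    iy = rng.integers(0, grid, size=N)
    np.add.at(field, (iy, ix), 1.0 / dx**2)

    # Gaussian smoothing kernel g(r) centred at the origin with periodic wraparound.
    coords = np.arange(grid) * dx
    xg, yg = np.meshgrid(coords, coords)
    r2 = np.minimum(xg, L - xg)**2 + np.minimum(yg, L - yg)**2
    g = 2.0 * np.exp(-2.0 * r2 / ell_c**2) / (np.pi * ell_c**2)

    # FFT convolution; the factor dx^2 turns the discrete sum into an integral.
    c = w * np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(g))) * dx**2
    return c
```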
### Stability matrix and caustic condition To efficiently calculate the caustic statistics of branched flows in the ant-dynamics, we follow the methods developed in Ref. [38] and numerically evaluate the stability matrix along the trajectories until we reach a _caustic condition_. The stability matrix \(\mathbf{M}(t,t_{0})\) describes how a trajectory \(\mathbf{x}(t)=\mathbf{x}_{0}(t)+\delta\mathbf{x}(t)\), that starts infinitesimally close to a reference trajectory \(\mathbf{x}_{0}(t)\) at time \(t=t_{0}\) evolves over time (here \(\mathbf{x}(t)=(x(t),y(t),\varphi(t))^{T}\)), i.e. \[\delta\mathbf{x}(t)=\mathbf{M}\;\delta\mathbf{x}(t_{0}).\] The elements of \(\mathbf{M}\) are \(M_{ij}(t,t_{0})=\partial x_{i}(t)/\partial x_{j}(t_{0})\). Their dynamics is given by \[\dot{\mathbf{M}}=\mathbf{K}\;\mathbf{M}, \tag{17}\] with the Jacobian matrix of the equations of motion \(K_{ij}(t)=\partial\dot{x}_{i}(t)/\partial x_{j}(t)\), i.e. \[\mathbf{K}=\begin{pmatrix}0&0&-\sin\varphi\\ 0&0&\cos\varphi\\ K_{31}&K_{32}&K_{33}\end{pmatrix}, \tag{18}\] with \[K_{31} =\frac{A(-c_{sx}\sin\varphi+c_{xy}\cos\varphi)}{T_{0}+c}-\frac{ Ac_{s}(-c_{s}\sin\varphi+c_{y}\cos\varphi)}{(T_{0}+c)^{2}},\] \[K_{32} =\frac{A(c_{yz}\cos\varphi-c_{xy}\sin\varphi)}{T_{0}+c}-\frac{ Ac_{s}(-c_{s}\sin\varphi+c_{y}\cos\varphi)}{(T_{0}+c)^{2}},\] \[K_{33} =\frac{A(-c_{yz}\sin\varphi-c_{x}\cos\varphi)}{T_{0}+c}\] where \(c_{i}=\partial c/\partial x_{i}\) and \(c_{ij}=\partial^{2}c/\partial x_{i}\partial x_{j}\). The reference trajectory reaches a caustic, i.e. a singularity in the trajectory density, when the area of the projection onto coordinate space of the parallelogram spanned in phase space by \(\dot{\mathbf{x}}(t)\) and \(\delta\mathbf{x}(t)\) vanishes. For initially parallel trajectories (\(\dot{\mathbf{x}}(t_{0})=(1,0,0)^{T}\) and \(\delta\mathbf{x}(t_{0})=(0,1,0)^{T}\)) we can write the caustic condition as \[\dot{x}\;M_{22}-\dot{y}\;M_{12}=M_{22}\cos\varphi-M_{12}\sin\varphi=0. \tag{19}\] ## VIII Ancillary files: **Video 1**: **Example of a flow of ant trajectories** from a point source in a random pheromone field with increasing stochastic rotational diffusion (growing \(\gamma_{2}\)). The other parameters are \(\alpha=0.2\), \(T_{0}=0.0\), \(\ell_{c}=0.005\), and \(n=1.0\). **Video 2**: **Example of the trailpattern evolution** as described in Figs. 7 and 8. The left panel of the **upper row** shows the accumulated pheromone field successively deposited by the ants. The centre panel shows the latest 10, respectively 100 ant trajectories. The right panel reveals the density (on a logarithmic scale) of _potential trajectories_ of the next ant in the current pheromone field. The right panel in the **lower row** shows the density of _potential trajectories_ in the absence of stochastic diffusion terms. The left panel shows Poincare surfaces of section along the circles of corresponding colour indicated in the right panel.
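A compact sketch of the first-caustic detection of appendix VII.4 is given below: the stability matrix is integrated alongside a noise-free trajectory until the caustic condition of Eq. 19 changes sign. The field and its first and second derivatives are assumed to be supplied by the pheromone-field model, and a simple first-order integrator is used here purely for illustration.

```python
import numpy as np

def distance_to_first_caustic(field, x0, y0, alpha=0.1, T0=0.0,
                              dt=1e-3, max_steps=200_000):
    """Integrate dM/dt = K M (Eq. 17) along an initially x-parallel trajectory
    until the caustic condition of Eq. 19 is met (sketch).

    field(x, y) is assumed to return (c, c_x, c_y, c_xx, c_xy, c_yy).
    """
    x, y, phi = x0, y0, 0.0
    M = np.eye(3)
    prev = 1.0  # M22*cos(phi) - M12*sin(phi) equals 1 for M = identity, phi = 0
    for step in range(1, max_steps + 1):
        c, cx, cy, cxx, cxy, cyy = field(x, y)
        s, co = np.sin(phi), np.cos(phi)
        F = alpha * (-cx * s + cy * co) / (c + T0)

        # Jacobian K of the equations of motion (Eq. 18).
        K = np.zeros((3, 3))
        K[0, 2], K[1, 2] = -s, co
        K[2, 0] = (alpha * (-cxx * s + cxy * co) / (c + T0)
                   - alpha * cx * (-cx * s + cy * co) / (c + T0) ** 2)
        K[2, 1] = (alpha * (-cxy * s + cyy * co) / (c + T0)
                   - alpha * cy * (-cx * s + cy * co) / (c + T0) ** 2)
        K[2, 2] = alpha * (-cy * s - cx * co) / (c + T0)

        # Euler step for the trajectory and for the stability matrix.
        x, y, phi = x + co * dt, y + s * dt, phi + F * dt
        M = M + dt * (K @ M)

        cond = M[1, 1] * np.cos(phi) - M[0, 1] * np.sin(phi)
        if cond * prev < 0:  # sign change: the first caustic has been reached
            return step * dt
        prev = cond
    return np.nan  # no caustic found within max_steps
```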
2305.08947
Tests of the Charge Convexity Conjecture in Caswell-Banks-Zaks Theory
The Charge Convexity Conjecture (CCC) states that in a unitary conformal field theory in $d\geq 3$ dimensions with a global symmetry, the minimal dimension of operators in certain representations of the symmetry, as a function of the charge $q$ of the representation (or a generalized notion of it), should be convex. More precisely, this was conjectured to be true when $q$ is restricted to positive integer multiples of some integer $q_0$. The CCC was tested on a number of examples, most of which are in $d<4$ dimensions, and its version in which $q_0$ is taken to be the charge of the lowest-dimension positively-charged operator was shown to hold in all of them. In this paper we test the conjecture in a non-trivial example of a $d=4$ theory, which is the family of Caswell-Banks-Zaks IR fixed points of $SU(N_c)$ gauge theory coupled to $N_f$ massless fermions and $N_s$ massless scalars. In these theories, the lowest-dimension gauge-invariant operators that transform non-trivially under the global symmetry are mesons. These may consist of two scalars, two fermions or one of each. We find that the CCC holds in all applicable cases, providing significant new evidence for its validity, and suggesting a stronger version for non-simple global symmetry groups.
Ofer Aharony, Yacov-Nir Breitstein
2023-05-15T18:35:53Z
http://arxiv.org/abs/2305.08947v3
# Tests of the Charge Convexity Conjecture in Caswell-Banks-Zaks Theory ###### Abstract The Charge Convexity Conjecture (CCC) states that in a unitary conformal field theory in \(d\geq 3\) dimensions with a global symmetry, the minimal dimension of operators in certain representations of the symmetry, as a function of the charge \(q\) of the representation (or a generalized notion of it), should be convex. More precisely, this was conjectured to be true when \(q\) is restricted to positive integer multiples of some integer \(q_{0}\). The CCC was tested on a number of examples, most of which are in \(d<4\) dimensions, and its version in which \(q_{0}\) is taken to be the charge of the lowest-dimension positively-charged operator was shown to hold in all of them. In this paper we test the conjecture in a non-trivial example of a \(d=4\) theory, which is the family of Caswell-Banks-Zaks IR fixed points of \(SU(N_{c})\) gauge theory coupled to \(N_{f}\) massless fermions and \(N_{s}\) massless scalars. In these theories, the lowest-dimension gauge-invariant operators that transform non-trivially under the global symmetry are mesons. These may consist of two scalars, two fermions or one of each. We find that the CCC holds in all applicable cases, providing significant new evidence for its validity, and suggesting a stronger version for non-simple global symmetry groups. Introduction and summary Motivated by a possible generalization of the weak gravity conjecture ([1], see also [2]) to asymptotically-anti-de-Sitter space-times, it was conjectured in [3] that the dimensions of operators in unitary conformal field theories (CFTs) in \(d\geq 3\) space-time dimensions with global symmetries obey a _Charge Convexity Conjecture_ (CCC). In the simplest case where the global symmetry is \(U(1)\), we can denote by \(\Delta(q)\) the dimension of the lowest-dimension operator with some positive integer charge \(q\). The conjecture then states that for some positive integer \(q_{0}\) and for any positive integers \(q_{1},q_{2}\), the inequality \[\Delta((q_{1}+q_{2})q_{0})\geq\Delta(q_{1}q_{0})+\Delta(q_{2}q_{0}) \tag{1.1}\] is obeyed. The conjecture in [3] included the further condition that \(q_{0}\) should not be parameterically large in any parameters of the CFT, and the conjecture was shown to hold in many examples (generically the conjecture becomes stronger as \(q_{0}\) is taken to be smaller). Most of the tests of the conjecture were in some perturbative expansion (small coupling, large \(N\), etc.), and in this context it is enough to show that \(\Delta(2q_{0})>2\Delta(q_{0})\) for the conjecture to hold (when the perturbative expansion is valid). A natural generalization of the conjecture to more complicated global symmetries is that there should be a representation \(r_{0}\) of the global symmetries, such that if \(\Delta(n)\) is the dimension of the lowest-dimension operator in the \(n\)'th symmetric power of the representation \(r_{0}\), then for any positive integers \(n_{1},n_{2}\), \(\Delta(n_{1}+n_{2})\geq\Delta(n_{1})+\Delta(n_{2})\). Further discussion of the conjecture and its motivations may be found in [3] (see also [4]-[8]). There, the non-abelian generalization was focused to a representation \(r_{0}\) of a simple component of the symmetry group. One can consider an expansion of this to any product of simple components (and \(U(1)\)'s), where the represenation \(r_{0}\) should have non-trivial weights in each of the components included in the product. 
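Stated operationally, (1.1) becomes a finite set of inequalities once the charges are bounded. The helper below is only a sketch of that check (the function name and the toy dimension function are ours, not from the literature), shown on the free complex scalar in \(d=4\), where the lowest-dimension operator of charge \(q\) is \(\phi^{q}\) with \(\Delta(q)=q\) and the inequality is saturated.

```python
from typing import Callable

def satisfies_ccc(Delta: Callable[[int], float], q0: int, q_max: int, tol: float = 1e-12) -> bool:
    """Check Delta((q1 + q2) * q0) >= Delta(q1 * q0) + Delta(q2 * q0), Eq. (1.1),
    for all positive integers q1, q2 with (q1 + q2) * q0 <= q_max."""
    n_max = q_max // q0
    for q1 in range(1, n_max):
        for q2 in range(1, n_max - q1 + 1):
            if Delta((q1 + q2) * q0) + tol < Delta(q1 * q0) + Delta(q2 * q0):
                return False
    return True

# Toy example: free complex scalar in d = 4, Delta(q) = q; the inequality is saturated.
print(satisfies_ccc(lambda q: float(q), q0=1, q_max=20))
```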
A counter-example to the original phrasing of the conjecture was found in [7], which constructed a \(d=3\) example where \(q_{0}\) must be chosen to be exponentially large in some parameters of the CFT. However, slightly weaker versions of the conjecture are still consistent with all known examples. The simplest version which is not ruled out states that \(q_{0}\) should be the charge of the lowest-dimension positively-charged operator; this is consistent with all known examples. For a general global symmetry, if we look at a specific product of simple factors of the global symmetry, then the corresponding generalization would be that \(r_{0}\) should be the representation of the lowest-dimension operator that is charged under all of these simple factors. As far as we know, there are no counter-examples to this version of the conjecture. The perturbative tests of the conjecture that were performed up to now were mostly in \(d<4\) dimensions, because it is much easier to construct and to analyze CFTs in lower dimensions. In \(d=4\) the simplest examples of non-trivial unitary CFTs are low-energy fixed points of asymptotically-free gauge theories, and the computations of anomalous dimensions in these theories are more complicated than the ones analyzed in [3]. In this paper we test the conjecture in a simple example of this class - the Caswell-Banks-Zaks ([9]-[13]) fixed points of \(SU(N_{c})\) gauge theory coupled to \(N_{f}\) massless fermions and \(N_{s}\) massless scalars in the fundamental representation. As we review below, when the number of fermions and scalars is close to the asymptotic freedom bound (and when the number of scalars is small enough) these theories flow to IR fixed points which are weakly coupled (for large \(N_{c}\)) and can be analyzed in perturbation theory (in the gauge coupling, and in \(\phi^{4}\)-type couplings when scalars are present). The lowest-dimension gauge-invariant operators charged under the global symmetry in these theories are mesons, and the next-lowest charged operators can be thought of as products of mesons. In perturbation theory it is enough to check that the dimension of a two-meson operator (in a symmetric product of the global symmetry representations) is larger than twice the dimension of single-meson operators for the conjecture (1.1) to hold. For mesons made from two scalars, it was argued already in [3] that the conjecture holds, and we review this below. In this paper we test the conjecture for the other mesons of the theory, made from two fermions or from a fermion and a scalar. These operators carry charges under 3 possible subgroups of the global symmetry, and we find that in all 3 cases, the conjecture (in its version of the previous paragraph) holds (in perturbation theory). This provides significant new evidence for the validity of the conjecture, including the expansion to products of simple components discussed above. There are many possible generalizations of our analysis to other perturbative fixed points in \(d=4\). One can consider other gauge groups, or additional matter fields. In particular one can add additional fields in the adjoint representation, as in supersymmetric gauge theories. ## 2 Caswell-Banks-Zaks fixed points in 4 dimensions 4-dimensional \(SU(N_{c})\) gauge theories with \(N_{f}\) flavors of massless Dirac fermions and \(N_{s}\) flavors of massless scalars in the fundamental representation, have an infrared-stable fixed point for certain ratios of \(N_{f},N_{s},N_{c}\)[9]-[13]. 
The Lagrangian of the theory can be explicitly written as in [13]: \[{\cal L}=-\frac{1}{4}F^{A}_{\mu\nu}F^{A\mu\nu}+{\rm Tr}_{f}\left(\bar{\psi}i \not{D}\psi\right)+{\rm Tr}_{s}\left(D_{\mu}\phi^{\dagger}D^{\mu}\phi\right) -\bar{h}{\rm Tr}_{s}\left(\phi^{\dagger}\phi\phi^{\dagger}\phi\right)-\bar{f} \left({\rm Tr}_{s}\left(\phi^{\dagger}\phi\right)\right)^{2}, \tag{2.1}\] where the traces \({\rm Tr}_{f},{\rm Tr}_{s}\) are over the flavor indices of the fermions and scalars, respectively. The scalars are viewed here as an \(N_{c}\times N_{s}\) matrix, and the fermions as an \(N_{c}\times N_{f}\) Dirac fermion-valued matrix. The global symmetry of the fermions is \(SU(N_{f})_{L}\times SU(N_{f})_{R}\times U(1)_{B}\), and that of the scalars is \(SU(N_{s})\times U(1)_{B^{\prime}}\). A gauge-invariant operator that transforms non-trivially under the \(U(1)_{B,B^{\prime}}\) symmetries will be either a baryon, a mixed meson (\(\phi^{\dagger}\psi\) or \(\bar{\psi}\phi\)), or a product of one or more of these with a \(U(1)_{B,B^{\prime}}\) neutral operator. In the large \(N\) approximation, baryons are non-trivial and their \(U(1)\) charges will diverge, so they are irrelevant to the discussion of the CCC. In the case of mixed mesons, their \(U(1)\) charges are proportional to their charge under the respective global \(SU(N)\) symmetry, so we do not need to consider these symmetries separately. The scaling limit in which the fixed point can be proven to occur is \(N_{c}\rightarrow\infty\), with \[x_{s}=\frac{N_{s}}{N_{c}},\qquad x_{f}=\frac{N_{f}}{N_{c}},\qquad\lambda=\frac {N_{c}g^{2}}{16\pi^{2}},\qquad h=\frac{N_{c}\tilde{h}}{16\pi^{2}},\qquad f= \frac{N_{c}N_{s}\tilde{f}}{16\pi^{2}} \tag{2.2}\] held fixed. The perturbative expansion is in the 't Hooft couplings, \(\lambda,h,f\), which are considered to be of the same parametric order. The \(\beta\) function of the 't Hooft gauge coupling \(\lambda\) is, to two loop order: \[\beta_{\lambda}=-\frac{22-x_{s}-4x_{f}}{3}\lambda^{2}+b_{1}\lambda^{3}, \tag{2.3}\] where \(b_{1}=b_{1}(x_{s},x_{f},N_{c})\) is of parametric order 1 in the large \(N_{c}\) limit. The parameter regime compatible with reliable weakly coupled fixed points is when the theory has a number of flavors slightly below the asymptotic freedom bound: \[\varepsilon\equiv\frac{1}{75}\left(22-x_{s}-4x_{f}\right)\ll 1\quad\ \&\quad\ \ \varepsilon>0. \tag{2.4}\] Higher order corrections are then smaller than the two loop term, and can safely be neglected. The non-trivial fixed point for the gauge coupling \(\lambda\) to this order is given by: \[\lambda^{*}=\frac{\varepsilon}{1+x_{s}/50}+O(\varepsilon^{2}). \tag{2.5}\] It is shown in [12],[13] that there are fixed points for the quartic scalar couplings with real coupling constants, obtained by solving for the vanishing scalar \(\beta\) functions \(\beta_{h},\beta_{f}\) to one loop order (and fine-tuning the scalar masses to zero). At leading order in the couplings, the four non-trivial fixed points of the quartic scalar interactions, \((h_{+}^{*},f_{\pm+}^{*}),(h_{-}^{*},f_{\pm-}^{*})\), are given by: \[h_{\pm}^{*}=\lambda^{*}\frac{3\pm\sqrt{6-2x_{s}}}{4(1+x_{s})},\qquad f_{\pm+}^ {*}=\lambda^{*}\left(-\frac{\sqrt{6-3x_{s}}}{4}\pm A_{+}\right),\qquad f_{\pm- }^{*}=\lambda^{*}\left(\frac{\sqrt{6-3x_{s}}}{4}\pm A_{-}\right), \tag{2.6}\] with \[A_{\pm}=\frac{3\sqrt{2-(13\pm 6\sqrt{6-3x_{s}})x_{s}+x_{s}^{2}-2x_{s}^{3}}}{4 \sqrt{3}(1+x_{s})}. 
\tag{2.7}\] The range of parameters where any of these four fixed points obtain real values is approximately: \[x_{s}=\frac{N_{s}}{N_{c}}\leq 0.84. \tag{2.8}\] The theory at any fixed point in this regime is referred to as the Caswell-Banks-Zaks theory. The CCC was verified in [3] for operators made of scalars. Note that the bounds (2.4),(2.8) don't allow a CFT with only scalar matter, but do allow one with only fermionic matter. For the remainder of this paper, we denote \(N=N_{c}\) for simplicity. ### Global symmetry representations The global symmetry group of the fermions is \(SU(N_{f})_{L}\times SU(N_{f})_{R}\times U(1)_{B}\), and that of the scalars is \(SU(N_{s})\times U(1)_{B^{\prime}}\). As explained above, there is no need to discuss the \(U(1)\) charges separately. The \(SU(N)\) groups are discussed below. #### 2.1.1 Scalar mesons The (anti)scalar fields are in a trivial representation of the \(SU(N_{f})\) symmetry components, and in a (anti)fundamental representation of the \(SU(N_{s})\) component. The scalar meson \(\phi^{*}\phi\) therefore transforms in the representation \(\mathbf{\overline{N}_{s}\times N_{s}}=(\mathbf{N_{s}^{2}-1})+1\). Only the former of the two resulting irreducible representations (irreps.) is relevant to the CCC, and it will be our focus. #### 2.1.2 Fermion mesons It is convenient to separate the Dirac fermions into Weyl fermions: \[\psi=\underbrace{\frac{1}{2}(1+\gamma^{5})\psi}_{\equiv\psi_{R}}+\underbrace{\frac {1}{2}(1-\gamma^{5})\psi}_{\equiv\psi_{L}},\qquad\bar{\psi}=\underbrace{\frac{1 }{2}\bar{\psi}(1+\gamma^{5})}_{=\bar{\psi}_{L}}+\underbrace{\frac{1}{2}\bar{ \psi}(1-\gamma^{5})}_{=\bar{\psi}_{R}}. \tag{2.9}\] In this notation, the Weyl fermions belong to the following group representations: \[\psi_{L}\sim\left(\mathbf{N_{f}},\mathbf{1}\right),\hskip 14.226378pt\psi_{R} \sim\left(\mathbf{1},\mathbf{N_{f}}\right),\hskip 14.226378pt\bar{\psi}_{L} \sim\left(\mathbf{N_{f}},\mathbf{1}\right),\hskip 14.226378pt\bar{\psi}_{R} \sim\left(\mathbf{1},\mathbf{N_{f}}\right). \tag{2.10}\] Here the left-hand side of the parentheses refers to the representation of \(SU(N_{f})_{L}\) and the right-hand side to \(SU(N_{f})_{R}\). In 4 dimensions all the degrees of freedom quadratic in the fermions can be described by mesons in the different Lorentz representations: \[\left\{1,\gamma^{5},\gamma^{\mu},\gamma^{5}\gamma^{\mu},\gamma^{\mu\nu}\right\}, \tag{2.11}\] contracted between \(\bar{\psi}\) and \(\psi\), where \(\mu,\nu\) are space-time indices, \(\gamma^{5}\equiv i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}=\mathrm{diag}(-1,-1,1,1)\) (the last expression in a particular basis that separates \(\psi_{L},\psi_{R}\)), and \(\gamma^{\mu\nu}\equiv\frac{1}{2}\left[\gamma^{\mu},\gamma^{\nu}\right]\). To see the representations of the different mesons, we can contract the fermions while keeping in mind that each \(\gamma^{\mu}\) interchanges left- and right-handed Weyl fermions, while \(\gamma^{5}\) does not - so the pseudoscalar and psudovector are in the same representations as the scalar and vector, respectively. For the scalar representation: \[\bar{\psi}\psi=\begin{pmatrix}\bar{\psi}_{R}\\ \bar{\psi}_{L}\end{pmatrix}\left(\psi_{L}\quad\psi_{R}\right)=\bar{\psi}_{R} \psi_{L}+\bar{\psi}_{L}\psi_{R}\hskip 14.226378pt\Rightarrow\hskip 14.226378pt \bar{\psi}\psi,\bar{\psi}\gamma^{5}\psi\in\left(\mathbf{N_{f}},\mathbf{N_{f}} \right)+\left(\mathbf{N_{f}},\mathbf{N_{f}}\right). 
\tag{2.12}\] For the vector representation: \[\bar{\psi}\gamma^{\mu}\psi=\begin{pmatrix}\bar{\psi}_{R}\\ \bar{\psi}_{L}\end{pmatrix}\gamma^{\mu}\left(\psi_{L}\quad\psi_{R}\right) \sim\bar{\psi}_{R}\cdots\psi_{R}+\bar{\psi}_{L}\cdots\psi_{L}\hskip 14.226378pt \Rightarrow\bar{\psi}\gamma^{\mu}\psi,\bar{\psi}\gamma^{5}\gamma^{\mu}\psi \in\left(\mathbf{1},\mathbf{N_{f}^{2}-1}\right)+\left(\mathbf{N_{f}^{2}-1}, \mathbf{1}\right)+2\cdot(\mathbf{1},\mathbf{1}). \tag{2.13}\] The trivial representation of vectors is irrelevant to the CCC, and so will be ignored from this point forward. The tensors have \([\gamma^{\mu},\gamma^{\nu}]\), so there are 2 interchanges - which restore the original contraction between the Weyl fermions. Therefore, they are in the same flavor representation as the scalar and pseudoscalar: \(\bar{\psi}\gamma^{\mu\nu}\psi\in\left(\mathbf{N_{f}},\mathbf{N_{f}}\right)+ \left(\mathbf{N_{f}},\mathbf{N_{f}}\right)\). It can be seen from here, that the only mixing of multi-meson operators in symmetric products of the flavor representations in (2.12),(2.13) will be between the 2-scalar and the 2-tensor mesons (where the space-time indices of the 2-tensor mesons will be contracted to give a Lorentz scalar). It is useful to further distinguish the representations using the chiral projections mentioned above, to divide the different mesons into their irreducible representations: \[\bar{\psi}(1-\gamma^{5})\psi,\bar{\psi}(1-\gamma^{5})\gamma^{\mu \nu}\psi\in\left(\mathbf{N_{f}},\mathbf{N_{f}}\right);\hskip 14.226378pt\bar{ \psi}(1+\gamma^{5})\psi,\bar{\psi}(1+\gamma^{5})\gamma^{\mu\nu}\psi\in\left( \mathbf{\bar{N_{f}}},\mathbf{N_{f}}\right);\] \[\bar{\psi}(1-\gamma^{5})\gamma^{\mu}\psi\in\left(\mathbf{1}, \mathbf{N_{f}^{2}-1}\right);\hskip 14.226378pt\bar{\psi}(1+\gamma^{5})\gamma^{\mu} \psi\in\left(\mathbf{N_{f}^{2}-1},\mathbf{1}\right). \tag{2.14}\] The operators in different representations, and their symmetric products with themselves, do not mix with each other. Our calculations treat the left- and right-handed projections simultaneously. #### 2.1.3 Mixed mesons Similarly to the previous cases, the mixed mesons transform in the representations: \[\bar{\psi}\phi\in\left(\mathbf{N_{s}},\mathbf{N_{f}},\mathbf{1}\right)+\left( \mathbf{N_{s}},\mathbf{1},\mathbf{N_{f}}\right);\hskip 14.226378pt\phi^{*}\psi\in \left(\mathbf{N_{s}},\mathbf{N_{f}},\mathbf{1}\right)+\left(\mathbf{N_{s}}, \mathbf{1},\mathbf{N_{f}}\right), \tag{2.15}\] where the representations are of the respective components \(SU(N_{s}),SU(N_{f})_{L},SU(N_{f})_{R}\). The conjecture as stated in [3] only requires one operator in a representation with weights of order one of each simple subgroup. In the case of fermionic mesons, both the representation of mesons contracted as scalars and tensors, and that of the mesons contracted as vectors, have such non-trivial components in both \(SU(N_{f})\)'s. The chirally projected mixed meson has non-trivial weights in \(SU(N_{s})\) as well as either one of the \(SU(N_{f})\)'s. Thus, verifying the conjecture in either of the fermionic representations in addition to the scalar one, or even only in the mixed meson case, will suffice for verifying it in the theory. However, we show below that it holds for all of the above representations, giving evidence for a stronger version of the conjecture. 
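A short numerical sketch of the fixed-point formulas (2.5)-(2.7) is given below; it is only a transcription of the leading-order expressions, the values of \(\varepsilon\) and \(x_{s}\) are illustrative, and branches that become complex are reported as NaN, in line with the approximate bound (2.8).

```python
import numpy as np

def cbz_fixed_points(eps, x_s):
    """Leading-order couplings from Eqs. (2.5)-(2.7): lambda*, the two h* branches and
    the corresponding f* branches (NaN where they become complex)."""
    lam = eps / (1.0 + x_s / 50.0)
    h_plus = lam * (3 + np.sqrt(6 - 2 * x_s)) / (4 * (1 + x_s))
    h_minus = lam * (3 - np.sqrt(6 - 2 * x_s)) / (4 * (1 + x_s))
    def A(sign):
        rad = 2 - (13 + sign * 6 * np.sqrt(6 - 3 * x_s)) * x_s + x_s**2 - 2 * x_s**3
        return 3 * np.sqrt(rad) / (4 * np.sqrt(3) * (1 + x_s)) if rad >= 0 else np.nan
    f_plus = lam * np.array([-np.sqrt(6 - 3 * x_s) / 4 + s * A(+1) for s in (+1, -1)])
    f_minus = lam * np.array([+np.sqrt(6 - 3 * x_s) / 4 + s * A(-1) for s in (+1, -1)])
    return lam, (h_plus, f_plus), (h_minus, f_minus)

# All four quartic branches become complex for x_s beyond roughly 0.84, cf. Eq. (2.8).
for x_s in (0.2, 0.5, 0.8, 0.9):
    lam, (hp, fp), (hm, fm) = cbz_fixed_points(eps=0.01, x_s=x_s)
    print(f"x_s={x_s}: lambda*={lam:.4f}, h+={hp:.5f}, f(+)={fp}, h-={hm:.5f}, f(-)={fm}")
```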
Anomalous dimensions of single mesons ### Fermionic mesons We begin by computing the generic-index correlation function: \[\left\langle(\psi_{\alpha a\bar{u}}\bar{\psi}_{\dot{\alpha}b\dot{\jmath}})(p)\psi_ {\beta ck}(q_{1})\bar{\psi}_{\dot{\beta}d\bar{u}}(-q_{2})\right\rangle. \tag{3.1}\] Here \(a,b,c,d\) are color indices (and the first two will be contracted inside the meson to give a gauge-invariant object), \(i,j,k,l\) flavor indices in the appropriate representation of \(SU(N_{f})\times SU(N_{f})_{R}\) (the specific representation depends on the type of spinor structure contraction; all possibilities will be addressed in what follows, so this is kept generic), and \(\alpha,\dot{\alpha},\dot{\beta},\dot{\beta}\) are fermionic indices. The relevant diagrams up to one loop order are shown in Figure 1. We denote them by \(I\) through \(IV\) respectively, row by row and left to right within rows. Note that these and other diagrams we compute in this work are not gauge-invariant. We compute them in the Feynman gauge, and after contracting the color indices of the mesons, our results for the meson anomalous dimensions will be gauge-invariant, so they will not depend on this gauge choice. We will use dimensional regularization with \(d=4-\epsilon\). #### 3.1.1 Diagram computation The tree-level diagram is equal (for our normalization of the composite operator) to: \[I=\frac{i\not{p}_{\alpha\dot{\alpha}}}{q_{2}^{2}}\frac{i\not{q}_{\dot{1}}\delta _{\dot{\alpha}}}{q_{1}^{2}}\delta_{a\dot{a}}\delta_{i\dot{a}}\delta_{bc}\delta _{jk}=-\frac{\not{q}_{\alpha\dot{\beta}}}{q_{2}^{2}}\frac{\not{q}_{\dot{1}} \delta_{\dot{\alpha}}}{q_{1}^{2}}\delta_{ad}\delta_{il}\delta_{bc}\delta_{jk}. \tag{3.2}\] For the propagator correction diagram, we can first compute the propagator corrections to leading loop order. The relevant Feynman diagrams are shown in Figure 2. Here \(A\) is an adjoint color representation index, \(\mu\) is a (Minkowski) spacetime index, and the others as before. The tree level inverse propagator equals \(-i\not{p}_{\alpha\dot{\alpha}}\delta_{ab}\delta_{ij}\). Let us compute the one loop diagram: \[\delta D_{\alpha\dot{\alpha}}(p)=(ig)^{2}\delta_{ij}t_{cn}^{A}t_{bc}^{A}\int \frac{d^{d}k}{(2\pi)^{d}}\gamma^{\mu}\frac{i\not{k}}{k^{2}}\gamma_{\mu}\frac{- i}{(p-k)^{2}}. \tag{3.3}\] We have \(t_{bc}^{A}t_{cn}^{A}=(t^{A}t^{A})_{ba}=C_{F}\delta_{ab}=\frac{N^{2}-1}{2N} \delta_{ab}\), and to leading order in \(\epsilon\)\(\gamma^{\mu}\not{k}\gamma_{\mu}=-2\not{k}\), so the divergent term (as \(\epsilon\to 0\)) is: Figure 1: Tree-level and one-loop Feynman diagrams for the correlation function of a single meson with a fermion-antifermion pair. \[\delta D_{\alpha\dot{\alpha}}(p)=\frac{2g^{2}C_{F}}{(2\pi)^{4}}\delta_{ ij}\delta_{ab}\int d^{d}k\frac{\not{k}}{k^{2}(p-k)^{2}}=\frac{2g^{2}C_{F}}{(2\pi)^{4}} \delta_{ij}\delta_{ab}\int_{0}^{1}dx\int d^{d}k\frac{\not{k}}{(k^{2}-2xp\cdot k +xp^{2})^{2}}=\] \[\underset{k\to k-xp}{=}\frac{2g^{2}C_{F}}{(2\pi)^{4}}\delta_{ij} \delta_{ab}\int_{0}^{1}dx\int d^{d}k\frac{\not{p}}{(k^{2}+x(1-x)p^{2})^{2}} \underset{k\to k_{E}}{=}\frac{2ig^{2}C_{F}}{(2\pi)^{4}}\delta_{ij}\delta_{ab} \int_{0}^{1}dx\not{p}\int\frac{d^{d}k_{E}}{(k_{E}^{2}+\Delta)^{2}}=\] \[=\frac{2ig^{2}C_{F}}{(4\pi)^{2}}\delta_{ij}\delta_{ab}\int_{0}^{ 1}dx\not{p}\Gamma(\epsilon)\underset{\approx(p_{E}^{2})^{-\epsilon}}{\underbrace{ \Delta^{-\epsilon}}}=\frac{ig^{2}C_{F}}{(4\pi)^{2}\epsilon}\delta_{ij}\delta_{ ab}\left(p_{E}^{2}\right)^{-\epsilon}\not{p}. 
\tag{3.4}\] Here and throughout we use the subscript \(E\) to denote a Euclidean momentum. From this we can extract the fermion renormalization function, once we introduce a renormalization scale \(M\). We do this by separating the dimensionful part into \(\left(p_{E}^{2}\right)^{-\epsilon}=\left(\frac{p_{E}^{2}}{M^{2}}\right)^{- \epsilon}M^{-2\epsilon}\approx M^{-2\epsilon}\). Thus: \[Z_{\psi}=1-\frac{g^{2}C_{F}}{(4\pi)^{2}\epsilon}M^{-2\epsilon}+O(g^{4}), \tag{3.5}\] in agreement with the result in [17] for our case. We insert this result into diagram II and get: \[II=\frac{i\not{q}_{\dot{\alpha}\dot{\alpha}}}{q_{2}^{2}}\delta D_{\gamma\dot{ \gamma}}(q_{2})\delta_{bc}\delta_{jk}\frac{i\not{q}_{\dot{\gamma}\dot{\beta}}} {q_{2}^{2}}\frac{i\not{q}_{\dot{\gamma}\dot{\beta}\dot{\alpha}}}{q_{1}^{2}}= \frac{g^{2}C_{F}}{(4\pi)^{2}\epsilon}M^{-2\epsilon}\frac{\not{q}_{\dot{\alpha} \dot{\beta}}}{q_{2}^{2}}\frac{\not{q}_{\dot{\gamma}\dot{\beta}\dot{\alpha}}}{ q_{1}^{2}}\delta_{il}\delta_{jk}\delta_{bc}\delta_{ad}, \tag{3.6}\] with another equal contribution coming from the complementary diagram with a one-loop correction to the antifermion propagator. Diagram III, with the gluon exchange between fermions, gives by similar manipulations: \[III=-\frac{g^{2}}{4(4\pi)^{2}\epsilon q_{1}^{2}q_{2}^{2}}\delta_{il}\delta_{jk }t_{da}^{A}t_{bc}^{A}(\gamma^{\nu}\gamma^{\mu}\not{q}_{2})_{\alpha\beta}(\not{q }_{\dot{\gamma}}\gamma_{\mu}\gamma_{\nu})_{\beta\dot{\alpha}}M^{-2\epsilon} \tag{3.7}\] Diagram IV contains a vertex factor of \(t_{ba}^{A}\). Upon contraction in the color indices we'll get \(t_{aa}^{A}=0\) as the gauge group is \(SU(N)\), and so this diagram will vanish. Next, we take a moment to consider possible sign changes from the fermion contractions. The different contractions are shown schematically in Figure 3. The \(\not{A}\) terms do not play a part and should simply help to illustrate where the different \(\psi\) terms come from. The spinor indices are dealt with separately. The calculation of the signs is according to the parity of the number of swaps of fermionic operators necessary to reach canonical form - which is the number of contraction line intersections in Figure 3, plus the number of contractions with \(\bar{\psi}\) preceding \(\psi\). We see that all diagrams get a factor of \(-1\) relative to the convention that sets \(Z_{\psi}=1+O(g^{2})\). This has no effect here, but we take it into account for good measure as it will be significant in the bi-meson case. The next step is color index contractions. Since we are interested in mesons which are gauge singlets, we contract the color indices by \(\delta_{ab}\) in all diagrams. For diagrams \(I,II\) we simply get: \[I\rightarrow\delta_{ab}(I)=\underset{=\delta_{cd}}{\underbrace{ \delta_{ab}\delta_{ad}\delta_{bc}}}\delta_{il}\delta_{jk}\frac{\not{q}_{\dot{ \alpha}\dot{\beta}}}{q_{2}^{2}}\frac{\not{q}_{\dot{\beta}\dot{\alpha}}}{q_{1}^ {2}}=\delta_{cd}\delta_{il}\delta_{jk}\frac{\not{q}_{\dot{\alpha}\dot{\beta}}} {q_{2}^{2}}\frac{\not{q}_{\dot{\beta}\dot{\alpha}}}{q_{1}^{2}} \tag{3.8}\] \[II\rightarrow\delta_{ab}(II)=-\delta_{ab}\delta_{bc}\delta_{ad} \frac{g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}\epsilon}\frac{\not{q}_{\dot{\alpha} \dot{\beta}}}{q_{2}^{2}}\frac{\not{q}_{\dot{\beta}\dot{\alpha}}}{q_{1}^{2}} \delta_{il}\delta_{jk}=-\delta_{cd}\frac{g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2} \epsilon}\frac{\not{q}_{\dot{\alpha}\dot{\beta}}}{q_{2}^{2}}\frac{\not{q}_{ \dot{\beta}\dot{\alpha}}}{q_{1}^{2}}\delta_{il}\delta_{jk}. 
\tag{3.9}\] For diagram \(III\), we contract the color factor separately: \[\delta_{ab}(III)\propto\delta_{ab}t_{da}^{A}t_{bc}^{A}=(t^{A}t^{A})_ {dc}=C_{F}\delta_{dc} \tag{3.10}\] \[\Rightarrow III\rightarrow\delta_{ab}(III)=\delta_{cd}\frac{g^{2}C_ {F}M^{-2\epsilon}}{4(4\pi)^{2}\epsilon}\delta_{il}\delta_{jk}(\gamma^{\nu} \gamma^{\mu}\not{q}_{\dot{\alpha}})_{\alpha\dot{\beta}}(\not{q}_{\dot{\gamma}} \gamma_{\mu}\gamma_{\nu})_{\beta\dot{\alpha}}. \tag{3.11}\] Figure 2: Feynman diagrams for the fermion propagator, at tree level and one loop level. #### 3.1.2 Contraction with different spinor structures We begin with the chiral scalars. To get the correlation functions for the scalar mesons \(\bar{\psi}(1\pm\gamma^{5})\psi\), we need to contract the spinor indices with \(\left(1\pm\gamma^{5}\right)_{\dot{\alpha}\alpha}\): * For diagrams \(I,II\): \[q\hskip-5.0pt/_{\dot{\alpha}\beta}q\hskip-5.0pt/_{\dot{\alpha} \beta}\left(1\pm\gamma^{5}\right)_{\dot{\alpha}\alpha} =\left[q\hskip-5.0pt/_{\dot{\alpha}}\left(1\pm\gamma^{5}\right)q \hskip-5.0pt/_{\dot{\alpha}\beta}\right]_{\beta\dot{\beta}}\] (3.12) \[\Rightarrow I_{s}\equiv\left(1\pm\gamma^{5}\right)_{\dot{\alpha}\alpha} \cdot\left(I\right) =\frac{\left[q\hskip-5.0pt/_{\dot{\alpha}}\left(1\pm\gamma^{5} \right)q\hskip-5.0pt/_{\dot{\alpha}\beta}\right]_{\beta\dot{\beta}}}{q_{1}^{2} q_{2}^{2}}\delta_{il}\delta_{jk}\delta_{cd}\] (3.13) \[II_{s}\equiv\left(1\pm\gamma^{5}\right)_{\dot{\alpha}\alpha} \cdot\left(II\right) =-\frac{g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}\epsilon}\frac{\left[q \hskip-5.0pt/_{\dot{\alpha}}\left(1\pm\gamma^{5}\right)q\hskip-5.0pt/_{\dot{ \alpha}\beta}\right]_{\beta\dot{\beta}}}{q_{1}^{2}q_{2}^{2}}\delta_{il}\delta_ {jk}\delta_{cd}.\] (3.14) * For diagram \(III\): \[\left(1\pm\gamma^{5}\right)_{\dot{\alpha}\alpha}\left(\gamma^{ \tau}\gamma^{\mu}q\hskip-5.0pt/_{\dot{\alpha}\beta}\right)_{\alpha\beta}(q \hskip-5.0pt/_{\dot{\alpha}\gamma}\gamma_{\mu})_{\beta\dot{\alpha}}=\left[q \hskip-5.0pt/_{\dot{\alpha}}\gamma_{\mu}\gamma_{\nu}\left(1\pm\gamma^{5} \right)\gamma^{\gamma}\gamma^{\mu}q\hskip-5.0pt/_{\dot{\alpha}\beta}\right]_{ \beta\dot{\beta}}=\delta_{\mu}^{\mu}\delta_{\nu}^{\nu}\left[q\hskip-5.0pt/_{ \dot{\alpha}}\left(1\pm\gamma^{5}\right)q\hskip-5.0pt/_{\dot{\alpha}\beta} \right]_{\beta\dot{\beta}}=16\left[q\hskip-5.0pt/_{\dot{\alpha}}\left(1\pm \gamma^{5}\right)q\hskip-5.0pt/_{\dot{\alpha}\beta}\right]_{\beta\dot{\beta}}\] \[\Rightarrow III_{s}\equiv\left(1\pm\gamma^{5}\right)_{\dot{\alpha}\alpha} \cdot\left(III\right)=\frac{4g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}\epsilon} \frac{\left[q\hskip-5.0pt/_{\dot{\alpha}}\left(1\pm\gamma^{5}\right)q \hskip-5.0pt/_{\dot{\alpha}\beta}\right]_{\beta\dot{\beta}}}{q_{1}^{2}q_{2}^{2 }}\delta_{il}\delta_{jk}\delta_{cd}.\] (3.15) The full one-loop correlation function is: \[\left\langle(\psi_{\alpha ai}\bar{\psi}_{\dot{\alpha}bj})(p)\psi_{\beta ck}(q_ {1})\bar{\psi}_{\dot{\beta}dl}(q_{2})\right\rangle(1\pm\gamma^{5})_{\dot{ \alpha}\alpha}=I_{s}+2\cdot II_{s}+III_{s}=\left(1+\frac{2g^{2}C_{F}M^{-2 \epsilon}}{(4\pi)^{2}\epsilon}\right)\frac{\left[q\hskip-5.0pt/_{\dot{\alpha }}\left(1\pm\gamma^{5}\right)q\hskip-5.0pt/_{\dot{\alpha}\beta}\right]_{\beta \dot{\beta}}}{q_{1}^{2}q_{2}^{2}}\delta_{il}\delta_{jk}. 
\tag{3.16}\] It can be expressed as a renormalized correlation function multiplied by operator renormalization functions as follows: \[\psi=\sqrt{Z_{\psi}}[\psi];\hskip 14.226378pt\bar{\psi}=\sqrt{Z_{ \psi}}[\bar{\psi}];\hskip 14.226378pt(\bar{\psi}\psi)=Z_{s}[\bar{\psi}\psi] \tag{3.17}\] \[\left\langle(\psi_{\alpha ai}\bar{\psi}_{\dot{\alpha}bj})(p)\psi_{ \beta ck}(q_{1})\bar{\psi}_{\dot{\beta}dl}(q_{2})\right\rangle(1\pm\gamma^{5})_ {\dot{\alpha}\alpha}=Z_{s}Z_{\psi}\left\langle\left[(\psi_{\alpha ai}\bar{\psi }_{\dot{\alpha}bj})\right](p)\left[\psi_{\beta ck}\right](q_{1})\left[\bar{ \psi}_{\dot{\beta}dl}\right](q_{2})\right\rangle(1\pm\gamma^{5})_{\dot{\alpha} \alpha}. \tag{3.18}\] We also denote: \[Z_{tot}^{s}\equiv Z_{s}Z_{\psi}=Z_{tot,0}^{s}+\delta Z_{tot}^{s};\hskip 14.226378ptZ _{s}=Z_{s}^{0}+\delta Z_{s};\hskip 14.226378ptZ_{\psi}=1+\delta Z_{\psi}, \tag{3.19}\] Figure 3: Fermion operator contractions for the Feynman diagrams of a single meson correlation function with a fermion-antifermion pair. and obtain: \[Z^{s}_{tot,0}+\delta Z^{s}_{tot}=\left(Z^{0}_{s}+\delta Z_{s}\right) \left(1+\delta Z_{\psi}\right) \tag{3.20}\] \[\Rightarrow Z^{0}_{s}=Z^{s}_{tot,0};\hskip 14.226378pt\delta Z_{s}= \delta Z^{s}_{tot}-Z^{0}_{s}\delta Z_{\psi}=\delta Z^{s}_{tot}-Z^{s}_{tot,0} \delta Z_{\psi}. \tag{3.21}\] At scale \(M\) this is, using (3.5): \[Z^{s}_{tot,0}=1;\hskip 14.226378pt\delta Z^{s}_{tot}=\frac{2g^{2}C_{F}M ^{-2\epsilon}}{(4\pi)^{2}\epsilon} \tag{3.22}\] \[\Rightarrow Z^{0}_{s}=1;\hskip 14.226378pt\delta Z_{s}=\frac{2g^{2} C_{F}M^{-2\epsilon}}{(4\pi)^{2}\epsilon}+\frac{g^{2}C_{F}M^{-2\epsilon}}{(4 \pi)^{2}\epsilon}=\frac{3g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}N\epsilon}. \tag{3.23}\] Finally we can compute the anomalous dimension: \[\gamma_{s}=\frac{M}{Z_{s}}\frac{\partial Z_{s}}{\partial M}=-2 \epsilon M\cdot\left(\frac{3g^{2}C_{F}M^{-1-2\epsilon}}{(4\pi)^{2}\epsilon} \right)=-\frac{6g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}}\stackrel{{ \epsilon\to 0}}{{\rightarrow}}-\frac{6g^{2}C_{F}}{(4\pi)^{2}}, \tag{3.24}\] in agreement with [14]. For the chiral vector mesons, we contract with \(\left[\left(1\pm\gamma^{5}\right)\gamma^{\rho}\right]_{\dot{\alpha}\alpha}\), with \(\rho\) a spacetime index. 
* For diagrams \(I,II\): \[\not{q}_{\alpha\dot{\beta}}\not{q}_{\beta\dot{\alpha}}\left[\left(1\pm \gamma^{5}\right)\gamma^{\rho}\right]_{\dot{\alpha}\alpha}=\left[\not{q}_{1} \left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{q}_{2}\right]_{\dot{\beta}\dot{ \beta}}\Rightarrow\left\{\begin{array}{c}I_{v}=\frac{\left[\not{q}_{1}\left(1 \pm\gamma^{5}\right)\gamma^{\rho}\not{q}_{2}\right]_{\dot{\beta}\dot{\beta}}} {q_{1}^{2}q_{2}^{2}}\delta_{il}\delta_{jk}\delta_{cd}\\ II_{v}=-\frac{g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}\epsilon}\frac{\left[\not{q }_{1}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{q}_{2}\right]_{\dot{\beta} \dot{\beta}}}{q_{1}^{2}q_{2}^{2}}\delta_{il}\delta_{jk}\delta_{cd}\end{array},\] (3.25) * For diagram \(III\): \[\left(\gamma^{\nu}\gamma^{\mu}\not{q}_{2}\right)_{\alpha\dot{\beta }}\left(\not{q}_{1}\gamma_{\mu}\gamma_{\nu}\right)_{\beta\dot{\alpha}}\left[ \left(1\pm\gamma^{5}\right)\gamma^{\rho}\right]_{\dot{\alpha}\alpha}=\left[ \not{q}_{1}\gamma_{\mu}\gamma_{\nu}\left(1\pm\gamma^{5}\right)\gamma^{\rho} \gamma^{\nu}\gamma^{\mu}\not{q}_{2}\right]_{\dot{\beta}\dot{\beta}}=\] \[=\left[\not{q}_{1}\left(1\pm\gamma^{5}\right)\gamma_{\mu}\gamma \gamma^{\rho}\gamma^{\nu}\gamma^{\mu}\not{q}_{2}\right]_{\dot{\beta}\dot{ \beta}}=-2\left[\not{q}_{1}\left(1\pm\gamma^{5}\right)\gamma_{\mu}\gamma^{\rho }\gamma^{\mu}\not{q}_{2}\right]_{\dot{\beta}\dot{\beta}}=4\left[\not{q}_{1} \left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{q}_{2}\right]_{\dot{\beta}\dot{ \beta}}\] \[\Rightarrow III_{v}=\frac{g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}\epsilon}\frac{ \left[\not{q}_{1}(1\pm\gamma^{5})\gamma^{\rho}\not{q}_{2}\right]_{\dot{\beta} \dot{\beta}}}{q_{1}^{2}q_{2}^{2}}\delta_{il}\delta_{jk}\delta_{cd},\] (3.26) and the renormalization functions are: \[Z^{v}_{tot}=1-\frac{g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}\epsilon}\hskip 14.226378pt \Rightarrow\hskip 14.226378pt\delta Z_{v}=-\frac{g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2} \epsilon}+\frac{g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}\epsilon}=0 \tag{3.27}\] as expected from a Noether current associated with a conserved symmetry. The anomalous dimension also vanishes accordingly: \[\gamma_{v}=0. \tag{3.28}\] For the tensor mesons, we contract with \((\gamma^{\rho\sigma})_{\dot{\alpha}\alpha}\equiv\frac{1}{2}\left[\gamma^{\rho}, \gamma^{\sigma}\right]_{\dot{\alpha}\alpha}\). 
* For diagrams \(I,II\): \[\not{q}_{\alpha\dot{\beta}}\not{q}_{1\dot{\beta}\dot{\alpha}}(\gamma^{\rho \sigma})_{\dot{\alpha}\alpha}=\left(\not{q}_{1}\gamma^{\rho\sigma}\not{q}_{2} \right)_{\dot{\beta}\dot{\beta}}\Rightarrow\left\{\begin{array}{c}I_{t}= \frac{\left(\not{q}_{1}\gamma^{\rho\sigma}\not{q}_{2}\right)_{\dot{\beta}\dot{ \beta}}}{q_{1}^{2}q_{2}^{2}}\delta_{il}\delta_{jk}\delta_{cd}\\ II_{t}=-\frac{g^{2}C_{F}M^{-2\epsilon}\left(\not{q}_{1}\gamma^{\rho\sigma}\not{q}_{ 2}\right)_{\dot{\beta}\dot{\beta}}}{(4\pi)^{2}\epsilon}\frac{\left[\not{q}_ {1}\gamma^{\rho}\not{q}_{2}\right]_{\dot{\beta}\dot{\beta}}}{q_{1}^{2}q_{2}^{2 }}\delta_{il}\delta_{jk}\delta_{cd}\end{array},\] (3.29) * For diagram \(III\): \[\left(\gamma^{\nu}\gamma^{\mu}\not{q}_{2}\right)_{\alpha\dot{\beta}}\left(\not{q} _{1}\gamma_{\mu}\gamma_{\nu}\right)_{\beta\dot{\alpha}}(\gamma^{\rho\sigma})_{ \dot{\alpha}\alpha}=\frac{1}{2}\left(\not{q}_{1}\gamma_{\mu}\underbrace{\gamma_{ \nu}\left[\gamma^{\rho},\gamma^{\sigma}\right]\gamma^{\nu}\gamma^{\mu}\not{q}_{2 }}_{=4\not{q}_{1}\gamma^{\rho\sigma}\right]_{\dot{\beta}\dot{\beta}}}=0 \Rightarrow III_{t}=0,\] (3.30) and the renormalization functions and anomalous dimension are accordingly: \[Z^{t}_{tot}=1-\frac{2g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}\epsilon}\hskip 14.226378pt \Rightarrow Z_{t}=1-\frac{g^{2}C_{F}M^{-2\epsilon}}{(4\pi)^{2}\epsilon}\hskip 14.226378pt \Rightarrow\gamma_{t}=\frac{2g^{2}C_{F}}{(4\pi)^{2}}, \tag{3.31}\] agreeing with [14] as well. ### Mixed mesons #### 3.2.1 Diagrams We consider the mixed meson \(\phi^{*}\psi\), and the correlation function \[\left\langle\left(\phi^{*}_{ia^{\prime}}\psi_{jb^{\prime}\alpha}\right)\left(p \right)\phi_{i^{\prime}a}(p_{1})\overline{\psi}_{j^{\prime}b\dot{a}}(-p_{2}) \right\rangle\delta_{a^{\prime}b^{\prime}}. \tag{3.32}\] Again we assume from the start that the flavor indices give a non-vanishing correlation function: \(i=i^{\prime},j=j^{\prime}\). We have 4 contributing diagrams up to one-loop order, shown in Figure 4. They are the tree-level, scalar/fermion propagator correction, and the gluon exchange. There are also scalar propagator corrections from the \(\phi^{4}\) interactions and from the "seagull" vertex, but they are independent of the momenta and therefore do not contribute to the field renormalizations and to the anomalous dimensions. The Feynman rule for the scalar gauge vertex is given in Figure 5. The tree level diagram is equal to: \[\frac{i}{p_{1}^{2}}\frac{ip\not{\cal Z}_{\alpha\dot{\alpha}}}{p_{2}^{2}}\delta _{ab}=-\frac{p\not{\cal Z}_{\alpha\dot{\alpha}}}{p_{1}^{2}p_{2}^{2}}\delta_{ab}. \tag{3.33}\] The diagram with a fermion propagator correction \(\delta D_{\dot{\beta}\dot{\beta}}(p_{2})\) is \[-\frac{p\not{\cal Z}_{\alpha\beta}}{p_{1}^{2}p_{2}^{2}}\delta D_{\dot{\beta} \dot{\beta}}(p_{2})\frac{ip\not{\cal Z}_{\dot{\beta}\dot{\alpha}}}{p_{2}^{2}} =-\frac{p\not{\cal Z}_{\alpha\beta}}{p_{1}^{2}p_{2}^{2}}\frac{ig^{2}C_{F}}{ \left(4\pi\right)^{2}\epsilon}\left(p_{2E}^{2}\right)^{-\epsilon}\not{\cal P} _{\dot{\beta}\dot{\beta}}\frac{ip\not{\cal Z}_{\dot{\beta}\dot{\alpha}}}{p_{2 }^{2}}\approx\frac{g^{2}C_{F}\,\left(p_{E}^{2}\right)^{-\epsilon}}{\left(4\pi \right)^{2}\epsilon\,\,\,\,p_{1}^{2}p_{2}^{2}}\not{\cal P}_{\dot{\alpha}\dot{ \alpha}}\delta_{ab}; \tag{3.34}\] (color factors are trivial and only included in the end). The scalar propagator correction is at tree level \(-ip^{2}=ip_{E}^{2}\). 
At one-loop level, it is: \[\frac{2ig^{2}C_{F}}{\left(4\pi\right)^{2}\epsilon}\delta_{ab}\left(p_{E}^{2} \right)^{1-\epsilon}, \tag{3.35}\] and the scalar renormalization function is \[Z_{\phi}=1+\frac{2g^{2}C_{F}}{\left(4\pi\right)^{2}\epsilon}\left(p_{E}^{2} \right)^{-\epsilon}. \tag{3.36}\] The value of the corresponding diagram will be that of the tree level multiplied by the ratio of the propagator corrections, giving: \[-\frac{p\not{\cal P}_{\dot{\alpha}\dot{\alpha}}}{p_{1}^{2}p_{2}^{2}}\cdot \frac{2g^{2}C_{F}}{\left(4\pi\right)^{2}\epsilon}\delta_{ab}\left(p_{E}^{2} \right)^{-\epsilon}. \tag{3.37}\] The gluon exchange diagram is equal to: \[-\frac{g^{2}}{\left(4\pi\right)^{2}p_{1}^{2}p_{2}^{2}\epsilon}\left(t^{4} \right)_{bb^{\prime}}\left(t^{4}\right)_{a^{\prime}a}\not{\cal P}_{\dot{\alpha} \dot{\alpha}}\left(p_{E}^{2}\right)^{-\epsilon}. \tag{3.38}\] In the tree level diagrams and the ones with propagator corrections, the color contraction is trivial and amounts to \(\delta_{ab}\) (and was implicit in the diagram calculations). In the diagram with the gluon exchange it gives \(C_{F}\delta_{ab}\), similarly to (3.10). Since there is only one pair of fermionic operators, and they are ordered in the standard way, there are no sign changes from their anticommutation relations. Figure 4: Single mixed meson diagrams. Only diagrams relevant to the anomalous dimension up to one-loop are presented. Figure 5: Feynman rule for scalar QCD, to leading order in the gauge coupling. #### 3.2.2 Resummation and renormalization function All the amplitudes have the same spinor form \(\mathpzc{p}\mathpzc{l}_{\alpha\dot{\alpha}}\), and so there is no need to project onto different representations. Each diagram contributes once, so the total (relevant part of the) correlation function is: \[\left\langle\left(\phi^{*}_{i\alpha^{\prime}}\psi_{j^{\prime}\alpha}\right)(p) \phi_{i^{\prime}a}(p_{1})\overline{\psi}_{j^{\prime}b\dot{\alpha}}(-p_{2}) \right\rangle\delta_{a^{\prime}b^{\prime}}=-\frac{\mathpzc{P}\mathpzc{l}_{ \alpha\dot{\alpha}}}{p_{1}^{2}p_{2}^{2}}\delta_{ab}\left(1+\frac{2g^{2}C_{F}}{ \left(4\pi\right)^{2}\epsilon}\left(p_{E}^{2}\right)^{-\epsilon}\right), \tag{3.39}\] and so the total renormalization function, after setting a renormalization scale \(M\), is \[Z^{tot}_{\phi^{*}\psi}=1+\frac{2g^{2}C_{F}}{\left(4\pi\right)^{2}\epsilon}M^{- 2\epsilon}. \tag{3.40}\] We want to extract the wavefunction renormalization of the mixed meson operator; so we decompose \(Z^{tot}_{\phi^{*}\psi}\) as \(Z^{tot}_{\phi^{*}\psi}=Z_{\phi^{*}\psi}Z^{1/2}_{\psi}Z^{1/2}_{\phi}\) and get: \[\delta Z_{\phi^{*}\psi}=\delta Z^{tot}_{\phi^{*}\psi}-\frac{1}{2}\delta Z_{ \psi}-\frac{1}{2}\delta Z_{\phi}=\frac{3g^{2}C_{F}}{2\left(4\pi\right)^{2} \epsilon}M^{-2\epsilon}, \tag{3.41}\] and the anomalous dimension of the mixed meson is thus \[\gamma_{\phi^{*}\psi}=\frac{\partial\log Z_{\phi^{*}\psi}}{\partial\log M}= \left(-2\epsilon\right)\cdot\left(\frac{3g^{2}C_{F}}{2\left(4\pi\right)^{2} \epsilon}\right)=-\frac{3g^{2}C_{F}}{\left(4\pi\right)^{2}}. \tag{3.42}\] ## 4 Anomalous dimensions of double mesons In this section, we keep contributions up to the leading order in \(\frac{1}{N}\) contributing to the the difference in (1.1). ### Scalar double mesons This subsection is a review of the discussion in [3], with some elaboration regarding symmetry group indices. The Lagrangian is (2.1), and as our basic operator, we choose scalar mesons of the type \(\phi^{*}\phi\), which is in the adjoint representation of \(SU(N_{s})\). 
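As a numerical reference for the comparisons that follow, the sketch below simply collects the one-loop single-meson anomalous dimensions obtained in section 3, Eqs. (3.24), (3.28), (3.31) and (3.42); the conversion from \(g^{2}\) to the 't Hooft coupling uses the normalization of Eq. (2.2), and the sample values of \(\lambda\) and \(N\) are illustrative.

```python
import numpy as np

# One-loop single-meson anomalous dimensions from Eqs. (3.24), (3.28), (3.31), (3.42).
# Conversion g^2 = 16 * pi^2 * lambda / N follows the 't Hooft normalization (2.2).
def single_meson_gammas(lam, N):
    C_F = (N**2 - 1) / (2 * N)
    pref = (16 * np.pi**2 * lam / N) * C_F / (4 * np.pi) ** 2   # = lam * C_F / N
    return {"scalar": -6 * pref, "vector": 0.0, "tensor": +2 * pref, "mixed": -3 * pref}

print(single_meson_gammas(lam=0.01, N=100))
```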
One can separate the loop level diagrams into ones involving only interactions within each meson (intra-meson interaction diagrams), and ones that involve interactions between the mesons (inter-meson interaction diagrams). The form of the \(1/N\) expansion in the 't Hooft limit (in which meson operators factorize at large \(N\)[15]) implies that the inter-meson diagrams start contributing at order \(1/N\) compared to the intra-meson diagrams (this is true for all types of mesons that we discuss). In this case (unlike the fermion mesons we discuss later) there is no mixing between operators - and therefore no need to diagonalize the renormalization matrix. Then the intra-meson diagrams will simply cancel out when computing the difference \(\gamma_{(\phi^{*}\phi)^{n}}-n\gamma_{\phi^{*}\phi}\), and we only need to consider corrections from inter-meson diagrams. At one loop order, there are two types of these diagrams - gluon exchanges and \(\phi^{4}\) interactions. We consider in the following the same correlator as in [3], but begin with flavor indices explicit and generic - namely: \[\left\langle\left(\phi^{*}_{i\alpha^{\prime}}\phi_{b^{\prime}j}\phi^{*}_{k \alpha^{\prime}}\phi_{d^{\prime}l}\right)(p)\phi^{*}_{i^{\prime}a}(-p_{1}) \phi_{b^{\prime}j^{\prime}}(q_{1})\phi^{*}_{k^{\prime}c}(-p_{2})\phi_{dl^{ \prime}}(q_{2})\right\rangle\delta_{a^{\prime}b^{\prime}}\delta_{c^{\prime}d^{ \prime}}, \tag{4.1}\] with \(a,b,c,d\) color indices and \(i,j,k,l\) flavor indices. The tree-level amplitude is immediately found to be \(\frac{1}{p_{1}^{2}p_{2}^{2}q_{1}^{2}q_{2}^{2}}\) times a sum of products of \(\delta\) symbols. Gluon exchange diagramsThe leading contribution involving gluons comes from the vertex illustrated in Figure 5 along with its Feynman rule. The diagrams that contribute in the leading order can be arranged in pairs, such as the diagrams presented in Figure 6. In each pair, in one diagram the exchange is between scalar lines with the same charge flow direction and the other between lines with opposite directions. In the left diagram in Figure 6, the color factor is \[\delta_{a^{\prime}b}\delta_{cd^{\prime}}(t^{A})_{ab^{\prime}}(t^{A})_{c^{ \prime}d}\approx\delta_{a^{\prime}b}\delta_{cd^{\prime}}\delta_{ad}\delta_{c^{ \prime}b^{\prime}}, \tag{4.2}\] which, after contracting the color indices within the mesons \(\delta_{b^{\prime}a^{\prime}}\delta_{d^{\prime}c^{\prime}}\), gives \(\delta_{cb}\delta_{ad}\). In the right diagram the color factor is: \[\delta_{a^{\prime}b}\delta_{c^{\prime}d}(t^{A})_{ab^{\prime}}(t^{A})_{cd^{ \prime}}=\delta_{a^{\prime}b}\delta_{c^{\prime}d}\delta_{ad^{\prime}}\delta_{ cb^{\prime}}\stackrel{{\cdot\cdot\cdot b_{a^{\prime}}\delta_{d^{ \prime}c^{\prime}}}}{{\rightarrow}}\delta_{cb}\delta_{ad}, \tag{4.3}\] so the two diagrams eventually have the same color factor. The flavor factors are trivially identical between the diagrams. The remaining component is the momentum loop factor, in which only the ultraviolet-divergent term contributes to the anomalous dimension. This is the leading order term for large loop momentum \(w\), and has opposite values between the two diagrams. This can be seen from the vertex factors, which are \((2p_{1}-w)_{\mu}(2q_{2}-w)^{\mu}\sim w^{2}\) and \((2p_{1}-w)_{\mu}(2p_{2}+w)^{\mu}\sim-w^{2}\), for the left and right diagrams, respectively. In this way, we see the divergent terms of the two diagrams cancel out, as is the case for all other pairs of diagrams. 
Thus, contributions to the difference of anomalous dimensions from this type of diagram are of the next order in the 't Hooft gauge coupling: \(O(g^{4}N)=O\left(\frac{\lambda^{2}}{N}\right)\). \(\phi^{4}\) interaction diagrams. The \(\phi^{4}\) interactions can be described using a single comprehensive vertex Feynman rule, illustrated in Figure 7. In Figure 7 we introduced the many-index \(\delta\) symbol, which is 1 if all indices are equal and 0 otherwise. Without loss of generality (in the large N limit), we can assume all external legs have different flavor indices, e.g. \((i^{\prime},j^{\prime},k^{\prime},l^{\prime})=(1,2,3,4)\) (this numbering is for convenience, although \(i^{\prime},k^{\prime}\) are components of a different representation than \(j^{\prime},l^{\prime}\)). Then the flavor indices will have to match them - either in the unpermuted order: \((j,i,l,k)=(1,2,3,4)\), or in the permuted order: \((j,i,l,k)=(1,4,3,2)\) (permuting the other pair as well returns to a configuration equivalent to the original). Thus, we can omit the flavor \(\delta\) symbols and keep the relevant information by referring to the diagrams as either unpermuted or permuted. In particular, in these cases the last term of Figure 7 always vanishes. In what follows, we need the contribution of the tree-level diagrams, depicted in Figure 8. The momentum factor in both of them is \(\frac{i}{p_{1}^{2}}\frac{i}{q_{1}^{2}}\frac{i}{p_{2}^{2}}\frac{i}{q_{2}^{2}}=\frac{1}{p_{1}^{2}q_{1}^{2}p_{2}^{2}q_{2}^{2}}\). In addition there are the color factors - for the unpermuted diagram this is \(\delta_{a^{\prime}b}\delta_{b^{\prime}a}\delta_{c^{\prime}d}\delta_{d^{\prime}c}\delta_{a^{\prime}b^{\prime}}\delta_{c^{\prime}d^{\prime}}=\delta_{ab}\delta_{cd}\), and for the permuted diagram we get \(\delta_{a^{\prime}d}\delta_{b^{\prime}a}\delta_{c^{\prime}b}\delta_{d^{\prime}c}\delta_{a^{\prime}b^{\prime}}\delta_{c^{\prime}d^{\prime}}=\delta_{ad}\delta_{cb}\). Next, we turn to evaluate the 1-loop correction. A representative (unpermuted) diagram is depicted in Figure 9. Figure 6: Example diagrams for gluon exchange between scalar mesons. Figure 7: Feynman rule for \(\phi^{4}\) interaction. Figure 8: Tree-level diagrams for the scalar correlation function. Left diagram is unpermuted, right is permuted. We
Since the bi-meson operator is symmetrized, they get equal coefficients, and in the end we get a positive multiple of \((\tilde{f}+\tilde{h})(\delta_{ab}\delta_{cd}+\delta_{ad}\delta_{cb})\) instead of \(\kappa\). There is also a symmetry factor in some diagrams. There are 4 diagrams, 2 of which have the same charge flow direction in the two loop propagator - giving a symmetry factor of 2. Then the sum over diagrams gives a factor of 3. The overall correction to the renormalization function is thus \[\delta\left(\frac{Z_{(\phi^{*}\phi)^{2}}}{Z_{\phi^{*}\phi}^{2}}\right)=-\frac {3(\tilde{f}+\tilde{h})M^{-2\epsilon}}{(4\pi)^{2}\epsilon}<0. \tag{4.5}\] From here we find that \[\gamma_{(\phi^{*}\phi)^{2}}-2\gamma_{\phi^{*}\phi}=\frac{\partial}{\partial \log M}\log\left(\frac{Z_{(\phi^{*}\phi)^{2}}}{Z_{\phi^{*}\phi}^{2}}\right)=-2 \epsilon\log\left(\frac{Z_{(\phi^{*}\phi)^{2}}}{Z_{\phi^{*}\phi}^{2}}\right) >0, \tag{4.6}\] supporting the CCC. ### Fermionic double mesons To compute the anomalous dimensions of these operators, we compute their correlation function with 2 fermion operators and 2 antifermion operators: \[\left\langle\left(\bar{\psi}_{\dot{\alpha}b^{\prime}j^{\prime}}\psi_{\alpha a ^{\prime}i^{\prime}}\bar{\psi}_{\dot{\beta}d^{\prime}l^{\prime}}\psi_{\beta c ^{\prime}k^{\prime}}\right)(p)\bar{\psi}_{\dot{\gamma}ai}(-p_{1})\psi_{\gamma bj }(p_{2})\bar{\psi}_{\dot{\delta}ck}(-p_{3})\psi_{\delta dl}(p_{4})\right\rangle \delta_{a^{\prime}b^{\prime}}\delta_{c^{\prime}d^{\prime}}, \tag{4.7}\] where \(a,b,c,d,a^{\prime},b^{\prime},c^{\prime},d^{\prime}\) are color indices, \(i,j,k,l,i^{\prime},j^{\prime},k^{\prime},l^{\prime}\) are flavor indices, and \(\alpha,\dot{\alpha},\dot{\beta},\dot{\gamma},\dot{\gamma},\delta,\dot{\delta}\) are spinor indices. The spinor indices are abbreviated for most of this subsection as \(A=\alpha\dot{\gamma},B=\gamma\dot{\alpha},C=\beta\dot{\delta},D=\delta\dot{ \beta},A^{\prime}=\beta\dot{\gamma},C^{\prime}=\alpha\dot{\delta}\). As in the scalar case, we can assume the flavor indices of the 4 legs of the bi-meson are \(1,2,3,4\) respectively (even for \(\bar{\psi}\) operators and odd for \(\psi\)). Since we are interested in a symmetrized representation, this includes the meson with the possible permutations \(1\leftrightarrow 3,2\leftrightarrow 4\) (permuting the particles completely and not just the flavor index). Also as in the scalar case, it's enough to consider the permutation \(2\leftrightarrow 4\) (with an exception we discuss later). In a similar manner to the computation of the single fermionic meson operators, and as shown explicitly for the tree-level diagrams in Figure12, we find that the fermionic operator contractions give in this case a (+) sign in the unpermuted (or doubly permuted) branch and a (-) sign in the singly permuted branches. This means the tree level amplitude equals (after contraction of the color indices of the bi-meson operator, but before considering contractions of spinor structures): \[\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left(p\!\!\!\!/_{\alpha\dot{ \gamma}}p\!\!\!/_{\gamma\dot{\alpha}}p\!\!\!/_{\beta\dot{\beta}}\delta_{ab} \delta_{cd}-p\!\!\!\!/_{\beta\dot{\gamma}}p\!\!\!/_{\gamma\dot{\alpha}}p\!\!\!/_{ \alpha\dot{\beta}}p\!\!\!/_{\alpha\dot{\alpha}}p\!\!/_{\delta\dot{\beta}} \delta_{ad}\delta_{bc}\right). \tag{4.8}\] Figure 9: Representative diagram for the scalar meson correction from \(\phi^{4}\) interaction. 
Any quantum correction that keeps the flavors matched will keep the fermionic lines the same (and might add some new ones detached from the originals). Then going forward, we can calculate only the first "branch", involving \(\delta_{ab}\delta_{cd}\), and obtain the other by the permutation \[\mathcal{P}\mathcal{A}\mathcal{P}_{C}\leftrightarrow\mathcal{P}_{C}\mathcal{P} \mathcal{I}_{A^{\prime}}\text{ (spinor indices\&roles)},a\leftrightarrow c\text{ (color indices)}. \tag{4.9}\] #### 4.2.1 1-loop level diagrams Here we only consider inter-meson diagrams. The intra-meson diagrams are derived from the ones computed for the single meson case; their leading contribution will be that computed for one meson, multiplied by the tree level contribution for the other meson. There are 8 inter-meson diagrams in this order (including permutation), and they are presented in Figs. 10, 11. Black (blue) fermion lines indicate the fields whose color indices are to be contracted together. The results for diagrams I-IV, respectively, are (up to \(O(\epsilon^{0})\) corrections): \[I=\frac{g^{2}}{4\left(4\pi\right)^{2}\epsilon}\left(p_{E}^{2} \right)^{-\epsilon}\frac{\mathcal{P}\mathcal{A}\mathcal{P}_{D}}{p_{1}^{2}p_{2 }^{2}p_{3}^{2}p_{4}^{2}}\left(t^{I}\right)_{cc^{\prime}}\left(t^{I}\right)_{b^ {\prime}b}\delta_{aa^{\prime}}\delta_{dd^{\prime}}\left[\mathcal{P}\gamma^{ \mu}\gamma^{\nu}\right]_{B}\left[\gamma_{\nu}\gamma_{\mu}\mathcal{P}_{S}^{ \dagger}\right]_{C} \tag{4.10}\] \[II=\frac{g^{2}}{4\left(4\pi\right)^{2}\epsilon}\left(p_{E}^{2} \right)^{-\epsilon}\frac{\mathcal{P}\mathcal{P}\mathcal{B}\mathcal{P}_{C}}{p_ {1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left(t^{I}\right)_{aa^{\prime}}\left(t^{I }\right)_{d^{\prime}d}\delta_{bb^{\prime}}\delta_{cc^{\prime}}\left[\mathcal{P} \gamma^{\mu}\gamma^{\nu}\right]_{D}\left[\gamma_{\nu}\gamma_{\mu}\mathcal{P}_{ A}^{\dagger}\right]_{A}\] (4.11) \[III=-\frac{g^{2}}{4\left(4\pi\right)^{2}\epsilon}\left(p_{E}^{2} \right)^{-\epsilon}\frac{\mathcal{P}\mathcal{P}_{B}\mathcal{P}_{D}}{p_{1}^{2} p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left(t^{I}\right)_{aa^{\prime}}\left(t^{I}\right)_{cc^{ \prime}}\delta_{bb^{\prime}}\delta_{dd^{\prime}}\left[\gamma^{\nu}\gamma^{\mu }\mathcal{P}\mathcal{S}\right]_{C}\left[\gamma_{\nu}\gamma_{\mu}\mathcal{P}_{ A}^{\dagger}\right]_{A}\] (4.12) \[IV=-\frac{g^{2}}{4\left(4\pi\right)^{2}\epsilon}\left(p_{E}^{2} \right)^{-\epsilon}\frac{\mathcal{P}\mathcal{A}\mathcal{P}_{C}}{p_{1}^{2}p_{2 }^{2}p_{3}^{2}p_{4}^{2}}\left(t^{I}\right)_{b^{\prime}b}\left(t^{I}\right)_{ d^{\prime}d}\delta_{aa^{\prime}}\delta_{cc^{\prime}}\left[\mathcal{P}\gamma^{\mu} \gamma^{\nu}\right]_{D}\left[\mathcal{P}\gamma_{\mu}\gamma_{\nu}\right]_{B}. \tag{4.13}\] We compute explicitly the color index contractions for unperturbed diagram I: \[\left(t^{I}\right)_{cc^{\prime}}\left(t^{I}\right)_{b^{\prime}b}\delta_{aa^{ \prime}}\delta_{dd^{\prime}}\delta_{a^{\prime}b^{\prime}}\delta_{c^{\prime}d^ {\prime}}=\left(t^{I}\right)_{b^{\prime}b}\left(t^{I}\right)_{cc^{\prime}} \delta_{c^{\prime}d}\delta_{b^{\prime}a}=\left(t^{I}\right)_{ab}\left(t^{I} \right)_{cd}\approx\frac{1}{2}\delta_{ad}\delta_{bc}. \tag{4.14}\] A similar calculation yields the same result for diagrams II-IV. Figure 10: Unpermuted diagrams, in order I-IV. Figure 11: Permuted diagrams, in order \(\Gamma\)-IV’. Figure 12: Fermion operator contractions - tree level. 
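The large-\(N\) contraction used in (4.14) drops the \(1/N\) piece of the exact \(SU(N)\) Fierz identity \(\sum_{A}(t^{A})_{ab}(t^{A})_{cd}=\tfrac{1}{2}\left(\delta_{ad}\delta_{bc}-\tfrac{1}{N}\delta_{ab}\delta_{cd}\right)\). The short sketch below (the explicit generator basis and its normalization are a standard choice, not taken from the paper) verifies the exact identity numerically.

```python
import numpy as np

# Check of the SU(N) Fierz identity behind Eq. (4.14):
#   sum_A (t^A)_{ab} (t^A)_{cd} = (1/2) * (delta_{ad} delta_{bc} - delta_{ab} delta_{cd} / N),
# whose 1/N piece is the part dropped in the large-N approximation.
def su_n_generators(N):
    """Orthonormal traceless Hermitian N x N matrices with Tr(t^A t^B) = delta^{AB} / 2."""
    gens = []
    for i in range(N):
        for j in range(i + 1, N):
            m = np.zeros((N, N), dtype=complex); m[i, j] = m[j, i] = 0.5
            gens.append(m)
            m = np.zeros((N, N), dtype=complex); m[i, j] = -0.5j; m[j, i] = 0.5j
            gens.append(m)
    for k in range(1, N):
        d = np.zeros(N); d[:k] = 1.0; d[k] = -k
        gens.append(np.diag(d / np.sqrt(2 * k * (k + 1))).astype(complex))
    return gens

N = 3
ts = su_n_generators(N)
lhs = sum(np.einsum('ab,cd->abcd', t, t) for t in ts)
I = np.eye(N)
rhs = 0.5 * (np.einsum('ad,bc->abcd', I, I) - np.einsum('ab,cd->abcd', I, I) / N)
print("max deviation:", np.abs(lhs - rhs).max())   # ~1e-16
```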
#### 4.2.2 Fermion index contraction The contraction is done with the chiral spinor structures: \(\left\{1\pm\gamma^{5},(1\pm\gamma^{5})\gamma^{\mu},(1\pm\gamma^{5})\gamma^{\mu\nu}\right\}\), since they also span the degrees of freedom quadratic in the fermions, and have simpler representations than their counterparts without chiral projections. The change from section 3.1 is the chiral projection of tensors. It creates some redundancy, as the tensor structure \(\gamma^{5}\gamma^{\mu\nu}\) can be expressed using the one without \(\gamma^{5}\), but this does not affect our results. In this part of the results we use the following identities for the \(\gamma\) matrices (to leading order in \(\epsilon\)): \[\gamma^{\mu}\gamma^{\nu}=\eta^{\mu\nu}+\gamma^{\mu\nu};\ \ \ \ \gamma^{5} \gamma^{\mu}\gamma^{\nu}=\eta^{\mu\nu}\gamma^{5}+\frac{i}{2}\epsilon^{\mu\nu \rho\sigma}\gamma_{\rho\sigma};\ \ \ \ \ \gamma^{\mu}\gamma^{\nu}\gamma^{\rho}=\eta^{\mu\nu}\gamma^{\rho}-\eta^{\mu\rho} \gamma^{\nu}+\eta^{\nu\rho}\gamma^{\mu}+i\epsilon^{\mu\nu\rho\sigma}\gamma^{5} \gamma_{\sigma};\] \[\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma}=\eta^{\mu \nu}\eta^{\rho\sigma}+\eta^{\nu\rho}\eta^{\mu\sigma}-\eta^{\mu\rho}\eta^{\nu \sigma}+\eta^{\mu\nu}\gamma^{\rho\sigma}+\eta^{\mu\rho}\gamma^{\sigma\nu}+\eta ^{\nu\rho}\gamma^{\mu\sigma}+\eta^{\rho\sigma}\gamma^{\rho\mu}+\eta^{\rho \sigma}\gamma^{\mu\nu}+i\epsilon^{\mu\nu\rho\sigma}\gamma^{5}. \tag{4.15}\] The identities (4.15) use the fact that in 4 dimensions, the algebra of \(\gamma\) matrices (indeed, all \(4\times 4\) matrices in the spinor indices) is spanned by \(\left\{1,\gamma^{5},\gamma^{\mu},\gamma^{5}\gamma^{\mu},\gamma^{\mu\nu}\right\}\). Chiral scalars.We contract the spinor matrices in each diagram with (non-normalized) chiral projection operators \(\left(1\pm\gamma^{5}\right)_{\alpha\dot{\alpha}}\left(1\pm\gamma^{5}\right)_{ \beta\dot{\beta}}\), corresponding to the symmetric square of each chiral single scalar meson. 
This gives the chiral-scalar analogues of the single-meson contractions of section 3.1.2 for the tree-level diagram and for diagrams I-IV. The same contraction has to be carried out with the tensor structure \(\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma}\otimes\left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma}\), with the space-time indices contracted between the two mesons, since the scalar and tensor bi-mesons mix. Diagram I: \[\left[\not{p}_{2}\gamma^{\mu}\gamma^{\nu}\right]_{B}\left[\gamma_{\nu}\gamma_{\mu}\not{p}_{3}\right]_{C}\rightarrow\left[\not{p}_{2}\gamma^{\mu}\gamma^{\nu}\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma}\gamma_{\nu}\gamma_{\mu}\not{p}_{3}\right]_{\delta\delta}=\] \[=\left[\not{p}_{2}\left(\eta^{\mu\nu}+\gamma^{\mu\nu}\right)\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma}\left(\eta_{\mu\nu}-\gamma_{\mu\nu}\right)\not{p}_{3}\right]_{\delta\delta}=\] \[=d\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma}\not{p}_{3}\right]_{\delta\delta}-\left[\not{p}_{2}\gamma^{\mu\nu}\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma}\gamma_{\mu\nu}\not{p}_{3}\right]_{\delta\delta}, \tag{4.22}\] and the second term here needs simplification.
First we note that \(\gamma^{\mu\nu}\gamma^{\rho\sigma}\) is the part of \(\gamma^{\mu\nu}\gamma^{\nu\rho}\gamma^{\sigma}\) which is antisymmetric in each of the swaps \(\mu\leftrightarrow\nu,\rho\leftrightarrow\sigma\), and so we have: \[\gamma^{\mu\nu}\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma}= \left(1\pm\gamma^{5}\right)\gamma^{\mu\nu}\gamma^{\rho\sigma}=\] \[=\left(1\pm\gamma^{5}\right)\left(\eta^{\nu\rho}\eta^{\mu\sigma}- \eta^{\mu\rho}\eta^{\nu\sigma}+\eta^{\mu\rho}\gamma^{\sigma\nu}+\eta^{\nu \rho}\gamma^{\mu\sigma}+\eta^{\mu\sigma}\gamma^{\nu\rho}+\eta^{\nu\sigma} \gamma^{\rho\mu}+i\epsilon^{\mu\nu\rho\sigma}\gamma^{5}\right), \tag{4.23}\] and similarly: \[\left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma}\gamma_{\mu\nu}=\left(1\pm \gamma^{5}\right)\left(\eta_{\sigma\mu}\eta_{\rho\nu}-\eta_{\rho\mu}\eta_{ \sigma\nu}+\eta_{\rho\mu}\gamma_{\nu\sigma}+\eta_{\rho\mu}\gamma_{\rho\nu}+ \eta_{\rho\nu}\gamma_{\sigma\mu}+\eta_{\sigma\nu}\gamma_{\mu\rho}+i\epsilon_ {\rho\sigma\mu\nu}\gamma^{5}\right). \tag{4.24}\] Putting it together, this second term is (for \(d=4\)): \[\left[\not{p}_{2}\gamma^{\mu\nu}\left(1\pm\gamma^{5}\right)\gamma ^{\rho\sigma}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm \gamma^{5}\right)\gamma_{\rho\sigma}\gamma_{\mu\nu}\not{p}_{3}\right]_{\delta \delta}=\] \[=\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\left(\eta^{\nu\rho }\eta^{\mu\sigma}-\eta^{\mu\rho}\eta^{\nu\sigma}+\eta^{\mu\rho}\gamma^{ \sigma\nu}+\eta^{\nu\rho}\gamma^{\mu\sigma}+\eta^{\mu\sigma}\gamma^{\nu\rho} +\eta^{\nu\sigma}\gamma^{\rho\mu}+i\epsilon^{\mu\nu\rho\sigma}\gamma^{5} \right)\not{p}_{1}\right]_{\gamma\gamma}\cdot\] \[\cdot\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\left(\eta_{ \sigma\mu}\eta_{\nu\sigma}+\eta_{\sigma\mu}\gamma_{\rho\nu}+\eta_{\rho\nu} \gamma_{\sigma\mu}+\eta_{\sigma\nu}\gamma_{\rho\mu}\right)\not{p}_{3}\right]_ {\delta\delta}-\] \[-\quad\epsilon^{\mu\nu\rho\sigma}\epsilon_{\rho\sigma\mu\nu} \left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{5}\not{p}_{1}\right]_{ \gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{5}\not{p}_{3 }\right]_{\delta\delta}=\] \[=24\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\not{p}_{1} \right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\not{p}_{3 }\right]_{\delta\delta}-8\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{ \sigma\nu}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^ {5}\right)\gamma_{\sigma\nu}\not{p}_{3}\right]_{\delta\delta}+\] \[+24\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{5}\not{p}_ {1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{5} \not{p}_{3}\right]_{\delta\delta}=\cdots \tag{4.25}\] Since \((1\pm\gamma^{5})\gamma^{5}=\pm(1\pm\gamma^{5})\), the last term here is \[24\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{5}\not{p}_{1}\right]_{ \gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{5}\not{p}_{3 }\right]_{\delta\delta}=24\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\not{p}_{1 }\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\not{p}_{3 }\right]_{\delta\delta}, \tag{4.26}\] so the total contribution of the diagram is: \[=12\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\sigma\nu}\not{p}_{1} \right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma_{ \sigma\nu}\not{p}_{3}\right]_{\delta\delta}-48\left[\not{p}_{2}\left(1\pm \gamma^{5}\right)\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm \gamma^{5}\right)\not{p}_{3}\right]_{\delta\delta}. 
\tag{4.27}\] Diagram II: \[\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left.\left. \not{p}_{2B}\not{p}_{C}\left[\not{p}\gamma^{\mu}\gamma^{\nu}\right]_{D} \left[\gamma_{\nu}\gamma_{\mu}\not{p}_{4}\right]_{A}\rightarrow\left[\not{p}_{2 }\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma}\gamma_{\nu}\gamma_{\mu}\not{p}_ {1}\right]_{\gamma\gamma}\left[\not{p}_{4}\gamma^{\mu}\gamma^{\nu}\gamma^{\nu} \left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma}\not{p}_{3}\right]_{\delta \delta}=\right.\right.\right.\right.\right.\] \[=\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma} \left(\eta^{\mu\nu}-\gamma^{\mu\nu}\right)\not{p}_{1}\right]_{\gamma\gamma} \left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma}\not{p}_{3} \right]_{\delta\delta}-48\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\not{p}_{1 }\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\not{p}_{3 }\right]_{\delta\delta}. \tag{4.28}\] Diagram III: \[=\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma} \left(\eta^{\mu\nu}-\gamma^{\mu\nu}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4} \left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma}\gamma_{\mu}\not{p}_{1}\right]_{ \gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma} \gamma^{\nu}\gamma^{\mu}\not{p}_{3}\right]_{\delta\delta}=\right.\] \[=4\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma} \left(\eta^{\mu\nu}-\gamma^{\mu\nu}\right)\not{p}_{1}\right]_{\gamma\gamma} \left[\not{p}_{4}\left(1\pm\gamma^{5}\right and the contribution of the diagram is \[12\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\sigma\nu}\not{p}_{1} \right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma_{\sigma \nu}\not{p}_{3}\right]_{\delta\delta}+48\left[\not{p}_{2}\left(1\pm\gamma^{5} \right)\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5} \right)\not{p}_{3}\right]_{\delta\delta}. \tag{4.31}\] Diagram IV similarly gives: \[\not{p}_{A}\not{p}_{C}\left[\not{p}\leftrightarrow^{\mu}\gamma^{ \nu}\right]_{D}\left[\not{p}\leftrightarrow_{\mu}\gamma_{\nu}\right]_{B} \rightarrow\left[\not{p}_{2}\gamma_{\mu}\gamma_{\nu}\left(1\pm\gamma^{5} \right)\gamma^{\rho\sigma}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4} \gamma^{\mu}\gamma^{\nu}\left(1\pm\gamma^{5}\right)\gamma_{\rho\sigma}\not{p}_ {3}\right]_{\delta\delta}=\] \[=\left[\not{p}_{2}\left(\eta^{\mu\nu}+\gamma^{\mu\nu}\right)\left( 1\pm\gamma^{5}\right)\gamma^{\rho\sigma}\not{p}_{1}\right]_{\gamma\gamma} \left[\not{p}_{4}\left(\eta_{\mu\nu}+\gamma_{\mu\nu}\right)\left(1\pm\gamma^{5 }\right)\gamma_{\rho\sigma}\not{p}_{3}\right]_{\delta\delta}=\] \[=\cdots=12\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{ \sigma\nu}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5 }\right)\gamma_{\sigma\nu}\not{p}_{3}\right]_{\delta\delta}+48\left[\not{p}_{2} \left(1\pm\gamma^{5}\right)\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4} \left(1\pm\gamma^{5}\right)\not{p}_{3}\right]_{\delta\delta}. \tag{4.32}\] Chiral vectors.Now we contract the amplitudes with the chiral vector spinor structure: \(\left[\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\right]_{\alpha\dot{\alpha}} \left[\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\right]_{\beta\dot{\beta}}\), where \(\rho,\sigma\) are for now independent. Note that in this case, the combination of the two permutations generally does not yield the original configuration, because of the contraction with possibly different \(\gamma\) matrices. 
Thus, it is necessary to explicitly symmetrize by both permutations \(\psi_{1}\leftrightarrow\psi_{3}\) and \(\overline{\psi}_{2}\leftrightarrow\overline{\psi}_{4}\), which is equivalent to symmetrizing by the permutations \(\psi_{1}\leftrightarrow\psi_{3}\) and \(\gamma^{\rho}\leftrightarrow\gamma^{\sigma}\). It follows that we can keep treating only the two branches we discussed before, but impose that the indices will eventually need to be symmetric. Tree level: \[\not{p}_{\alpha\dot{\alpha}}\not{p}_{\gamma\dot{\alpha}}\not{p}_{\delta\dot{ \beta}}\not{p}_{\delta\dot{\beta}}\not{p}_{\delta\dot{\beta}}\to\left[\not{p} _{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{p}_{1}\right]_{\gamma\gamma} \left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\not{p}_{3}\right]_ {\delta\delta}. \tag{4.33}\] Diagram I: \[\not{p}_{A}\not{p}_{D}\left[\not{p}\leftrightarrow^{\mu}\gamma^{ \nu}\right]_{B}\left[\gamma_{\nu}\gamma_{\mu}\not{p}\right]_{C} \rightarrow\left[\not{p}\leftrightarrow^{\mu}\gamma^{\nu}\left(1\pm \gamma^{5}\right)\gamma^{\rho}\not{p}\right]_{\gamma\gamma}\left[\not{p} \not{q}\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\gamma_{\nu}\gamma_{\mu} \not{p}\not{g}\right]_{\delta\delta}=\] \[=\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right)\gamma^{\mu}\gamma ^{\nu}\gamma^{\rho}\not{p}\right]_{\gamma\dot{\gamma}}\left[\not{p}\not{q} \left(1\pm\gamma^{5}\right)\gamma^{\sigma}\gamma_{\nu}\not{p}\not{g}\right]_{ \delta\delta}=\cdots \tag{4.34}\] Now we simplify using the formula (4.15) for the product of \(3\)\(\gamma\) matrices to get: \[\cdots =\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right)\left(\eta^{\mu \nu}\gamma^{\rho}-\eta^{\mu\rho}\gamma^{\nu}+\eta^{\nu\rho}\gamma^{\mu}+i \gamma^{5}\gamma_{\nu}\epsilon^{\mu\nu}\right)\not{p}\not{g}\right]_{\gamma \gamma}\cdot\] \[\cdot\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right)\left(\delta^{ \sigma}_{\nu}\gamma_{\mu}-\delta^{\sigma}_{\mu}\gamma_{\nu}+\eta_{\nu\mu}\gamma^ {\sigma}+i\gamma^{5}\gamma_{\nu}\epsilon^{\sigma}_{\nu\mu}\right)\not{p} \not{g}\right]_{\delta\delta}=\] \[=\eta^{\mu\sigma}\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right) \gamma^{\rho}\not{p}\right]_{\gamma\gamma}\left[\not{p}\not{q}\left(1\pm\gamma^ {5}\right)\gamma_{\mu}\not{g}\right]_{\delta\delta}-\eta^{\sigma\nu}\left[ \not{p}\not{q}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{p}\right]_{\gamma \gamma}\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right)\gamma_{\nu}\not{p} \not{g}\right]_{\delta\delta}+\] \[+d\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\not {p}\right]_{\gamma\gamma}\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right)\gamma^{ \sigma}\not{p}\right]_{\delta\delta}-\left[\not{p}\not{q}\left(1\pm\gamma^{5} \right)\gamma^{\sigma}\not{p}\right]_{\gamma\gamma}\left[\not{p}\not{q}\left(1 \pm\gamma^{5}\right)\gamma^{\sigma}\not{p}\right]_{\delta\delta}+\] \[+\eta^{\sigma\rho}\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right) \gamma^{\mu}\not{p}\right]_{\gamma\gamma}\left[\not{p}\not{q}\left(1\pm\gamma^{5} \right)\gamma_{\mu}\not{p}\not{g}\right]_{\delta\delta}-\left[\not{p}\not{q} \left(1\pm\gamma^{5}\right)\gamma^{\sigma}\not{p}\right]_{\gamma\gamma}\left[ \not{p}\not{q}\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\not{p}\right]_{\delta \delta}+\] \[+\delta^{\rho}_{\mu}\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right) \gamma^{\mu}\not{p}\right]_{\gamma\gamma}\left[\not{p}\not{q}\left(1\pm\gamma^{5} \right)\gamma^{\sigma}\not{p}\right]_{\delta\delta}\pm 2i\epsilon^{\alpha\rho\tau}\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right) 
\gamma^{\varsigma}\gamma_{\nu}\not{p}\not{g}\right]_{\gamma\gamma}\left[\not{p} \not{q}\left(1\pm\gamma^{5}\right)\gamma_{\nu}\not{p}\not{g}\right]_{\delta \delta}+\] \[+2i\epsilon^{\sigma\rho\omega}_{\mu}\left[\not{p}\not{q}\left(1\pm\gamma^{5} \right)\gamma^{\mu}\not{p}\not{q}\right]_{\gamma\gamma}\left[\not{p}\not{q} \left(1\pm\gamma^{5}\right)\gamma^{\varsigma}\gamma_{\nu}\not{p}\not{g}\right]_{ \delta\delta}+\epsilon^{\mu\nu\rho\tau}\epsilon^{\sigma\omega}_{\mu\nu}\left[ \not{p}\not{q}\left(1\pm\gamma^{5}\right)\gamma^{\varsigma}\gamma_{\nu}\not{p} \not{g}\right]_{\gamma\gamma}\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right) \gamma^{\varsigma}\gamma_{\nu}\not{p}\not{g}\right]_{\delta\delta}\stackrel{{ d\simeq 4}}{{\approx}}\] \[\approx 4\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\not {p}\right]_{\gamma\gamma}\left[\not{p}\not{q}\left(1\pm\gamma^{5}\right) \gamma^{\sigma}\not{p}\right]_{\delta\delta}. \tag{4.35}\] Diagram II: \[\not{p}_{B}\not{p}_{B}\not{p}_{C}\left[\not{p}\leftrightarrow^{\mu}\gamma^{\nu} \right]_{D}\left[\gamma_{\nu}\gamma_{\mu}\not{p}\right]_{A} \rightarrow\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\gamma_{ \ Diagram III: \[\begin{array}{l}\mathpzc{p}\mathpzc{L}_{B}\mathpzc{p}_{D}\left[\gamma^{\nu} \gamma^{\mu}\mathpzc{p}\right]_{C}\left[\gamma_{\nu}\gamma_{\mu}\mathpzc{p} \right]_{A}\rightarrow\left[\mathpzc{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{ \rho}\gamma^{\nu}\gamma^{\mu}\mathpzc{p}_{1}\right]_{\gamma\gamma}\left[ \mathpzc{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\gamma_{\nu}\gamma_{ \mu}\mathpzc{p}_{3}\right]_{\delta\delta}=\\ \\ =\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\left(\eta^{\mu\nu} \gamma^{\rho}-\eta^{\rho\mu}\gamma^{\nu}+\eta^{\nu\rho}\gamma^{\mu}+i\gamma^{5} \gamma_{\tau}\epsilon^{\mu\nu\rho\tau}\right)\mathpzc{p}_{1}\right]_{\gamma \gamma}\left[\mathpzc{p}_{4}\left(1\pm\gamma^{5}\right)\left(\delta_{\nu}^{ \sigma}\gamma_{\mu}-\delta_{\mu}^{\sigma}\gamma_{\nu}+\eta_{\nu\mu}\gamma^{ \sigma}+i\gamma^{5}\gamma_{\omega}\epsilon^{\sigma}_{\nu\mu}\right)\mathpzc{p} _{3}\right]_{\delta\delta}=\\ \\ =\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\left(\eta^{\mu\nu} \gamma^{\rho}-\eta^{\mu\rho}\gamma^{\nu}+\eta^{\nu\rho}\gamma^{\mu}-i\gamma^{5 }\gamma_{\tau}\epsilon^{\mu\nu\rho\tau}\right)\mathpzc{p}^{\mathcal{G}} \right]_{\gamma\gamma}\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right) \left(\delta_{\nu}^{\sigma}\gamma_{\mu}-\delta_{\mu}^{\sigma}\gamma_{\nu}+\eta _{\nu\mu}\gamma^{\sigma}+i\gamma^{5}\gamma_{\omega}\epsilon^{\sigma}_{\nu\mu} \right)\mathpzc{p}\mathpzc{G}\right]_{\delta\delta}=\cdots\end{array} \tag{4.38}\] This time only the first \(\gamma^{5}\gamma_{\tau}\epsilon^{\mu\nu\rho\tau}\) gets a \((-)\) sign, so we get: \[\cdots\approx 4\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5} \right)\gamma^{\rho}\mathpzc{p}\right]_{\gamma\gamma}\left[\mathpzc{p} \mathpzc{L}\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\mathpzc{p}\mathpzc{G} \right]_{\delta\delta}-2\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5} \right)\gamma^{\sigma}\mathpzc{p}\mathpzc{G}\right]_{\gamma\gamma}\left[ \mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\mathpzc{p} \mathpzc{G}\right]_{\delta\delta}+ \tag{4.39}\] \[+4\eta^{\sigma\rho}\left[\mathpzc{p}\mathpzc{L}\left(1\pm \gamma^{5}\right)\gamma^{\sigma}\mathpzc{p}\right]_{\gamma\gamma}\left[ \mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\gamma_{\mu}\mathpzc{p} \mathpzc{G}\right]_{\delta\delta}+4i\epsilon^{\mu\sigma\rho\tau}\left[ 
\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\gamma_{\tau}\mathpzc{p} \mathpzc{G}\right]_{\gamma\gamma}\left[\mathpzc{p}\mathpzc{L}\left(1\pm \gamma^{5}\right)\gamma_{\mu}\mathpzc{p}\mathpzc{G}\right]_{\delta\delta}.\] Diagram IV: \[\mathpzc{p}\mathpzc{L}\mathpzc{p}\mathpzc{L}\mathpzc{p}\mathpzc{L} \left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\gamma^{\mu} \mathpzc{p}\right]_{\gamma\gamma}\left[\mathpzc{p}\mathpzc{L}\left(1\pm \gamma^{5}\right)\gamma^{\sigma}\mathpzc{p}_{3}\right]_{\delta\delta}=\] \[=\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\gamma _{\mu}\gamma_{\nu}\gamma^{\rho}\mathpzc{p}\right]_{\gamma\gamma}\left[ \mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\gamma^{\mu}\gamma^{\nu} \gamma^{\sigma}\mathpzc{p}_{3}\right]_{\delta\delta}=\] \[=\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\left( \eta^{\mu\nu}\gamma^{\rho}-\eta^{\mu\rho}\gamma^{\nu}+\eta^{\nu\rho}\gamma^{\mu}+i \gamma^{5}\gamma_{\tau}\epsilon^{\mu\nu\rho\tau}\right)\mathpzc{p}\right]_{ \gamma\gamma}\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\left( \delta_{\nu}^{\sigma}\gamma_{\mu}-\delta_{\mu}^{\sigma}\gamma_{\nu}+\eta_{\nu \mu}\gamma^{\sigma}-i\gamma^{5}\gamma_{\omega}\epsilon^{\sigma}_{\nu\mu} \right)\mathpzc{p}\mathpzc{G}\right]_{\delta\delta}=\cdots=\] \[=4\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\gamma^{ \sigma}\mathpzc{p}\mathpzc{G}\right]_{\gamma\gamma}\left[\mathpzc{p}\mathpzc{ L}\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\mathpzc{p}\right]_{\delta\delta}-4\left[ \mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\mathpzc{p} \mathpzc{G}\right]_{\gamma\gamma}\left[\mathpzc{p}\mathpzc{L}\left(1\pm \gamma^{5}\right)\gamma^{\rho}\mathpzc{G}\right]_{\delta\delta}\] \[+4\eta^{\sigma\rho}\left[\mathpzc{p}\mathpzc{L}\left(1\pm \gamma^{5}\right)\gamma^{\mu}\mathpzc{p}\mathpzc{G}\right]_{\gamma\gamma} \left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\gamma_{\mu}\mathpzc{p} \mathpzc{G}\right]_{\delta\delta}\pm 4i\epsilon^{\mu\sigma\rho\tau}\left[\mathpzc{p}\mathpzc{L}\left(1\pm \gamma^{5}\right)\gamma_{\tau}\mathpzc{p}\mathpzc{G}\right]_{\gamma\gamma} \left[\mathpzc{p}\mathpzc{G}\left(1\pm\gamma^{5}\right)\gamma_{\mu}\mathpzc{p} \mathpzc{G}\right]_{\delta\delta}. \tag{4.40}\] #### 4.2.3 Resummation Chiral scalar bi-meson Intra-meson contributionsThe correlation function for each branch (permuted or unpermuted) consists of \[\text{Tree level}+4\cdot\text{prop. corr.}+2\cdot\text{gluon exchange}, \tag{4.41}\] which becomes \[\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left[1+\frac{4g^{2}C_{F}}{(4\pi)^{ 2}\epsilon}\left(p_{E}^{2}\right)^{-\epsilon}\right]\left\{\delta_{ab}\delta_{ cd}\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\mathpzc{p} \mathpzc{G}\right]_{\gamma\gamma}\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5} \right)\mathpzc{p}\mathpzc{G}\right]_{\gamma\delta}-\delta_{ad}\delta_{bc} \left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\mathpzc{p}\mathpzc{G} \right]_{\gamma\delta}\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5} \right)\mathpzc{p}\mathpzc{G}\right]_{\delta\delta}\right\}. 
\tag{4.42}\] Inter-meson contributionsFirst, we look at the one-loop unpermuted contribution, and divide it into convenient building blocks: * Spinor structure: we have from diagrams I,II \[4\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\mathpzc{p}\mathpzc{G} \right]_{\gamma\gamma}\left[\mathpzc{p}\mathpzc{G}\left(1\pm\gamma^{5}\right) \mathpzc{p}\mathpzc{G}\right]_{\delta\delta}-2\left[\mathpzc{p}\mathpzc{p} \mathpzc{L}\gamma^{\mu\nu}\mathpzc{p}\mathpzc{G}\right]_{\gamma\gamma}\left[ \mathpzc{p}\mathpzc{G}\gamma\mu\nu\mathpzc{p}\mathpzc{G}\right]_{\gamma\gamma} \left[\mathpzc{p}\mathpzc{G}\gamma^{\rho\sigma}\mathpzc{G}\right]_{\delta \delta\delta},\] (4.43) and from diagrams III,IV \[4\left[\mathpzc{p}\mathpzc{L}\left(1\pm\gamma^{5}\right)\mathpzc{p}\mathpzc{G} \right]_{\gamma\gamma}\left[\mathpzc{p}\mathpzc{G}\left(1\pm\gamma^{5} \right)\mathpzc{p}\mathpzc{G}\right]_{\delta\delta}+2\left[\mathpzc{p} \mathpzc{L}\gamma^{\mu\nu}\mathpzc{p}\mathpzc{G}\right]_{\gamma\gamma} \left[\mathpzc{p}\mathpzc{G}\gamma\mu\nu\mathpzc{p}\mathpzc{G}\right]_{\delta \delta\delta}\pm i\epsilon_{\mu\nu\rho\tau}\left[\mathpzc{p}\mathpzc{G} \gamma^{\mu\nu}\mathpzc{p}\mathpzc{G}\right]_{\gamma\gamma}\left[\mathpzc{p} \mathpzc{G}\gamma^{\rho\sigma}\mathpzc{G}\right]_{\delta\delta}.\] (4.44) * All 4 diagrams have the same color structure \(\frac{1}{2}\delta_{ad}\delta_{bc}\). The remaining coefficient is \[\pm\frac{g^{2}}{4\left(4\pi\right)^{2}}\frac{\left(p_{E}^{2}\right)^{-\epsilon}}{ \epsilon}\frac{\left(p_{E}^{2}\right)^{ Summing these 4 diagrams gives: \[-\delta_{ad}\delta_{bc}\frac{g^{2}}{\left(4\pi\right)^{2}}\epsilon\frac{\left(p_{E }^{2}\right)^{-\epsilon}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left(\left[p \not{\varepsilon}\gamma^{\mu\nu}p\right]_{\gamma\uparrow}\left[p\not{ \varepsilon}\gamma_{\mu\nu}p\not{\varepsilon}\right]_{\delta\delta}\pm\frac{i}{ 2}\epsilon_{\mu\nu\rho\sigma}\left[p\not{\varepsilon}\gamma^{\mu\nu}p\not{ \varepsilon}\right]_{\gamma\uparrow}\left[p\not{\varepsilon}\gamma^{\rho \sigma}p\not{\varepsilon}\right]_{\delta\delta}\right)=\cdots \tag{4.46}\] We see that this is a linear combination of the structures we had in 2-tensor bi-meson operators. We will show that it is in fact proportional to the structure appearing in the respective chiral 2-tensor operators, and find the proportionality constant. First we note that: \[\gamma^{5}\gamma^{\mu\nu}=\frac{i}{2}\epsilon^{\mu\nu\tau\omega}\gamma_{\tau \omega}\Rightarrow\left(1\pm\gamma^{5}\right)\gamma^{\mu\nu}=\gamma^{\mu\nu} \pm\frac{i}{2}\epsilon^{\mu\nu\tau\omega}\gamma_{\tau\omega}\] \[\Rightarrow\left[\left(1\pm\gamma^{5}\right)\gamma^{\mu\nu}\right]_{\alpha \dot{\alpha}}\left[\left(1\pm\gamma^{5}\right)\gamma^{\rho\sigma}\right]_{ \beta\dot{\beta}}=\left[\gamma^{\mu\nu}\pm\frac{i}{2}\epsilon^{\mu\nu\tau \omega}\gamma_{\tau\omega}\right]_{\alpha\dot{\alpha}}\left[\gamma^{\rho \sigma}\pm\frac{i}{2}\epsilon^{\rho\sigma\kappa\lambda}\gamma_{\kappa\lambda }\right]_{\beta\dot{\beta}}. 
\tag{4.47}\] Contracting this with \(\eta_{\mu\rho}\eta_{\nu\sigma}\) then gives \[\left[\gamma^{\mu\nu}\pm\frac{i}{2}\epsilon^{\mu\nu\tau\omega}\gamma_{\tau\omega}\right]_{\alpha\dot{\alpha}}\left[\gamma_{\mu\nu}\pm\frac{i}{2}\epsilon_{\mu\nu}^{\;\;\kappa\lambda}\gamma_{\kappa\lambda}\right]_{\beta\dot{\beta}}=\] \[=\left[\gamma^{\mu\nu}\right]_{\alpha\dot{\alpha}}\left[\gamma_{\mu\nu}\right]_{\beta\dot{\beta}}\pm\frac{i}{2}\epsilon^{\mu\nu\tau\omega}\left[\gamma_{\tau\omega}\right]_{\alpha\dot{\alpha}}\left[\gamma_{\mu\nu}\right]_{\beta\dot{\beta}}\pm\frac{i}{2}\epsilon_{\mu\nu}^{\;\;\kappa\lambda}\left[\gamma^{\mu\nu}\right]_{\alpha\dot{\alpha}}\left[\gamma_{\kappa\lambda}\right]_{\beta\dot{\beta}}-\frac{1}{4}\underbrace{\epsilon^{\mu\nu\tau\omega}\epsilon_{\mu\nu}^{\;\;\kappa\lambda}}_{-2\left(\eta^{\tau\kappa}\eta^{\omega\lambda}-\eta^{\tau\lambda}\eta^{\omega\kappa}\right)}\left[\gamma_{\tau\omega}\right]_{\alpha\dot{\alpha}}\left[\gamma_{\kappa\lambda}\right]_{\beta\dot{\beta}}=\] \[=2\left(\left[\gamma^{\mu\nu}\right]_{\alpha\dot{\alpha}}\left[\gamma_{\mu\nu}\right]_{\beta\dot{\beta}}\pm\frac{i}{2}\epsilon_{\mu\nu\rho\sigma}\left[\gamma^{\mu\nu}\right]_{\alpha\dot{\alpha}}\left[\gamma^{\rho\sigma}\right]_{\beta\dot{\beta}}\right),\] which is twice the tensor structure appearing in (4.46), fixing the proportionality constant.
Everything is as in that case, except for the spinor structure, which is \[12\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\mu\nu}\not{p}_{1}\right]_{ \gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma_{\mu\nu}\not{p} _{3}\right]_{\delta\delta}-48\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\not {p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\not {p}_{3}\right]_{\delta\delta} \tag{4.53}\] for diagrams I,II and \[12\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\mu\nu}\not{p}_{1} \right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma_{\mu \nu}\not{p}_{3}\right]_{\delta\delta}+48\left[\not{p}_{2}\left(1\pm\gamma^{5} \right)\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5} \right)\not{p}_{3}\right]_{\delta\delta} \tag{4.54}\] for diagrams III,IV. Summing them gives: \[=-\frac{24g^{2}}{\left(4\pi\right)^{2}\epsilon}\frac{\left(p_{E}^{2}\right)^{ -\epsilon}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\delta_{ad}\delta_{bc}\left[ \not{p}_{2}\left(1\pm\gamma^{5}\right)\not{p}_{1}\right]_{\gamma\gamma}\left[ \not{p}_{4}\left(1\pm\gamma^{5}\right)\not{p}_{3}\right]_{\delta\delta}, \tag{4.55}\] In total, the correlation function is: \[\left\langle M_{tt}^{\pm}(p)\bar{\psi}(p_{1})\psi(p_{2})\bar{ \psi}(p_{3})\psi(p_{4})\right\rangle=\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^ {2}}\left\{1-\frac{4g^{2}C_{F}}{\left(4\pi\right)^{2}\epsilon}\left(p_{E}^{2} \right)^{-\epsilon}\right\}.\] \[-\frac{24g^{2}}{\left(4\pi\right)^{2}\epsilon}\frac{\left(p_{E}^ {2}\right)^{-\epsilon}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\delta_{ad}\delta _{bc}\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\not{p}_{1}\right]_{\gamma \gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\not{p}_{3}\right]_{\delta \delta}+\frac{24g^{2}}{\left(4\pi\right)^{2}\epsilon}\frac{\left(p_{E}^{2} \right)^{-\epsilon}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\delta_{ab}\delta_{ cd}\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\not{p}_{3}\right]_{\gamma \delta}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\not{p}_{1}\right]_{\delta \gamma}. \tag{4.56}\] After deduction of the fermion renormalization correction this becomes: \[\left\langle M_{tt}^{\pm}(p)\left[\bar{\psi}\right](p_{1})\left[ \psi\right](p_{2})\left[\bar{\psi}\right](p_{3})\left[\psi\right](p_{4})\right\rangle =\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left\{1-\frac{2g^{2}C_{F}}{ \left(4\pi\right)^{2}\epsilon}\left(p_{E}^{2}\right)^{-\epsilon}\right\}.\] \[\cdot\left\{\delta_{ab}\delta_{cd}\left[\not{p}_{2}\left(1\pm \gamma^{5}\right)\gamma^{\mu\nu}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_ {4}\left(1\pm\gamma^{5}\right)\gamma_{\mu\nu}\not{p}_{3}\right]_{\delta\delta}- \delta_{ad}\delta_{bc}\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\mu \nu}\not{p}_{3}\right]_{\gamma\delta}\left[\not{p}_{4}\left(1\pm\gamma^{5} \right)\gamma_{\mu\nu}\not{p}_{4}\right]_{\delta\gamma}\right\}-\] \[-\frac{24g^{2}}{\left(4\pi\right)^{2}\epsilon}\frac{\left(p_{E}^{2 }\right)^{-\epsilon}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\delta_{ad}\delta_{ bc}\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\not{p}_{1}\right]_{ \gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\not{p}_{3}\right]_{ \delta\delta}+\frac{24g^{2}}{\left(4\pi\right)^{2}\epsilon}\frac{\left(p_{E}^{2 }\right)^{-\epsilon}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\delta_{ab}\delta_{ cd}\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\not{p}_{3}\right]_{\gamma \delta}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\not{p}_{1}\right]_{ \delta\gamma}. 
\tag{4.57}\] Chiral vector bi-meson Intra-meson contributionsIn this case, there is no single meson contribution to the renormalization function. The amplitude (already corrected for the external fermion renormalization) is just the tree level: \[\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left\{\delta_{ab}\delta_{cd} \left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{p}_{1}\right]_{ \gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{p}_ {3}\right]_{\delta\delta}-\delta_{ad}\delta_{bc}\left[\not{p}_{2}\left(1\pm \gamma^{5}\right)\gamma^{\rho}\not{p}_{3}\right]_{\gamma\delta}\left[\not{p}_{4} \left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{p}_{4}\right]_{\delta\gamma}\right\}. \tag{4.58}\] Inter-meson contributionsAgain, everything is the same as in the scalar case, except for the spinor structure. This structure is: \[4\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{p}_{4}\right]_{ \gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\not{p} \right]_{\delta\delta} \tag{4.59}\] for diagrams I,II, \[4\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{p}_{4} \right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{ \sigma}\not{p}_{4}\right]_{\delta\delta}-4\left[\not{p}_{2}\left(1\pm\gamma^{5} \right)\gamma^{\sigma}\not{p}_{4}\right]_{\gamma\gamma}\left[\not{p}_{4} \left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{p}_{3}\right]_{\delta\delta}\] \[+4\eta^{\sigma\rho}\left[\not{p}_{2}\left(1\pm\gamma^{5}\right) \gamma^{\mu}\not{p}_{4}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5 }\right)\gamma_{\mu\nu}\not{p}_{4}\right]_{\delta\delta}\right. \tag{4.60}\] for diagram III and \[4\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\not{p}_{4}\right]_{ \gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\sigma}\not{p} _{4}\right]_{\delta\delta}-4\left[\not{p}_{2}\left(1\pm\gamma^{5}\right) \gamma^{\sigma}\not{p}_{4}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm \gamma^{5}\right)\gamma^{\rho}\not{p}_{4}\right]_{\delta\delta}\] \[+4\eta^{\sigma\rho}\left[\not{p}_{2}\left(1\pm\gamma^{5}\right) \gamma^{\mu}\not{p}_{4}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm \gamma^{5}\right)\gamma_{\mu\nu}\not{p}_{4}\right]_{\delta\delta} \tag{4.61}\] for diagram IV. 
Summing them gives: \[\frac{g^{2}}{\left(4\pi\right)^{2}\epsilon}\delta_{ad}\delta_{bc}\frac{\left(p_{E }^{2}\right)^{-\epsilon}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left\{\left[\not{p}_{ In total, the (partially renormalized) correlation function is: \[\left\langle M_{vv}^{\pm\mu\nu}(p)\left[\bar{\psi}\right](p_{1})\left[ \psi\right](p_{2})\left[\bar{\psi}\right](p_{3})\left[\psi\right](p_{4})\right\rangle =\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\] \[+\frac{g^{2}}{(4\pi)^{2}}\frac{\left(p_{E}^{2}\right)^{-\epsilon} }{\epsilon}\frac{p_{E}^{2}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\delta_{ad} \delta_{bc}\left\{\left[\not{p}\mathcal{z}\left(1\pm\gamma^{5}\right)\gamma^{ \nu}\not{p}\not{s}\right]_{\gamma\delta}\left[\not{p}\mathcal{z}\left(1\pm \gamma^{5}\right)\gamma^{\nu}\not{p}\not{s}\right]_{\delta\delta}-\eta^{\mu\nu} \left[\not{p}\mathcal{z}\left(1\pm\gamma^{5}\right)\gamma^{\nu}\not{p}\not{s }\right]_{\gamma\delta}\left[\not{p}\mathcal{z}\left(1\pm\gamma^{5}\right) \gamma_{\rho}\not{p}\not{s}\right]_{\delta\delta}\right\}-\] \[-\frac{g^{2}}{(4\pi)^{2}}\frac{\left(p_{E}^{2}\right)^{-\epsilon} }{\epsilon}\frac{\left(p_{E}^{2}\right)^{-\epsilon}}{p_{1}^{2}p_{2}^{2}p_{3}^ {2}p_{4}^{2}}\delta_{ab}\delta_{cd}\left\{\left[\not{p}\mathcal{z}\left(1\pm \gamma^{5}\right)\gamma^{\nu}\not{p}\not{s}\right]_{\gamma\delta}\left[\not{p} \mathcal{z}\left(1\pm\gamma^{5}\right)\gamma^{\nu}\not{p}\not{s}\right]_{ \delta\gamma}-\eta^{\mu\nu}\left[\not{p}\mathcal{z}\left(1\pm\gamma^{5}\right) \gamma^{\nu}\not{p}\not{s}\right]_{\gamma\delta}\left[\not{p}\mathcal{z}\left( 1\pm\gamma^{5}\right)\gamma_{\rho}\not{p}\not{s}\right]_{\delta\gamma}\right\}. \tag{4.63}\] Separation into irreps.The vectors bi-meson with free indices is a Lorentz 2-tensor, and as such can be decomposed into 3 irreps.: _Trace (scalar)._ For this we contract the indices above with \(\eta_{\mu\nu}\), giving: \[\left\langle M_{vv}^{\pm,tr}(p)\left[\bar{\psi}\right](p_{1})\left[\psi \right](p_{2})\left[\bar{\psi}\right](p_{3})\left[\psi\right](p_{4})\right\rangle=\] \[=\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left[\not{p} \mathcal{z}\left(1\pm\gamma^{5}\right)\gamma^{\mu}\not{p}\not{s}\right]_{ \gamma\delta}\left\{\delta_{ab}\delta_{cd}-3\delta_{ad}\delta_{bc}\frac{g^{2} \left(p_{E}^{2}\right)^{-\epsilon}}{\left(4\pi\right)^{2}\epsilon}\right\}-\] \[-\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left[\not{p} \mathcal{z}\left(1\pm\gamma^{5}\right)\gamma^{\mu}\not{p}\not{s}\right]_{ \gamma\delta}\left[\not{p}\mathcal{z}\left(1\pm\gamma^{5}\right)\gamma_{\mu} \not{p}\not{s}\right]_{\delta\gamma}\left\{\delta_{ad}\delta_{bc}-3\delta_{ ab}\delta_{cd}\frac{g^{2}\left(p_{E}^{2}\right)^{-\epsilon}}{\left(4\pi\right)^{2} \epsilon}\right\}. \tag{4.64}\] _Traceless symmetric._ We denote this irrep. by \(\left(\mu\nu\right)\), and project on it by taking \[\left(\cdots\right)_{\left(\mu\nu\right)}=\frac{1}{2}\left[\left(\cdots\right) _{\mu\nu}+\left(\cdots\right)_{\nu\mu}\right]-\frac{1}{d}\eta_{\mu\nu}\left( \cdots\right)_{\rho}^{\rho}. 
\tag{4.65}\] Under this operation, the last term in each correction (the one \(\propto\eta^{\mu\nu}\)) vanishes, and the expressions with \[\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\mu}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\nu}\not{p}_{3}\right]_{\delta\delta},\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\nu}\not{p}_{1}\right]_{\gamma\gamma}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\mu}\not{p}_{3}\right]_{\delta\delta}\] are identified (and similarly with the permuted terms). Thus, we get: \[\left\langle M_{vv}^{\pm(\mu\nu)}(p)\left[\bar{\psi}\right](p_{1})\left[\psi\right](p_{2})\left[\bar{\psi}\right](p_{3})\left[\psi\right](p_{4})\right\rangle=\] \[-\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left[\not{p}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\mu}\not{p}_{3}\right]_{\gamma\delta}\left[\not{p}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\nu}\not{p}_{1}\right]_{\delta\gamma}\left\{\delta_{ad}\delta_{bc}+\delta_{ab}\delta_{cd}\frac{g^{2}\left(p_{E}^{2}\right)^{-\epsilon}}{\left(4\pi\right)^{2}\epsilon}\right\}. \tag{4.66}\] The third representation is the antisymmetric one, but as mentioned above, the permutation of both the fermions and the antifermions is equivalent to that of the Lorentz indices. Therefore, the antisymmetric representation is orthogonal to the symmetrized bi-meson operator and need not be considered here. #### 4.2.4 Fierz identities Some of the quantities involve a combination of color and spinor structures that does not enable directly comparing them for extraction of the renormalization function. We can remedy this using the Fierz identities. We use the identities provided in [16]. We need to account for the difference in normalization used here, relative to [16]: \[\sigma^{\mu\nu}=i\gamma^{\mu\nu};\hskip 14.226378ptP_{R,L}=\frac{1}{2}\left(1\pm\gamma^{5}\right). \tag{4.67}\] Note that the chiral projection operator appears in all terms equally and so its relative normalization always cancels out. **Scalars and tensors** We get for the scalars: \[\cdots\]
Inserting this back into (4.73) gives: \[\left[\mathpzc{p}\mathpzc{z}\left(1\pm\gamma^{5}\right)\gamma^{\mu} \mathpzc{p}\mathpzc{p}\right]_{\gamma\delta}\left[\mathpzc{p}\mathpzc{z} \left(1\pm\gamma^{5}\right)\gamma^{\nu}\mathpzc{p}\mathpzc{z}\right]_{\delta \dot{\gamma}}=\] \[=\frac{1}{2}\left[\mathpzc{y}_{2}\left(1\pm\gamma^{5}\right) \gamma^{\nu}\mathpzc{p}\mathpzc{y}_{1}\right]_{\gamma\dot{\gamma}}\left[ \mathpzc{y}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\mu}\mathpzc{y}_{3}\right]_ {\delta\dot{\delta}}+\frac{1}{2}\left[\mathpzc{y}_{2}\left(1\pm\gamma^{5} \right)\gamma^{\nu}\mathpzc{p}\mathpzc{y}_{1}\right]_{\gamma\dot{\gamma}} \left[\mathpzc{y}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\nu}\mathpzc{y}_{3} \right]_{\delta\dot{\delta}}+\] \[-\frac{1}{2}\eta^{\mu\nu}\left[\mathpzc{y}_{2}\left(1\pm\gamma^{5 }\right)\gamma^{\rho}\mathpzc{y}\mathpzc{y}_{1}\right]_{\gamma\dot{\gamma}} \left[\mathpzc{y}_{4}\left(1\pm\gamma^{5}\right)\gamma_{\rho}\mathpzc{y}_{3} \right]_{\delta\dot{\delta}}\mp\frac{i}{2}\epsilon_{\ \sigma\rho}^{\mu\ \nu}\left[ \mathpzc{y}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\rho}\mathpzc{y}_{1}\right] _{\gamma\dot{\gamma}}\left[\mathpzc{y}_{4}\left(1\pm\gamma^{5}\right)\gamma^{ \sigma}\mathpzc{y}\mathpzc{y}_{3}\right]_{\delta\dot{\delta}}. \tag{4.75}\] As a sanity check, we verify we can recover the contracted identity by contracting both sides with \(\eta_{\mu\nu}\): \[\left[\mathpzc{p}\mathpzc{z}\left(1\pm\gamma^{5}\right)\gamma_ {\mu}\mathpzc{p}\mathpzc{z}\right]_{\gamma\dot{\delta}}\left[\mathpzc{p} \mathpzc{z}\left(1\pm\gamma^{5}\right)\gamma^{\mu}\mathpzc{p}\mathpzc{z} \right]_{\delta\dot{\gamma}}=\] \[=\frac{1}{2}\left[\mathpzc{y}_{2}\left(1\pm\gamma^{5}\right) \gamma^{\mu}\mathpzc{y}_{1}\right]_{\gamma\dot{\gamma}}\left[\mathpzc{y}_{4} \left(1\pm\gamma^{5}\right)\gamma_{\mu}\mathpzc{y}_{3}\right]_{\delta\dot{ \delta}}\cdot 2-\frac{d}{2}\left[\mathpzc{y}_{2}\left(1\pm\gamma^{5}\right) \gamma^{\rho}\mathpzc{y}_{1}\right]_{\gamma\dot{\gamma}}\left[\mathpzc{y}_{4 }\left(1\pm\gamma^{5}\right)\gamma_{\rho}\mathpzc{y}_{3}\right]_{\delta\dot{ \delta}}\overset{d\to 4}{\approx}\] \[\approx-\left[\mathpzc{y}_{2}\left(1\pm\gamma^{5}\right)\gamma^{ \mu}\mathpzc{y}_{1}\right]_{\gamma\dot{\gamma}}\left[\mathpzc{y}_{4}\left(1 \pm\gamma^{5}\right)\gamma_{\mu}\mathpzc{y}_{3}\right]_{\delta\dot{\delta}}, \tag{4.76}\] which agrees with the literature. Trace irrep.The amplitude becomes \[=\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\delta_{ab}\delta_{cd}\left[ 1-\frac{3g^{2}\left(p_{E}^{2}\right)^{-\epsilon}}{\left(4\pi\right)^{2} \epsilon}\right]\left[\mathpzc{y}_{2}\left(1\pm\gamma^{5}\right)\gamma^{\mu} \mathpzc{y}_{1}\right]_{\gamma\dot{\gamma}}\left[\mathpzc{y}_{4}\left(1\pm \gamma^{5}\right)\gamma_{\mu}\mathpzc{y}_{3}\right]_{\delta\dot{\delta}}-\] \[-\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\delta_{ad}\delta_ {bc}\left[1-\frac{3g^{2}\left(p_{E}^{2}\right)^{-\epsilon}}{\left(4\pi\right)^ {2}\epsilon}\right]\left[\mathpzc{y}_{2}\left(1\pm\gamma^{5}\right)\gamma^{ \mu}\mathpzc{y}\mathpzc{z}\right]_{\gamma\dot{\delta}}\left[\mathpzc{y}_{ \mathcal{C}}\left(1\pm\gamma^{5}\right)\gamma_{\mu}\mathpzc{p}\mathpzc{z} \right]_{\delta\dot{\gamma}}. \tag{4.77}\] We can immediately see that the anomalous dimension of this bi-meson operator is positive, since as we saw for the scalar case, it has the opposite sign to the correction of the renormalization function in our regularization scheme. This supports the conjecture, as the single-meson anomalous dimension vanishes. 
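Stripped of the external \(\not{p}_{i}\) factors, the contracted identity (4.76) is the matrix-level chiral Fierz relation: exchanging the two column (dotted) indices of \(\left[\left(1\pm\gamma^{5}\right)\gamma^{\mu}\right]\otimes\left[\left(1\pm\gamma^{5}\right)\gamma_{\mu}\right]\) flips its sign. As an illustrative cross-check, not taken from the paper, this can be verified numerically in an explicit representation; the snippet below assumes the Dirac representation and signature \((+,-,-,-)\).

```python
import numpy as np

# Numerical check (illustrative only) of the matrix-level chiral Fierz identity
# behind eq. (4.76), in the Dirac representation with metric signature (+,-,-,-).
eta = np.diag([1.0, -1.0, -1.0, -1.0])
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g = [np.block([[I2, 0 * I2], [0 * I2, -I2]])] + \
    [np.block([[0 * I2, s], [-s, 0 * I2]]) for s in (sx, sy, sz)]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]

for sign in (+1, -1):
    V = np.array([(np.eye(4) + sign * g5) @ gm for gm in g])   # (1 +- gamma^5) gamma^mu
    unswapped = np.einsum('mn,mab,ncd->abcd', eta, V, V)        # [..]_{ab}[..]_{cd}, mu contracted
    swapped = np.einsum('mn,mad,ncb->abcd', eta, V, V)          # column indices b and d exchanged
    assert np.allclose(swapped, -unswapped)
print("matrix-level chiral Fierz identity behind (4.76) verified")
```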
Traceless symmetric irrep.First we need to adjust the identity (4.75): by substituting \(\mu\nu\to\left(\mu\nu\right)\) we get simply: \[\Rightarrow\left[\mathpzc{p}\mathpzc{z}\left(1\pm\gamma^{5}\right)\gamma^{ \left(\mu}\mathpzc{p}\mathpzc{s}\right]_{\gamma\dot{\delta}}\left[ \mathpzc{p}\mathpzc{z}\left(1\pm\gamma^{5}\right)\gamma^{\nu}\mathpzc{p} \mathpzc{z}\right]_{\delta\dot{\gamma}}=\left[\mathpzc{y}_{2}\left(1\pm \gamma^{5}\right)\gamma^{\left(\mu}\mathpzc{y}_{1}\right]_{\gamma\dot{\gamma} \dot{\gamma}}\left[\mathpzc{y}_{4}\left(1\pm\gamma^{5}\right)\gamma^{\nu} \mathpzc{y}_{3}\right]_{\delta\dot{\delta}}, \tag{4.78}\] and so the amplitude becomes \[=\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left[\mathpzc{y}_ {2}\left(1\pm\gamma^{5}\right)\gamma^{\left(\mu}\mathpzc{y}_{1}\right]_{ \gamma\dot{\gamma}}\left[\mathpzc{y}_{4}\left(1\pm\gamma^{5}\right)\gamma^{ \nu}\mathpzc{y}_{3}\right]_{\delta\dot{\delta}}\delta_{ab}\delta_{cd}\left[1- \frac{g^{2}\left(p_{E}^{2}\right)^{-\epsilon}}{\left(4\pi\right)^{2}\epsilon} \right]-\] \[-\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left[\mathpzc{p} \mathpzc{z}\left(1\pm\gamma^{5}\right)\gamma^{\left(\mu}\mathpzc{y}\mathpzc{ z}\right]_{\gamma\dot{\delta}}\left[\mathpzc{p}\mathpzc{z}\left(1\pm\gamma^{5}\right)\gamma^{\nu} \mathpzc{p}\mathpzc{z}\right]_{\delta\dot{\gamma}}\delta_{ad}\delta_{bc} \left[1-\frac{g^{2}\left(p_{E}^{2}\right)^{-\epsilon}}{\left(4\pi\right)^{2} \epsilon}\right]. \tag{4.79}\] Once again we can immediately see that the anomalous dimension is positive, supporting the conjecture. #### 4.2.5 Renormalization function and anomalous dimension calculation We see that for the irreps. of scalars and tensors, there is a mixing of operators at the 2-meson level, and the renormalization functions need to be diagonalized. We set a renormalization scale \(M\) and note that renormalization will replace each factor of \(\left(p_{E}^{2}\right)^{-\epsilon}\) by \(M^{-2\epsilon}\left(1+O(\epsilon\right)\right)\). Next we denote \(\frac{g^{2}}{\left(4\pi\right)^{2}\epsilon}M^{-2\epsilon}\equiv\alpha\) for brevity. The renormalization matrix is (recalling that \(C_{F}=\frac{N^{2}-1}{2N}\approx\frac{N}{2}\)): \[\begin{pmatrix}1+3\left(2C_{F}-1\right)\alpha&12\alpha\\ -\frac{\alpha}{4}&1-\left(2C_{F}+3\right)\alpha\end{pmatrix}=\begin{pmatrix}1+3(N-1 )\alpha&12\alpha\\ -\frac{\alpha}{4}&1-\left(N+3\right)\alpha\end{pmatrix}. \tag{4.80}\] Its eigenvalues are computed to be: \[\lambda_{1,2}=1+(N-3)\alpha\pm 2N\alpha+O\left(\frac{\alpha}{N}\right). \tag{4.81}\] On the other hand, the squared renormalization matrix for the 1-meson operators is simply: \[\begin{pmatrix}1+6C_{F}\alpha&&\\ &1-2C_{F}\alpha\end{pmatrix}=\begin{pmatrix}1+3\alpha N&&\\ &1-\alpha N\end{pmatrix}, \tag{4.82}\] and its eigenvalues are the diagonal entries. The differentiation by \(\log(M)\) flips the sign of the correction, so the smallest anomalous dimension will relate to the largest eigenvalue of the renormalization matrix. Therefore, to test the conjecture, one needs to compare the larger eigenvalues between the two matrices, i.e. compare \[\lambda_{1}=1+3(N-1)\alpha,\ \ \ \ \ \lambda_{1}^{(1)}=1+3\alpha N. \tag{4.83}\] Subtracting them, we get: \[\lambda_{1}-\lambda_{1}^{(1)}=-3\alpha+O\left(\frac{\alpha}{N}\right)\stackrel{{ \alpha>0}}{{<}}0, \tag{4.84}\] supporting the conjecture. 
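The eigenvalue algebra in (4.80)-(4.84) can also be confirmed symbolically. The snippet below is an illustrative check, not part of the paper: it takes the renormalization matrix (4.80) as written, computes its exact eigenvalues, and verifies the large-\(N\) forms quoted in (4.81) as well as the \(-3\alpha\) gap of (4.84) relative to the largest single-meson eigenvalue \(1+3N\alpha\).

```python
import sympy as sp

# Illustrative sympy check of eqs. (4.80)-(4.84); alpha = g^2 M^{-2eps} / ((4 pi)^2 eps).
N, a = sp.symbols('N alpha', positive=True)

Z2 = sp.Matrix([[1 + 3*(N - 1)*a, 12*a],
                [-a/4,            1 - (N + 3)*a]])   # eq. (4.80)
evs = list(Z2.eigenvals().keys())                    # exact eigenvalues, linear in alpha

# identify the larger eigenvalue by its leading large-N growth
lam1 = max(evs, key=lambda e: sp.limit(e / (a*N), N, sp.oo))
lam2 = min(evs, key=lambda e: sp.limit(e / (a*N), N, sp.oo))

# eq. (4.81): lambda_{1,2} = 1 + (N - 3)*alpha +- 2*N*alpha + O(alpha/N)
assert sp.limit((lam1 - (1 + (N - 3)*a + 2*N*a)) / a, N, sp.oo) == 0
assert sp.limit((lam2 - (1 + (N - 3)*a - 2*N*a)) / a, N, sp.oo) == 0

# eqs. (4.83)-(4.84): lambda_1 - lambda_1^(1) = -3*alpha + O(alpha/N)
lam1_single = 1 + 3*N*a
assert sp.limit((lam1 - lam1_single) / a, N, sp.oo) == -3
print("eigenvalue checks of (4.81) and (4.84) pass")
```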
Note that in all cases we see that the leading 2-meson correction to the anomalous dimension is \(O\left(\alpha\right)\), in contrast to the overall leading correction to the renormalization function which is \(O\left(\alpha N\right)\), as expected from the large \(N\) limit. ### Mixed double mesons Here our basic operator is of the form \(\phi^{*}\psi\), and we consider the correlation function: \[\left\langle(\phi_{ia^{\prime}}^{*}\psi_{jb^{\prime}\alpha}\phi_{kk^{\prime}} ^{*}\psi_{l\mu^{\prime}\beta})\left(p\right)\phi_{i^{\prime}a}(p_{1})\overline {\psi}_{j^{\prime}b\dot{a}}(-p_{2})\phi_{k^{\prime}c}(p_{3})\overline{\psi}_{ l^{\prime}d\dot{\beta}}(-p_{4})\right\rangle\delta_{a^{\prime}b^{\prime}} \delta_{c^{\prime}d^{\prime}}. \tag{4.85}\] We assume for simplicity \(i\neq k,j\neq l\) (with a symmetrization of both pairs of indices), and that the other flavor indices configure in a way that gives a non-vanishing amplitude. Here, unlike the mesons with only either scalars or fermions, we have to consider explicitly the symmetrization by both pairs of flavor indices. We refer to the case \(i^{\prime}=i,j^{\prime}=j\) as the unpermuted diagrams. We also keep the shorthand notation \(A=\alpha\dot{\alpha},B=\beta\dot{\beta},A^{\prime}=\beta\dot{\alpha},B^{ \prime}=\alpha\dot{\beta}\). The additional diagrams are shown in Figure 13 - diagrams I-IV for the unpermuted case, and diagram V for the fermion-unpermuted cases (diagram V is uncolored because the scalar permutation is kept generic). For the (unpermuted) tree-level, multiplying two copies of (3.33) gives the amplitude: \[\frac{\mathcal{P}\mathcal{L}_{\alpha\dot{\alpha}}p\mathcal{P}_{\dot{\alpha} \dot{\beta}}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\delta_{ab}\delta_{cd}. \tag{4.86}\] #### 4.3.1 Unpermuted diagrams - I-IV We omit color factors here and deal with them separately. Diagrams I-IV are respectively equal to: \[I =II=\frac{g^{2}}{\left(4\pi\right)^{2}\epsilon}\left(p_{E}^{2} \right)^{-\epsilon}\frac{\mathcal{P}\mathcal{L}_{A}\mathcal{P}_{B}}{p_{1}^{2} p_{2}^{2}p_{3}^{2}p_{4}^{2}} \tag{4.87}\] \[III =-\frac{g^{2}}{4\left(4\pi\right)^{2}\epsilon}\frac{\left[\gamma ^{\prime}\gamma^{\prime}p\mathcal{E}_{A}\right]\left[\gamma_{\nu}\gamma_{\mu} p\mathcal{A}_{B}\right]}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left(p_{E}^{2} \right)^{-\epsilon}\] (4.88) \[IV =-\frac{g^{2}}{\left(4\pi\right)^{2}\epsilon}\frac{\mathcal{P} \mathcal{L}_{A}\mathcal{P}_{B}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left(p_{ E}^{2}\right)^{-\epsilon}. \tag{4.89}\] The color factor for diagrams I-IV is \(\frac{1}{2}\delta_{bc}\delta_{da}\), similarly to the previous cases. #### 4.3.2 Diagram V Diagram V needs a more delicate consideration of the scalar flavor factors, due to the structure of the \(\phi^{4}\) couplings, so we keep them explicit and deal with the color factors at the same time. 
The index part is: \[\delta_{bb^{\prime}}\delta_{dd^{\prime}}\left[\tilde{f}\left( \delta_{ii^{\prime}}\delta_{aa^{\prime}}\delta_{kk^{\prime}}\delta_{cc^{ \prime}}+\delta_{ik^{\prime}}\delta_{ca^{\prime}}\delta_{ki^{\prime}}\delta_{ ac^{\prime}}\right)+\tilde{h}\left(\delta_{ii^{\prime}}\delta_{ac^{\prime}}\delta_{kk^{ \prime}}\delta_{ca^{\prime}}+\delta_{ik^{\prime}}\delta_{cc^{\prime}}\delta_{ ki^{\prime}}\delta_{aa^{\prime}}\right)\right]\delta_{a^{\prime}b^{\prime}}\delta_{c^{\prime}d^{ \prime}}= \tag{4.90}\] \[=\left[\tilde{f}\left(\delta_{ii^{\prime}}\delta_{ab}\delta_{kk^{ \prime}}\delta_{cd}+\delta_{ik^{\prime}}\delta_{cb}\delta_{ki^{\prime}}\delta_{ ad}\right)+\tilde{h}\left(\delta_{ii^{\prime}}\delta_{ad}\delta_{kk^{\prime}}\delta_{ cb}+\delta_{ik^{\prime}}\delta_{cd}\delta_{ki^{\prime}}\delta_{ab}\right) \right]. \tag{4.91}\] This is a contribution of the same order as in the case of scalar mesons. As we argue later, the \(\tilde{f}\) terms are suppressed by \(O\left(\frac{1}{N}\right)\) compared to the \(\tilde{h}\) terms and can be neglected going forward. The other factor (coming from the momentum loop, including a symmetry factor) is: \[-\frac{1}{2\left(4\pi\right)^{2}\epsilon}\frac{\cancel{p}\cancel{p}_{1}^{2}p_{2 }^{2}p_{3}^{2}p_{4}^{2}}\left(p_{E}^{2}\right)^{-\epsilon}. \tag{4.92}\] Taking permutations into account as well, this term gives \(\tilde{h}\delta_{aa}\delta_{cb}\) in the cases without scalar permutation, and \(\tilde{h}\delta_{ab}\delta_{cd}\) in the cases with it (times the other factor (4.92)). #### 4.3.3 Permuted diagrams, Fermion operator signs and spin structure The fermion permutation \(j\leftrightarrow l\) changes the spinor and color structures as: \[\cancel{p}\cancel{A}\cancel{p}_{l} \leftrightarrow\cancel{p}\cancel{A}\cancel{p}_{B^{\prime}} \tag{4.93}\] \[\delta_{ab}\delta_{cd} \leftrightarrow\delta_{ad}\delta_{cb} \tag{4.94}\] while the scalar permutation only changes the color structures. Making both permutations then returns the color structure to its unpermuted form, while the spinor structure remains permuted. Similarly to the fermionic meson case, when contracting the fermionic operators, we see that a change in sign only occurs if the external fermion legs are permuted, i.e. in the cases of a fermion permutation with or without a scalar permutation. Here the spin structure is simpler than that of the fermionic double meson case. Diagrams I,II,IV,V simply have \(\cancel{p}\cancel{A}\cancel{p}_{B}\) (in the unpermuted case), and so do all the intra-meson diagrams. The only exception is diagram III, which has the structure: \[\left[\gamma^{\mu}\gamma^{\mu}\cancel{p}\cancel{e}\right]_{A}\left[\gamma_{ \nu}\gamma_{\mu}\cancel{p}\cancel{e}\right]_{B}=4\cancel{p}\cancel{e}_{A} \cancel{p}_{B}+\left[\gamma^{\mu}\cancel{p}\cancel{e}\right]_{A}\left[\gamma_ {\mu\nu}\cancel{p}\cancel{e}\right]_{B}. \tag{4.95}\] #### 4.3.4 Diagram resummation We know that the anomalous dimension of the single meson is uniform (indeed, one can show that there is only one spinor degree of freedom for this operator), so we can subtract it in advance and only consider the inter-meson contributions. The sum of the unpermuted diagrams I-IV is \[-\frac{g^{2}}{8\left(4\pi\right)^{2}\epsilon}\frac{\left(p_{E}^{2}\right)^{- \epsilon}}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left[\gamma^{\mu\nu}\cancel{ p}\cancel{e}\right]_{A}\left[\gamma_{\mu\nu}\cancel{p}\cancel{e}\right]_{B} \delta_{bc}\delta_{da}. 
\tag{4.96}\] The permutations affect the color and spinor structure; summing over them gives: Figure 13: Double mixed meson diagrams I-V. \[\left[\gamma^{\mu\nu}\cancel{p}\right]_{A}\left[\gamma_{\mu\nu} \cancel{p}\right]_{B}\left(\delta_{bc}\delta_{da}-\frac{1}{N}\delta_{ba}\delta_{ dc}\right)\rightarrow\] \[\rightarrow\left[\gamma^{\mu\nu}\cancel{p}\right]_{A}\left[ \gamma_{\mu\nu}\cancel{p}\right]_{B}\delta_{bc}\delta_{da}+\left[\gamma^{\mu \nu}\cancel{p}\right]_{A}\left[\gamma_{\mu\nu}\cancel{p}\right]_{B}\delta_{ba }\delta_{dc}-\] \[-\left[\gamma^{\mu\nu}\cancel{p}\right]_{A^{\prime}}\left[\gamma _{\mu\nu}\cancel{p}\right]_{B^{\prime}}\delta_{ba}\delta_{dc}-\left[\gamma^{ \mu\nu}\cancel{p}\right]_{A^{\prime}}\left[\gamma_{\mu\nu}\cancel{p}\right]_ {B^{\prime}}\delta_{bc}\delta_{da}=\] \[=\left(\delta_{bc}\delta_{da}+\delta_{ba}\delta_{dc}\right)\left( \left[\gamma^{\mu\nu}\cancel{p}\right]_{A}\left[\gamma_{\mu\nu}\cancel{p} \right]_{B}-\left[\gamma^{\mu\nu}\cancel{p}\right]_{A^{\prime}}\left[\gamma_ {\mu\nu}\cancel{p}\right]_{B^{\prime}}\right). \tag{4.97}\] We can use the Fierz identity of equation (33) of [16] (modified to our notation of \(\gamma^{\mu\nu}\)) to convert this result to the spinor structure of the tree level. This is most conveniently done if we retroactively insert a left/right projection \(\frac{1}{2}\left(1\pm\gamma^{5}\right)\) to each meson. The results are equivalent between the projections, so we can consider left-handed fermions. \[\left[P_{L}\gamma^{\mu\nu}\cancel{p}\right]_{A}\left[P_{L}\gamma_{\mu\nu} \cancel{p}\right]_{B}=-\left\{\frac{1}{2}\left[P_{L}\gamma^{\mu\nu}\cancel{p} \right]_{A^{\prime}}\left[P_{L}\gamma_{\mu\nu}\cancel{p}\right]_{B^{\prime}}+ 6\left[P_{L}\cancel{p}\right]_{A^{\prime}}\left[P_{L}\cancel{p}\right]_{B^{ \prime}}\right\}. \tag{4.98}\] and similarly with the fermion-permuted contribution: \[\left[P_{L}\gamma^{\mu\nu}\cancel{p}\right]_{A^{\prime}}\left[P_{L}\gamma_{ \mu\nu}\cancel{p}\right]_{B^{\prime}}=\frac{1}{2}\left[P_{L}\gamma^{\mu\nu} \cancel{p}\right]_{A}\left[P_{L}\gamma_{\mu\nu}\cancel{p}\right]_{B}-6\left[ P_{L}\cancel{p}\right]_{A}\left[P_{L}\cancel{p}\right]_{B}. \tag{4.99}\] Subtracting them, as they appear in the correlation function, gives:: \[\left[P_{L}\gamma^{\mu\nu}\cancel{p}\right]_{A}\left[P_{L}\gamma_ {\mu\nu}\cancel{p}\right]_{B}-\left[P_{L}\gamma^{\mu\nu}\cancel{p}\right]_{A ^{\prime}}\left[P_{L}\gamma_{\mu\nu}\cancel{p}\right]_{B^{\prime}}=\] \[=\left[P_{L}\gamma^{\mu\nu}\cancel{p}\right]_{A}\left[P_{L} \gamma_{\mu\nu}\cancel{p}\right]_{B}-\left[P_{L}\gamma^{\mu\nu}\cancel{p} \right]_{A^{\prime}}\left[P_{L}\gamma_{\mu\nu}\cancel{p}\right]_{B^{\prime}} =12\left\{\left[P_{L}\gamma^{\mu\nu}\cancel{p}\right]_{A}\left[P_{L}\cancel{ p}\right]_{B}-\left[P_{L}\cancel{p}\right]_{A^{\prime}}\left[P_{L}\cancel{ p}\right]_{B^{\prime}}\right\}. \tag{4.100}\] We see that the antisymmetrized spin structure of diagram III gives a relative factor of 12. The same applies to the right-handed projections. 
The permuted versions of diagram V give: \[-\frac{\tilde{h}}{2\left(4\pi\right)^{2}}\frac{\left(p_{E}^{2} \right)^{-\epsilon}}{\epsilon}\frac{\left(p_{E}^{2}\right)^{-\epsilon}}{p_{1}^{ 2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\delta_{ad}\delta_{cb}\cancel{p}\cancel{p} \cancel{}_{A}\cancel{p}\cancel{}_{B}\rightarrow\] \[\rightarrow-\frac{\tilde{h}}{2\left(4\pi\right)^{2}}\frac{\left(p _{E}^{2}\right)^{-\epsilon}}{\epsilon}\frac{\left(p_{E}^{2}\right)^{-\epsilon} }{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left(\delta_{ad}\delta_{cb}\cancel{ p}\cancel{}_{A}\cancel{p}\cancel{}_{B}+\delta_{cd}\delta_{ab}\cancel{p} \cancel{}_{A}\cancel{p}\cancel{}_{B}-\delta_{ad}\delta_{cb}\cancel{p} \cancel{}_{A^{\prime}}\cancel{p}\cancel{}_{B^{\prime}}-\delta_{cd}\delta_{ ab}\cancel{p}\cancel{}_{A^{\prime}}\cancel{}_{B^{\prime}}\right)=\] \[=-\frac{\tilde{h}}{2\left(4\pi\right)^{2}}\frac{\left(p_{E}^{2} \right)^{-\epsilon}}{\epsilon}\frac{\left(p_{E}^{2}\right)^{-\epsilon}}{p_{1}^{ 2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left(\delta_{ad}\delta_{cb}+\delta_{cd}\delta_{ ab}\right)\left(\cancel{p}\cancel{}_{A}\cancel{p}\cancel{}_{B}- \cancel{p}\cancel{}_{A^{\prime}}\cancel{}_{B^{\prime}}\right). \tag{4.101}\] A similar result holds for the permutation of the tree-level diagram (4.86), giving: \[\frac{1}{p_{1}^{2}p_{2}^{2}p_{3}^{2}p_{4}^{2}}\left(\delta_{ab}\delta_{cd}+ \delta_{cb}\delta_{ad}\right)\left(\cancel{p}\cancel{}_{A}\cancel{p}_{B}- \cancel{p}\cancel{}_{A^{\prime}}\cancel{}_{B^{\prime}}\right). \tag{4.102}\] In total, the correlation function correction coming from the inter-meson interactions is \[-\frac{\tilde{h}+3g^{2}}{2\left(4\pi\right)^{2}}\frac{\left(p_{E}^{2}\right)^{- \epsilon}}{\epsilon}\frac{\left(p_{E}^{2}\right)^{-\epsilon}}{p_{1}^{2}p_{2}^{2}p_ {3}^{2}p_{4}^{2}}\left(\delta_{bc}\delta_{da}+\delta_{ba}\delta_{dc}\right) \left\{\left[P_{L}\cancel{p}\cancel{}\right]_{A}\left[P_{L}\cancel{p} \cancel{}\right]_{B}-\left[P_{L}\cancel{p}\cancel{}\right]_{A^{\prime}}\left[P_ {L}\cancel{p}\cancel{}\right]_{B^{\prime}}\right\} \tag{4.103}\] The correction to the correlation function coming from the inter-meson diagrams is thus (at scale \(M\)): \[\delta^{\prime}Z_{\left(\phi^{*}\psi\right)^{2}}=\delta Z_{\left(\phi^{*}\psi \right)^{2}}-\delta\left[Z_{\left(\phi^{*}\psi\right)^{2}}^{2}\right]=-\frac{ \tilde{h}+3g^{2}}{2\left(4\pi\right)^{2}\epsilon}M^{-2\epsilon} \tag{4.104}\] Seeing as the coefficient here is always negative for \(g^{2},\tilde{h}>0\), we have an agreement with the CCC. #### Notes about the order of limits To consider the Caswell-Banks-Zaks fixed point, we work in the order of limits: \[\frac{1}{N}\ll\lambda,h,f\ll 1. \tag{4.105}\] We saw for scalar mesons that the contribution of gluon exchange diagrams is at most of the order \(O\left(\frac{\lambda^{2}}{N}\right)\). The \(\phi^{4}\) diagrams give a leading contribution of \((\tilde{h}+\tilde{f})\), without any additional factors of \(N\). Then we compare these to the 't Hooft couplings: \(\tilde{h}\propto\frac{h}{N},\tilde{f}\propto\frac{f}{N_{t}N}\sim\frac{f}{N^{2}}\). From this analysis it is clear that the \(\tilde{h}\) term in fact dominates the contribution to the anomalous dimension relevant to the CCC, and satisfies it. The contribution to the 2-fermion operator found in section 4.2 is of the same order of magnitude as that of the \(\tilde{h}\) diagram. 
As for the mixed mesons, the gluon exchange diagrams I-IV give an \(O(\frac{\lambda}{N})\) correction, and the \(\phi^{4}\) diagram V gives an \(O(\tilde{h}+\tilde{f})\) correction, as in the scalar double meson case. This justifies neglecting the \(\tilde{f}\) terms. When we evaluate the correction to the anomalous dimension of the double meson operator, relative to the part attributed to the single meson operator, we consider inter-meson diagrams that are suppressed by \(\frac{1}{N}\) relative to the intra-meson diagrams, but remain at one-loop order in the couplings. This is allowed because the correction is known to also be suppressed by \(\frac{1}{N}\) - and thus so is any contribution to it involving intra-meson interactions. However, contributions including non-trivial intra-meson interactions will also be subleading in perturbation theory, and therefore will be overall suppressed. ## Acknowledgements We would like to thank Eran Palti, Adar Sharon, Tomer Solberg and Masataka Watanabe for useful discussions. This work was supported in part by an Israel Science Foundation (ISF) center for excellence grant (grant number 2289/18), by ISF grant no. 2159/22, by Simons Foundation grant 994296 (Simons Collaboration on Confinement and QCD Strings), by grant no. 2018068 from the United States-Israel Binational Science Foundation (BSF), by the Minerva foundation with funding from the Federal German Ministry for Education and Research, by the German Research Foundation through a German-Israeli Project Cooperation (DIP) grant "Holography and the Swampland", and by a research grant from Martin Eisenstein. OA is the Samuel Sebba Professorial Chair of Pure and Applied Physics.
2301.04452
Uncertainty Estimation based on Geometric Separation
In machine learning, accurately predicting the probability that a specific input is correct is crucial for risk management. This process, known as uncertainty (or confidence) estimation, is particularly important in mission-critical applications such as autonomous driving. In this work, we put forward a novel geometric-based approach for improving uncertainty estimations in machine learning models. Our approach involves using the geometric distance of the current input from existing training inputs as a signal for estimating uncertainty, and then calibrating this signal using standard post-hoc techniques. We demonstrate that our method leads to more accurate uncertainty estimations than recently proposed approaches through extensive evaluation on a variety of datasets and models. Additionally, we optimize our approach so that it can be implemented on large datasets in near real-time applications, making it suitable for time-sensitive scenarios.
Gabriella Chouraqui, Liron Cohen, Gil Einziger, Liel Leman
2023-01-11T13:19:24Z
http://arxiv.org/abs/2301.04452v1
# Uncertainty Estimation based on Geometric Separation ###### Abstract In machine learning, accurately predicting the probability that a specific input is correct is crucial for risk management. This process, known as uncertainty (or confidence) estimation, is particularly important in mission-critical applications such as autonomous driving. In this work, we put forward a novel geometric-based approach for improving uncertainty estimations in machine learning models. Our approach involves using the geometric distance of the current input from existing training inputs as a signal for estimating uncertainty, and then calibrating this signal using standard post-hoc techniques. We demonstrate that our method leads to more accurate uncertainty estimations than recently proposed approaches through extensive evaluation on a variety of datasets and models. Additionally, we optimize our approach so that it can be implemented on large datasets in near real-time applications, making it suitable for time-sensitive scenarios. uncertainty estimation, geometric separation, calibration, confidence evaluation 2320231-241/23/-. 2023 Gabriella Chouraqui et al. ## 1 Introduction Machine learning models, such as neural networks, random forests, and gradient boosted trees, are widely used in various fields, including computer vision and transportation, and are transforming the field of computer science Niculescu-Mizil and Caruana (2006); Zhang and Haghani (2015). However, the probabilistic nature of classifications made by these models means that misclassifications are inevitable. As a result, estimating the uncertainty for a particular input is a crucial challenge in machine learning. In fact, many machine learning models have some built-in measure of confidence that is often provided to the user for risk management purposes. The field of _uncertainty calibration_ aims to improve the accuracy of the confidence estimates made by machine learning models Guo et al. (2017). Confidence evaluation, or the model's prediction of its success rate on a specific input, is a crucial aspect of mission-critical machine learning applications, as it provides a realistic estimate of the probability of success for a classification and enables informed decisions about the _current_ situation. Even a highly accurate model may encounter an unexpected situation, which can be communicated to the user through confidence estimation. For example, consider an autonomous vehicle using a model to identify and classify traffic signs. The model is very accurate, and in most cases, its classifications are correct with high confidence. However, one day, it encounters a traffic sign that is obscured, e.g., by heavy vegetation. In this case, the model's classification is likely to be incorrect. Estimating confidence, or uncertainty, is a crucial tool for assessing unavoidable risks, allowing system designers to address these risks more effectively and potentially avoid unexpected and catastrophic consequences. For example, our autonomous vehicle may reduce its speed and activate additional sensors until it reaches higher confidence. Therefore, all popular machine learning models have mechanisms for determining confidence that can be calibrated to maximize the quality of confidence estimates Niculescu-Mizil and Caruana (2005); Guo et al. (2017); Kumar et al. (2019), and there is ongoing research to calibrate models more effectively and enable more reliable applications Leistner et al. (2009); Sun et al. (2007). 
Existing calibration methods can be divided into two categories: post-hoc methods that perform a transformation that maps the raw outputs of classifiers to their expected probabilities Kull et al. (2019); Guo et al. (2017); Gupta and Ramdas (2021), and ad-hoc methods that adapt the training process to produce better calibrated models Thulasidasan et al. (2019); Hendrycks et al. (2019). Post-hoc calibration methods are easier to apply because they do not change the model and do not require retraining. However, ad-hoc methods may lead to better model training in the first place and more reliable models. With the success of both approaches, recent research has focused on using ensemble methods whose estimates are a weighted average of multiple calibration methods Ashukha et al. (2020); Ma et al. (2021); Zhang et al. (2020); Pakdaman and Cooper (2016); Naeini et al. (2015). Another recent line of work attempts to further refine the uncertainty estimations by refining the grouping of confidence estimations, e.g., Perez-Lebel et al. (2022); Hebert-Johnson et al. (2018). In principle, post-hoc calibration can be viewed as cleaning up a signal, namely the model's original confidence estimate. Interestingly, if we follow this logic, it is clear that the maximal attainable benefit lies in the quality of the signal. To see this, consider a model that plots the same confidence for all inputs. In this case, the best result that can be achieved is to set that confidence to the model's average accuracy over all inputs. Therefore, finding better signals to calibrate is a promising direction for research. In this work, we introduce a novel approach for improving uncertainty estimates in machine learning models _using geometry_. We first provide an algorithm for calculating the maximal geometric _separation_ of an input. However, calculating the geometric separation of an input requires evaluating the whole space of training inputs, making it a computationally expensive method that is not always feasible. Therefore, we suggest multiple methods to accelerate the process, including a lightweight approximation called _fast-separation_ and several data reduction methods that shorten the geometric calculation. We demonstrate that using our geometric-based method, combined with a standard calibration method, leads to more accurate confidence estimations than calibrating the model's original signal across different models and datasets. Even more, our approach yields better estimation even when compared to state-of-the-art calibration methods Kumar et al. (2019); Gupta and Ramdas (2021); Guo et al. (2017a); Zhang et al. (2020); Naeini et al. (2015); Kull et al. (2017). Additionally, we show that our approach can be implemented in near real-time on a variety of datasets through the use of multiple levels of approximation and optimization. This is particularly useful for practical applications that require rapid decision-making, such as autonomous driving. The entire code is available at our Github Leman et al. (2022). ## 2 Related Work As mentioned above, uncertainty calibration is about estimating the model's success probability of classifying a given example. Post-hoc calibration methods apply some transformation to the model's confidence (without changing the model) such transformations include Beta calibration (Beta) Kull et al. (2017), Platt scaling (Platt) Platt (1999), Temperature Scaling (TS) Guo et al. (2017a); Kull et al. (2019), Ensemble Temperature Scaling (ETS) Zhang et al. 
(2020), and cubic spline Gupta and Ramdas (2021). In brief, these methods are limited by the best learnable mapping between the model's confidence estimations and the actual confidence. That is, post-hoc calibration maps each confidence value to another calibrated value, whereas our method introduces a new signal that can be calibrated just like the model's original signal. Another work that uses a geometric distance in this context is Dalitz (2009). There, the confidence score is computed directly from the geometric distance, while we first fit a function on a subset of the data to learn the specific behavior of the dataset and model. Moreover, the work in Dalitz (2009) only applies to the k-nearest neighbor model, while our method is applicable to all models. The recently proposed Scaling Binning Calibrator (SBC) of Kumar et al. (2019) uses a fitting function on the confidence values, divides the inputs into bins of equal size, and outputs the function's average in each bin. Histogram Binning (HB) Gupta and Ramdas (2021) uses a similar idea but divides the inputs into uniform-mass (rather than equal-size) bins. Interestingly, while most post-hoc calibration methods are model agnostic, recent methods have begun to look at a neural network's non-probabilistic outputs, called logits (before applying softmax) Guo et al. (2017b); Ding et al. (2020); Wenger et al. (2019). Thus, some new post-hoc calibration methods apply only to neural networks. Ensemble methods are similar to post-hoc calibration methods as they do not change the model, but they consider multiple signals to determine the model's confidence Ashukha et al. (2020); Ma et al. (2021). For example, Bayesian Binning into Quantiles (BBQ) Naeini et al. (2015) is an extension of HB that uses multiple histogram binning models with different bin numbers and partitions, and then outputs scores according to Bayesian averaging. The same methodology of Bayesian averaging is applied in Ensemble of Near Isotonic Regression Pakdaman and Cooper (2016), but instead of histogram binning, they use nearly isotonic regression models. Ad-hoc calibration is about training models in new ways that aim to yield better uncertainty estimations. Important techniques in this category include mixup training Thulasidasan et al. (2019), pre-training Hendrycks et al. (2019), label-smoothing Muller et al. (2019), data augmentation Ashukha et al. (2020), self-supervised learning Hendrycks et al. (2019), Bayesian approximation (MC-dropout) Gal and Ghahramani (2016); Gal et al. (2017), Deep Ensemble (DE) Lakshminarayanan et al. (2017), Snapshot Ensemble Huang et al. (2017a), Fast Geometric Ensembling (FGE) Garipov et al. (2018), and SWA-Gaussian (SWAG) Maddox et al. (2019). A notable approach is to use geometric distances in the loss function while training the model Xing et al. (2020). The authors work with a representation space that minimizes intra-class distances, maximizes inter-class distances, and uses the distances to estimate the confidence. Ad-hoc calibration is perhaps the best approach in principle, as it tackles the core of models' calibration directly. However, because it prescribes specific training methods, it is of less use to large, already trained models, and the impact of each work is limited to a specific model type (e.g., DNNs in Garipov et al. (2018)). In comparison, post-hoc and ensemble methods (and our own method) often work for numerous models. Our geometric method is largely inspired by the approach of proving robustness of machine learning models.
In this field, formal methods are used to prove that specific inputs are robust to small adversarial perturbations. That is, we formally prove that all images in a certain geometric radius around a specific train-set image receive the same classification Narodytska et al. (2018); Katz et al. (2017); Huang et al. (2017); Gehr et al. (2018); Ehlers (2017); Einziger et al. (2019). These works rely on formal methods produced in an offline manner and thus apply only to training set inputs (known apriori). Whereas confidence estimation reasons about the current input. However, the underlying intuition, i.e., that geometrically similar inputs should be classified in the same manner is also common to our work. Indeed, our work shows that geometric properties of the inputs can help us quantify the uncertainty in certain inputs and that, in general, inputs that are less geometrically separated and are 'on the edge' between multiple classifications are more error-prone than inputs that are highly separated from other classes. Thus our work reinforces the intuition behind applying formal methods to prove robustness and supports the intuition that more robust training models would be more dependable. ## 3 Geometric Separation In this section, we define a geometric separation measure that reasons about the distance of a given input from other inputs with different classifications. Our end goal is to use this measure to provide confidence estimations. Formally, a model receives a data input, \(x\), and outputs the pair \(\langle\mathcal{C}(x),\mathit{conf}(x)\rangle\), where \(\mathcal{C}(x)\) is the model's classification of \(x\) and \(\mathit{conf}(x)\) reflects the probability that the classification is correct. We estimate the environment around \(x\) where inputs are closer to inputs of certain classifications over the others. Our work assumes that the inputs are normalized, and thus these distances carry the same significance between the different inputs. In Section 3.1, we define geometric separation and provide an algorithm to calculate it. Our evaluation shows that geometric separation produces a valuable signal that improves confidence estimations. However, calculating geometric separation is too cumbersome for real-time systems, so we suggest a lightweight approximation in Section 3.2. Finally, Section 3.3 explains how we use the geometric signal to derive \(\mathit{conf}(x)\). That is, mapping a real number corresponding to the geometric separation to a number in \([0,1]\) corresponding to the confidence ratio. ### Separation Measure We look at the displacement of \(x\) compared to nearby data inputs within the training set. Intuitively, when \(x\) is close to other inputs in \(\mathcal{C}(x)\) (i.e., inputs with the same classification as \(x\)) and is far from inputs with other classifications, then the model is correct with a high probability, implying that \(\mathit{conf}(x)\) should be high. On the other hand, when there are training inputs with a different classification close to \(x\), we estimate that \(\mathcal{C}(x)\) is more likely to be incorrect. Below we provide definitions that allow us to formalize this intuitive account. In what follows, we consider a model \(\mathcal{M}\) to consist of a machine learning model (e.g., a gradient boosted tree or a neural network), along with a labeled train set, \(\mathit{Tr}\), used to generate the model. 
We use an implicit notion of distance and denote by \(d(x,y)\) the distance between inputs \(x\) and \(y\), and by \(\mathit{D}(x,A)\) the distance between the input \(x\) and the set \(A\) (i.e., the minimal distance between \(x\) and the inputs in \(A\)). **Definition 1** (Safe and Dangerous inputs).: _Let \(\mathcal{M}\) be a model. For an input \(x\) in the sample space we define:_ \[F_{\mathcal{M}}(x):=\{x^{\prime}\in\mathit{Tr}:\mathcal{C}(x^{\prime})= \mathcal{C}(x)\}.\] _We denote by \(\overline{F}_{\mathcal{M}}(x)\) the set \(\mathit{Tr}\setminus F_{\mathcal{M}}(x)\). An input \(x\in\mathcal{X}\) is labeled as safe if \(D(x,F_{\mathcal{M}}(x))<D(x,\overline{F}_{\mathcal{M}}(x))\), and it is labeled as dangerous otherwise._ **Definition 2** (Zones).: _Let \(x\) be a safe (dangerous) input. A zone for \(x\), denoted \(z_{x}\), is such that for any input \(y\), if \(d(x,y)<z_{x}\), then \(D(y,F_{\mathcal{M}}(x))<D(y,\overline{F}_{\mathcal{M}}(x))\) (\(D(y,F_{\mathcal{M}}(x))\geq D(y,\overline{F}_{\mathcal{M}}(x))\)). For each \(x\) we denote the maximal such zone by \(\mathcal{Z}(x)\)._ In other words, a zone of a safe (dangerous) input \(x\) is a radius around \(x\) such that all inputs in this ball are closer to an input in \(F_{\mathcal{M}}(x)\) (\(\overline{F}_{\mathcal{M}}(x)\)) than to any input in \(\overline{F}_{\mathcal{M}}(x)\) (\(F_{\mathcal{M}}(x)\)). \(\mathcal{Z}(x)\) is the _maximal_ zone attainable of \(x\). Figure 1 provides a geometric illustration of the safe and danger zones of a given input and of the separation values. For illustration purposes, the figure uses the \(L_{2}\) norm with two dimensions, whereas our data usually includes many more dimensions. For example, a \(30\times 30\) traffic sign image will have 900 dimensions. In the figure, the shapes represent the classification of training set inputs. In yellow, we see a new input (\(x\) on the left-hand-side and \(y\) on the right-hand-side) which the model classifies as a triangle. \(x\) is a safe input because it is closer to other triangles in the training set than it is to the squares. The green highlighted ball represents its maximal zone. The input \(y\) is dangerous because the closest training set input is a square. The red highlighted ball represents its maximal zone which dually represents how far we need to distance ourselves from \(y\) so that inputs classified as triangles may become closer than other inputs. **Definition 3** (Separation).: _The separation of a data input \(x\) with respect to the model \(\mathcal{M}\) is \(\mathcal{Z}(x)\) when \(x\) is a safe input, and \(-1\cdot\mathcal{Z}(x)\) when \(x\) is a dangerous input._ Figure 1: Geometric representation of safe and dangerous inputs, maximal zones, and separation values. The various classifications are illustrated via different shapes, and the safe (danger) zones of x (y) are illustrated via green (red) circles. That is, the separation of \(x\) encapsulates the maximal zone for \(x\) (provided by the absolute value) together with an indication of whether the input is safe or dangerous (provided by the sign). The separation of \(x\) depends only on the classification of \(x\) by the model and the train set. This is because our definition partitions the inputs in \(\mathit{Tr}\) into two sets: one with \(\mathcal{C}(x)\), \(F_{\mathcal{M}}(x)\), and one with all other classifications, \(\overline{F}_{\mathcal{M}}(x)\). These sets vary between models only when they disagree on the classification of \(x\). 
Note that \(x\)'s for which the distance from \(F_{\mathcal{M}}(x)\) equals the distance from \(\overline{F}_{\mathcal{M}}(x)\) are considered dangerous inputs, and their separation measure will be zero. As mentioned, Definition 2 and Definition 3 use an implicit notion of distance and can accept any distance metric (e.g., \(L_{1},L_{2}\) or \(L_{\infty}\)). However, throughout this work, we use \(L_{2}\) as it is a standard measure for safety features in adversarial machine learning Moosavi-Dezfooli et al. (2017), in addition to it being easy to illustrate and intuitive to understand. Moreover, as our work targets real-time confidence estimations, using \(L_{2}\) allows us to leverage standard and well-optimized libraries. Accordingly, all our definitions and calculations assume the \(L_{2}\) metric (Euclidean distances). Nevertheless, Section 4.2.1 shows that other metrics are also feasible. Next, we provide a formula for calculating the separation of a given input \(x\) within the \(L_{2}\) distance metric. **Definition 4**.: _Given a model \(\mathcal{M}\) and an input \(x\), define:_ \[\overline{\mathcal{S}}^{\mathcal{M}}(x)=\min_{x^{\prime\prime}\in\overline{F}_{\mathcal{M}}(x)}\max_{x^{\prime}\in F_{\mathcal{M}}(x)}\frac{d^{2}(x,x^{\prime\prime})-d^{2}(x,x^{\prime})}{2d(x^{\prime},x^{\prime\prime})}\] **Lemma 1**.: _Let \(x,x^{\prime},x^{\prime\prime}\in\mathbb{R}^{n}\) be inputs such that \(d(x,x^{\prime})<d(x,x^{\prime\prime})\). The maximal distance \(M(x,x^{\prime},x^{\prime\prime})\) for which if \(y\in\mathbb{R}^{n}\) such that \(d(x,y)<M(x,x^{\prime},x^{\prime\prime})\), then \(d(y,x^{\prime})<d(y,x^{\prime\prime})\) is_ \[\frac{d^{2}(x,x^{\prime\prime})-d^{2}(x,x^{\prime})}{2d(x^{\prime},x^{\prime\prime})}.\] Proof.: Since any three points in space define a plane, we focus on the plane defined by these three points. Figure 2 demonstrates a geometric positioning of the points and the main constructions in the proof.

Figure 2: Illustration of the proof of Lemma 1.

The perpendicular bisector to the line between \(x^{\prime}\) and \(x^{\prime\prime}\) divides the plane into two parts: one in which all the points are closer to \(x^{\prime\prime}\) than to \(x^{\prime}\) (the lower part in the figure) and one in which all the points are closer to \(x^{\prime}\) than to \(x^{\prime\prime}\) (the upper part in the figure). Our goal is thus to establish the distance between \(x\) and the lower part of the plane. Hence, \(M(x,x^{\prime},x^{\prime\prime})\) amounts to the distance from \(x\) to the perpendicular bisector to the line between \(x^{\prime}\) and \(x^{\prime\prime}\). Using trigonometric calculations, it is straightforward to verify that indeed \[M(x,x^{\prime},x^{\prime\prime})=\frac{d^{2}(x,x^{\prime\prime})-d^{2}(x,x^{\prime})}{2d(x^{\prime},x^{\prime\prime})}.\] **Proposition 1**.: \(\overline{\mathcal{S}}^{\mathcal{M}}(x)\) _is the separation of \(x\) with respect to the model \(\mathcal{M}\) (in Definition 3)._ Proof.: Let \(x\) be a safe input, and \(y\) be an input such that: \[d(x,y)<\min_{x^{\prime\prime}\in\overline{F}_{\mathcal{M}}(x)}\max_{x^{\prime}\in F_{\mathcal{M}}(x)}\frac{d^{2}(x,x^{\prime\prime})-d^{2}(x,x^{\prime})}{2d(x^{\prime},x^{\prime\prime})}.\] We first show that \(y\) is closer to \(F_{\mathcal{M}}(x)\) than to \(\overline{F}_{\mathcal{M}}(x)\). Let \(z^{\prime\prime}\in\overline{F}_{\mathcal{M}}(x)\); it suffices to show that there exists \(z^{\prime}\in F_{\mathcal{M}}(x)\) such that \(d(y,z^{\prime})<d(y,z^{\prime\prime})\).
Notice that: \[d(x,y)<\max_{x^{\prime}\in F_{\mathcal{M}}(x)}\frac{d^{2}(x,z^{\prime\prime}) -d^{2}(x,x^{\prime})}{2d(x^{\prime},z^{\prime\prime})}.\] Therefore, there exist a \(z^{\prime}\in F_{\mathcal{M}}(x)\) for which: \[d(x,y)<\frac{d^{2}(x,z^{\prime\prime})-d^{2}(x,z^{\prime})}{2d(z^{\prime},z^{ \prime\prime})}\] Thus, since \(x\) is a safe input, using Lemma 1, we conclude that \(d(y,z^{\prime})<d(y,z^{\prime\prime})\). The proof follows similar arguments for dangerous inputs, taking the distances as \(-\overline{\mathcal{S}}^{\mathcal{M}}\) and flipping the inequalities. To show the maximality, observe that the intersection point marked by \(w\) in Figure 2, which is at distance \(\overline{\mathcal{S}}^{\mathcal{M}}(x)\) from \(x\), can be easily shown to be of equal distances from \(F_{\mathcal{M}}(x)\) and \(\overline{F}_{\mathcal{M}}(x)\). While separation provides the maximal zone, it is expensive to calculate. As can be seen in Definition 4, to estimate the separation of one specific input, we go over many triplets of inputs. The exact amount is unbounded and depends on the dataset. Thus, separation is infeasible to compute in near real-time. Therefore, when time or computation resources are limited, we require a different and computationally simpler notion. Accordingly, the following section provides an efficient approximation of the separation measure. ### Fast-Separation Approximation We approximate the separation of a given input using only its distance from \(F_{\mathcal{M}}(x)\) and its distance from \(\overline{F}_{\mathcal{M}}(x)\). This simplification allows us to calculate a zone for any given input, which is not necessarily the maximal one. The reliance on these two distances enables a faster calculation since we do not perform an exhaustive search over many triplets of inputs. In particular, we do not consider the geometric positioning of the inputs that determine the distance from these sets. **Definition 5** (Fast-Separation).: _Given a model \(\mathcal{M}\), the fast-separation of an input \(x\), denoted \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\), is defined as:_ \[\underline{\mathcal{S}}^{\mathcal{M}}(x)=\frac{D(x,\overline{F}_{\mathcal{M}}( x))-D(x,F_{\mathcal{M}}(x))}{2}\] Notice that just as is the case for separation, if \(x\) is a safe input, its fast-separation value will be strictly positive and non-positive otherwise. Figure 3 illustrates the notion of fast-separation. In particular, it exemplifies why it only provides an approximation of the more accurate separation measure. It encapsulates a zone that is less than or equal to that of separation. Sub-figure (a) demonstrates a case in which \(\underline{\mathcal{S}}^{\mathcal{M}}(x)=\overline{\mathcal{S}}^{\mathcal{M} }(x)\), while sub-figure (b) presents a case where \(\overline{\mathcal{S}}^{\mathcal{M}}(x)\) is considerably larger than \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\). The separation measure defined as the maximal safe zone is applicable to all norms. However, the explicit formula \(\overline{\mathcal{S}}^{\mathcal{M}}\), given in Definition 4 is only applicable in \(L_{2}\). The following proposition demonstrates that fast separation, \(\underline{\mathcal{S}}^{\mathcal{M}}\), calculates a zone that is always contained in the maximal zone for any distance metric. Thus, it approximates the geometric separation for all metrics as the proof only requires the triangle inequality. 
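To make the two definitions concrete, here is a minimal NumPy sketch that computes both quantities for a single input, assuming the training set is given as a flattened, normalized array `train_X` with labels `train_y` (hypothetical names, not the implementation released with the paper):

```python
import numpy as np

def fast_separation(x, train_X, train_y, pred_label):
    """Definition 5 under the L2 norm: (D(x, F-bar) - D(x, F)) / 2."""
    dists = np.linalg.norm(train_X - x, axis=1)        # d(x, x') for every training input
    same = train_y == pred_label                       # F_M(x): inputs sharing the classification C(x)
    d_same = dists[same].min()                         # D(x, F_M(x))
    d_other = dists[~same].min()                       # D(x, F-bar_M(x))
    return (d_other - d_same) / 2                      # > 0 for safe inputs, <= 0 for dangerous ones

def separation(x, train_X, train_y, pred_label):
    """Definition 4: signed separation under L2; scans all pairs in F x F-bar."""
    F = train_X[train_y == pred_label]
    F_bar = train_X[train_y != pred_label]
    d_F = np.linalg.norm(F - x, axis=1)
    d_Fbar = np.linalg.norm(F_bar - x, axis=1)
    best = np.inf
    for j, x2 in enumerate(F_bar):                     # min over x'' in F-bar_M(x)
        pair = np.linalg.norm(F - x2, axis=1)          # d(x', x'')
        vals = (d_Fbar[j] ** 2 - d_F ** 2) / (2 * pair)
        best = min(best, vals.max())                   # max over x' in F_M(x)
    return best
```

The exact separation iterates over all pairs from \(F_{\mathcal{M}}(x)\times\overline{F}_{\mathcal{M}}(x)\), which is what makes it expensive, whereas the fast variant needs only the two nearest-neighbor distances.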
**Proposition 2**.: _For any metric \(\ell\), and for any input \(x\), \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\) (calculated with respect to \(\ell\)) is a zone of \(x\). That is, \(|\underline{\mathcal{S}}^{\mathcal{M}}(x)|\leq\mathcal{Z}(x)\). Furthermore, \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\) has the same sign as the separation of \(x\)._ Proof.: Let \(x\) be a safe input, we show that \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\) is a zone of \(x\). Let \(y\) be a point such that \[d(x,y)<\underline{\mathcal{S}}^{\mathcal{M}}(x)=\frac{D(x,\overline{F}_{ \mathcal{M}}(x))-D(x,F_{\mathcal{M}}(x))}{2}.\] We show that \(D(y,F_{\mathcal{M}}(x))<D(y,\overline{F}_{\mathcal{M}}(x))\). Take \(z^{\prime}\in F_{\mathcal{M}}(x)\) and \(z^{\prime\prime},w\in\overline{F}_{\mathcal{M}}(x)\) such that \(d(x,z^{\prime})=D(x,F_{\mathcal{M}}(x))\), \(d(x,z^{\prime\prime})=D(x,\overline{F}_{\mathcal{M}}(x))\), and \(d(y,w)=D(y,\overline{F}_{\mathcal{M}}(x))\). Using the triangle inequality we get: \[D(y,F_{\mathcal{M}}(x))\leq d(y,z^{\prime})\leq d(x,z^{\prime})+ d(x,y)\] \[< d(x,z^{\prime})+\frac{d(x,z^{\prime\prime})-d(x,z^{\prime})}{2}= \frac{d(x,z^{\prime\prime})+d(x,z^{\prime})}{2}\] \[= d(x,z^{\prime\prime})-\frac{d(x,z^{\prime\prime})-d(x,z^{\prime })}{2}<d(x,z^{\prime\prime})-d(x,y)\] \[\leq d(x,w)-d(x,y)\leq d(y,w)=D(y,\overline{F}_{\mathcal{M}}(x))\] For dangerous inputs, the proof follows similar arguments, switching \(F_{\mathcal{M}}(x)\) and \(\overline{F}_{\mathcal{M}}(x)\). For the sign of \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\) it is easy to see that for a safe (dangerous) input, \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\) will be positive (negative) and therefore has the same sign as the separation. Proposition 2 shows that \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\) induces a zone that is always smaller than the maximal zone for any distance metric. In the case of \(L_{2}\), we have a formula for calculating the maximal zone (\(\overline{\mathcal{S}}^{\mathcal{M}}(x)\)), and the following proposition provides an approximation bound. **Proposition 3**.: _The following holds for any point \(x\):_ \[|\overline{\mathcal{S}}^{\mathcal{M}}(x)-\underline{\mathcal{S}}^{\mathcal{M} }(x)|\leq\frac{D(x,F_{\mathcal{M}}(x))+D(x,\overline{F}_{\mathcal{M}}(x))}{2}.\] Proof.: We here prove the bound for safe inputs \(x\), the proof for dangerous inputs is similar. Let \(x\) be a safe input. By definition: \[|\overline{\mathcal{S}}^{\mathcal{M}}(x)-\underline{\mathcal{S}}^{ \mathcal{M}}(x)|=\overline{\mathcal{S}}^{\mathcal{M}}(x)-\underline{\mathcal{ S}}^{\mathcal{M}}(x)=\] \[= \min_{x^{\prime\prime}\in\overline{F}_{\mathcal{M}}(x)}\max_{x^ {\prime}\in F_{\mathcal{M}}(x)}\frac{d^{2}(x,x^{\prime\prime})-d^{2}(x,x^{ \prime})}{2d(x^{\prime},x^{\prime\prime})}\] \[\quad-\frac{D(x,\overline{F}_{\mathcal{M}}(x))-D(x,F_{\mathcal{M }}(x))}{2}\] Let \(z^{\prime\prime}\in\overline{F}_{\mathcal{M}}(x)\) be an input such that \(d(x,z^{\prime\prime})=D(x,\overline{F}_{\mathcal{M}}(x))\), and let \(z^{\prime}\in F_{\mathcal{M}}(x)\) be a input for which the maximum on the expression above is obtained. 
Then, we have: \[|\overline{\mathcal{S}}^{\mathcal{M}}(x)-\underline{\mathcal{S}}^{ \mathcal{M}}(x)|\] \[\leq \max_{x^{\prime}\in F_{\mathcal{M}}(x)}\frac{d^{2}(x,z^{\prime \prime})-d^{2}(x,x^{\prime})}{2d(x^{\prime},z^{\prime\prime})}-\frac{d(x,z^{ \prime\prime})-D(x,F_{\mathcal{M}}(x))}{2} \tag{1}\] \[= \frac{d^{2}(x,z^{\prime\prime})-d^{2}(x,z^{\prime})}{2d(z^{\prime },z^{\prime\prime})}-\frac{d(x,z^{\prime\prime})-D(x,F_{\mathcal{M}}(x))}{2}\] (2) \[\leq \frac{d(x,z^{\prime\prime})+d(x,z^{\prime})}{2}-\frac{d(x,z^{ \prime\prime})-D(x,F_{\mathcal{M}}(x))}{2}\] (3) \[= \frac{d(x,z^{\prime})+D(x,F_{\mathcal{M}}(x))}{2}\] (4) \[\leq \frac{D(x,F_{\mathcal{M}}(x))+D(x,\overline{F}_{\mathcal{M}}(x))} {2} \tag{5}\] The first inequality (Equation (1)) holds due to the definition of the minimum function. The second inequality (Equation (3)) is due to the triangle inequality. The last inequality (Equation (5)) holds because, since \(x\) is a safe input, the maximal distance between \(x\) and \(z^{\prime}\) can't be greater than the distance from \(x\) to \(\overline{F}_{\mathcal{M}}(x)\). Notice that the above bound is tight, in the sense that there exists an example witnessing the exact bound, as shown in Figure 4 below. ### Calibration of the Geometric Separation In this section, we use the geometric notions of \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\) and \(\overline{\mathcal{S}}^{\mathcal{M}}(x)\) to derive confidence estimations (\(\mathit{conf}(x)\)). Notice that \(\mathit{conf}(x)\in(0,1)\) while the geometric notions are in \((-\infty,+\infty)\). Next, we explain how to translate between the two. For each value of \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\) (\(\overline{\mathcal{S}}^{\mathcal{M}}(x)\)), we need to match a confidence value. To do so, we split the data into a Validation set, \(\mathit{Vs}\), which is disjoint from the train and test sets. Such a methodology is commonly used in post-hoc calibration methods Guo et al. (2017); Platt (1999); Kull et al. (2017); Mozafari et al. (2018); Tomani et al. (2022); Zhang et al. (2020); Gupta and Ramdas (2021); Kumar et al. (2019). We then measure the accuracy for inputs with similar \(\underline{\mathcal{S}}^{\mathcal{M}}(x)\) (or \(\overline{\mathcal{S}}^{\mathcal{M}}(x)\)) on \(\mathit{Vs}\). At this point, we have pairs \((y,z)\) where \(y\) is a geometric separation value, and \(z\) is the desired confidence value (as measured by the accuracy on \(\mathit{Vs}\)). The next step is to find a low-dimensionality function that maximizes accuracy. Hence, we perform a fitting between \(\underline{\mathcal{S}}^{\mathcal{M}}\) (or \(\overline{\mathcal{S}}^{\mathcal{M}}\)) values and the ratios of correct classifications (on \(\mathit{Vs}\)) for each unique value. E.g., if for \(\underline{\mathcal{S}}^{\mathcal{M}}\) value of \(10\) we see that 90% of the points are classified correctly, then we'll add the pair \(\langle 10,0.9\rangle\) to the fitting function. Intuitively, we expect very low confidence values for highly negative distances and approach 100% confidence when the distances are large and positive. ## 4 Experimental Results In this section, we evaluate the effectiveness of our geometric approach. First, we explain the evaluation methodology in Section 4.1, including the datasets and models. Then we continue our experiment results step by step by gradually explaining the tradeoffs and design decisions we take throughout this work. 
### Methodology #### 4.1.1 Datasets Our evaluation uses the following standard datasets: * _Modified National Institute of Standards and Technology database (MNIST)_ LeCun and Cortes (2010). A dataset that consists of hand-written images designed for training various image processing systems. It includes 70,000 28x28 grayscale images belonging to one of ten labels. * _Fashion MNIST (Fashion)_ Xiao et al. (2017). A dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories. * _German Traffic Signs Recognition Benchmark (GTSRB)_ Houben et al. (2013). A large image set of traffic signs for the single-image, multi-class classification problem. It consists of 50,000 RGB images of traffic signs, belonging to 43 classes. * _American Sign Language (SignLang)_Techperson (2017). A database of hand gestures representing a multi-class problem with 24 classes of letters. It consists of 30,000 28x28 grayscale images. * _Canadian Institute for Advanced Research (CIFAR10)_Krizhevsky et al. (2009). A dataset containing 32x32 RGB images of 60,000 objects from 10 classes. For each dataset, we randomly partitioned the data into three subsets: train set _Tr_ (60%), validation set _Vs_ (20%) and test set _Ts_ (20%). As is standard practice, we used normalized datasets (e.g., the same image size for all images), see Leman et al. (2022) for details. The train set is used to calculate (fast-)separation and train the model. The validation set is used to evaluate the confidence estimation associated with each (fast-)separation value. These values, in turn, are used to fit an isotonic function. Finally, the test set is used to evaluate the confidence on new inputs that were _not_ present in the train and validation sets. #### 4.1.2 Models In our evaluation, we use the following popular machine learning models: Random Forest (RF) Breiman (2001), Gradient Boosting Decision Trees (GB) Mason et al. (1999), and Convolutional Neural Network (CNN) Gu et al. (2018). We chose these models because they are different: RF and GB are tree-based, while CNN is a neural network. For RF and GB, we configured the hyperparameters (e.g., the maximal depth of trees) by cross-validation on the train set via the random search technique Bergstra and Yoshua (2012). For CNN, we used the configuration suggested by practitioners. Our specific configurations as well as the accuracy scores of each of the models are detailed in Leman et al. (2022). #### 4.1.3 Evaluation Algorithms To evaluate our method, we compare our (fast-)separation-based confidence estimation to the following methods: the built-in isotonic regression calibration implemented by Sklearn library, \(Iso\) Zadrozny and Elkan (2002); the built-in Platt scaling calibration method implemented by Sklearn library, \(Platt\) Platt (1999); the scaling-binning calibrator, \(SBC\) Kumar et al. (2019) implemented by the same authors repository; the histogram-binning, \(HB\) Gupta and Ramdas (2021) implemented by the same authors repository; the beta calibrator, \(Beta\) Kull et al. (2017) implemented by Kuppers et al. (2020); the bayesian binning into quantiles calibrator, \(BBQ\) Naeini et al. (2015) implemented by Kuppers et al. (2020); the temperature scaling calibrator, \(TS\) Guo et al. (2017a) implemented by Kerrigan et al. (2021); and the ensemble temperature scaling calibrator, \(ETS\) Zhang et al. (2020) implemented by Kerrigan et al. (2021). Notice that \(TS\) and \(ETS\) are calibration methods for neural networks thus we only apply those to CNNs. 
Each method receives the same baseline model as an input, yielding a slightly different calibrated model. Note that our method is evaluated against the uncalibrated model as our method does not affect the model. Moreover, it allows us to compare our method against different calibration methods, as shown in Table 2. To evaluate the confidence predictions, we use the _Expected Calibration Error (ECE)_, which is a standard method to evaluate confidence calibration of a model Xing et al. (2020); Krishnan and Tickoo (2020). Concretely, the sample of \(n\) predictions is partitioned into \(M\) equally spaced bins \((B_{m})_{m\leq M}\), and ECE measures the difference between the sample accuracy in the \(m^{th}\) bin and the average confidence in it Naeini et al. (2015). Formally, ECE is calculated by the following formula: \[ECE=\sum_{m=1}^{M}\frac{|B_{m}|}{n}\left|\mathrm{acc}\left(B_{m}\right)-\mathrm{conf}\left(B_{m}\right)\right|\] where \(\mathrm{acc}\left(B_{m}\right)=\frac{1}{|B_{m}|}\cdot|\{x\in B_{m}:\mathcal{C}(x)\text{ is correct}\}|\), and \(\mathrm{conf}\left(B_{m}\right)=\frac{1}{|B_{m}|}\sum_{x\in B_{m}}\mathit{conf}(x)\).

### Empirical Study

#### 4.2.1 Distance metrics

As mentioned in Section 3.1, the notion of geometric separation is applicable to any norm. In fact, as shown in Proposition 2, the fast-separation approximation provides a zone under any norm. Thus, we have evaluated the ECE obtained from fast-separation under different norms. The results are given in Table 2. As can be observed, the ECE is low regardless of the selection of norm, indicating the attractiveness of the geometric signal. However, while some norms are more accurate for some datasets, there is no universally superior norm. Thus, the following experiments focus on the \(L_{2}\) norm for the reasons specified in Section 3.1.

#### 4.2.2 Fitting Function

As mentioned in Section 3.3, for our fitting function we can use any existing calibration function. Post-hoc calibration methods based on fitting functions typically use either a logistic (Sigmoid) or an isotonic regression Zadrozny and Elkan (2002). Isotonic regression fits a non-decreasing free-form line to a sequence of observations. In comparison, Sigmoid is a continuous, S-shaped function. We used both fitting functions on our fast-separation values and obtained similar accuracy. We opt here to present the isotonic regression as it provides the best empirical results, as motivated by Figure 5.
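As a concrete illustration of the calibration step of Section 3.3 and the ECE metric above, the sketch below fits an isotonic map from fast-separation values to empirical accuracy on the validation set and evaluates ECE on the test set. The variable names and the `fast_separation` helper are assumptions carried over from the earlier sketch, not the released code.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_confidence_map(val_sep, val_correct):
    """Fit a non-decreasing map from fast-separation values to P(correct)."""
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    iso.fit(val_sep, val_correct.astype(float))   # val_correct: 1 if C(x) was right, else 0
    return iso

def expected_calibration_error(conf, correct, n_bins=15):
    """ECE with equally spaced confidence bins, as defined in Section 4.1.3."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(conf)
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (conf >= lo) & (conf <= hi) if hi == 1.0 else (conf >= lo) & (conf < hi)
        if in_bin.any():
            ece += in_bin.sum() / n * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

# Usage sketch: val_sep / test_sep hold fast-separation values, and
# val_correct / test_correct hold 0/1 correctness of the model's predictions.
# iso = fit_confidence_map(val_sep, val_correct)
# test_conf = iso.predict(test_sep)
# print(expected_calibration_error(test_conf, test_correct))
```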
Table: ECE obtained with the fast-separation signal under the \(L_{1}\), \(L_{2}\), and \(L_{\infty}\) distance metrics, for each dataset and model.
Figure 5 illustrates an example of the success ratio of the Random Forest model for MNIST inputs with varying values of \(\underline{\mathcal{S}}^{\mathcal{M}}\) scores (similar behavior was observed for the various models and datasets). We clustered inputs with a similar score together (into 50 bins overall) as each classification is correct or not, and we are looking for the average. The black line represents the Sigmoid function, and the green line represents the isotonic regression. As can be observed, both regressions are nearly identical on all the points with positive \(\underline{\mathcal{S}}^{\mathcal{M}}\) values. We eventually chose isotonic regression because it better fitted the few points with negative \(\underline{\mathcal{S}}^{\mathcal{M}}\) values.
Interestingly, these points were consistently a poor fit for the Sigmoid regression, rendering it slightly less accurate on average. Also, observe that the transition is around the value 0, indicating that the distinction between safe and dangerous points is meaningful in confidence evaluation.

### Confidence Evaluation

This section presents the experimental results of the confidence estimation.

#### 4.3.1 Estimating Confidence

Table 2 presents the main experimental results of our work. The table summarizes ECEs for our method (with bin size \(15\)). Each entry in the table describes the ECE and the 95% confidence interval. We highlight the most accurate method for each experiment in bold. In this experiment, we perform one hundred random splits of the data into train, validation, and test sets for each model and dataset. We then measure the ECE of the confidence estimation for all test set inputs, average the result, and take the 95% confidence intervals.

Figure 5: An illustration of the inputs to the fitting function (blue diamonds and red dots), and the functions fitted by Sigmoid (black line) and isotonic regression (green line). The inputs are for the MNIST dataset and the Random Forest model.

First, observe that \(\underline{\mathcal{S}}^{\mathcal{M}}\) and \(\overline{\mathcal{S}}^{\mathcal{M}}\) yield very similar ECEs, and that the differences between them are usually statistically insignificant. Thus, we conclude that \(\underline{\mathcal{S}}^{\mathcal{M}}\) is a good approximation of \(\overline{\mathcal{S}}^{\mathcal{M}}\) despite being considerably simpler to compute. The next interesting comparison is between \(\underline{\mathcal{S}}^{\mathcal{M}}\) and \(Iso\). We use the same fitting function (isotonic regression) in both cases, but \(Iso\) performs the calibration on the model's natural uncertainty estimation, and \(\underline{\mathcal{S}}^{\mathcal{M}}\) performs the calibration on geometric distances. Our \(\underline{\mathcal{S}}^{\mathcal{M}}\) almost consistently improves the confidence estimations across the board compared to \(Iso\), \(Platt\), \(SBC\), \(HB\), \(TS\), \(ETS\), \(Beta\), and \(BBQ\). Specifically, we derive improvements of up to 99% in almost all tested models and datasets. Such results demonstrate the potential of geometric signals to improve the effectiveness of uncertainty estimation. Table 3 describes the improvement of our fast-separation-based method over recently proposed post-hoc calibration techniques. The improvement is calculated as the difference between the competitor's ECE and our ECE, divided by the competitor's ECE. Observe that our method always improves on the alternatives except for CNNs on the Fashion dataset, where it loses by 5%. Such results position our geometric method as a competitive approach for confidence estimation. However, note that our fast-separation can be used alongside the existing methods.

#### 4.3.2 Tabular Data

Fast-separation was designed for image data sets since they appear to be most governed by geometry, that is, different images will likely be geometrically separable.
Nonetheless, as a controlled \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline **Dataset** & **Model** & \(\underline{\mathcal{S}}^{\mathcal{M}}\) & \(\overline{\mathcal{S}}^{\mathcal{M}}\) & **Iso** & **Platt** & **SBC** & **HB** & **BBQ** & **Beta** & **TS** & **ETS** \\ \hline \hline \multirow{4}{*}{**Image**} & CNN & **0.15\({}_{\pm 0.01}\)** & **0.15\({}_{\pm 0.01}\)** & 0.17\({}_{\pm 0.01}\) & 0.52\({}_{\pm 0.04}\) & 8.91\({}_{\pm 0.16}\) & 0.32\({}_{\pm 0.02}\) & 0.22\({}_{\pm 0.01}\) & 0.64\({}_{\pm 0.02}\) & 0.20\({}_{\pm 0.01}\) & 0.20\({}_{\pm 0.01}\) \\ \cline{2-11} & RF & **0.35\({}_{\pm 0.02}\)** & 0.36\({}_{\pm 0.02}\) & 0.92\({}_{\pm 0.03}\) & 1.49\({}_{\pm 0.02}\) & 3.92\({}_{\pm 0.11}\) & 0.46\({}_{\pm 0.02}\) & 1.13\({}_{\pm 0.03}\) & 0.37\({}_{\pm 0.02}\) & - & - \\ \cline{2-11} & GB & **0.34\({}_{\pm 0.02}\)** & **0.34\({}_{\pm 0.02}\)** & 1.74\({}_{\pm 0.03}\) & 1.97\({}_{\pm 0.03}\) & 8.46\({}_{\pm 0.07}\) & 0.45\({}_{\pm 0.02}\) & 0.65\({}_{\pm 0.03}\) & 0.47\({}_{\pm 0.02}\) & - & - \\ \hline \multirow{4}{*}{**Image**} & CNN & **0.37\({}_{\pm 0.04}\)** & **0.37\({}_{\pm 0.04}\)** & 0.38\({}_{\pm 0.04}\) & 2.83\({}_{\pm 0.53}\) & 29.01\({}_{\pm 0.49}\) & 1.22\({}_{\pm 0.18}\) & 1.08\({}_{\pm 0.21}\) & 1.98\({}_{\pm 0.25}\) & 0.90\({}_{\pm 0.11}\) & 0.77\({}_{\pm 0.09}\) \\ \cline{2-11} & RF & **0.37\({}_{\pm 0.02}\)** & 0.38\({}_{\pm 0.02}\) & 2.55\({}_{\pm 0.04}\) & 4.19\({}_{\pm 0.03}\) & 13.99\({}_{\pm 0.11}\) & 0.85\({}_{\pm 0.05}\) & 3.08\({}_{\pm 0.04}\) & 0.56\({}_{\pm 0.03}\) & - & - \\ \cline{2-11} & GB & **0.61\({}_{\pm 0.03}\)** & 0.63\({}_{\pm 0.03}\) & 10.04\({}_{\pm 0.07}\) & 19.63\({}_{\pm 0.03}\) & 31.25\({}_{\pm 0.12}\) & 1.42\({}_{\pm 0.05}\) & 9.28\({}_{\pm 0.11}\) & 5.36\({}_{\pm 0.10}\) & - & - \\ \hline \multirow{4}{*}{**Image**} & CNN & **0.09\({}_{\pm 0.05}\)** & 0.10\({}_{\pm 0.06}\)** & **0.09\({}_{\pm 0.05}\)** & 0.12\({}_{\pm 0.07}\) & 17.77\({}_{\pm 0.21}\) & 1.24\({}_{\pm 1.03}\) & 1.24\({}_{\pm 1.03}\) & 1.24\({}_{\pm 1.04}\) & 0.11\({}_{\pm 0.01}\) & 0.12\({}_{\pm 0.01}\) \\ \cline{2-11} & RF & **0.08\({}_{\pm 0.01}\)** & **0.08\({}_{\pm 0.01}\)** & 0.46\({}_{\pm 0.02}\) & 1.76\({}_{\pm 0.02}\) & 17.34\({}_{\pm 0.18}\) & 0.16\({}_{\pm 0.02}\) & 0.86\({}_{\pm 0.02}\) & 0.29\({}_{\pm 0.01}\) & - & - \\ \cline{2-11} & GB & **0.07\({}_{\pm 0.01}\)** & **0.07\({}_{\pm 0.01}\)** & 4.01\({}_{\pm 0.05}\) & 5.93\({}_{\pm 0.06}\) & 31.01\({}_{\pm 0.08}\) & 0.46\({}_{\pm 0.03}\) & 0.78\({}_{\pm 0.05}\) & 0.70\({}_{\pm 0.03}\) & - & - \\ \hline \multirow{4}{*}{**Image**} & CNN & 0.75\({}_{\pm 0.03}\) & 0.75\({}_{\pm 0.04}\) & **0.71\({}_{\pm 0.03}\)** & 6.60\({}_{\pm 0.72}\) & 7.36\({}_{\pm 0.20}\) & 1.10\({}_{\pm 0.05}\) & 2.18\({}_{\pm 0.15}\) & 9.15\({}_{\pm 0.10}\) & 0.82\({}_{\pm 0.04}\) & 0.89\({}_{\pm 0.04}\) \\ \cline{2-11} & RF & **0.78\({}_{\pm 0.04}\)** & 0.82\({}_{\pm 0.04}\) & 1.03\({}_{\pm 0.05}\) & 3.75\({}_{\pm 0.04}\) & 3.52\({}_{\pm 0.10}\) & 1.07\({}_{\pm 0.05}\) & 1.23\({}_{\pm 0.05}\) & 0.83\({}_{\pm 0.03}\) & - & - \\ \cline{2-11} & GB & **0.79\({}_{\pm 0.04}\)** & **0.79\({}_{\pm 0.084}\)** & 3.82\({}_{\pm 0.06}\) & 5.01\({}_{\pm 0.05}\) & 3.90\({}_{\pm 0.12}\) & 1.01\({}_{\pm 0.04}\) & 1.41\({}_{\pm 0.05}\) & 0.97\({}_{\pm 0.05}\) & - & - \\ \hline \multirow{4}{*}{**Image**} & CNN & **1.12\({}_{\pm 0.07}\)** & 1.16\({}_{\pm 0.07}\) & 1.28\({}_{\pm 0.06}\) & 6.05\({}_{\pm 0.21}\) & 3.57\({}_{\pm 0.10}\) & 4.10\({}_{\pm 0.10}\) & 5.31\({}_{\pm 0.24}\) & 24.76\({}_{\pm 0.21}\) & 3.68\({}_{\pm 0.08}\) & 
3.45\({}_{\pm 0.12}\) \\ \cline{2-11} & RF & **1. experiment, we also tested our method on non-visual tabular data. Here, we have no apriori intuition that the geometric signal is feasible. We used two datasets: Red wine quality Cortez et al. (2009), which contains a total of twelve variables and 1,599 observations and six classes, and airline passenger satisfaction Klein (2019), which contains a total of twenty-five variables, 129,880 observations, and two classes. In most experiments, we saw a small improvement of ranging between 1% to 77% in accuracy. Thus, we conclude that our method achieves good results on tabular data as well. However, the improvement was not uniform and there were a few cases where Iso was superior to our own. Thus, the geometric signal may also be useful for non-visual data but further investigations are required to adapt the method to various datasets. ## 5 Optimizing Performance As shown in the previous section, the fast-separation approximation yields competitive confidence estimations promptly for small and medium-sized datasets. Nonetheless, our approach may still be too slow to handle large datasets due to the need to calculate geometric notions on the entire training set. To address this bottleneck, we explore the impact of several standard methods for dimensionality reduction on the quality of our approach for confidence estimation. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline **Dataset** & **Model** & **Iso** & **Platt** & **SBC** & **HB** & **BBQ** & **Beta** & **TS** & **ETS** \\ \hline \multirow{4}{*}{**SBC**} & CNN & 11.8\% & 71.2\% & 98.3\% & 53.1\% & 31.8\% & 76.6\% & 25.0\% & 25.0\% \\ & RF & 62.0\% & 76.5\% & 91.1\% & 23.9\% & 69.0\% & 5.4\% & - & - \\ & GB & 80.5\% & 82.7\% & 96.0\% & 24.4\% & 47.7\% & 27.7\% & - & - \\ \hline \multirow{4}{*}{**SBC**} & CNN & 2.6\% & 86.9\% & 98.7\% & 69.7\% & 65.7\% & 81.3\% & 58.9\% & 51.9\% \\ & RF & 85.5\% & 91.2\% & 97.4\% & 56.5\% & 88.0\% & 33.9\% & - & - \\ & GB & 93.9\% & 96.9\% & 98.0\% & 57.0\% & 93.4\% & 88.6\% & - & - \\ \hline \multirow{4}{*}{**SBC**} & CNN & 0.0\% & 25.0\% & 99.5\% & 92.7\% & 92.7\% & 18.2\% & 25.0\% \\ & RF & 82.6\% & 95.5\% & 99.5\% & 50.0\% & 90.7\% & 72.4\% & - & - \\ & GB & 98.3\% & 98.8\% & 99.8\% & 84.8\% & 91.0\% & 90.0\% & - & - \\ \hline \multirow{4}{*}{**SBC**} & CNN & -5.6\% & 88.6\% & 89.8\% & 31.8\% & 65.6\% & 91.8\% & 8.5\% & 15.7\% \\ & RF & 24.3\% & 79.2\% & 77.8\% & 27.1\% & 36.6\% & 6.0\% & - & - \\ \cline{1-1} & GB & 79.3\% & 84.2\% & 79.7\% & 21.8\% & 44.0\% & 18.6\% & - & - \\ \hline \multirow{4}{*}{**SBC**} & CNN & 12.5\% & 81.5\% & 68.6\% & 72.7\% & 78.9\% & 95.5\% & 69.6\% & 67.5\% \\ \cline{1-1} & RF & 58.9\% & 69.8\% & 54.5\% & 39.6\% & 64.3\% & 17.0\% & - & - \\ \cline{1-1} & GB & 81.7\% & 83.6\% & 46.4\% & 41.2\% & 6.6\% & 52.0\% & - & - \\ \hline \hline \end{tabular} \end{table} Table 3: Relative improvement percentage of ECE of \(\hat{\mathcal{S}}^{\mathcal{M}}\) over other calibration methods. ### Handling Large Datasets Large datasets are datasets with a large number of images or with large images with many pixels. In such cases, each calculation of fast separation requires potentially going over many comparisons that slow down the process. 
Here, we explore ways to either reduce the image size, or to reduce the number of images.2 The following list reviews various known techniques for reducing the dimensionality of the data, the first four reduce the number of pixels, and the last two reduce the number of images in the set used to calculate geometric distances. Footnote 2: One can also directly manipulate the searching algorithm to improve the calculation complexity of the nearest neighbor by, e.g., randomization or special data structures. We leave exploring this option for future work. In order to have a fair comparison, we define the reduction parameter \(t\) to indicate the amount of data reduced in each method. In each method, a reduction parameter \(t\) implies that we reduce the dataset size by a factor of \(t^{2}\). E.g., in the Pooling technique, we can reduce 2x2 images into a pixel reducing the image size, while K-means would reduce the number of images, and both would reduce it by a factor of four so that the total number of pixels in the set is the same for each reduction parameter value for all the methods. **Pooling**Mosteller (1948) is an operation that calculates a function for patches of a feature map and uses it to create a down-sampled (pooled) feature map. For example, if one wants a 2-pool of an image, one reduces its size by 2x2, and every square of 2x2 is then represented as the output of the function on the squared elements. Some broadly used functions for pooling are average (\(pool\)) and maximum (\(maxpool\)). **Principal Component Analysis (\(PCA\))**F.R.S. (1901) linearly transforms the data into a new coordinate system where most of the variation in the data can be described with fewer dimensions than the initial data. For reduction parameter 2 we reduce each image to a new smaller image with a reduction factor of four in the number of pixels. **Resizing using a Bilinear Interpolation (\(RBI\))**Smith (1981) is a generalization of single dimension linear interpolation. RBI performs linear interpolation in one direction and then again in the other direction. Resizing using a Bilinear Interpolation is common in computer vision applications that are based on convolutional neural networks. For a reduction parameter 2 we resize the image to a new image in which both the length and width are two times smaller, ending with an image four times smaller than the original one. **Sampling random pixels (\(Rand_{pix}\))** reduces the number of pixels in the metadata by a random sample. Notice that this approach can be viewed as the baseline for other pixel-reducing techniques. We chose the number of pixels sampled to be the original pixel number divided by the squared reduction parameter. **K-means**MacQueen (1967) clustering is a vector quantization method aiming to partition \(n\) observations into \(k\) clusters in which each observation belongs to the cluster with the nearest mean (cluster centroid). K-means clustering minimizes variances in the clusters (squared Euclidean distances). Here, we set \(k\) to be the reduced dimension of the compressed dataset. E.g., if the original dataset had 10,000 images, and we set \(k=1,000\), we get a reduction factor of x10 from \(10,000\) dimensions to \(1,000\). When using this method we first find the centroids of the dataset, and then use these as the metadata for calculating geometric separation. **Sampling the training set (\(Rand_{set}\))** reduces the number of inputs in the training set, by picking a random sample. 
We chose the sample size to be the dataset size divided by the squared reduction parameter.

### Experimental Results

To evaluate the effect of these reductions on our algorithm, we apply the reduction to the whole dataset and then calculate the fast-separation values on the reduced dataset. Note that the models are trained on the original dataset, so accuracy is not affected. For RGB images, we further converted the color images to grayscale, which reduced the size of the images by a factor of 3 while keeping each image as close as possible to the original one. The experiments were executed on a desktop PC with an Intel(R) 16 Cores(TM) i7-10700 CPU @ 2.90GHz and 16GB RAM.

Figures 6 and 7 show a comparison of the speed and accuracy of the various methods in the Random Forest model. As shown in Figure 6, all data optimizations increase the number of predictions per second, and we can readily reach several hundred estimations per second, which is a sufficient speedup for our needs. All methods show almost the same speedup for each hyperparameter value, except for k-means, which sometimes has a better speedup, and RBI, which has a slightly lower speedup. Since there is little variability in the experiments, the confidence intervals are barely visible. Our experiments show that the time performance is not affected by the model. Thus, Figure 6 presents only the results for the Random Forest model, i.e., the average number of predictions per second. Observe that the number of predictions per second is the same for all other models.

Importantly, this improvement in runtime does not come at a meaningful cost for the confidence estimation, as shown in Figure 7. While the error in the calibration estimation slightly changes across different methods and reduction parameters, the changes seem insignificant. Specifically, in the SignLanguage dataset we observe an increase in the ECE, which we believe is due to the fact that the original dataset is already quite small, rendering the data-size optimization pointless. Our experiments also show similar behavior across different models. For example, Figure 8 shows that all three models obtain similar errors with max-pooling with different parameters. Moreover, our results outperform most state-of-the-art algorithms even with a 4-pool.

Figure 6: Time comparison between various reduction methods on all datasets with the Random Forest model. The error bars show the 95% confidence interval over 10 shuffles. The colors denote the various reduction parameters.

Figure 7: ECE comparison between various reduction methods on all datasets with the Random Forest model. The error bars show the 95% confidence interval over 10 shuffles. The black line shows the ECE score of the method without any reduction. The colors denote the various reduction parameters.

Figure 8: ECE measures with 95% confidence intervals on 10 shuffles for various datasets on several models after applying max-pooling with different reduction parameters.

When using our method one needs to ship, besides the model and the fitting function, also the dataset itself, since the calculation of the fast-separation requires the calculation of distances to all images in the training set. This may imply a memory overhead, which can be critical when using big datasets. Using a reduction of the dataset allows us to reduce the training set size that needs to be shipped. As we have shown, the reduced dataset still obtains improved results, thus freeing memory usage of the algorithm. Users can also predetermine the trade-off they would like between throughput or memory and ECE and adjust the reduction parameters accordingly.
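To make the interplay between data reduction and the geometric signal concrete, here is a minimal sketch of how a reduction step can feed the fast-separation computation. It is an illustration only, not the authors' implementation: the helper names (`reduce_maxpool`, `reduce_kmeans`, `fast_separation`) are invented for this example, the separation score is a plausible nearest-neighbour gap rather than the paper's exact definition, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def reduce_maxpool(images, t):
    """Max-pool every (h, w) image over non-overlapping t x t patches and flatten."""
    n, h, w = images.shape
    h2, w2 = h // t, w // t
    patches = images[:, :h2 * t, :w2 * t].reshape(n, h2, t, w2, t)
    return patches.max(axis=(2, 4)).reshape(n, -1)

def reduce_kmeans(vectors, k, seed=0):
    """Replace a set of flattened images by k centroids (fewer reference points)."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(vectors).cluster_centers_

def fast_separation(x, same_class, other_class):
    """Half the gap between the nearest other-class and nearest same-class distances."""
    d_same = NearestNeighbors(n_neighbors=1).fit(same_class).kneighbors([x])[0][0, 0]
    d_other = NearestNeighbors(n_neighbors=1).fit(other_class).kneighbors([x])[0][0, 0]
    return (d_other - d_same) / 2.0

# Toy usage: 200 random 28x28 "images", two classes, reduction parameter t = 2.
rng = np.random.default_rng(0)
images = rng.random((200, 28, 28))
labels = rng.integers(0, 2, size=200)
reduced = reduce_maxpool(images, t=2)                  # 200 x 196 instead of 200 x 784
centroids0 = reduce_kmeans(reduced[labels == 0], k=20)
centroids1 = reduce_kmeans(reduced[labels == 1], k=20)
same = centroids0 if labels[0] == 0 else centroids1
other = centroids1 if labels[0] == 0 else centroids0
score = fast_separation(reduced[0], same, other)
print(score)  # the raw geometric signal, to be calibrated with e.g. isotonic regression
```

With a reduction parameter \(t\), `reduce_maxpool` shrinks every image by a factor of \(t^{2}\) in pixels, while `reduce_kmeans` shrinks the number of stored training points by a comparable factor, matching the convention used in the comparison above; the returned score would then be turned into a confidence value by a post-hoc calibration step.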
## 6 Conclusion

Our work introduces geometric separation-based algorithms for confidence estimation in machine learning models. Specifically, we measure a geometric separation score and use the specific model to translate each score value into a confidence value using a standard post-hoc calibration method. Thus, inputs close to training set examples of the same class receive higher confidence than those close to examples with a different classification. Our algorithms depend on the specific model only as a black box, resulting in methods that work for all machine learning models.

Our evaluation shows that geometric separation improves confidence estimations in visual workloads. However, calculating geometric separation is computationally complex and time intensive. Thus, we suggest multiple approximation techniques to speed up the process and make it practical. Our extensive evaluation shows that such approximations retain most of the benefits of geometric separation and drastically improve confidence estimation while supporting many calculations per second, enabling real-time applications. For example, we can process live camera feeds at multiple hundreds of calculations per second.

Our work is unique because it extracts a new external signal to derive confidence estimations. Thus, we can leverage existing post-hoc calibration techniques to calibrate our signal and meet various optimization criteria. We showed that the same calibration method (isotonic regression) yields a lower ECE when performed on the geometric signal rather than on the model's original signal. The achieved accuracy improves on a diverse set of recently proposed calibration methods. Notably, our approach reduces the error in confidence estimations by up to 99% compared to alternative methods (depending on the specific dataset and model).

Looking into the future, we plan to address the dependence of this work on normalized inputs and to tackle datasets with variable-sized images. In such datasets, the geometric distances may also depend on the resolution and alignment of the object. As a further direction, we plan to use the CNN's middle-layer latent space as the input (rather than the original images) to the geometric separation calculation. In any case, the ability to derive fast approximations of geometric separation would be a valuable tool in future research.
2303.12449
The Hyperbolic Plane in $\mathbb{E}^3$
We build an explicit $C^1$ isometric embedding $f_{\infty}:\mathbb{H}^2\to\mathbb{E}^3$ of the hyperbolic plane whose image is relatively compact. Its limit set is a closed curve of Hausdorff dimension 1. Given an initial embedding $f_0$, our construction generates iteratively a sequence of maps by adding at each step $k$ a layer of $N_{k}$ corrugations. To understand the behavior of $df_\infty$ we introduce a $formal$ $corrugation$ $process$ leading to a $formal$ $analogue$ $\Phi_{\infty}:\mathbb{H}^2\to \mathcal{L}(\mathbb{R}^2,\mathbb{R}^3)$. We show a self-similarity structure for $\Phi_{\infty}$. We next prove that $df_\infty$ is close to $\Phi_{\infty}$ up to a precision that depends on the sequence $N_*:= (N_{k})_k$. We then introduce the $pattern$ $maps$ $\boldsymbol{\nu}_{\infty}^\Phi$ and $\boldsymbol{\nu}_{\infty}$, of respectively $\Phi_{\infty}$ and $df_\infty$, that together with $df_0$ entirely describe the geometry of the Gauss maps associated to $\Phi_{\infty}$ and $df_\infty$. For well chosen sequences of corrugation numbers, we finally show an asymptotic convergence of $\boldsymbol{\nu}_{\infty}$ towards $\boldsymbol{\nu}_{\infty}^\Phi$ over circles of rational radii.
Vincent Borrelli, Roland Denis, Francis Lazarus, Mélanie Theillière, Boris Thibert
2023-03-22T10:36:04Z
http://arxiv.org/abs/2303.12449v2
# Hyperbolic Plane in \(\mathbb{R}^{3}\) ###### Abstract We build an explicit \(C^{1}\) isometric embedding \(f_{\infty}:\mathbb{H}^{2}\to\mathbb{E}^{3}\) of the hyperbolic plane whose image is relatively compact. Its limit set is a closed curve of Hausdorff dimension 1. Our construction generates iteratively a sequence of maps by adding at each step \(k\) a layer of \(N_{k}\) corrugations. In parallel, we introduce a formal corrugation process leading to a limit map \(\Phi_{\infty}:\mathbb{H}^{2}\to\mathscr{L}(\mathbb{R}^{2},\mathbb{R}^{3})\) that we call the _formal analogue_ of \(df_{\infty}\). We show that \(\Phi_{\infty}\) approximates \(df_{\infty}\). In fact, when the sequence of corrugation numbers \((N_{k})_{k}\) grows to infinity the map \(\Phi_{\infty}\) encodes the asymptotic behavior of \(df_{\infty}\). Moreover, analysing the geometry of \(\Phi_{\infty}\) appears much easier than studying \(f_{\infty}\). In particular, we are able to exhibit a form of self-similarity of \(\Phi_{\infty}\). ###### Contents * 1 Introduction * 2 The general strategy * 2.1 Working on \(\overline{\mathbb{H}^{2}}\) * 2.2 A Nash & Kuiper-like approach * 2.3 Holder regularity * 3 The corrugation process * 3.1 The target differential * 3.2 The corrugation frame * 3.3 The corrugation process on \(\widetilde{C}\) * 3.4 Properties of the corrugation process * 4 Isometric 3-corrugated immersions * 4.1 Primitive basis of the cone of metrics * 4.2 Definition of the 3-corrugated process * 4.3 Properties of 3-corrugated limit maps * 4.4 Proof of the existence part * 4.5 End of proof of Proposition 19 * 5 Proofs of Theorem 1 and Proposition 3 * 5.1 The wavefront forms * 5.2 The initial embedding. * 5.3 The sequence of metrics * 5.4 Existence and regularity of the limit map * 5.5 Embedded nature of the limit map * 6 Formal Corrugation Process * 6.1 The sequence \((\Phi_{k,i})_{k,i}\) * 6.2 The map \(\Phi\mapsto\Phi^{c}\) * 6.3 Comparing \(\Phi_{k,i}\) to \(df_{k,i}\) * 6.4 Proof of Theorem 4 * 7 Gauss map * 7.1 The corrugation matrices * 7.2 The formal corrugation matrices * 7.3 Properties of the formal analogue * 7.4 Proof of Theorem 6 * 7.5 Proof of Proposition 7.6 Structure of the Gauss map ## 1 Introduction The Hilbert-Efimov theorem asserts that the hyperbolic plane \(\mathbb{H}^{2}\) does not admit any \(C^{2}\) isometric embedding into the Euclidean 3-space \(\mathbb{E}^{3}\)[6, 5]. In contrast, the \(C^{1}\) embedding theorem of Nash [10] as extended by Kuiper [7, 8] shows the existence of infinitely many \(C^{1}\) isometric embeddings of \(\mathbb{H}^{2}\) into \(\mathbb{E}^{3}\). Since \(\mathbb{H}^{2}\) is non-compact, the question of the behavior of such embeddings at infinity arises naturally. Following Kuiper [8] and De Lellis [9], we consider the limit set \(\mathrm{Limset}(f)\) of a map \(f:\mathbb{H}^{2}\to\mathbb{E}^{3}\). This is the set of points in \(\mathbb{E}^{3}\) that are limits of sequences \((f(p_{n}))_{n}\), where \((p_{n})_{n}\) is a sequence of points of \(\mathbb{H}^{2}\) converging to infinity in the Alexandroff one point compactification of \(\mathbb{H}^{2}\). In 1955, Kuiper [8] exhibited an isometric embedding of \(\mathbb{H}^{2}\) in \(\mathbb{E}^{3}\) whose image is unbounded and with void limit set. More than sixty years later, De Lellis [9] was able to enhance this result in codimension two for any Riemannian \(n\)-dimensional manifold by prescribing the limit set. 
In the case of \(\mathbb{H}^{2}\) the existence of a nonempty limit set in \(\mathbb{E}^{4}\) implies the following counter-intuitive fact: any point of \(\mathrm{Limset}(f)\) is at infinite distance from every other point of \(f(\mathbb{H}^{2})\) for the induced distance of \(\mathbb{E}^{4}\). Indeed any path joining a point of \(\mathbb{H}^{2}\) to a point on the boundary at infinity \(\partial_{\infty}\mathbb{H}^{2}\) has infinite length as well as its image in \(\mathbb{E}^{4}\). In this paper we consider isometric embeddings of \(\mathbb{H}^{2}\) in codimension one with nonempty limit set. We construct maps that naturally extend to the boundary at infinity \(\partial_{\infty}\mathbb{H}^{2}\) so that their limit sets are images of \(\partial_{\infty}\mathbb{H}^{2}\) by the extensions. We thus obtain maps defined over the compact domain \(\overline{\mathbb{H}^{2}}=\mathbb{H}^{2}\cup\partial_{\infty}\mathbb{H}^{2}\) so that we may now study the regularity of the extensions transversely to the boundary. We shall work with the Poincare disk model \(\mathbb{H}^{2}=(\mathrm{Int}D^{2},h)\), where \(D^{2}\) is the closed unit disk of the Euclidean plane \(\mathbb{E}^{2}\) and \(h=4\frac{dx^{2}+dy^{2}}{(1-x^{2}-y^{2})^{2}}\) is the hyperbolic metric. We obtain the following results. **Theorem 1**.: _There exists a map \(D^{2}\to\mathbb{E}^{3}\) which is \(\beta\)-Holder for any \(0<\beta<1\) and whose restriction to the interior is a \(C^{1}\)-isometric embedding of the hyperbolic plane \(\mathbb{H}^{2}.\) Its limit set is a closed curve of Hausdorff dimension 1._ Figure 1: Global view of a 3-corrugated \(C^{1}\)-isometric embedding of \(\mathbb{H}^{2}\) whose limit set is a closed curve of Hausdorff dimension 1. Our pictures are obtained with the corrugation numbers \(10,100,1000,20\,000,2\,000\,000,240\,000\,000\). The \(\beta\)-Holder regularity in Theorem 1 is the best we can hope for: in any embedding with \(\beta=1\) - that is, of Lipschitz regularity - the image of a radius of \(D^{2}\) would have finite length which would be in contradiction with the fact that a curve going to infinity in the hyperbolic plane must have infinite length. Embedding vs immersion.We build our isometric embeddings by first choosing an initial embedding \(f_{0}:D^{2}\to\mathbb{E}^{3}\) such that the induced pullback metric \(g_{0}:=f_{0}^{*}\left\langle\cdot,\cdot\right\rangle\) is _strictly short_ over \(\mathrm{Int}D^{2}\), i.e. \(g_{0}<h\). We next choose a sequence of metrics \((g_{k})_{k}\) defined on \(D^{2}\) and converging to the hyperbolic metric \(h\) on \(\mathrm{Int}D^{2}\). We then construct a sequence of maps \((f_{k})_{k}\) where \(f_{k}\) is obtained from \(f_{k-1}\) by adding waves with appropriate directions and frequencies. These _corrugations_ increase the lengths in such a way that \(f_{k}\) is approximately \(g_{k}\)-isometric. If the convergence of the metrics is fast enough then the sequence \((f_{k})_{k}\) converges to an \(h\)-isometric limit map \(f_{\infty}\) of \(C^{1}\) regularity. The rate of convergence of the sequence \((g_{k})_{k}\) has a strong impact on the properties of \(f_{\infty}\), including the \(C^{1}\) regularity and the embedded character. This rate forces the relative increase of lengths at each corrugation. When this increase is too large, the successive corrugations intersect each other as on Figure 2. Note that increasing the corrugation frequencies replaces the local behavior by an approximately homothetic figure, hence does not remove the self-intersections. 
This phenomenon is reminiscent of sawtooth curves of fixed length for which increasing the number of teeth replaces teeth by homothetic ones with the same slope. Although the hyperbolic metric explodes on the boundary, we manage to build a sequence \((g_{k})_{k}\) of metrics whose rate of increase is bounded, allowing us to ensure that the limit surface is embedded.

Figure 2: Increasing the length creates self-intersections.

Fractal behavior and 3-corrugated embeddings.A connection between \(C^{1}\) isometric embeddings and fractal behavior has been observed for the construction of isometric embeddings of the flat torus and of the reduced sphere [2, 1]. The self-similarity behavior arises from a specific construction that iteratively deforms a surface in a constant number of given directions. This leads us to introduce the notion of 3-corrugated embedding. In the Nash and Kuiper approach, the map \(f_{k}\) is built from \(f_{k-1}\) by applying several corrugation steps. The number of steps and directions of the corrugations depend on the isometric default of \(f_{k-1}\) with respect to \(g_{k}\): \[D_{k}:=g_{k}-f_{k-1}^{*}\left\langle\cdot,\cdot\right\rangle.\] This default is a positive definite symmetric bilinear form that can be expressed as a finite linear combination of squares of linear forms \(\ell_{k,1},\cdots,\ell_{k,n_{k}}\): \[D_{k}=\sum_{i=1}^{n_{k}}\eta_{k,i}\ell_{k,i}\otimes\ell_{k,i} \tag{1}\] where each \(\eta_{k,i}\) is a smooth positive function defined on the compact domain \(D^{2}\) (see [10]). Then, a sequence of intermediary maps \[f_{k-1}=f_{k,0},\ f_{k,1},...,\ f_{k,n_{k}}=f_{k}\] is iteratively constructed by adding corrugations whose wavefronts have directions determined by \(\ker\ell_{k,i}\). For every \(i\in\{1,\ldots,n_{k}\}\), the amplitude and frequency of the corrugation are chosen so that \(f_{k,i}\) is approximately isometric for the metric \(f_{k-1}^{*}\left\langle\cdot,\cdot\right\rangle+\sum_{j=1}^{i}\eta_{k,j}\ell_{k,j}\otimes\ell_{k,j}\). In particular, \(f_{k,n_{k}}\) is approximately \(g_{k}\)-isometric.

In our approach, we use an isometric default \(D_{k,i}=g_{k}-f_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle\) updated at each step \((k,i)\). This allows us to construct \(f_{k,i}\) using only the data of \(f_{k,i-1}\) and \(g_{k}\). We also manage to choose an initial map and three fixed linear forms \(\ell_{1},\ell_{2},\ell_{3}\) such that (1) holds for all \(k\). This is in contrast with the Nash and Kuiper approach where the linear forms \(\ell_{k,i}\) and their number \(n_{k}\) depend on \(k\). Such a dependence prevents the appearance of any form of self-similarity in the limit. This motivates the introduction of the following notion, see Section 4.2.

**Definition 2**.: We say that the sequence \((f_{k,i})_{k,i}\) and its limit \(f_{\infty}\) are \(n\)_-corrugated_ whenever \(n_{k}=n\) and these linear forms are independent of \(k\).

**Proposition 3**.: _The embedding of Theorem 1 can be chosen \(3\)-corrugated._

As illustrative examples, the embeddings of the square flat torus and of the reduced sphere in [1, 2] are \(3\)-corrugated and have been shown to exhibit a fractal behavior. In the sequel, we will always consider \(3\)-corrugated sequences.

Corrugation Process.There are several methods to construct \(f_{k,i}\) from \(f_{k,i-1}\), for instance by using Kuiper corrugations or the convex integration formula of Gromov.
All are based on the idea of corrugations and introduce a free parameter \(N_{k,i}\in\mathbb{N}^{*}\) called _number of corrugations_. Here, we choose the corrugation process of [12] to generate, at each step \((k,i)\), a map \(f_{k,i}\) satisfying \[\mu_{k,i}=f_{k,i}^{*}\langle\cdot,\cdot\rangle+O\left(\frac{1}{N_{k,i}}\right)\] where \(\mu_{k,i}:=f_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle+\eta_{k,i}\ell_{k,i}\otimes\ell_{k,i}\). In our case of a 3-corrugated process, the \(\ell_{k,i}\) are given by a fixed set of three 1-forms \(\ell_{i}\) with \(i\in\{1,2,3\}\), and \(\eta_{k,i}\) is the coefficient of \(\ell_{i}\otimes\ell_{i}\) in the decomposition of \(g_{k}-f_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle\) in the basis \((\ell_{j}\otimes\ell_{j})_{j\in\{1,2,3\}}\). We then write \[f_{k,i}=CP_{i}(f_{k,i-1},g_{k},N_{k,i}). \tag{2}\] Each map \(f_{k}=f_{k,3}\) is then approximately \(g_{k}\)-isometric in the sense that \[\|f_{k}^{*}\left\langle\cdot,\cdot\right\rangle-g_{k}\|_{C,\infty}=O\left(\frac{1}{N_{k,1}}\right)+O\left(\frac{1}{N_{k,2}}\right)+O\left(\frac{1}{N_{k,3}}\right).\] If the corrugation numbers are chosen large enough and if the convergence toward \(h\) of the metrics \(g_{k}\) is fast enough, namely if the series \[\sum_{k}\|g_{k+1}-g_{k}\|_{K,\infty}^{1/2}<+\infty \tag{3}\] converges on any compact set \(K\subset\mathrm{Int}D^{2}\), then the sequence \((f_{k,i})_{k,i}\) converges toward a \(C^{1}\) map \(f_{\infty}\) on \(\mathrm{Int}D^{2}\) which is \(h\)-isometric, see Section 2.2.

Formal Corrugation Process.We introduce in this work the notion of _formal corrugation process_. For a point \(p\in D^{2}\), there are infinitely many isometric linear maps \(L:(\mathbb{R}^{2},\mu_{k,i}(p))\rightarrow\mathbb{E}^{3}\), i.e. that satisfy \(L^{*}\left\langle\cdot,\cdot\right\rangle=\mu_{k,i}(p)\). By extension, there are infinitely many sections \(L:D^{2}\to\mathrm{Mono}(\mathbb{R}^{2},\mathbb{E}^{3})\) that are formal solutions of the \(\mu_{k,i}\)-isometric relation. In Section 3.1, we show that there exists a \(\mu_{k,i}\)-isometric section \(L_{k,i}\) such that \[df_{k,i}=L_{k,i}+O(1/N_{k,i}). \tag{4}\] Moreover \(L_{k,i}\) is given by a _pointwise_ formula of the form \[L_{k,i}:=df_{k,i-1}+\mathbf{z}_{k,i}\otimes\ell_{k,i} \tag{5}\] for some \(\mathbf{z}_{k,i}:D^{2}\to\mathbb{E}^{3}\) depending on \(\eta_{k,i}\), \(\ell_{k,i}\) and \(N_{k,i}\). Note that there is no reason for the section \(L_{k,i}\) to be holonomic1. We say that \(L_{k,i}\) is obtained by a _formal corrugation process_ and we write, analogously to (2), Footnote 1: Recall that \(L\) is holonomic if there exists a map \(F:D^{2}\to\mathbb{E}^{3}\) such that \(dF=L\). \[L_{k,i}=FCP_{i}(df_{k,i-1},g_{k},N_{k,i}).\] The adjective _formal_ refers to the fact that \(L_{k,i}\) is a formal solution of the \(\mu_{k,i}\)-isometric relation.

Figure 3: Successive zooms of the limit set: any point of \(\mathrm{Limset}(f)\) is at infinite distance from every other point of \(f(\mathbb{H}^{2})\).

Formal analogues.We now introduce the notion of formal analogues that will appear to encode the asymptotic behavior of the differential \(df_{\infty}\).
By iterating the formal corrugation process we obtain, in parallel to the 3-corrugated sequence \((f_{k,i})_{(k,i)}\), a sequence \((\Phi_{k,i})_{(k,i)}\) of formal solutions given by \[\Phi_{0}=df_{0}\quad\text{and}\quad\Phi_{k,i}=FCP_{i}(\Phi_{k,i-1},g_{k},N_{k,i}).\] Under condition (3) the sequence \((\Phi_{k,i})_{(k,i)}\) converges toward a \(C^{0}\) map \(\Phi_{\infty}\) on \(\operatorname{Int}\!D^{2}\) (see Lemma 36). The map \(\Phi_{\infty}\) is a formal solution for the \(h\)-isometric relation, that is \[\Phi_{\infty}^{*}\left\langle\cdot,\cdot\right\rangle=h.\] If the corrugation process (2) were exact, that is, if \(f_{k,i}^{*}\left\langle\cdot,\cdot\right\rangle=\mu_{k,i}\) for all \((k,i)\), then we would have \(\Phi_{k,i}=df_{k,i}\). For that reason, we call \(\Phi_{k,i}\) and \(\Phi_{\infty}\) the _formal analogues_ of \(df_{k,i}\) and \(df_{\infty}\). In fact, the difference between the differential and its formal analogue depends on the corrugation numbers, see Section 6.3. Theorem 4 below states that \(df_{\infty}\) and \(\Phi_{\infty}\) can be made arbitrarily close by choosing sufficiently large corrugation numbers.

**Theorem 4**.: _Let \(K\subset Int\,D^{2}\) be a compact set. For every \(\varepsilon>0\) there exists a sequence of corrugation numbers \(N_{*}=(N_{k,i})_{k,i}\) such that_ \[\|\Phi_{\infty}-df_{\infty}\|_{K,\infty}\leq\varepsilon.\]

One may interpret Theorem 4 as a weak \(C^{1}\) density result. For a given initial embedding \(f_{0}\) and a sequence of metrics \(g_{k}\) the limit map \(\Phi_{\infty}\) is well defined for every choice of the corrugation numbers \((N_{k,i})_{k,i}\). The theorem implies that for large enough corrugation numbers, the corresponding \(\Phi_{\infty}\) are realized by holonomic sections up to \(\varepsilon\). A notable observation is that \(\Phi_{k}\) only depends on \(\Phi_{0}\), on the three linear forms \(\ell_{1},\ell_{2},\ell_{3}\), on the corrugation numbers \(N_{1,1},...,N_{k,3}\) and on the values of \(g_{1},\ldots,g_{k}\) at the considered point. This is not the case for \(df_{k}\); even if the corrugation process (2) is pointwise, the derivatives of \(f_{k}\) involve the derivatives of \(g_{k}\). Hence, studying \(\Phi_{k}\) and its limit \(\Phi_{\infty}\) greatly simplifies the understanding of the geometry of \(f_{k}\) and its limit \(f_{\infty}\). Our numerical experiments actually show a remarkable similarity between \(\Phi_{k}\) and \(df_{k}\), see Figures 4 and 6.

Figure 4: Comparison between the normal map \(\mathbf{n}_{2,2}\) of \(f_{2,2}\) (top) and the normal map \(\mathbf{n}_{2,2}^{\Phi}\) of \(\Phi_{2,2}\) (bottom). The pictures show the images of an arc of amplitude \(\frac{2\pi}{7L}=0.008\,976\dots\) of the circle \(\{\rho=0.7\}\). These images lie in the unit 2-sphere and are represented in the domain of its usual parametrization.

Normal map of \(f_{\infty}\).The formal analogue \(\Phi_{\infty}\) gives a key to understand the behavior of the normal map \(\mathbf{n}_{\infty}\) of \(f_{\infty}\). At each point \(p\in\mathrm{Int}D^{2}\), the formal analogue defines an oriented plane \(Im\left(\Phi_{\infty}(p)\right)\) and thus a unit _formal normal_ \(\mathbf{n}_{\infty}^{\Phi}(p)\). It follows by Theorem 4 that \(\mathbf{n}_{\infty}^{\Phi}\) can be arbitrarily close to \(\mathbf{n}_{\infty}\) if the corrugation numbers are conveniently chosen.
We thus mainly focus on \(\mathbf{n}_{\infty}^{\Phi}.\) An obvious observation is that \(\mathbf{n}_{\infty}^{\Phi}\) is not \(SO(2)\)-rotationally symmetric despite the fact that the initial application has rotational symmetry and that all the metrics \(g_{k}\) depend only on \(\rho\). This is due to the accumulation of corrugations in three different directions which has the effect of destroying any rotational symmetry. However the destruction of symmetry is not total: a rotational symmetry of finite order persists for \(\mathbf{n}_{\infty}^{\Phi}\). To understand the origin of these symmetries, it is convenient to introduce a variant of the normal map that we call the _normal pattern_. To define this object we complete the normal \(\mathbf{n}_{\infty}\) to an orthonormal basis \(\mathbf{F}_{\infty}=(\mathbf{t}_{\infty},\mathbf{w}_{\infty},\mathbf{n}_{ \infty})\) of \(\mathbb{E}^{3}\) by introducing \(\mathbf{t}_{\infty}\) the normalized derivative of \(f_{\infty}\) in the direction \(\partial_{\rho}\) and adding \(\mathbf{w}_{\infty}\) to form a direct orthonormal basis. Similarly we define \(\mathbf{F}_{\infty}^{\Phi}\) and \(\mathbf{F}_{0}\) by considering \(\Phi_{\infty}\) and \(df_{0}\) instead of \(df_{\infty}\). At each point \(p\in\mathrm{Int}D^{2}\), there exist two orthogonal matrices \(\mathscr{M}_{\infty}(p)\) and \(\mathscr{M}_{\infty}^{\Phi}(p)\) such that \[\mathbf{F}_{\infty}(p)=\mathbf{F}_{0}(p)\cdot\mathscr{M}_{\infty}(p)\quad \text{and}\quad\mathbf{F}_{\infty}^{\Phi}(p)=\mathbf{F}_{0}(p)\cdot\mathscr{M }_{\infty}^{\Phi}(p)\] and therefore, if \((\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\) is the standard basis of \(\mathbb{E}^{3}\), we can write \[\mathbf{n}_{\infty}=\mathbf{F}_{0}\cdot\mathscr{M}_{\infty}\cdot\mathbf{e}_{ 3}\quad\text{and}\quad\mathbf{n}_{\infty}^{\Phi}=\mathbf{F}_{0}\cdot\mathscr{M }_{\infty}^{\Phi}\cdot\mathbf{e}_{3}. \tag{6}\] It follows that, up to a rotation only depending on the point \(p=(\rho,\varphi)\) and on the initial map \(f_{0}\), these normals are fully described by their _normal patterns_ \[\boldsymbol{\nu}_{\infty}:=\mathscr{M}_{\infty}\cdot\mathbf{e}_{3}\quad\text{ and}\quad\boldsymbol{\nu}_{\infty}^{\Phi}:=\mathscr{M}_{\infty}^{\Phi}\cdot \mathbf{e}_{3}.\] Obviously, the behavior of these normal patterns depend on the corrugation sequence \(N_{*}=(N_{k,i})_{k,i}\) but it happens that two numbers extracted from this sequence play a special role: * \(M=M(N_{*})\) the greatest common divisor of all the terms of the sequence * \(L=L(N_{*})\) the greatest common divisor of the subsequences \((N_{k,2})_{k\in\mathbb{N}^{*}}\) and \((N_{k,3})_{k\in\mathbb{N}^{*}}\) of \(N_{*}\). We show in Lemma 49 that the formal normal pattern \[\varphi\longmapsto\boldsymbol{\nu}_{\infty}^{\Phi}(N_{*})\left(\rho,\varphi\right)\] is \(\frac{2\pi}{7L}\)-periodic. Since \(\mathbf{F}_{0}\) has rotational symmetry, the following proposition follows: **Proposition 5**.: _The formal normal \(\mathbf{n}_{\infty}^{\Phi}\) has rotational symmetry of order \(\frac{2\pi}{7L}\)._ Let \(m\) be an integer such that \(0<m<M\). Another remarkable property can be observed on the circles \(K=\{\rho=\frac{m}{M}\}\): along those circles a simple asymptotic connection exists between the normal pattern \(\boldsymbol{\nu}_{\infty}\) and its formal version \(\boldsymbol{\nu}_{\infty}^{\Phi}\). 
Indeed, replacing the sequence \(N_{*}\) by a multiple \(nN_{*}=(nN_{k,i})_{k,i}\), \(n\in\mathbb{N}^{*}\), has the effect of composing \(\boldsymbol{\nu}_{\infty}(N_{*})\) by the reparametrization \((\rho,\varphi)\mapsto(\rho,n\varphi).\) As a consequence, \(\Gamma_{K}^{\Phi}(N_{*})\) and \(\Gamma_{K}^{\Phi}(nN_{*})\) are equal as sets, see Lemma 49. This crucial fact, combined with Theorem 4, highlights a link between the image \(\Gamma_{K}^{\Phi}(N_{*})\) of \(K\) by the formal normal pattern \(\boldsymbol{\nu}_{\infty}^{\Phi}(N_{*})\) and its images \(\Gamma_{K}(nN_{*})\) by \(\boldsymbol{\nu}_{\infty}(nN_{*})\): **Theorem 6** (Asymptotic behavior).: _Let \(m\) be an integer such that \(1\leq m\leq M-1\) and let \(K\) be the circle \(\{\rho=\frac{m}{M}\}\), we have_ \[\lim_{n\to+\infty}\Gamma_{K}(nN_{*})=\Gamma_{K}^{\Phi}(N_{*}).\] In this theorem, it is understood that the limit applies to those elements \(\Gamma_{K}(nN_{*})\) that are well-defined, that is for which the sequences \(nN_{*}\) lead to a well-defined map \(f_{\infty}\). Self-similarity behavior.Recall from (6) that the normals \(\mathbf{n}_{\infty}(p)\) and \(\mathbf{n}_{\infty}^{\Phi}(p)\) only differ from \(\boldsymbol{\nu}_{\infty}(p)\) and \(\boldsymbol{\nu}_{\infty}^{\Phi}(p)\) by a rotation mapping the standard basis \((\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\) to \(\mathbf{F}_{0}(p)=(\mathbf{t}_{0}(p),\mathbf{w}_{0}(p),\mathbf{n}_{0}(p)).\) Proposition 7 below shows that the image \(G_{K}^{\Phi}\) of \(K\) by \(\mathbf{n}_{\infty}^{\Phi}\) approximately decomposes into a finite union of copies of \(\Gamma_{K}^{\Phi}\subset\mathbb{S}^{2}\). Each copy is obtained by applying a rotation to \(\Gamma_{K}^{\Phi}\) that maps the standard basis onto \(\mathbf{F}_{0}\) evaluated at some specific points. **Proposition 7**.: _Let \(m\) be an integer such that \(0<m<M-1\) and \(K\) be the circle \(\{\rho=\frac{m}{M}\}\). The Hausdorff distance, as subsets of \(\mathbb{S}^{2}\), between the image \(G_{K}^{\Phi}\) of the formal Gauss map and_ \[\bigcup_{\ell=0}^{7L-1}\mathbf{F}_{0}(\rho,\frac{2\ell\pi}{7L})\cdot\Gamma_{K }^{\Phi}\] _is at most \(\frac{2\pi}{7L}\)._ In Section 7.6, we next show that \(\Gamma_{K}^{\Phi}(N_{*})\) also approximately decomposes into a finite union of sub-patterns, themselves decomposing in turn into even smaller patterns, and so on to infinity. In that sense, Proposition 7 reveals a form of self-similarity of \(G_{K}^{\Phi}\) along the circles of the form \(\{\rho=\frac{m}{M}\}\), see Figure 5. Regularity of \(\mathbf{n}_{\infty}^{\Phi}\).We finally focus on the regularity of \(\mathbf{n}_{\infty}^{\Phi}\). Proposition 47 states that this regularity is the same at every point of a circle centered at the origin. Now, if the normal \(\mathbf{n}_{\infty}^{\Phi}\) were of class \(C^{1}\), we could extract the extrinsic curvatures from the shape operator \(-\mathbf{d}\mathbf{n}_{\infty}^{\Phi}\). Here, the normal \(\mathbf{n}_{\infty}^{\Phi}\) is not \(C^{1}\) but we may consider the modulus of continuity of \(\mathbf{n}_{\infty}^{\Phi}\). Proposition 48 states that this modulus, when restricted to the circles \(\{\rho=Cte\}\), is related to the modulus of a Weierstrass-like function: \[W_{\rho}(\varphi):=\sum_{k=1}^{\infty}\left(\sum_{i=1}^{3}\alpha_{k,i}^{\Phi} (\rho)\cos(b_{k,i}\varphi+c_{k,i}(\rho))\right).\] In this expression, the coefficient \(\alpha_{k,i}^{\Phi}(\rho)\) can be made explicite, see Lemma 44, and has polynomial growth in \(\rho^{k}\). 
We thus expect that the regularity of \[\varphi\longmapsto\mathbf{n}_{\infty}^{\Phi}(\rho,\varphi)\] decreases as \(\rho\) tends towards \(1\). We have observed this phenomenon numerically, as illustrated in Figure 6.

Figure 5: The image by \(\mathbf{n}_{2,2}^{\Phi}\) of an arc of amplitude \(\frac{2\pi}{7L}\) is made of \(N_{1,3}/N_{1,2}=10\) copies of a same sub-pattern, see Section 7.6. For the readability of the figure, only a half period \(\frac{2\pi}{14L}\) with 5 copies is shown. Indeed the images of two successive arcs of amplitude \(\frac{2\pi}{14L}\) are almost overlaid on each other giving the illusion of a single image traveled back and forth. Nevertheless, the end points of an arc of amplitude \(\frac{2\pi}{7L}\) have images that differ from a rotation of (small) angle \(\frac{2\pi}{7L}\), see Proposition 5. Here \(\rho=0.7\).

## 2 The general strategy

Here, we introduce the necessary ingredients of our construction of an isometric embedding of the hyperbolic plane. Our construction relies on a sequence of metrics \((g_{k})_{k}\) converging to the hyperbolic metric, from which we build a sequence of maps \((f_{k})_{k}\) converging to an isometric embedding. The \(C^{1}\) convergence of \((f_{k})_{k}\) can be reduced to the validity of four properties \((P_{1})\)-\((P_{4})\) involving \((f_{k})_{k}\) and \((g_{k})_{k}\). We prove in Section 5 that these properties can be fulfilled. Assuming \((P_{1})\)-\((P_{4})\), we show in Section 2.3 that our construction has maximum Holder regularity on the limit set.

### Working on \(\overline{\mathbb{H}^{2}}\)

We use polar coordinates on \(D^{2}\) and therefore introduce the cylinder \[C_{0}:=\{(\rho,\varphi)\,|\,\rho\in]0,1],\;\varphi\in\mathbb{R}/(2\pi\mathbb{Z})\}\] and its universal covering \(\widetilde{C}_{0}=\,]0,1]\,\times\mathbb{R}\). We orient \(C_{0}\) and \(\widetilde{C}_{0}\) by requiring \((\partial_{\rho},\partial_{\varphi})\) to be direct and we endow \(Int\,C_{0}\) with the metric \[h:=4\frac{d\rho^{2}+\rho^{2}d\varphi^{2}}{(1-\rho^{2})^{2}}\] to obtain a Riemannian surface \((Int\,C_{0},h)\) isometric to the punctured Poincare disk \(\mathbb{H}^{2}\setminus\{O\}.\) From an initial immersion/embedding \(f_{0}\) defined on \(C_{0}\), we will iteratively apply a corrugation process to produce a sequence of immersions/embeddings \((f_{q})_{q}\) that will be \(C^{0}\)-converging toward a limit map \(f_{\infty}\) defined on \(C_{0}\). Moreover this sequence will be \(C^{1}\)-converging on the interior of \(C_{0}\) and the limit map \(f_{\infty}\) will be isometric between \((Int\,C_{0},h)\) and \(\mathbb{H}^{2}\setminus\{O\}\). The extension at the origin of the constructed maps \((f_{q})_{q}\) can be realized by modifying iteratively a sequence of lifts \((\widetilde{f}_{q})_{q}\) on disks centered at the origin and of arbitrarily small radius of \(\widetilde{C}_{0}\) (see [11] p.199, Complement 9.28, for a general result, or [1] for an explicit construction of an extension from an equatorial ribbon to a whole 2-sphere). This part of the construction will be skipped here because it only perturbs \(f_{\infty}\) on a compact domain of \(Int\,C_{0}\). We therefore focus on constructing \(f_{\infty}\) on the compact annulus \[C:=C_{0}\setminus Int\,D(\rho_{0})\] obtained by removing from \(C_{0}\) an open disk centered at \(O\) and of radius \(\rho_{0}\) for some \(0<\rho_{0}<1.\) Accordingly, we denote \(\widetilde{C}:=\,]\rho_{0},1]\,\times\mathbb{R}\).
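As a quick sanity check on the metric \(h\), note that along a radius (\(d\varphi=0\)) the length element is \(\frac{2\,d\rho}{1-\rho^{2}}\), so the \(h\)-length of the segment \(\{\varphi=\mathrm{Cst},\ \rho_{0}\leq\rho<1\}\) is \[\int_{\rho_{0}}^{1}\frac{2\,d\rho}{1-\rho^{2}}=\Big[\log\frac{1+\rho}{1-\rho}\Big]_{\rho_{0}}^{1}=+\infty.\] Any isometric map must therefore send a radius of \(C\) to a curve of infinite Euclidean length, which is consistent with the remark made after Theorem 1 that the Holder exponent \(\beta\) cannot reach \(1\).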
### A Nash & Kuiper-like approach We briefly summarize our approach that relies on the classical Nash and Kuiper construction of isometric maps, see [10, 7, 4, 9]. In our context, it starts with a short initial embedding \(f_{0}:C\to\mathbb{E}^{3}\) and a sequence of increasing metrics \((g_{k})_{k\in\mathbb{N}}\) defined on \(C\) such that \(g_{0}=f_{0}^{*}\,\langle\cdot,\cdot\rangle\) and \(\lim_{k\to+\infty}g_{k}=h\). The hyperbolic metric \(h\) is not defined on \(\{(1,\varphi)\}\) so this last limit is only required over \(C^{*}:=C\setminus(\{1\}\times\mathbb{R}/2\pi\mathbb{Z})\). Let \((\tau_{k})_{k\in\mathbb{N}^{*}}\) be a decreasing sequence of positive numbers such that \[\mathrm{T}:=\sum_{k=1}^{+\infty}\tau_{k}<+\infty.\] From the initial embedding \(f_{0}\), we will build a sequence \((f_{k})_{k\in\mathbb{N}^{*}}\) of maps defined on \(C\) satisfying the following properties at every point \(p\in C\): 1. \(\|g_{k}-f_{k}^{*}\,\langle\cdot,\cdot\rangle\|\leq\|g_{k+1}-g_{k}\|\) 2. \(\|f_{k}-f_{k-1}\|\leq\tau_{k}\) 3. \(\|df_{k}-df_{k-1}\|\leq\tau_{k}+A\,\|g_{k}-f_{k-1}^{*}\,\langle\cdot,\cdot \rangle\|^{1/2}\) \((P_{4})\ \sum_{k}\|g_{k}-g_{k-1}\|_{K,\infty}^{1/2}<+\infty\) for any compact set \(K\subset C^{*}.\) In \((P_{3})\), the factor \(A\) is a constant that does not depend on \(k\). Here and thereafter, we use the operator norm for linear maps and for a symmetric bilinear form \(b\) we use the norm \[\|b\|=\sup_{v\in\mathbb{R}^{2}}\frac{|b(v,v)|}{\|v\|^{2}}.\] We also denote by \(\|\cdot\|_{C,\infty}\) the supremum norm taken over \(C\). **Proposition 8** (Nash-Kuiper).: _If the sequences \((f_{k})_{k\in\mathbb{N}^{*}}\) and \((g_{k})_{k\in\mathbb{N}^{*}}\) satisfy Properties (\(P_{1}\)) to (\(P_{4}\)) then \((f_{k})_{k\in\mathbb{N}^{*}}\) converges toward a map \(f_{\infty}\) which is continuous over \(C\) and an \(h\)-isometric immersion of class \(C^{1}\) over \(C^{*}.\)_ Proof.: We provide the proof for the sake of completeness. Property \((P_{2})\) ensures the \(C^{0}\) convergence over \(C\) of the sequence \((f_{k})_{k}\) toward a continuous map \(f_{\infty}\) such that \(\|f_{\infty}-f_{0}\|_{C,\infty}\leq\mathrm{T}\). Properties \((P_{1})\) leads to the inequality \[\|g_{k}-f_{k-1}^{*}\left\langle\cdot,\cdot\right\rangle\|^{1/2} \leq \|g_{k}-g_{k-1}\|^{1/2}+\|g_{k-1}-f_{k-1}^{*}\left\langle\cdot, \cdot\right\rangle\|^{1/2}\] \[\leq 2\|g_{k}-g_{k-1}\|^{1/2}.\] The convergence of the series in \((P_{4})\) thus implies the convergence of the series \(\sum_{k}\|g_{k}-f_{k-1}^{*}\left\langle\cdot,\cdot\right\rangle\|_{K,\infty}^ {1/2}\). Together with the convergence of \(\sum_{k}\tau_{k}\), this implies by \((P_{3})\) that \(\sum_{k}\|df_{k}-df_{k-1}\|_{K,\infty}\) is convergent for every compact \(K\subset C^{*}.\) It follows that \(f_{\infty}\) is \(C^{1}\) over \(C^{*}\). Then property \((P_{1})\) ensures that \(f_{\infty}\) is \(h\)-isometric over \(C^{*}.\) ### Holder regularity The limit map \(f_{\infty}\) is \(C^{1}\) everywhere except on the boundary \(\partial D^{2}.\) At those points, its regularity can be controlled by the sequence of metrics \((g_{k})_{k}\) together with the sequence \((\tau_{k})_{k}\) introduced in 2.2. 
**Proposition 9**.: _Let \(0<\beta<1.\) Under the assumptions (\(P_{1}\))-(\(P_{4}\)), if_ \((i)\) _there exists \(k_{0}\in\mathbb{N}\) such that, for all \(k\geq k_{0},\)\(\tau_{k}\leq\|g_{k}-g_{k-1}\|_{C,\infty}^{1/2}\)_ \((ii)\)__\(\sum_{k}\tau_{k}^{1-\beta}\|g_{k}-g_{k-1}\|_{C,\infty}^{\beta/2}<+\infty\)__ _then the limit map \(f_{\infty}\) is \(\beta\)-Holder on \(C.\)_ Proof.: The proof relies on the following classical interpolation inequality: \[\|F\|_{C^{0,\beta}}\leq 2^{1-\beta}\|dF\|_{\infty}^{\beta}\|F\|_{\infty}^{1-\beta}\] where \(F\) is a \(C^{1}\) map and \(\|F\|_{C^{0,\beta}}=\sup_{x\neq y}\frac{|F(y)-F(x)|}{|y-x|^{\beta}}\) denotes the Holder norm. From Properties \((P_{1}),(P_{2})\) and \((P_{3})\) stated in 2.2 we have \[\left\{\begin{array}{l}\|f_{k}-f_{k-1}\|_{C,\infty}\leq\tau_{k}\\ \|df_{k}-df_{k-1}\|_{C,\infty}\leq\tau_{k}+A\|g_{k}-g_{k-1}\|_{C,\infty}^{1/2}. \end{array}\right.\] If the series \(\sum\|g_{k}-g_{k-1}\|_{C,\infty}^{1/2}\) was convergent, then the previous inequalities would imply that the limit map \(f_{\infty}\) exists and is \(C^{1}\) on \(C\). Otherwise, the series \(\sum\|g_{k}-g_{k-1}\|_{C,\infty}^{1/2}\) is divergent and it follows from Assumption \((i)\) of the lemma that for \(k\geq k_{0}\) we have \[\|df_{k}-df_{k-1}\|_{C,\infty}\leq(1+A)\|g_{k}-g_{k-1}\|_{C,\infty}^{1/2}.\] The interpolation inequality leads to \[\|f_{k}-f_{k-1}\|_{C^{0,\beta}}\leq 2^{1-\beta}(1+A)^{\beta}\tau_{k}^{1-\beta} \|g_{k}-g_{k-1}\|_{C,\infty}^{\beta/2}.\] Thanks to Assumption \((ii)\) and the fact that \(C^{0,\beta}(D^{2})\) is a Banach space, we can deduce that the limit map \(f_{\infty}\) is \(\beta\)-Holder on \(C\). ## 3 The corrugation process ### The target differential Let \(f:\widetilde{C}\mapsto\mathbb{E}^{3}\) be a smooth immersion. We fix an affine projection \(\varpi:\widetilde{C}\to\mathbb{R}\) and a family of loops \(\gamma:\widetilde{C}\times(\mathbb{R}/\mathbb{Z})\to\mathbb{E}^{3}\) that is periodic in the first coordinate, i.e. that satisfies \[\gamma((\rho,\varphi+2\pi),s)=\gamma((\rho,\varphi),s).\] **Definition 10** (Corrugation process [12]).: We denote by \(F:\widetilde{C}\to\mathbb{E}^{3}\) the map defined by \[\forall p\in\widetilde{C},\qquad F(p):=f(p)+\frac{1}{N}\int_{0}^{N\varpi(p)}( \gamma(p,s)-\overline{\gamma}(p))\mathrm{d}s, \tag{7}\] where \(\overline{\gamma}(p):=\int_{0}^{1}\gamma(p,t)\mathrm{d}t\) denotes the average of the loop \(\gamma(p,\cdot)\) and \(N\in\mathbb{N}^{*}\) is any integer. We say that \(F\) is obtained from \(f\) by a _corrugation process_. Observe that, for all \(x\in\mathbb{R}\), \[\int_{x}^{x+1}(\gamma(p,s)-\overline{\gamma}(p))\mathrm{d}s=0. \tag{8}\] The differential of \(F\) has the following expression \[dF(p)=df(p)+(\gamma(p,N\varpi(p))-\overline{\gamma}(p))\otimes d\varpi+\frac {1}{N}\int_{0}^{N\varpi(p)}d(\gamma(p,s)-\overline{\gamma}(p))\mathrm{d}s. \tag{9}\] The last term can be made arbitrarily small by increasing the number \(N\). We therefore introduce a definition for the remaining terms. They appear to contain important geometric information. See Section 6 and 7. **Definition 11**.: [Target differential] We denote by \(L\) the map given by \[L(p):=df(p)+(\gamma(p,N\varpi(p))-\overline{\gamma}(p))\otimes d\varpi\] that we call the _target differential_. The defining formula (7) for \(F\) and the expression (9) of its differential imply for all \(p\in\widetilde{C}\): 1. \(\|F(p)-f(p)\|=O(1/N)\) 2. \(\|dF(p)-L(p)\|=O(1/N)\). Indeed, the integrand is \(1\)-periodic and has vanishing average. 
Thus, Formula (7) allows to build a map \(F\) that is arbitrarily close to \(f\) and whose differential is the target differential \(L\), up to \(O(1/N)\). ### The corrugation frame For each point \(p\in\widetilde{C}\), the pair \((f,\varpi)\) defines a line \[W(p):=df(p)(\ker\,d\varpi)\] of the tangent space \(df(p)(T\widetilde{C}).\) Note that \(W\) is tangent to the corrugation wavefront \(f(\{\varpi=\mathrm{Cst}\})\). We denote by \(W^{\perp}\) the orthogonal complement of \(W\) in \(df(p)(T\widetilde{C})\) and by \(V=[V]^{W}+[V]^{W^{\perp}}\) the components of any tangent vector \(V\) in the orthogonal direct sum \(W\oplus W^{\perp}\). Let \((v,w)\) be a direct basis of \(\mathbb{R}^{2}\) such that \(d\varpi(p)(v)>0\) and \(w\in\ker\,d\varpi(p)\). We put \[\begin{split}\mathbf{w}(p):=&\ \frac{df(p)(w)}{\|df(p)(w)\|}\\ \mathbf{n}(p):=&\ \frac{df(p)(v)\wedge df(p)(w)}{\|df(p )(v)\wedge df(p)(w)\|}\quad\text{and}\\ \mathbf{t}(p):=&\ \mathbf{w}(p)\wedge\mathbf{n}(p). \end{split} \tag{10}\] Observe that \((\mathbf{t}(p),\mathbf{w}(p))\) is a direct orthonormal basis of \(df(p)(T_{p}\widetilde{C})\) because \(f\) is assumed to be an immersion, and that \(\mathbf{n}(p)\) spans its normal space. We call the orthonormal frame \(\mathbf{F}=(\mathbf{t},\mathbf{w},\mathbf{n})\) the _corrugation frame_. It only depends on the pair \((f,\varpi)\). We also introduce a vector \(u\), depending on \(p\), to be such that \(df(p)(u)\) is collinear to \(\mathbf{t}(p)\) and \(d\varpi(u)=1.\) This choice allows a simpler writing of the various formulas which occur in this article. Remark that \[[df]^{W^{\perp}}=df(u)\otimes d\varpi. \tag{11}\] ### The corrugation process on \(\widetilde{C}\) We choose the family of loops \(\gamma:\widetilde{C}\times(\mathbb{R}/\mathbb{Z})\to\mathbb{R}^{3}\) defined by \[\gamma(\cdot,s):=r\left(\cos\theta\,\mathbf{t}+\sin\theta\,\mathbf{n}\right) \quad\text{with}\quad\theta:=\alpha\cos(2\pi s) \tag{12}\] and where \(r\) and \(\alpha\) are functions determined below. The image of each loop is an arc of circle of amplitude \(2\alpha\) and radius \(r\). Its average is \[\overline{\gamma}=r\left(\int_{0}^{1}\cos(\alpha\cos(2\pi s))\mathrm{d}s \right)\mathbf{t}=rJ_{0}(\alpha)\mathbf{t}\] where \(J_{0}\) denote the Bessel function of the first kind and of order \(0\). Recall the definition of \(u\) introduced in Section 3.2. We choose \(r\) and \(\alpha\) such that \(\overline{\gamma}=df(u)\), i.e., \[rJ_{0}(\alpha)\mathbf{t}=df(u). \tag{13}\] In absolute value, the Bessel function \(J_{0}\) is lower than \(1\). 
Hence Formula (13) is satisfied if and only if \[r\geq\|df(u)\|\quad\text{and}\quad\alpha:=J_{0}^{-1}\left(\frac{1}{r}\|df(u) \|\right), \tag{14}\] where \(J_{0}^{-1}\) denotes the inverse of the restriction \(J_{0}|_{[0,\kappa_{0}[}\) of the Bessel function to the positive numbers less than its first zero \(\kappa_{0}=2.404...\) **Lemma 12**.: _If the radius \(r\) and the amplitude \(\alpha\) satisfy (14) then the target differential \(L\) has the following expression_ \[L=[df]^{W}+r(\cos\theta\,\mathbf{t}+\sin\theta\,\mathbf{n})\otimes d\varpi\] _and_ \[L^{*}\left\langle\cdot,\cdot\right\rangle=f^{*}\left\langle\cdot,\cdot\right\rangle +\left(r^{2}-\|df(u)\|^{2}\right)d\varpi\otimes d\varpi.\] Proof.: From Definition 11 and the value of the average \(\overline{\gamma}\), we have \[L = df-df(u)\otimes d\varpi+r(\cos\theta\,\mathbf{t}+\sin\theta\, \mathbf{n})\otimes d\varpi \tag{15}\] It then remains to observe that \([df]^{W}=df-df(u)\otimes d\varpi\) by (11) to obtain the first equality. The second equality is easily checked over the basis \((u,w)\). For any smooth map \(\eta:\widetilde{C}\to\mathbb{R}_{\geq 0}\) we consider the metric on \(\widetilde{C}\) \[\mu:=f^{*}\left\langle\cdot,\cdot\right\rangle+\eta d\varpi\otimes d\varpi.\] **Corollary 13**.: _Let \(\alpha\) be given by (13) with_ \[r:=\sqrt{\eta+\left\|df(u)\right\|^{2}}\] _then the map \(L\) is \(\mu\)-isometric, i.e., \(L^{*}\left\langle\cdot,\cdot\right\rangle=\mu\). In particular, at every point \(p\in\widetilde{C}\), the linear map \(L(p)\) is a monomorphism._ Since \(\gamma\) depends only on \(f\) and \(\eta\), we introduce the following notations. Notations.Let \(f\) be an immersion, \(\varpi\) a projection, \(N>0\) be an integer and \(\eta>0\) a function. The map obtained by the corrugation process in Definition 10 is denoted by \[F=CP(f,\eta,\varpi,N),\] where we choose the family of loops (12) with \(r\) and \(\alpha\) as in Corollary 13. Beware that \(f\) should be an immersion in order to have a well defined corrugation frame, allowing to define \(\gamma\) as in (12). We also denote by \[L(f,\eta,\varpi,N)\] the target differential \(L\) of Definition 11, where we again choose the family of loops (12) with \(r\) and \(\alpha\) as in Corollary 13. ### Properties of the corrugation process In this section, we fix an immersion \(f\), a projection \(\varpi\) and a function \(\eta>0\). 
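Before stating the quantitative properties, here is a small numerical sanity check of Lemma 12 and Corollary 13 on a single tangent space (an illustration written for this presentation, not code from the paper): for a randomly chosen linear map \(\Phi\) playing the role of \(df(p)\), a covector \(\ell=d\varpi\), a coefficient \(\eta>0\) and an arbitrary phase \(\theta\), the target differential \(L=[\Phi]^{W}+r(\cos\theta\,\mathbf{t}+\sin\theta\,\mathbf{n})\otimes\ell\) with \(r=\sqrt{\eta+\|\Phi(u)\|^{2}}\) pulls the Euclidean scalar product back to \(\Phi^{*}\langle\cdot,\cdot\rangle+\eta\,\ell\otimes\ell\), whatever the phase.

```python
# Numerical check of Lemma 12 / Corollary 13 on one tangent space.
# Phi plays the role of df(p) (a 3x2 matrix), ell the role of d(varpi)(p).
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.normal(size=(3, 2))          # a generic monomorphism R^2 -> E^3
ell = np.array([0.8, 0.6])             # the fixed covector d(varpi)
eta = 0.5                              # coefficient of ell ⊗ ell to be added
theta = 1.1                            # any phase alpha*cos(2*pi*N*varpi(p))

w = np.array([-ell[1], ell[0]])        # spans ker(ell)
v = ell / ell.dot(ell)                 # ell(v) = 1
w_hat = Phi @ w / np.linalg.norm(Phi @ w)
n = np.cross(Phi @ v, Phi @ w)
n /= np.linalg.norm(n)                 # unit normal of the image plane
t = np.cross(w_hat, n)                 # corrugation frame (t, w_hat, n)

# u is the vector with ell(u) = 1 and Phi(u) collinear to t.
u = v - (Phi @ v).dot(w_hat) / np.linalg.norm(Phi @ w) * w
r = np.sqrt(eta + (Phi @ u).dot(Phi @ u))

L = (Phi - np.outer(Phi @ u, ell)) + r * np.outer(np.cos(theta) * t + np.sin(theta) * n, ell)

lhs = L.T @ L                                        # L* <.,.> in the standard basis
rhs = Phi.T @ Phi + eta * np.outer(ell, ell)         # Phi* <.,.> + eta ell ⊗ ell
print(np.allclose(lhs, rhs))                         # True, independently of theta
```

The identity holds pointwise for every phase, which is why the corrugation can oscillate without perturbing the targeted metric; Lemma 14 below quantifies how closely the actual corrugated map \(F\) realizes this target.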
**Lemma 14**.: _For all point \(p\in\widetilde{C}\), the map \(F=CP(f,\eta,\varpi,N)\) satisfies_ \[\left\{\begin{array}{l}\|F(p)-f(p)\|=O(\frac{1}{N}),\\ \|dF(p)-L(p)\|=O(\frac{1}{N}),\\ \|F^{*}\left\langle\cdot,\cdot\right\rangle(p)-\mu(p)\|=O(\frac{1}{N}),\\ \|dF(p)-df(p)\|\leq\|dF(p)-L(p)\|+\sqrt{7\eta(p)}\|d\varpi\|\end{array}\right.\] Proof.: The first two equalities follow from properties \((i)\) and \((ii)\) of Section 3.1, the third from \((ii)\) and the fact that \(L^{*}\left\langle\cdot,\cdot\right\rangle=\mu.\) For the last inequality we have, by the triangle inequality, \[\|dF(p)-df(p)\|\leq\|dF(p)-L(p)\|+\|L(p)-df(p)\|.\] Equation (15) shows that the difference \(L-df\) reduces to a tensor product of the form \(X\otimes d\varpi\) where \[\|X\|^{2} = (r\cos\theta-\|df(u)\|)^{2}+r^{2}\sin^{2}\theta.\] By using Equation (13), we obtain \[\|X\|^{2}=r^{2}\left(1+J_{0}^{2}(\alpha)-2J_{0}(\alpha)\cos\theta\right).\] Since the first positive root of \(J_{0}\) is lower than \(\pi\) we have \(\pi>\alpha>\theta\) and \[\|X\|^{2}\leq r^{2}\left(1+J_{0}^{2}(\alpha)-2J_{0}(\alpha)\cos\alpha\right).\] We then use the following inequality from [3, Sublemma 5]: \[1+J_{0}^{2}(\alpha)-2J_{0}(\alpha)\cos\alpha\leq 7(1-J_{0}^{2}(\alpha))\] that holds for every \(\alpha\) between zero and the first positive root of \(J_{0}\). We finally obtain by Corollary 13 \[\|X\|^{2}\leq 7\left(r^{2}-\|df(u)\|^{2}\right)=7\eta.\] For every linear map \(\Psi:\mathbb{R}^{2}\to\mathbb{E}^{3}\) we set \[\lambda(\Psi):=\inf_{v\in\mathbb{R}^{2}\setminus\{0\}}\frac{\|\Psi(v)\|}{\|v \|}. \tag{16}\] So, \(\Psi\) is a monomorphism if and only if \(\lambda(\Psi)>0\). **Lemma 15**.: _Let \(F=CP(f,\eta,\varpi,N)\). For all \(p\in\widetilde{C}\), we have_ \[\lambda(dF(p))\geq\lambda(df(p))-\|dF(p)-L(p)\|.\] _Hence, if \(\lambda(df(p))>\|dF(p)-L(p)\|\) for all \(p\in\widetilde{C}\) then the map \(F\) is an immersion._ Proof.: For every vector \(v\in\mathbb{R}^{2}\), we have by the reverse triangle inequality: \[\|dF(p)(v)\|\geq\|L(p)(v)\|-\|dF(p)(v)-L(p)(v)\|.\] Since \(L\) is \(\mu\)-isometric, we also have \[\|L(p)(v)\|=\sqrt{\mu(p)(v,v)}\geq\|df(p)(v)\|.\] Putting together the two inequalities, we easily deduce the inequality in the lemma. **Lemma 16**.: _Let \(F=CP(f,\eta,\varpi,N)\). If \(f\) is an embedding, \(F\) is an immersion and if, on some compact set \(K\), the amplitude \(\alpha\) is strictly lower than \(\pi/2\) then for \(N\) large enough the restriction of \(F\) on \(K\) is an embedding._ Proof.: Let \(p\in K\) and let \(w,u\) be as in Section 3.2. From (15) and Lemma 14, we have \[dF(p)(u) = r(p)(\cos\theta(p)\,\mathbf{t}(p)+\sin\theta(p)\,\mathbf{n}(p)) +O(1/N)\] \[dF(p)(w) = df(p)(w)+O(1/N)\] with \(\theta(p)=\alpha(p)\cos(2\pi N\varpi(p))\). It follows that the angle between the normal \(\mathbf{n}_{F}(p)\) of \(F\) at \(p\) and the normal \(\mathbf{n}(p)=\mathbf{n}_{f}(p)\) is less than \(\alpha(p)+O(1/N)\). By the hypothesis on \(\alpha\), we deduce that there exists a radius \(\delta(p)>0\) and a corrugation number \(N(p)\) such that for all \(q\in D(p,\delta(p))\) and for all \(N\geq N(p)\) the angle between \(\mathbf{n}_{F}(q)\) and \(\mathbf{n}(p)\) is strictly less than \(\pi/2\). Since \(K\) is compact, we easily deduce that there exists \(\delta_{K}>0\) and a corrugation number \(N_{K}\) such that for all \(p,q\in K\) with \(\|p-q\|<\delta_{K}\) the angle between \(\mathbf{n}_{F}(q)\) and \(\mathbf{n}(p)\) is strictly less than \(\pi/2\). 
Thus, over each \(D^{2}(p,\delta_{K})\) the immersion \(F\) is a graph over \(df(p)(\mathbb{R}^{2})\), hence its restriction to \(D^{2}(p,\delta_{K})\) is an embedding. The crucial point of this approach is that \(\delta_{K}\) does not depend on \(N\geq N_{K}.\) We now consider the two distances \[d_{f}(p_{1},p_{2}):=\|f(p_{2})-f(p_{1})\|\quad\text{and}\quad d_{F}(p_{1},p_{2 }):=\|F(p_{2})-F(p_{1})\|\] and the following neighborhood of the diagonal of \(K\times K\) : \[V(\delta_{K})=\{(p_{1},p_{2})\in K\times K\,|\,\exists p\text{ such that }p_{1},p_{2}\in D^{2}(p,\delta_{K})\}.\] Since the complement \(K\times K\setminus V(\delta_{K})\) is relatively compact and \(f\) is an embedding, we have on this complement \[d_{f}(p_{1},p_{2})\geq d_{min}>0.\] From Lemma 14, we know that \(\|F-f\|_{K,\infty}=O(N^{-1})\) and thus there exists \(N_{K}^{\prime}\geq N_{K}\) such that for all \(N\geq N_{K}^{\prime}\), \(\|F-f\|_{K,\infty}<d_{min}/3\). It follows that \[\forall(p_{1},p_{2})\in K\times K,\quad d_{F}(p_{1},p_{2})\geq d_{f}(p_{1},p_ {2})-\frac{2}{3}d_{min}.\] This implies that the function \(d_{F}\) never vanishes on \(K\times K\setminus V(\delta_{K})\). Since the restriction to \(D^{2}(p,\delta_{K})\) of \(F\) is an embedding, the distance \(d_{F}\) can not vanish on \(V(\delta_{k})\), except at the points of the diagonal. This shows that \(F\) is an embedding. Descending to the quotient \(C\).In general, the affine projection \(\varpi:\widetilde{C}\to\mathbb{R}\) does not descend to the quotient \(C.\) However, its differential \(d\varpi:\widetilde{C}\to\mathscr{L}(\mathbb{R}^{2},\mathbb{R})\) does, since it is constant. If the immersion \(f:\widetilde{C}\to\mathbb{E}^{3}\) and the map \(\eta:\widetilde{C}\to\mathbb{R}_{>0}\) descend to the quotient, the metric \(\mu=f^{*}\left\langle\cdot,\cdot\right\rangle+\eta d\varpi\otimes d\varpi\) also descends to the quotient. The lemma below easily follows from Definition 10 and the 1-periodicity observed in Equation (8). **Lemma 17**.: _If \(f\) and \(\eta\) descend to the quotient \(C\) and if \(\varpi\) satisfies_ \[\forall(\rho,\varphi)\in\widetilde{C},\quad\varpi(\rho,\varphi+2\pi)-\varpi( \rho,\varphi)\in\mathbb{Z}\] _then the map \(F=CP(f,\mu,\varpi,N)\) descends to the quotient \(C.\)_ ## 4 Isometric 3-corrugated immersions In this section, we show under very general assumptions on the sequence of metrics \((g_{k})_{k}\) that it is possible to simplify the Nash-Kuiper construction. This simplification consists, at each step \(k\), in fixing the number \(n_{k}\) of linear forms to 3 and in considering throughout the same linear forms \(\ell_{1},\ell_{2},\ell_{3}\). This allows us to express the variable \(\eta\), involved in the Corrugation Process, as a function of the target metric and the map to be corrugated. The result is an _explicit_ iterative process (21) defining a sequence of applications converging to a \(C^{1}\) isometry, see Proposition 19. We say that the limit map is 3-corrugated. Note that Proposition 19 is stated in the particular context of the construction of an isometric embedding of the Poincare disk into \(\mathbb{E}^{3}\) but it could be applied, _mutatis mutandis_, to explicitly construct codimension 1 isometric immersions in \(\mathbb{E}^{n+1}\). In this case, the number of linear forms to consider is \(s_{n}:=\frac{n(n+1)}{2}\) and the limit map is \(s_{n}\)-corrugated. 
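To fix ideas before the formal definitions, the decomposition underlying the 3-corrugated process can be computed pointwise by solving a \(3\times 3\) linear system. The sketch below is only an illustration with one standard choice of linear forms, not necessarily the \(\ell_{1},\ell_{2},\ell_{3}\) used later in the construction.

```python
# Illustration: decomposing a symmetric bilinear form on R^2 in a primitive
# basis (l_i ⊗ l_i).  The three linear forms below are a common textbook choice.
import numpy as np

l = [np.array([1.0, 0.0]),                 # l_1 = dx
     np.array([0.0, 1.0]),                 # l_2 = dy
     np.array([1.0, 1.0]) / np.sqrt(2.0)]  # l_3 = (dx + dy)/sqrt(2)

basis = [np.outer(v, v) for v in l]        # the l_i ⊗ l_i, a basis of S_2(R^2)

def coefficients(B):
    """Solve B = sum_i eta_i * (l_i ⊗ l_i) for (eta_1, eta_2, eta_3)."""
    A = np.array([[M[0, 0], M[0, 1], M[1, 1]] for M in basis]).T
    rhs = np.array([B[0, 0], B[0, 1], B[1, 1]])
    return np.linalg.solve(A, rhs)

D = np.array([[2.0, 0.5], [0.5, 1.5]])     # an isometric default g_k - f*<.,.>
eta = coefficients(D)
print(eta, np.allclose(sum(e * M for e, M in zip(eta, basis)), D))
```

When all three coefficients returned by `coefficients` are positive, the bilinear form lies in the open cone spanned by the \(\ell_{i}\otimes\ell_{i}\); keeping every isometric default inside this cone at each step \((k,i)\) is exactly the requirement discussed in Section 4.3.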
### Primitive basis of the cone of metrics

Let \(\mathcal{S}_{2}(\mathbb{R}^{2})\) be the vector space of symmetric bilinear forms of \(\mathbb{R}^{2}\) and let \((\ell_{i}\otimes\ell_{i})_{i\in\{1,2,3\}}\) be a basis of \(\mathcal{S}_{2}(\mathbb{R}^{2})\), where \(\ell_{1}\), \(\ell_{2}\) and \(\ell_{3}\) are three linear forms on \(\mathbb{R}^{2}\). We denote by \((H_{i})_{i\in\{1,2,3\}}\) the dual basis of the primitive basis \((\ell_{i}\otimes\ell_{i})_{i\in\{1,2,3\}}\). So, each \(H_{i}:\mathcal{S}_{2}(\mathbb{R}^{2})\rightarrow\mathbb{R}\) is a linear form and for any symmetric bilinear form \[B=\sum_{i=1}^{3}\eta_{i}\ell_{i}\otimes\ell_{i}\] we have \(\eta_{i}=H_{i}(B).\) Let \[h_{max}:=\max\{\|H_{1}\|,\|H_{2}\|,\|H_{3}\|\} \tag{17}\] be the maximum of the operator norms of the three linear forms \(H_{i}\). For any \(B\) we thus have \[|H_{i}(B)|\leq h_{max}\|B\|.\] Let \(D:C\rightarrow\mathcal{S}_{2}(\mathbb{R}^{2})\) be of class \(C^{\infty}\). We also introduce \[H_{min}(D)(p):=\min_{i\in\{1,2,3\}}H_{i}(D)(p)\quad\text{ and }\quad H_{min}(D):=\inf_{p\in C}H_{min}(D)(p). \tag{18}\] We thus have \[H_{min}(D)>0\quad\iff\quad D\in C^{\infty}(C,\mathscr{G})\] where \(\mathscr{G}\) is the positive cone \[\mathscr{G}:=\{\eta_{1}\ell_{1}\otimes\ell_{1}+\eta_{2}\ell_{2}\otimes\ell_{2}+\eta_{3}\ell_{3}\otimes\ell_{3}\,|\,\eta_{1}>0,\eta_{2}>0,\eta_{3}>0\}. \tag{19}\]

### Definition of the 3-corrugated process

Let \((\varpi_{i})_{i\in\{1,2,3\}}\) be three affine projections satisfying the condition of Lemma 17. We set \(\ell_{i}=d\varpi_{i}\) and assume that \((\ell_{i}\otimes\ell_{i})_{i\in\{1,2,3\}}\) is a basis of \(\mathcal{S}_{2}(\mathbb{R}^{2})\). In general, the coefficient \(\eta_{k,i}=H_{i}(D_{k,i})\) of the decomposition of the difference \[D_{k,i}:=g_{k}-f_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle \tag{20}\] on this basis has no reason to be positive. If at each step \((k,i)\) this coefficient \(\eta_{k,i}\) is positive then the corrugation process is well-defined and can be used to build a sequence \((f_{k,i})_{k,i}\) of corrugated maps. In that case, we write \[f_{k,i}=CP_{i}(f_{k,i-1},g_{k},N_{k,i}),\qquad i\in\{1,2,3\}, \tag{21}\] for \(CP(f_{k,i-1},\eta_{k,i},\varpi_{i},N_{k,i})\). Indeed, the \(\varpi_{i}\) being given once and for all, the coefficient \(\eta_{k,i}\) can be deduced from \(g_{k}\) and \(f_{k,i-1}\). The affine projection used in the corrugation process is indicated by the subscript \(i\) in \(CP_{i}\). We use the convention \(f_{k-1}=f_{k,0}=f_{k-1,3}\) and set \(f_{k}:=f_{k,3}\).

**Definition 18**.: When the maps \(f_{k}\) are constructed as above by iterating the corrugation process (21) involving the same three maps \(\varpi_{i}\), \(i\in\{1,2,3\}\), we say that the sequence \((f_{k})_{k}\) is obtained by a _3-corrugated process_ and that the limit map \(f_{\infty}\) (if it exists) is _3-corrugated_.

### Properties of 3-corrugated limit maps

The existence of a 3-corrugated process is not trivial. Not only must we ensure that the isometric default \(g_{1}-f_{0}^{*}\left\langle\cdot,\cdot\right\rangle\) of the initial map \(f_{0}\) lies in the positive cone \(\mathscr{G}\) generated by the \((\ell_{i}\otimes\ell_{i})_{i}\), but we also need to make sure that this property remains true for the successive isometric defaults \(D_{k,i}\): precisely, at each step \((k,i)\) the \(\ell_{i}\otimes\ell_{i}\)-component of \(D_{k,i}\) must be positive.
To deal with this problem we will consider only sequences of metrics satisfying the following property \[\forall k\in\mathbb{N}^{*},\forall p\in C,\quad g_{k}(p)-g_{k-1}(p)\in \mathscr{G}\] ( \[P_{5}\] ) **Proposition 19**.: _Let \(f_{0}:C\to\mathbb{E}^{3}\) be an immersion and \((g_{k})_{k}\uparrow h\) be an increasing sequence of metrics defined on \(C\) and converging toward \(h\) on \(C^{*}=C\setminus(\{1\}\times\mathbb{R}/2\pi\mathbb{Z})\). If_ * _the sequence of metrics_ \((g_{k})_{k}\) _satisfies Property (_\(P_{4}\)_) and (_\(P_{5}\)_)_ * _the corrugation numbers_ \((N_{k,i})_{k,i}\) _are chosen large enough_ _then the sequence \((f_{k,i})_{k,i}\) iteratively defined by (21) \(C^{0}\) converges on \(C\) toward a 3-corrugated map \(f_{\infty}\). The sequence is also \(C^{1}\) converging on \(C^{*}\) and, on that set, \(f_{\infty}\) is a \(h\)-isometric immersion._ The proof of this proposition is given in the two next sections. The first one shows that the sequence \((f_{k})_{k}\) is well-defined and the second one that Properties (\(P_{1}\)), (\(P_{2}\)) and (\(P_{3}\)) hold. Proposition 19 then follows by applying Proposition 8. ### Proof of the existence part The map \(f_{k,i}\) constructed via formula (21) is well-defined provided that the map \(f_{k,i-1}\) is an immersion and the coefficient \(\eta_{k,i}=H_{i}(D_{k,i})\) is positive. For all \(i\in\{1,2,3\}\) we put \[\mu_{k,i}:=f_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle+\eta_{k,i}\ell_{i }\otimes\ell_{i}\quad\text{and}\quad Err_{k,i}:=\mu_{k,i}-f_{k,i}^{*}\left\langle \cdot,\cdot\right\rangle.\] We also set \(err_{k,i}:=\|Err_{k,i}\|_{C,\infty}.\) By Lemma 14, \(err_{k,i}=O(N_{k,i}^{-1})\). The following Lemmas 20, 21 and 22 show how to control the coefficients \(\eta_{k,i}\) and the isometric default \(D_{k,4}:=g_{k}-f_{k}^{*}\left\langle\cdot,\cdot\right\rangle\) of the map \(f_{k}=f_{k,3}\) in terms of the \(err_{k,i}\). Then, Lemma 23 gives a sufficient condition on the choice of the corrugation numbers to obtain a well defined sequence of maps \((f_{k,i})_{k\in\mathbb{N}^{*},i\in\{1,2,3\}}\). **Lemma 20**.: _For every \(i\in\{1,2,3\}\), recalling that \(D_{k,i}=g_{k}-f_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle\), we have_ \[Err_{k,i}=H_{i}(D_{k,i})\ell_{i}\otimes\ell_{i}-D_{k,i}+D_{k,i+1}.\] _In particular_ \[\forall j\neq i,\qquad H_{j}(Err_{k,i})=H_{j}(D_{k,i+1}-D_{k,i}). \tag{22}\] Proof.: It is enough to decompose \(Err_{k,i}\) as follows: \[Err_{k,i} = \mu_{k,i}-f_{k,i}^{*}\left\langle\cdot,\cdot\right\rangle\] \[= (\mu_{k,i}-f_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle)-( g_{k}-f_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle)+(g_{k}-f_{k,i}^{*} \left\langle\cdot,\cdot\right\rangle)\] \[= H_{i}(D_{k,i})\ell_{i}\otimes\ell_{i}-D_{k,i}+D_{k,i+1},\] where the last line comes from the definition of \(\mu_{k,i}\). **Lemma 21**.: _With \(H_{min}\) and \(h_{max}\) as in (18) and (17), we have for all \(p\in C\)_ \[H_{1}(D_{k,1}(p)) \geq H_{min}(D_{k,1})\] \[H_{2}(D_{k,2}(p)) \geq H_{min}(D_{k,1})-h_{max}.err_{k,1}\] \[H_{3}(D_{k,3}(p)) \geq H_{min}(D_{k,1})-h_{max}.(err_{k,1}+err_{k,2}).\] Proof.: The first inequality is trivial. 
For the second one, by definition of \(h_{max}\) we have \[\|H_{2}(Err_{k,1})\|_{C,\infty}\leq h_{max}.err_{k,1}.\] By using (22) of Lemma 20 with \(j=2\) and \(i=1\) we deduce for all \(p\in C\) \[H_{2}(D_{k,2}(p)) \geq H_{2}(D_{k,1}(p))-h_{max}.err_{k,1}\] \[\geq H_{min}(D_{k,1})-h_{max}.err_{k,1}.\] For the \(\ell_{3}\otimes\ell_{3}\)-component of \(D_{k,3}\), we similarly have \[H_{3}(Err_{k,1})=H_{3}(D_{k,2}-D_{k,1})\text{ and }H_{3}(Err_{k,2})=H_{3}(D_{k,3 }-D_{k,2})\] and thus \[H_{3}(D_{k,3}(p)) = H_{3}(D_{k,1}(p))+H_{3}(Err_{k,1}(p))+H_{3}(Err_{k,2}(p))\] \[\geq H_{3}(D_{k,1}(p))-h_{max}.(err_{k,1}+err_{k,2})\] \[\geq H_{min}(D_{k,1})-h_{max}.(err_{k,1}+err_{k,2})\] which is the third inequality of the lemma. The following lemma is needed to ensure that the difference \(D_{k,4}=g_{k}-f_{k,3}^{*}\left\langle\cdot,\cdot\right\rangle\) is small enough so that the map \(f_{k,3}\) is short for the next metric \(g_{k+1}\). **Lemma 22**.: _Let \(C_{H}:=1+h_{max}\|\ell_{2}^{2}\|+h_{max}\|\ell_{3}^{2}\|.\) We have_ \[\|D_{k,4}\|_{C,\infty}\leq C_{H}.(err_{k,1}+err_{k,2}+err_{k,3}).\] Proof.: It follows from (22) of Lemma 20 that \[\left\{\begin{array}{lll}|H_{2}(D_{k,1}-D_{k,2})|&\leq&h_{max}.err_{k,1}\\ |H_{3}(D_{k,1}-D_{k,3})|&\leq&h_{max}.(err_{k,1}+err_{k,2}).\end{array}\right.\] By Lemma 20 we have for every \(i\in\{1,2,3\}\) \[D_{k,i+1}=D_{k,i}+Err_{k,i}+H_{i}(D_{k,i})\ell_{i}\otimes\ell_{i}\] which implies that \[D_{k,4}=D_{k,1}+\sum_{i=1}^{3}Err_{k,i}-\sum_{i=1}^{3}H_{i}(D_{k,i})\ell_{i} \otimes\ell_{i}\] hence \[\|D_{k,4}\|_{C,\infty} = \|\sum_{i=1}^{3}Err_{k,i}+\sum_{i=1}^{3}H_{i}(D_{k,1}-D_{k,i}) \ell_{i}\otimes\ell_{i}\|_{C,\infty}\] \[\leq err_{k,1}+err_{k,2}+err_{k,3}\] \[+h_{max}err_{k,1}\|\ell_{2}^{2}\|+h_{max}(err_{k,1}+err_{k,2})\| \ell_{3}^{2}\|\] \[\leq \left(1+h_{max}\|\ell_{2}^{2}\|+h_{max}\|\ell_{3}^{2}\|\right)( err_{k,1}+err_{k,2}+err_{k,3}).\] Recall from Section 2.2 that we have introduced a decreasing sequence of positive numbers \((\tau_{k})_{k\in\mathbb{N}^{*}}\) with finite sum \(\mathrm{T}\). These \(\tau_{k}\) are helpful to guide the choice of the corrugation numbers. For a reason that will become clear in the sequel, we further assume that \[\mathrm{T}\leq\frac{1}{2}\lambda_{C}(df_{0})\quad\mbox{ where }\quad\lambda_{C}(df_{0}):=\inf_{p\in C}\lambda(df_{0}(p)). \tag{23}\] Recall that \(f_{0}\) is the initial map and that \(\lambda(df_{0}(p))\) is given by (16). Note also that \(f\) is an immersion if and only if \(\lambda_{C}(df)>0\). **Lemma 23**.: _Under Assumption \((a)\) of Proposition 19, choose at each step \((k,i)\) the corrugation number \(N_{k,i}\) so that_ \[err_{k,i}\leq\min\left(\frac{H_{min}(D_{k,1})}{4h_{max}},\frac{H_{ min}(g_{k+1}-g_{k})}{6C_{H}h_{max}}\right) \tag{24}\] _and_ \[\|df_{k,i}-L_{k,i}\|_{C,\infty}\leq\frac{\tau_{k}}{3}, \tag{25}\] _where \(L_{k,i}=L(f_{k,i},\eta_{k,i},\varpi_{i},N_{k,i})\) is the target differential of Definition 11. Then the sequence \((f_{k,i})_{k,i}\) is well-defined._ Proof.: We first assume \(k=1.\) By assumption \((a)\) of Proposition 19, we have \(g_{1}-f_{0}^{*}\left\langle\cdot,\cdot\right\rangle\in C^{\infty}(C,\mathscr{ G})\) which is equivalent to the fact that \(H_{min}(D_{1,1})>0.\) Hence \(\eta_{1,1}=H_{1}(D_{1,1})\) is positive and since \(f_{0}\) is an immersion, the map \(f_{1,1}\) is well-defined for any choice of \(N_{1,1}\). Nevertheless, to apply the next corrugation process to \(f_{1,1}\), we need to ensure that \(f_{1,1}\) is an immersion. 
If \(N_{1,1}\) is chosen such that (25) holds, then by Lemma 15 \[\lambda_{C}(df_{1,1})\geq\lambda_{C}(df_{0})-\frac{\tau_{1}}{3}\] and by (23) we deduce that \(f_{1,1}\) is an immersion. Moreover, if \(N_{1,1}\) is such that (24) holds, then by Lemma 21, \(H_{2}(D_{1,2})>0\). Thus \(f_{1,2}\) is well-defined. If \(N_{1,2}\) is chosen so that (25) holds then \[\lambda_{C}(df_{1,2})\geq\lambda_{C}(df_{1,1})-\frac{\tau_{1}}{3}\geq\lambda_{C}(df_{0})-\frac{2\tau_{1}}{3}\] and by (23), \(f_{1,2}\) is an immersion. If moreover \(N_{1,2}\) is chosen so that (24) holds then \(f_{1,3}\) is well-defined and \(\lambda_{C}(df_{1,3})\geq\lambda_{C}(df_{0})-\tau_{1}>0\) by (23), thus \(f_{1}:=f_{1,3}\) is an immersion. To continue the iteration, we need to ensure that \(g_{2}-f_{1}^{*}\left\langle\cdot,\cdot\right\rangle\in C^{\infty}(C,\mathscr{G})\), which is equivalent to \(H_{min}(D_{2,1})>0\). If \(N_{1,3}\) is chosen according to (24) then by Lemma 22 \[\|D_{1,4}\|_{C,\infty}\leq\frac{H_{min}(g_{2}-g_{1})}{2h_{max}}.\] Since \(|H_{i}(D_{1,4}(p))|\leq h_{max}\|D_{1,4}(p)\|\leq h_{max}\|D_{1,4}\|_{C,\infty}\) for all \(i\in\{1,2,3\}\) and all \(p\in C\) we deduce \[|H_{i}(D_{1,4})|\leq\frac{1}{2}H_{min}(g_{2}-g_{1}).\] From the fact that \(D_{2,1}=g_{2}-f_{1,3}^{*}\left\langle\cdot,\cdot\right\rangle=(g_{2}-g_{1})+(g_{1}-f_{1,3}^{*}\left\langle\cdot,\cdot\right\rangle)\) we now have \[H_{i}(D_{2,1}) = H_{i}(g_{2}-g_{1})+H_{i}(D_{1,4})\geq\frac{1}{2}H_{min}(g_{2}-g_{1}).\] From Assumption \((a)\) of Proposition 19, we have \(H_{min}(g_{2}-g_{1})>0\), which shows that \(H_{min}(D_{2,1})>0.\) By an easy induction, we conclude that the \(f_{k,i}\) are iteratively well-defined, each of them being an immersion satisfying \[\lambda_{C}(df_{k,i})\geq\lambda_{C}(df_{0})-\sum_{j=1}^{k}\tau_{j}\geq\frac{1}{2}\lambda_{C}(df_{0}). \tag{26}\]

### End of proof of Proposition 19

**Lemma 24** (Property \(P_{1}\)).: _Under Assumption \((a)\) of Proposition 19, if the corrugation numbers are chosen to satisfy (24) and (25) then Property (\(P_{1}\)) holds:_ \[\forall k\in\mathbb{N}^{*},\forall p\in C:\qquad\|g_{k}(p)-f_{k}^{*}\left\langle\cdot,\cdot\right\rangle(p)\|\leq\|g_{k+1}(p)-g_{k}(p)\|.\] Proof.: We use Lemma 22 and the fact that the corrugation numbers are chosen according to (24) to write for all \(p\) \[\|D_{k,4}(p)\|\leq\frac{H_{min}(g_{k+1}-g_{k})(p)}{2h_{max}}\leq\frac{1}{2}\|g_{k+1}(p)-g_{k}(p)\|\] where \(H_{min}\) is defined in (18). Since \(D_{k,4}=g_{k}-f_{k}^{*}\left\langle\cdot,\cdot\right\rangle\), the above inequality proves the lemma.
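Throughout these lemmas the corrugation numbers are assumed to satisfy (24) and (25). In practice such numbers can be found by a simple search, since both error terms are \(O(1/N_{k,i})\) by Lemma 14. The sketch below is only illustrative; the helpers `sup_error`, `sup_dist_to_target` and `threshold_24`, which would estimate \(err_{k,i}\), \(\|df_{k,i}-L_{k,i}\|_{C,\infty}\) and the right-hand side of (24) (say, on a fine grid of \(C\)), are assumptions and not part of the proofs.

```python
# Hypothetical selection of the corrugation number N_{k,i} enforcing (24)-(25).
# build_map(N) would return f_{k,i} = CP_i(f_{k,i-1}, g_k, N) and build_target(N)
# the target differential L_{k,i}; the sup_* helpers are assumed numerical
# estimators (e.g. maxima over a grid of C).
def choose_corrugation_number(build_map, build_target, sup_error,
                              sup_dist_to_target, threshold_24, tau_k, N0=1):
    N = N0
    while True:
        f_new = build_map(N)                       # candidate f_{k,i}
        L_new = build_target(N)                    # target differential L_{k,i}
        if (sup_error(f_new) <= threshold_24                        # condition (24)
                and sup_dist_to_target(f_new, L_new) <= tau_k / 3):  # condition (25)
            return N, f_new
        N *= 2                                     # err_{k,i} = O(1/N): keep doubling
```

In the proofs themselves no such numerical estimation is needed: the existence of suitable \(N_{k,i}\) follows directly from the \(O(1/N)\) bounds of Lemma 14.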
**Lemma 25** (Property \(P_{3}\)).: _Under Assumption \((a)\) and \((b)\) of Proposition 19, if in addition the corrugations numbers \(N_{k,i}\) are chosen to satisfy (24) and (25) then Property (\(P_{3}\)) holds:_ \[\forall k\in\mathbb{N}^{*},\forall p\in C:\|df_{k}(p)-df_{k-1}(p)\|\leq\tau_{k }+A\,\|g_{k}(p)-f_{k-1}^{*}\left\langle\cdot,\cdot\right\rangle(p)\|^{1/2}.\] Proof.: From Lemma 14 and condition (25) we deduce for all \(p\in C\): \[\|df_{k}(p)-df_{k-1}(p)\|\leq\sum_{i=1}^{3}\|df_{k,i}(p)-df_{k,i-1}(p)\|\leq \tau_{k}+\sqrt{7}\sum_{i=1}^{3}\|\sqrt{H_{i}(D_{k,i}(p))}\ell_{i}\|.\] By Lemma 20 we have, omitting the variable \(p\), \[\left\{\begin{array}{rcl}H_{2}(D_{k,2})&=&H_{2}(Err_{k,1})+H_{2}(D_{k,1})\\ H_{3}(D_{k,3})&=&H_{3}(Err_{k,2})+H_{3}(Err_{k,1})+H_{3}(D_{k,1})\end{array}\right.\] thus \[\left\{\begin{array}{rcl}|H_{2}(D_{k,2})|&\leq&h_{max}err_{k,1}+h_{max}\|D_ {k,1}\|\\ |H_{3}(D_{k,3})|&\leq&h_{max}(err_{k,2}+err_{k,1})+h_{max}\|D_{k,1}\|.\end{array}\right.\] By condition (24) we obtain \[|H_{2}(D_{k,2})|\leq\frac{1}{4}H_{min}(D_{k,1})+h_{max}\|D_{k,1}\|\leq\frac{5 }{4}h_{max}\|D_{k,1}\|\] and similarly \(|H_{3}(D_{k,3})|\leq\frac{3}{2}h_{max}\|D_{k,1}\|.\) Finally, \[\sum_{i=1}^{3}\sqrt{7}\|\sqrt{H_{i}(D_{k,i})}\ell_{i}\|\leq A\|D_{k,1}\|^{1/2}\] for some constant \(A\) that only depends on \(h_{max}\) and the norms \(\|\ell_{i}\|\) of the linear forms \((\ell_{i})_{i\in\{1,2,3\}}\). This concludes the proof of the lemma. Proof of Proposition 19.: Lemma 23 shows the existence part. Lemmas 24 and 25 show that Properties \((P_{1})\) and \((P_{3})\) hold provided that the corrugation numbers are chosen to satisfy (24), (25). By Lemma 14, we can further choose the corrugation numbers \(N_{k,i}\) so that \[\|f_{k,i}-f_{k,i-1}\|_{C,\infty}\leq\frac{\tau_{k}}{3} \tag{27}\] and such a choice ensures Property \((P_{2})\). It then remains to apply Proposition 8 to conclude. ## 5 Proofs of Theorem 1 and Proposition 3 In this section, we choose the three affine projections \(\varpi_{i}\), \(i\in\{1,2,3\}\), the initial embedding \(f_{0}\) and the sequence of metrics \((g_{k})_{k}\) to apply Proposition 19 in order to obtain a \(3\)-corrugated embedding \(f_{\infty}\) satisfying the statement of Theorem 1. ### The wavefront forms Let \(a\in\frac{1}{2\pi}\mathbb{Z}^{*}\). We consider the three projections \(\varpi_{i}:\widetilde{C}\to\mathbb{R}\) defined by \[\varpi_{1}(\rho,\varphi):=-\rho,\quad\varpi_{2}(\rho,\varphi):=\rho-a\varphi \quad\text{and}\quad\varpi_{3}(\rho,\varphi):=\rho+a\varphi \tag{28}\] We use the circular convention \(\varpi_{0}=\varpi_{3}\). Observe that the condition \(a\in\frac{1}{2\pi}\mathbb{Z}^{*}\) is necessary for these projections to pass to the quotient \(C\), see Lemma 17. We denote by \(\ell_{i}=d\varpi_{i}\) the linear forms: \[\ell_{1}:=-d\rho,\quad\ell_{2}:=d\rho-ad\varphi,\quad\ell_{0}=\ell_{3}:=d\rho +ad\varphi.\] ### The initial embedding. We choose as initial embedding the map \(f_{0}:C\to\mathbb{E}^{3}\) defined by \[f_{0}(\rho,\varphi)=2\left(\rho\cos\varphi,\rho\sin\varphi,\frac{\sqrt{2}}{2} \rho^{2}\right). \tag{29}\] The analytic expression of its isometric default \(\Delta\) to the Poincare metric \(h\) is given by \[\Delta:=h-f_{0}^{*}\left\langle\cdot,\cdot\right\rangle=4\left(\frac{1}{(1-\rho^{ 2})^{2}}-1-2\rho^{2}\right)d\rho^{2}+4\rho^{2}\left(\frac{1}{(1-\rho^{2})^{2}}-1 \right)d\varphi^{2}.\] It is readily checked that \(\Delta\) is a metric on every point of \(C^{*}\). 
This shows that \(h>f_{0}^{*}\left\langle\cdot,\cdot\right\rangle\), i.e. \(f_{0}\) is a strictly short embedding.

**Lemma 26**.: _Let \(a=\frac{n}{2\pi}\) where \(n\geq 7\) is an integer. Then \(\Delta\in C^{\infty}(C^{*},\mathscr{G})\), where \(\mathscr{G}\) is the positive cone as in (19)._

Proof.: Let \[B=Ed\rho^{2}+F(d\rho\otimes d\varphi+d\varphi\otimes d\rho)+Gd\varphi^{2}\in\mathscr{S}_{2}(\mathbb{R}^{2})\] be a symmetric bilinear form and let \(H_{1}(B)\), \(H_{2}(B)\) and \(H_{3}(B)\) be its coefficients in the basis \((\ell_{i}\otimes\ell_{i})_{i\in\{1,2,3\}}\). A straightforward computation leads to \[H_{1}(B)=E-\frac{G}{a^{2}},\quad H_{2}(B)=\frac{G}{2a^{2}}-\frac{F}{2a}\quad\text{ and }\quad H_{3}(B)=\frac{G}{2a^{2}}+\frac{F}{2a}. \tag{30}\] In particular, the values of the three linear forms \(H_{i}:\mathcal{S}_{2}(\mathbb{R}^{2})\to\mathbb{R}\), \(i\in\{1,2,3\}\), on the isometric default \(\Delta\) are \[H_{2}(\Delta)=H_{3}(\Delta)=\frac{2\rho^{2}}{a^{2}}\left(\frac{1}{(1-\rho^{2})^{2}}-1\right)>0\] and \[H_{1}(\Delta)=\frac{a^{2}(12\rho^{4}-8\rho^{6})-8\rho^{4}+4\rho^{6}}{a^{2}(1-\rho^{2})^{2}}.\] The number \(H_{1}(\Delta)\) is positive if and only if its numerator is positive, that is, \[a^{2}>\frac{2-\rho^{2}}{3-2\rho^{2}}.\] The function \(\rho\mapsto\frac{2-\rho^{2}}{3-2\rho^{2}}\) is increasing for \(\rho\in\,]0,1]\) and its maximum is \(1\). We deduce that \(H_{1}(\Delta)>0\) for every \(\rho\in\,]0,1]\) if and only if \(a>1\). Since \(a=\frac{n}{2\pi}>1\) as soon as \(n\geq 7\), the lemma follows.

Figure 7: Level lines of the projections \(\varpi_{i}\) in \(C_{0}\) with \(i=1\) (left), \(i=2\) (middle) and \(i=3\) (right).

Figure 8: Wavefront curves \(\{p\in D^{2},\varpi_{i}(p)=Cte\}\) for \(i=1\) (left), \(i=2\) (middle) and \(i=3\) (right). Their images by \(f_{\infty}\) correspond to the wavefronts of the different layers of corrugations, see Figure 1.

Choice of the parameter \(a\). We choose \(a=\frac{7}{2\pi}\in\frac{1}{2\pi}\mathbb{Z}^{*}\).

### The sequence of metrics

Just like the metric \(h\), the isometric default \(\Delta\) blows up at \(\rho=1\). To build an increasing sequence of metrics \((g_{k})_{k}\) converging toward \(h\) while remaining bounded on \(C\), we consider the Taylor series of \(\Delta\) in the variable \(\rho\). We then add truncations of this series to the metric \(f_{0}^{*}\left\langle\cdot,\cdot\right\rangle\). The coefficients of the resulting metrics being polynomial in \(\rho\), they extend to the boundary \(\rho=1\). In more detail, the Taylor series of the \(E\) and \(G\) coefficients of the isometric default \(\Delta\) are \[E(\Delta)=4\sum_{n=1}^{\infty}(n+2)\rho^{2(n+1)}\quad\text{and}\quad G(\Delta)=4\sum_{n=1}^{\infty}(n+1)\rho^{2(n+1)}.\] For every \(k\in\mathbb{N}^{*}\) we consider the truncations \[\delta_{k}^{E}(\rho):=4\sum_{n=1}^{k}(n+2)\rho^{2(n+1)}\quad\text{and}\quad\delta_{k}^{G}(\rho):=4\sum_{n=1}^{k}(n+1)\rho^{2(n+1)}\] and define a sequence of metrics by setting \[g_{k}:=f_{0}^{*}\left\langle\cdot,\cdot\right\rangle+\Delta_{k}\quad\text{where}\quad\Delta_{k}:=\delta_{k}^{E}(\rho)d\rho^{2}+\delta_{k}^{G}(\rho)d\varphi^{2}. \tag{31}\] Note that \(\Delta_{0}=0\) and \(g_{0}=f_{0}^{*}\left\langle\cdot,\cdot\right\rangle\). Obviously \(g_{k}\uparrow h\) and each metric \(g_{k}\) is bounded on \(C\).
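As a quick sanity check of the formulas above, the coefficients (30) can be evaluated numerically, with \(a=7/2\pi\), on the isometric default \(\Delta\) and on the increments \(g_{k}-g_{k-1}\) of the truncated metrics (31). The following minimal sketch (assuming NumPy is available) merely re-evaluates these closed formulas; it is not part of the construction.

```python
import numpy as np

a = 7 / (2 * np.pi)      # the parameter a chosen above

def H(E, F, G):
    """Coefficients (30) of B = E drho^2 + F (drho dphi + dphi drho) + G dphi^2
    in the basis (l_i x l_i), for l_1 = -drho, l_2 = drho - a dphi, l_3 = drho + a dphi."""
    return (E - G / a**2,
            G / (2 * a**2) - F / (2 * a),
            G / (2 * a**2) + F / (2 * a))

rho = np.linspace(1e-2, 0.999, 500)

# Isometric default Delta of the initial embedding f_0 (Lemma 26); here F = 0.
E_delta = 4 * (1 / (1 - rho**2)**2 - 1 - 2 * rho**2)
G_delta = 4 * rho**2 * (1 / (1 - rho**2)**2 - 1)
assert all(np.all(c > 0) for c in H(E_delta, 0.0, G_delta))

# Increments g_k - g_{k-1} of the truncated metrics (31); again F = 0.
for k in range(1, 10):
    E_inc = 4 * (k + 2) * rho**(2 * (k + 1))
    G_inc = 4 * (k + 1) * rho**(2 * (k + 1))
    assert all(np.all(c > 0) for c in H(E_inc, 0.0, G_inc))
```

Both families of coefficients come out positive on the sampled grid, in accordance with Lemma 26 and with the next lemma.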
**Lemma 27** (Property \(P_{5}\)).: _The sequence \((g_{k})_{k}\) of metrics defined by (31) satisfies_ \[\forall k\in\mathbb{N}^{*},\qquad g_{k}-g_{k-1}\in C^{\infty}(C,\mathscr{G}).\] Proof.: A direct computation using (30) leads to \[H_{2}(g_{k}-g_{k-1})=H_{3}(g_{k}-g_{k-1})=4\rho^{2(k+1)}\frac{k+1}{2a^{2}}>0\] and \[H_{1}(g_{k}-g_{k-1})=4\rho^{2(k+1)}\left(k+2-\frac{k+1}{a^{2}}\right)>0\] because \(a>1\).

For a metric \(g\) we denote by \(\|g\|_{X,\infty}\) the supremum of \(\|g(\rho,\varphi)\|\) on \(X\subset C.\) The following lemma will be needed later to deduce that the sequence \((f_{k})_{k}\) is \(C^{1}\)-converging over each compact set of \(C^{*}\), that is, over \(C^{*}.\)

**Lemma 28** (Property \(P_{4}\)).: _The sequence \((g_{k})_{k}\) of metrics defined by (31) satisfies_ \[\sum_{k=1}^{+\infty}\|g_{k}-g_{k-1}\|_{K,\infty}^{1/2}<+\infty\] _for every compact set \(K\subset C^{*}.\)_ Proof.: Let \(b<1\) be the radius of a disk centered at the origin that contains \(K\) and let \(\mathrm{eucl}:=d\rho\otimes d\rho+d\varphi\otimes d\varphi\) be the Euclidean metric on \(C.\) We have \[\|g_{k}-g_{k-1}\|_{K,\infty}\leq 4b^{2k+2}(k+2)\|\mathrm{eucl}\|\] and the result follows from the fact that \(\sum\sqrt{k+2}\,b^{k+1}<+\infty.\)

### Existence and regularity of the limit map

**Proposition 29**.: _Let \(f_{0}\) be the embedding defined by (29), \(\varpi_{1},\varpi_{2},\varpi_{3}\) the affine projections defined by (28) and \((g_{k})_{k}\) be the sequence of metrics defined by (31). If the corrugation numbers \(N_{k,i}\) are large enough then the 3-corrugated process_ \[f_{k,i}=CP_{i}(f_{k,i-1},g_{k},N_{k,i}),\qquad i\in\{1,2,3\},\] _is well-defined and its limit map \(f_{\infty}\) is continuous on \(C\) and a \(C^{1}\) \(h\)-isometric immersion on \(C^{*}.\)_ Proof.: Lemmas 28 and 27 show that \((P_{4})\) and \((P_{5})\) hold. It is then enough to apply Proposition 19 to obtain Proposition 29.

**Lemma 30** (Holder regularity).: _If, in addition to the assumptions of Proposition 29, the sequence \((\tau_{k})_{k}\) is chosen such that, from some \(k_{0}>0\), we have_ \[\forall k\geq k_{0},\qquad\tau_{k}\leq e^{-k}\] _then the limit map \(f_{\infty}\) is \(\beta\)-Holder for any \(0<\beta<1.\)_ Proof.: From the definition of the metrics \(g_{k}\) we have \[4(k+1)\|\mathrm{eucl}\|\leq\|g_{k}-g_{k-1}\|_{C,\infty}\leq 4(k+2)\|\mathrm{eucl}\|.\] In particular, there exists \(k_{0}\in\mathbb{N}\) such that for all \(k\geq k_{0}\), we have \(\tau_{k}\leq\|g_{k}-g_{k-1}\|_{C,\infty}^{1/2}\) and Condition \((i)\) of Proposition 9 is fulfilled. Regarding Condition \((ii)\), it is easily seen that the series \[\sum(k+2)^{\frac{\beta}{2}}e^{-(1-\beta)k}\] is convergent for every \(0<\beta<1\).

### Embedded nature of the limit map

**Lemma 31**.: _If the corrugation numbers are chosen so that (24), (25) and (27) hold then we have_ \[g_{k-1}-\left(|D_{k-1,4}|+\sum_{j=1}^{i}|Err_{k,j}|\right)\leq f_{k,i}^{*}\left\langle\cdot,\cdot\right\rangle\leq g_{k}+|D_{k,4}|+\sum_{j=i+1}^{3}|Err_{k,j}|\] _for all \(i\in\{0,1,2,3\}\)._ Proof.: We first prove the left hand side inequality. From the definition of \(D_{k-1,4}\), we have \[f_{k,0}^{*}\left\langle\cdot,\cdot\right\rangle=f_{k-1,3}^{*}\left\langle\cdot,\cdot\right\rangle=g_{k-1}-D_{k-1,4}\geq g_{k-1}-|D_{k-1,4}|,\] which proves the inequality for \(i=0\). From Lemma 20 we have \[Err_{k,i}=H_{i}(D_{k,i})\ell_{i}\otimes\ell_{i}+f_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle-f_{k,i}^{*}\left\langle\cdot,\cdot\right\rangle.
\tag{32}\] Since \(H_{i}(D_{k,i})\geq 0\) it follows \[f_{k,i}^{*}\left\langle\cdot,\cdot\right\rangle\geq f_{k,i-1}^{*}\left\langle \cdot,\cdot\right\rangle-Err_{k,i}. \tag{33}\] If \(i=1\), and by definition of \(D_{k-1,4}=g_{k-1}-f_{k-1,3}^{*}\left\langle\cdot,\cdot\right\rangle\), we deduce \[f_{k,1}^{*}\left\langle\cdot,\cdot\right\rangle\geq f_{k,0}^{*}\left\langle \cdot,\cdot\right\rangle-Err_{k,1}\geq g_{k-1}-D_{k-1,4}-Err_{k,1}\] Using inductively (33), we then obtain \[f_{k,i}^{*}\left\langle\cdot,\cdot\right\rangle\geq g_{k-1}-D_{k-1,4}-\sum_{j =1}^{i}Err_{k,j}.\] Regarding the other inequality, by definition of \(D_{k,4}\) we have \[f_{k,3}^{*}\left\langle\cdot,\cdot\right\rangle=g_{k}-D_{k,4}\leq g_{k}+|D_{k,4}|.\] Using (33) and by induction we find for all \(i\in\{0,1,2,3\}\), \[f_{k,i}^{*}\left\langle\cdot,\cdot\right\rangle\leq g_{k}+|D_{k,4}|+\sum_{j=i +1}^{3}Err_{k,j}.\] **Lemma 32**.: _Let \(\lambda>1.\) If, in addition to (24), (25) and (27) the corrugations numbers \(N_{k,i}\) are chosen to satisfy_ \[err_{k,i}\leq\frac{1}{6\lambda C_{H}}\min\left(\min_{p\in C}\|g_{k+1}(p)-g_{k}( p)\|,\min_{p\in C}\|g_{k}(p)-g_{k-1}(p)\|\right) \tag{34}\] _then for all \(k\in\mathbb{N}^{*}\) and for all \(i\in\{0,1,2,3\}\) we have:_ \[g_{k-1}-\frac{1}{\lambda}(g_{k}-g_{k-1})\leq f_{k,i}^{*}\left\langle\cdot, \cdot\right\rangle\leq g_{k}+\frac{1}{\lambda}(g_{k}-g_{k-1}).\] Proof.: By Lemma 22 and condition (34) of the lemma, we have \[\|D_{k-1,4}\|\leq C_{H}\sum_{i=1}^{3}err_{k-1,i}\leq\frac{1}{2\lambda}\min_{p \in C}\|g_{k}(p)-g_{k-1}(p)\|\] and, since \(C_{H}\geq 1,\) we also have \[\sum_{j=1}^{3}err_{k,i}\leq\frac{1}{2\lambda}\min_{p\in C}\|g_{k}(p)-g_{k-1}( p)\|.\] We deduce from Lemma 31 that \(g_{k-1}-\frac{1}{\lambda}(g_{k}-g_{k-1})\leq f_{k,i}^{*}\left\langle\cdot, \cdot\right\rangle\). We also have \[\|D_{k,4}\|\leq C_{H}\sum_{i=1}^{3}err_{k,i}\leq\frac{1}{2\lambda}\min_{p\in C }\|g_{k}(p)-g_{k-1}(p)\|\] and from Lemma 31 if follows that \(f_{k,i}^{*}\left\langle\cdot,\cdot\right\rangle\leq g_{k}+\frac{1}{\lambda}(g _{k}-g_{k-1}).\) **Lemma 33**.: _If the corrugation numbers \((N_{k,i})_{k,i}\) are chosen large enough then each map \(f_{k,i}\) is an embedding._ Proof.: To apply Lemma 16, we need to show that each \(\alpha_{k,i}\) is strictly less than \(\pi/2.\) We assume that the corrugation numbers are chosen to satisfy (24), (25), (27) and (34). From (13) and Corollary 13, we know that \[\alpha_{k,i}=J_{0}^{-1}\left(\frac{\|df_{k,i-1}(u_{i})\|}{\sqrt{\eta_{k,i}+\| df_{k,i-1}(u_{i})\|^{2}}}\right)=J_{0}^{-1}(\psi(X_{k,i})),\] where \(X_{k,i}:=\eta_{k,i}/\|df_{k,i-1}(u_{i})\|^{2}\) and \(\psi:\mathbb{R}^{+}\rightarrow[0,1]\) is defined by \(\psi(x)=(1+x)^{-1/2}\). 
Since the function \(J_{0}^{-1}\circ\psi\) is increasing, vanishes at \(0\) and satisfies \(J_{0}^{-1}(\psi(\sigma))=\pi/2\) for \(\sigma=3.488629...\), it is sufficient to show that \(X_{k,i}<\sigma.\) From (32) we have \[H_{i}(D_{k,i})\ell_{i}\otimes\ell_{i}=Err_{k,i}+f_{k,i}^{*}\left\langle\cdot,\cdot\right\rangle-f_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle\] and thus by using Lemma 32 \[\eta_{k,i}=\eta_{k,i}\ell_{i}^{2}(u_{i})\leq|Err_{k,i}(u_{i},u_{i})|+(1+2\lambda^{-1})(g_{k}-g_{k-1})(u_{i},u_{i}).\] Still using Lemma 32, we then have \[X_{k,i}=\frac{\eta_{k,i}}{\|df_{k,i-1}(u_{i})\|^{2}}\leq(1+2\lambda^{-1})\frac{(g_{k}-g_{k-1})(u_{i},u_{i})}{(g_{k-1}-\frac{1}{\lambda}(g_{k}-g_{k-1}))(u_{i},u_{i})}+\frac{|Err_{k,i}(u_{i},u_{i})|}{\|df_{k,i-1}(u_{i})\|^{2}}.\] In this inequality, the last term can be made arbitrarily small by choosing \(N_{k,i}\) large enough, so the problem reduces to showing that \[A_{k}(\lambda):=(1+2\lambda^{-1})\frac{(g_{k}-g_{k-1})(u_{i},u_{i})}{(g_{k-1}-\frac{1}{\lambda}(g_{k}-g_{k-1}))(u_{i},u_{i})}<\sigma\] for every \(k\geq 1\). From (31) we have \[g_{k}-g_{k-1}\leq 4(k+2)\rho^{2(k+1)}\text{eucl}\quad\text{and}\quad g_{k-1}\geq 4\left(\sum_{n=0}^{k-1}(n+1)\rho^{2(n+1)}\right)\text{eucl},\] which implies that \[A_{k}(\lambda)\leq\frac{(1+2\lambda^{-1})(k+2)\rho^{2(k+1)}}{\left(\sum_{n=0}^{k-1}(n+1)\rho^{2(n+1)}\right)-\frac{1}{\lambda}(k+2)\rho^{2(k+1)}}.\] A direct calculation shows that the right-hand side is less than \(\sigma\) for every \(k\geq 1\) if \(\lambda\) is chosen large enough (for instance \(\lambda\geq 100\)).

**Lemma 34**.: _If the corrugation numbers \((N_{k,i})_{k,i}\) are chosen large enough then the limit map \(f_{\infty}\) is an embedding._ Proof.: The argument of [7, §10] applies and shows that the limit map \(f_{\infty}\) is an embedding provided that the \(N_{k,i}\) are large enough. In the reasoning, it is important to keep in mind that the maps \(f_{k,i}\) are defined at each step on the compact set \(C\).

Theorem 1 and Proposition 3 follow directly from Proposition 29 and Lemmas 30 and 34.

## 6 Formal Corrugation Process

### The sequence \((\Phi_{k,i})_{k,i}\)

**Definition 35**.: Let \(\Phi:C\to\operatorname{Mono}\left(\mathbb{R}^{2},\mathbb{E}^{3}\right)\), \(\eta:C\to\mathbb{R}_{\geq 0}\), \(\ell=d\varpi\in\mathscr{L}(\mathbb{R}^{2},\mathbb{R})\), \(W=\Phi(\ker\ell)\) and \(N\in\mathbb{N}^{*}\). Consider the formal corrugation frame \[\mathbf{w}=\frac{\Phi(w)}{\|\Phi(w)\|},\;\mathbf{n}=\frac{\Phi(v)\wedge\Phi(w)}{\|\Phi(v)\wedge\Phi(w)\|}\quad\text{and}\quad\mathbf{t}=\mathbf{w}\wedge\mathbf{n}\] where \(v\) is any vector such that \(\ell(v)>0\) and \(w\in\ker\ell\) is such that \((v,w)\) is a direct basis. We define the _formal corrugation process_ of \(\Phi\) to be \[\Phi^{c}:=[\Phi]^{W}+\mathbf{z}\otimes\ell\quad\text{with}\quad\mathbf{z}:=r(\cos\theta\,\mathbf{t}+\sin\theta\,\mathbf{n})\] where \[\left\{\begin{array}{rcl}r&=&\sqrt{\eta+\|\Phi(u)\|^{2}}\\ \theta&=&\alpha\cos(2\pi N\varpi)\\ \alpha&=&J_{0}^{-1}\left(\frac{1}{r}\|\Phi(u)\|\right)\end{array}\right. \tag{35}\] and where \(u\) is the unique vector such that \(\Phi(u)\) is collinear to \(\mathbf{t}\) and \(\ell(u)=1\). Observing that \([\Phi]^{W}=\Phi-\Phi(u)\otimes\ell\), we obtain \[\Phi^{c}=\Phi+(\mathbf{z}-\Phi(u))\otimes\ell. \tag{36}\] Since the data \((\Phi,\eta,\varpi,N)\) completely defines \(\Phi^{c}\), we write \[\Phi^{c}=FCP(\Phi,\eta,\varpi,N).
\tag{37}\] **Remark.-** If \(\Phi=df\) for some immersion \(f:C\to\mathbb{E}^{3}\) then \(\Phi^{c}\) is the target differential \(L(f,\eta,\varpi,N)\), see Definition 11. From Corollary 13, \(\Phi^{c}\) is moreover \(\mu\)-isometric for \(\mu:=f^{*}\left\langle\cdot,\cdot\right\rangle+\eta d\varpi\otimes d\varpi\). Let \(\varpi_{1}\), \(\varpi_{2}\) and \(\varpi_{3}\) be three affine projections and let \(f_{0}:C\to\mathbb{E}^{3}\) be an immersion. We assume given a sequence of metrics \((g_{k})_{k}\) defined on \(C\) satisfying the following hypotheses. * \(g_{0}=f_{0}^{*}\left\langle\cdot,\cdot\right\rangle\), * \((g_{k})_{k}\) is increasing, * \((g_{k})_{k}\) is converging on \(C^{*}\) toward \(h\), * the difference \(g_{k}-g_{k-1}\in C^{\infty}(C,\mathscr{C})\). From such a sequence of metrics and any sequence of positive integers \((N_{k,i})_{k,i}\) we define iteratively a sequence of maps \(\Phi_{k,i}:D^{2}\to Mono(\mathbb{R}^{2},\mathbb{E}^{3})\), by setting \[\Phi_{0}:=df_{0}\quad\text{and}\quad\Phi_{k,i}:=FCP_{i}(\Phi_{k,i-1},g_{k},N_{ k,i}) \tag{38}\] where \(FCP_{i}(\Phi_{k,i-1},g_{k},N_{k,i}):=FCP(\Phi_{k,i-1},\eta_{k,i},\varpi_{i},N_{ k,i})\) and \[\eta_{k,i}:=H_{i}(g_{k}-\Phi_{k,i-1}^{*}\left\langle\cdot,\cdot\right\rangle). \tag{39}\] **Lemma 36**.: _Given a sequence of metrics \((g_{k})_{k}\) satisfying the above assumptions and given any sequence of positive integers \((N_{k,i})_{k,i}\), the sequence \((\Phi_{k,i})_{k,i}\) is well defined and each \(\Phi_{k,i}\) is \(\mu_{k,i}\)-isometric for_ \[\mu_{k,i}^{\Phi}:=g_{k-1}+\sum_{j=1}^{i}H_{j}(g_{k}-g_{k-1})\ell_{j}\otimes \ell_{j}. \tag{40}\] _Moreover, for every \((k,i)\) we have \(\eta_{k,i}=H_{i}(g_{k}-g_{k-1})\)._ Proof.: The sequence \((\Phi_{k,i})_{k,i}\) is well defined if, at each step \((k,i)\), we have \(\eta_{k,i}\geq 0\) and \(\Phi_{k,i-1}\) is a monomorphism. By definition \(\Phi_{1,0}=df_{0}\) is a monomorphism and \(\Phi_{1,0}^{*}\left\langle\cdot,\cdot\right\rangle=g_{0}\). We also have \(\eta_{1,1}=H_{1}(g_{1}-g_{0})>0\) by (39). We observe that \([\Phi_{1,1}]^{W}=\Phi_{1,1}-\Phi_{1,1}(u_{1,1})\ell_{1}\), where \(u_{1,1}\) stands for \(u\) as in Definition 35, with respect to \(\Phi_{1,0}\) and \(\ell_{1}\). From the definition of the formal corrugation process, we thus have \(\Phi_{1,1}=\Phi_{1,0}+(\mathbf{z}_{1,1}-\Phi_{1,0}(u_{1,1}))\otimes\ell_{1}\). We deduce that \(\Phi_{1,1}(u_{1,1})=\mathbf{z}_{1,1}\) and \(\Phi_{1,1}(w_{1})=\Phi_{1,0}(w_{1})\) for any \(w_{1}\in\ker\ell_{1}\). By testing over \((u_{1,1},w_{1})\) we easily check that \[\Phi_{1,1}^{*}\left\langle\cdot,\cdot\right\rangle=\Phi_{1,0}^{*}\left\langle \cdot,\cdot\right\rangle+H_{1}(g_{1}-g_{0})\ell_{1}\otimes\ell_{1}\] proving that \(\Phi_{1,1}\) is \(\mu_{1,1}^{\Phi}\)-isometric. In particular, \(\mu_{1,1}^{\Phi}\) being a metric, \(\Phi_{1,1}\) is a monomorphism. We now compute \[\eta_{1,2}=H_{2}(g_{1}-\Phi_{1,1}^{*}\left\langle\cdot,\cdot\right\rangle)=H_{ 2}(g_{1}-\mu_{1,1}^{\Phi})\] Observe that \(H_{2}(\mu_{1,1}^{\Phi})=H_{2}(g_{0})\), whence \(\eta_{1,2}=H_{2}(g_{1}-g_{0})>0\). It follows that \(\Phi_{1,2}\) is well defined and we have \(\Phi_{1,2}=\Phi_{1,1}+(\mathbf{z}_{1,2}-\Phi_{1,1}(u_{1,2}))\otimes\ell_{2}\). Similarly to the previous computation, we check that \[\Phi_{1,2}^{*}\left\langle\cdot,\cdot\right\rangle=\Phi_{1,1}^{*}\left\langle \cdot,\cdot\right\rangle+H_{2}(g_{1}-g_{0})\ell_{2}\otimes\ell_{2}\] so that \(\Phi_{1,2}^{*}\left\langle\cdot,\cdot\right\rangle=\mu_{1,2}^{\Phi}\). 
An easy induction then shows that for every \((k,i)\) the map \(\Phi_{k,i}\) is a well defined \(\mu_{k,i}^{\Phi}\)-isometric monomorphism with \(\eta_{k,i}=H_{i}(g_{k}-g_{k-1})\). **Lemma 37**.: _Let \(K\subset C^{*}\) be a compact set. If the series \(\sum\|g_{k}-g_{k-1}\|_{K,\infty}^{\frac{1}{2}}\) converges, then the series \(\sum\|\Phi_{k,i}-\Phi_{k,i-1}\|_{K,\infty}\) is convergent. As a consequence, if \(\sum\|g_{k}-g_{k-1}\|_{K,\infty}^{\frac{1}{2}}\) converges on any compact set of \(C^{*}\), then the sequence \((\Phi_{k,i})_{k,i}\) converges on \(C^{*}\) toward a continuous limit map \(\Phi_{\infty}\)._ **Definition 38**.: The map \(\Phi_{\infty}\) can be interpreted as a target differential for the limit \(f_{\infty}\) of the \(3\)-corrugated process \(CP_{i}(\cdot,g_{k},N_{k,i})\). We call \(\Phi_{\infty}\) the _formal analogue of \(df_{\infty}\)_. Proof of Lemma 37.: Arguments similar to those used in the proof of Lemma 14 show that, at each step \((k,i)\), \[\|\Phi_{k,i}-\Phi_{k,i-1}\|_{K,\infty}\leq\|\sqrt{7\eta_{k,i}}\ell_{i}\|_{K, \infty}.\] From Lemma 36, we know that \[\eta_{k,i}=H_{i}(g_{k}-g_{k-1})\leq h_{max}\|g_{k}-g_{k-1}\|\] and we obtain \[\|\Phi_{k,i}-\Phi_{k,i-1}\|_{K,\infty}\leq\sqrt{7h_{max}\|\ell_{i}\|}\|g_{k}-g _{k-1}\|_{K,\infty}^{\frac{1}{2}}. \tag{41}\] It is then straightforward to deduce the convergence result of the lemma. ### The map \(\Phi\mapsto\Phi^{c}\) Since the formal corrugation process is defined by a pointwise formula (37), it induces a map \(\phi\mapsto\phi^{c}\) from (a subspace of) \(Mono(\mathbb{R}^{2},\mathbb{E}^{3})\) to itself. Precisely, an index \(i\in\{1,2,3\}\), an inner product \(g\) on \(\mathbb{R}^{2}\) and a corrugation number \(N\) being given, then the map \(\phi\mapsto\phi^{c}=FCP_{i}(\phi,g,N)\) is well defined on \[\mathscr{D}(g,i):=\{\phi\in Mono(\mathbb{R}^{2},\mathbb{E}^{3})\,|\,H_{i}(g- \phi^{*}\left\langle\cdot,\cdot\right\rangle)\geq 0\}.\] Observe that the subspace \(\mathscr{D}(g,i)\) is not compact because \(Mono(\mathbb{R}^{2},\mathbb{E}^{3})\) is open in \(\mathscr{L}(\mathbb{R}^{2},\mathbb{E}^{3})\). For any monomorphism \(\phi:\mathbb{R}^{2}\to\mathbb{E}^{3}\), we set \[\|\phi\|:=\sup_{v\in\mathbb{R}^{2}\setminus\{0\}}\frac{\|\phi(v)\|}{\|v\|} \quad\text{and}\quad\lambda(\phi):=\inf_{v\in\mathbb{R}^{2}\setminus\{0\}} \frac{\|\phi(v)\|}{\|v\|}>0.\] Given \(0<\lambda\leq\Lambda\), we consider the compact subspace \(\mathscr{K}(\lambda,\Lambda,g,i)\) of \(Mono(\mathbb{R}^{2},\mathbb{E}^{3})\) defined by \[\mathscr{K}(\lambda,\Lambda,g,i):=\mathscr{D}(g,i)\cap\{\phi\in Mono(\mathbb{R }^{2},\mathbb{E}^{3})\,|\,\lambda\leq\lambda(\phi)\text{ and }\|\phi\|\leq\Lambda\}.\] **Lemma 39**.: _Let \(0<\lambda\leq\Lambda\), \(i\) and \(g\) be fixed and let \(\mathscr{K}=\mathscr{K}(\lambda,\Lambda,g,i)\). There exists a constant \(C=C(\lambda,\Lambda,g)>0\) such that_ \[\forall\phi_{1},\phi_{2}\in\mathscr{K},\qquad\|\phi_{2}^{c}-\phi_{1}^{c}\| \leq C\|\phi_{2}-\phi_{1}\|^{\frac{1}{2}}.\] _In other words, the map \(\phi\to\phi^{c}\) is \(\frac{1}{2}\)-Holder on \(\mathscr{K}\)._ Proof.: In Formula (37) defining the formal corrugation process, everything depends smoothly on \(\phi\) except the amplitude \(\alpha\) that involves the inverse function \(J_{0}^{-1}\) and the radius \(r\) that involves a square root. It is readily checked that the term under the square root never vanishes on \(\mathscr{K}\). 
However, the argument in the inverse function \(J_{0}^{-1}\) can be equal to one (when \(\eta=0\)) and for this value the inverse function \(J_{0}^{-1}\) in not differentiable. This prevents \(\phi\mapsto\phi^{c}\) to be Lipschitz on \(\mathscr{K}\). Nevertheless, Lemma 40 below shows that \(J_{0}^{-1}\) is \(\frac{1}{2}\)-Holder. From this, it is straightforward to obtain the result of the lemma. Regarding the constant \(C\), since the number of corrugations \(N\) only appears in the definition of the angle \(\theta=\alpha\cos(2\pi N\varpi)\), it disappears when writing an upper bound of the difference, indeed \[|\theta(\phi_{2})-\theta(\phi_{1})|\leq|\alpha(\phi_{2})-\alpha(\phi_{1})|.\] Therefore, the constant \(C(\lambda,\Lambda,g,i)\) is independent of \(N\). By taking the maximum when \(i\in\{1,2,3\}\) this constant can also be taken independent of \(i\). **Lemma 40**.: _The inverse \(J_{0}^{-1}\) of the restriction \(J_{0}|_{[0,\kappa_{0}[}\), where \(\kappa_{0}=2.404...\) is the first positive zero of the Bessel function \(J_{0}\), is \(\frac{1}{2}\)-Holder._ Proof.: We have the classical series expansion \(J^{\prime}_{0}(x)=-J_{1}(x)=\sum_{k=0}^{\infty}(-1)^{k+1}a_{k}(x)\) with \(a_{k}(x)=\frac{x^{2k+1}}{2^{2k+1}k!(k+1)!}\). We compute for \(0\leq x\leq\kappa_{0}\) and \(k\geq 0\): \[\frac{a_{k+1}(x)}{a_{k}(x)}=\frac{x^{2}}{4(k+1)(k+2)}\leq\frac{\kappa_{0}^{2}} {8}<1\] It follows that the series of \(J^{\prime}_{0}\) satisfies the alternating series test for every \(x\in[0,\kappa_{0}]\). Truncating the series after the second term, we thus get \[J^{\prime}_{0}(x)\leq P(x)\quad\text{ with }\quad P(x)=-\frac{x}{2}+\frac{x^{3}} {16}.\] We easily check that \(P(x)+x/8\) is non positive over \([0,\kappa_{0}]\), whence \(J^{\prime}_{0}(x)\leq-x/8\). By integrating, we deduce for \(0\leq x<y\leq\kappa_{0}\): \[J_{0}(x)-J_{0}(y)\geq\frac{1}{16}(y^{2}-x^{2})\geq\frac{1}{16}(y-x)^{2}\] We conclude that for all \(u,v\in[0,1]\): \[|J_{0}^{-1}(u)-J_{0}^{-1}(v)|\leq 4|u-v|^{1/2}.\] We denote by \(\Gamma Mono(\mathbb{R}^{2},\mathbb{E}^{3})\) the space of monomorphism fields \(\Phi:C\to Mono(\mathbb{R}^{2},\mathbb{E}^{3})\) over \(C\). For any compact set \(K\subset C^{*}\), we also set \[\|\Phi\|_{K,\infty}:=\sup_{p\in K}\|\Phi(p)\|\quad\text{and}\quad\lambda_{K}( \Phi):=\inf_{p\in K}\lambda(\Phi(p)).\] Given a metric \(g\) on \(C\), the map \(\Phi\to\Phi^{c}=FCP_{i}(\Phi,g,N)\) is well defined on \[\Gamma\mathcal{D}(g,i,K):=\{\Phi\in\Gamma Mono(\mathbb{R}^{2},\mathbb{E}^{3}) \,|\,\forall p\in K,H_{i}(g_{p}-\Phi(p)^{*}\,\langle\cdot,\cdot\rangle)\geq 0\}.\] Similarly as above, \(0<\lambda\leq\Lambda\) being given, we consider the compact subspace \(\Gamma\mathcal{K}=\Gamma\mathcal{K}(\lambda,\Lambda,g,i,K)\) defined by \[\Gamma\mathcal{K}:=\Gamma\mathcal{D}(g,i,K)\cap\{\Phi\in\Gamma Mono(\mathbb{R }^{2},\mathbb{E}^{3})\,|\,\lambda\leq\lambda_{K}(\Phi)\text{ and }\|\Phi\|_{K,\infty}\leq\Lambda\}.\] **Lemma 41**.: _Let \(0<\lambda\leq\Lambda,\,i\), \(g\) and let \(K\subset C^{*}\) be a compact set. There exists a constant \(C_{K}=C_{K}(\lambda,\Lambda,g)>0\) such that_ \[\forall\Phi_{1},\Phi_{2}\in\Gamma\mathcal{K},\qquad\|\Phi_{2}^{c}-\Phi_{1}^{c} \|_{K,\infty}\leq C_{K}\|\Phi_{2}-\Phi_{1}\|_{K,\infty}^{\frac{1}{2}}.\] _In other words, the map \(\Phi\to\Phi^{c}\) is \(\frac{1}{2}\)-Holder on \(\Gamma\mathcal{K}\)._ Proof.: The result stated in this lemma is a corollary of Lemma 39. 
The constant \(C_{K}\) is given by \(C_{K}(\lambda,\Lambda,g)=\sup_{p\in K}C(\lambda,\Lambda,g_{p})\) ### Comparing \(\Phi_{k,i}\) to \(df_{k,i}\) In the following, we consider a sequence \((f_{k,i})_{k,i}\) defined by the \(3\)-corrugated process (21) and its formal analogue \((\Phi_{k,i})_{k,i}\). Recall from Section 2.2 that we introduced a decreasing sequence of positive numbers \((\tau_{k})_{k}\) guiding the choice of the corrugation numbers. We assume that (23) holds and that \(\tau_{1}<1.\) Given any compact \(K\subset C^{*},\) we introduce a sequence \((C_{k}(K))_{k}\) defined by \[C_{k}(K):=C_{K}\left(\frac{\lambda_{K}(df_{0})}{2},\|g_{k+1}\|_{K,\infty}^{ \frac{1}{2}},g_{k}\right)\] where \(C_{K}\) appears in Lemma 41. **Lemma 42**.: _Let \(K\subset C^{*}\) a compact set. For all \(k\in\mathbb{N}^{*}\), \(i\in\{1,2,3\},\) we have_ \[\|\Phi_{k,i}-(df_{k,i})^{c}\|_{K,\infty}\leq C_{k}(K)\|\Phi_{k,i-1}-df_{k,i-1} \|_{K,\infty}^{\frac{1}{2}}.\] Proof.: By Lemma 36, the map \(\Phi_{k,i}\) is isometric for \(\mu_{k,i}^{\Phi}\). Therefore, \[\|\Phi_{k,i}(p)\|=\sup_{v\in\mathbb{R}^{2}\setminus\{0\}}\frac{\|\Phi_{k,i}(p) (v)\|}{\|v\|}=\sup_{v\in\mathbb{R}^{2}\setminus\{0\}}\frac{\sqrt{\mu_{k,i}^{ \Phi}(p)(v,v)}}{\|v\|}.\] From Lemma 36 we have \(g_{k-1}\leq\mu_{k,i}^{\Phi}\leq g_{k}\) and thus \[\|\Phi_{k,i}(p)\|\leq\sup_{v\in\mathbb{R}^{2}\setminus\{0\}}\frac{\sqrt{g_{k} (p)(v,v)}}{\|v\|}=\|g_{k}(p)\|^{\frac{1}{2}}.\] It follows that \(\|\Phi_{k,i}\|_{K,\infty}\leq\|g_{k}\|_{K,\infty}^{\frac{1}{2}}\). Similarly, using the fact that \(\Phi_{k,i}\) is \(\mu_{k,i}^{\Phi}\)-isometric, we obtain \(\lambda_{K}(\Phi_{k,i})\geq\lambda_{K}(df_{0}).\) By construction, the map \(df_{k,i}\) is short for \(g_{k+1}\) and thus \(\|df_{k,i}\|_{K,\infty}\leq\|g_{k+1}\|_{K,\infty}^{\frac{1}{2}}\). From (26), we also have \(\lambda_{K}(df_{k,i})\geq\frac{1}{2}\lambda_{K}(df_{0}).\) Finally, we have obtained that \(\Phi_{k,i}\) and \(df_{k,i}\) both belong to \(\Gamma\mathscr{K}(\frac{1}{2}\lambda_{K}(df_{0}),\|g_{k+1}\|_{K,\infty}^{ \frac{1}{2}},g_{k},i,K).\) We then apply Lemma 41 to conclude. Let \(M_{k,i}(K)\) be the sequence of constants defined inductively by \[M_{1,1}(K):=\frac{1}{3}\quad\text{and}\quad M_{k,i}(K):=C_{k}(K)M_{k,i-1}^{1/2 }(K)+\frac{1}{3}.\] **Lemma 43**.: _Assume that \(\tau_{1}<1\). For all \(k\in\mathbb{N}^{*}\) and \(i\in\{1,2,3\}\), we have_ \[\|\Phi_{k,i}-df_{k,i}\|_{K,\infty}\leq M_{k,i}(K)\tau_{1}^{\kappa_{k,i}}\quad \text{where}\quad\kappa_{k,i}:=2^{-(3k+i-4)}.\] Proof.: By induction. 
We have \(\Phi_{0}=df_{0}\) and \(\Phi_{1,1}-df_{1,1}=(df_{0})^{c}-df_{1,1}.\) Since \((df_{0})^{c}=L_{1,1},\) it follows from (25) that \[\|\Phi_{1,1}-df_{1,1}\|_{K,\infty}\leq\frac{\tau_{1}}{3}=M_{1,1}(K)\tau_{1}.\] Assuming the result of the lemma for \((k,i-1),\) we have at step \((k,i):\) \[\Phi_{k,i}-df_{k,i} = (\Phi_{k,i}-(df_{k,i-1})^{c})+((df_{k,i-1})^{c}-df_{k,i})\] \[= (\Phi_{k,i-1}^{c}-(df_{k,i-1})^{c})+((df_{k,i-1})^{c}-df_{k,i})\] Since \((df_{k,i-1})^{c}=L_{k,i},\) it follows from (25) and Lemma 42 that \[\|\Phi_{k,i}-df_{k,i}\|_{K,\infty}\leq C_{k}(K)\|\Phi_{k,i-1}-df_{k,i-1}\|_{K, \infty}^{\frac{1}{2}}+\frac{\tau_{k}}{3}.\] By the induction hypothesis and since \(\tau_{k}\leq\tau_{1}\leq\tau_{1}^{\kappa_{k,i-1}/2},\) we deduce \[\|\Phi_{k,i}-df_{k,i}\|_{K,\infty} \leq C_{k}(K)M_{k,i-1}(K)^{1/2}\tau_{1}^{\kappa_{k,i-1}/2}+\frac{\tau_ {k}}{3}\] \[\leq \left(C_{k}(K)M_{k,i-1}^{1/2}(K)+\frac{1}{3}\right)\tau_{1}^{ \kappa_{k,i-1}/2}=M_{k,i}(K)\tau_{1}^{\kappa_{k,i}}.\] ### Proof of Theorem 4 Let \(k^{*}\in\mathbb{N}\). We write the difference \(\Phi_{\infty}-df_{\infty}\) as \[\|\Phi_{\infty}-df_{\infty}\|_{K,\infty}\leq\|\Phi_{\infty}-\Phi_{k^{*}}\|_{K, \infty}+\|\Phi_{k^{*}}-df_{k^{*}}\|_{K,\infty}+\|df_{k^{*}}-df_{\infty}\|_{K,\infty}\] where we have used the notation \(f_{k}=f_{k,3}\) and \(\Phi_{k}=\Phi_{k,3}.\) Let \(\varepsilon>0\). By Lemma 37 and by the proof of Proposition 8, we can choose \(k^{*}\) so that \[\|\Phi_{\infty}-\Phi_{k^{*}}\|_{K,\infty}\leq\frac{\varepsilon}{3}\quad\text{ and }\quad\|df_{\infty}-df_{k^{*}}\|_{K,\infty}\leq\frac{\varepsilon}{3}.\] Choosing \(\tau_{1}\leq(\frac{\varepsilon}{3M_{k,i}(K)})^{1/\kappa_{k,i}},\) we have by Lemma 43, that \[\|\Phi_{k^{*}}-df_{k^{*}}\|_{K,\infty}\leq\frac{\varepsilon}{3}.\] It follows that \(\|\Phi_{\infty}-df_{\infty}\|_{K,\infty}\leq\varepsilon,\) which ends the proof of Theorem 4. ## 7 Gauss map ### The corrugation matrices Let \((f_{k,i})_{k,i}\) be a sequence of maps generated by a \(3\)-corrugated process. The data \((f_{k,i-1},\ell_{i})\) where \(\ell_{i}=d\varpi_{i}\) allows to define a field of corrugation frames \(\mathbf{F}_{k,i-1}=(\mathbf{t}_{k,i-1},\mathbf{w}_{k,i-1},\mathbf{n}_{k,i-1})\) as in Section 3.2. More precisely, let \((v_{i},w_{i})\) be a direct basis of \(\mathbb{R}^{2}\) such that \(\ell_{i}(v_{i})>0\) and \(w_{i}\in\ker\ell_{i}\), we set \[\mathbf{w}_{k,i-1}:=\frac{df_{k,i-1}(w_{i})}{\|df_{k,i-1}(w_{i})\|},\qquad \mathbf{n}_{k,i-1}:=\frac{df_{k,i-1}(v_{i})\wedge df_{k,i-1}(w_{i})}{\|df_{k, i-1}(v_{i})\wedge df_{k,i-1}(w_{i})\|},\] and \(\mathbf{t}_{k,i-1}\) is chosen so that \(\mathbf{F}_{k,i-1}\) is a direct orthonormal frame. For each \((k,i)\) there exists a field of orthogonal matrices \(\mathscr{M}_{k,i}:C\to SO(3)\) to pass from one frame to the other: \[\mathbf{F}_{k,i}=\mathbf{F}_{k,i-1}\cdot\mathscr{M}_{k,i}. \tag{42}\] We call \(\mathscr{M}_{k,i}\) a _corrugation matrix_. We introduce an intermediary frame \(\mathbf{F}_{k,i-\frac{1}{2}}\) defined by \[\mathbf{w}_{k,i-\frac{1}{2}}:=\frac{df_{k,i}(w_{i})}{\|df_{k,i}(w_{i})\|}, \quad\mathbf{n}_{k,i-\frac{1}{2}}:=\mathbf{n}_{k,i}\quad\text{and}\quad \mathbf{t}_{k,i-\frac{1}{2}}:=\mathbf{w}_{k,i-\frac{1}{2}}\wedge\mathbf{n}_{k,i-\frac{1}{2}}. 
\tag{43}\] Each corrugation matrix thus decomposes into a product of two orthogonal matrices \(\mathscr{M}_{k,i}=\mathscr{L}_{k,i}\mathscr{R}_{k,i}\) where \(\mathscr{L}_{k,i}\) and \(\mathscr{R}_{k,i}\) are defined by \[\mathbf{F}_{k,i-\frac{1}{2}}=\mathbf{F}_{k,i-1}\cdot\mathscr{L}_{k,i}\quad \text{and}\quad\mathbf{F}_{k,i}=\mathbf{F}_{k,i-\frac{1}{2}}\cdot\mathscr{R}_ {k,i}.\] We have \[\mathscr{R}_{k,i}=\left(\begin{array}{ccc}\cos\beta_{k,i}&-\sin\beta_{k,i}&0 \\ \sin\beta_{k,i}&\cos\beta_{k,i}&0\\ 0&0&1\end{array}\right)\] where \(\beta_{k,i}\) is the angle between \(df_{k,i}(w_{i})\) and \(df_{k,i}(w_{i+1})\). Since \(f_{k,i}\) is \(C^{1}\) converging to an \(h\)-isometric map, this angle converges toward the \(h\)-angle between \(w_{i}\) and \(w_{i+1}\). Regarding \(\mathscr{L}_{k,i}\) it was shown in [3, Theorem 21] that \[\mathscr{L}_{k,i}=\left(\begin{array}{ccc}\cos\theta_{k,i}&0&-\sin\theta_{k,i}\\ 0&1&0\\ \sin\theta_{k,i}&0&\cos\theta_{k,i}\end{array}\right)+O\left(\frac{1}{N_{k,i}}\right) \tag{44}\] where \(\theta_{k,i}=\alpha_{k,i}\cos(2\pi N_{k,i}\varpi_{i})\). Asymptotically, the corrugation matrix thus looks like a product of two rotations with perpendicular axis, the first one reflecting the effect of the corrugations in the normal direction while the second is changing the direction of the wavefront in preparation for the next corrugation. It is readily seen from [3, Theorem 23] that the product \[\mathscr{M}_{\infty}:=\mathscr{M}_{1,1}\mathscr{M}_{1,2}\mathscr{M}_{1,3} \mathscr{M}_{2,1}\cdots=\prod_{k=1}^{\infty}\left(\prod_{i=1}^{3}\mathscr{M}_ {k,i}\right)\] is converging toward a continuous map \(\mathscr{M}_{\infty}:C^{*}\to SO(3)\) (beware of the unusual order of this product). As \(k\) tends to infinity, the frame \(\mathbf{F}_{k,0}\) converges to a frame \(\mathbf{F}_{\infty}=(\mathbf{t}_{\infty},\mathbf{w}_{\infty},\mathbf{n}_{\infty})\) adapted to \(f_{\infty}\). Writing \(\mathbf{F}_{0}\) for \(\mathbf{F}_{1,0}\), we then have by iterating (42): \[\mathbf{F}_{\infty}=\mathbf{F}_{0}\cdot\mathscr{M}_{\infty}.\] The normal map \(\mathbf{n}_{\infty}\) of \(f_{\infty}\) is thus given by \[\mathbf{n}_{\infty}=\mathbf{F}_{0}\cdot\mathscr{M}_{\infty}\cdot\mathbf{e}_{3} \tag{45}\] where \(\mathbf{e}_{3}\) the last vector of the canonical basis of \(\mathbb{E}^{3}\). ### The formal corrugation matrices In analogy with the 3-corrugation process, the sequence \((\Phi_{k,i})_{k,i}\) defines a sequence of frames \((\mathbf{F}_{k,i}^{\Phi})_{k,i}\) and corrugation matrices \((\mathscr{M}_{k,i}^{\Phi})_{k,i}\) allowing to express the normal map \(\mathbf{n}_{\infty}^{\Phi}\) of \(\Phi_{\infty}\) as an infinite product. Namely, with \((v_{i},w_{i})\) defined as in Section 7.1, we have \[\mathbf{w}_{k,i-1}^{\Phi}:=\frac{\Phi_{k,i-1}(w_{i})}{\|\Phi_{k,i-1}(w_{i})\|},\qquad\mathbf{n}_{k,i-1}^{\Phi}:=\frac{\Phi_{k,i-1}(v_{i})\wedge\Phi_{k,i-1} (w_{i})}{\|\Phi_{k,i-1}(v_{i})\wedge\Phi_{k,i-1}(w_{i})\|},\] and \(\mathbf{t}_{k,i-1}^{\Phi}\) is chosen so that \(\mathbf{F}_{k,i-1}^{\Phi}=(\mathbf{t}_{k,i-1}^{\Phi},\mathbf{w}_{k,i-1}^{\Phi },\mathbf{n}_{k,i-1}^{\Phi})\) is a direct orthonormal frame. We then write \[\mathbf{n}_{\infty}^{\Phi}=\mathbf{F}_{0}\cdot\mathscr{M}_{\infty}^{\Phi}. \mathbf{e}_{3}\quad\text{with}\quad\mathscr{M}_{\infty}^{\Phi}:=\prod_{k=1}^{ \infty}\left(\prod_{i=1}^{3}\mathscr{M}_{k,i}^{\Phi}\right). 
\tag{46}\] By considering the intermediary frame \(\mathbf{F}_{k,i-\frac{1}{2}}^{\Phi}\) obtained by replacing \(df_{k,i}\) by \(\Phi_{k,i}\) in (43), we get as above a splitting of the corrugation matrix in two parts \[\mathscr{M}_{k,i}^{\Phi}=\mathscr{L}_{k,i}^{\Phi}.\mathscr{G}_{k,i}^{\Phi},\] where \[\mathbf{F}_{k,i-\frac{1}{2}}^{\Phi}=\mathbf{F}_{k,i-1}^{\Phi}\cdot\mathscr{L} _{k,i}^{\Phi}\quad\text{and}\quad\mathbf{F}_{k,i}^{\Phi}=\mathbf{F}_{k,i- \frac{1}{2}}^{\Phi}\cdot\mathscr{R}_{k,i}^{\Phi}.\] **Lemma 44**.: _With \(\mu_{k,i}^{\Phi}\) defined in Lemma 36 and with the above notations for \(w_{i}\), we have_ \[\mathscr{R}_{k,i}^{\Phi}=\left(\begin{array}{ccc}\cos\beta_{k,i}^{\Phi}&- \sin\beta_{k,i}^{\Phi}&0\\ \sin\beta_{k,i}^{\Phi}&\cos\beta_{k,i}^{\Phi}&0\\ 0&0&1\end{array}\right)\] _with_ \[\cos\beta_{k,i}^{\Phi}=\frac{\mu_{k,i}^{\Phi}(w_{i},w_{i+1})}{\sqrt{\mu_{k,i}^ {\Phi}(w_{i},w_{i})\mu_{k,i}^{\Phi}(w_{i+1},w_{i+1})}},\] _and_ \[\mathscr{L}_{k,i}^{\Phi}=\left(\begin{array}{ccc}\cos\theta_{k,i}^{\Phi}&0& -\sin\theta_{k,i}^{\Phi}\\ 0&1&0\\ \sin\theta_{k,i}^{\Phi}&0&\cos\theta_{k,i}^{\Phi}\end{array}\right)\] _with \(\theta^{\Phi}_{k,i}=\alpha^{\Phi}_{k,i}\cos(2\pi N_{k,i}\varpi_{i})\), where_ \[\alpha^{\Phi}_{k,i}=J_{0}^{-1}\left(\sqrt{\frac{Z_{k,i}}{H_{i}(g_{k}-g_{k-1})+Z_ {k,i}}}\right)\] _and_ \[Z_{k,i}=\frac{\mu^{\Phi}_{k,i-1}(w_{i-1},w_{i-1})}{\ell_{i}(w_{i-1})^{2}}\left( \frac{\mu^{\Phi}_{k,i-1}(w_{i-1},w_{i-1})\mu^{\Phi}_{k,i-1}(w_{i},w_{i})}{\mu^ {\Phi}_{k,i-1}(w_{i-1},w_{i})^{2}}-1\right)\] _In particular, \(\beta^{\Phi}_{k,i}\) and \(\alpha^{\Phi}_{k,i}\) do not depend on the corrugation sequence \(N_{*}=(N_{k,i})_{k,i}\)._ Proof.: By definition of the rotation matrix \(\mathscr{B}^{\Phi}_{k,i}\), its angle \(\beta^{\Phi}_{k,i}\) is the angle between \(\Phi_{k,i}(w_{i})\) and \(\Phi_{k,i}(w_{i+1})\). Since \(\Phi_{k,i}\) is \(\mu^{\Phi}_{k,i}\)-isometric, we compute \[\cos\beta^{\Phi}_{k,i}=\frac{\langle\Phi_{k,i}(w_{i}),\Phi_{k,i}(w_{i+1}) \rangle}{\|\Phi_{k,i}(w_{i})\|\|\Phi_{k,i}(w_{i+1})\|}=\frac{\mu^{\Phi}_{k,i}(w _{i},w_{i+1})}{\sqrt{\mu^{\Phi}_{k,i}(w_{i},w_{i})\mu^{\Phi}_{k,i}(w_{i+1},w_{ i+1})}}\] as claimed in the lemma. For convenience we denote by \(u_{k,i}\) the unique vector field such that \(\Phi_{k,i-1}(u_{k,i})\) is collinear with \(\mathbf{t}^{\Phi}_{k,i-1}\) and \(\ell_{i}(u_{k,i})=1\), see Definition 35. From (36), we get \(\Phi_{k,i}(u_{k,i})=\mathbf{z}^{\Phi}_{k,i}\), where \(\mathbf{z}^{\Phi}_{k,i}=r^{\Phi}_{i}(\cos\theta^{\Phi}_{k,i}\mathbf{t}^{\Phi} _{k,i-1}+\sin\theta^{\Phi}_{k,i}\mathbf{n}^{\Phi}_{k,i-1})\). We also get from (36) that \(\Phi_{k,i}(w_{i})=\Phi_{k,i-1}(w_{i})\), implying \(\mathbf{w}^{\Phi}_{k,i}=\mathbf{w}^{\Phi}_{k,i-1}\). Now, \((u_{k,i},w_{i})\) being a direct frame, we compute \[\mathbf{n}^{\Phi}_{k,i}=\frac{\Phi_{k,i}(u_{k,i})\wedge\Phi_{k,i}(w_{i})}{\| \Phi_{k,i}(u_{k,i})\wedge\Phi_{k,i}(w_{i})\|}=\frac{\mathbf{z}^{\Phi}_{k,i} \wedge\mathbf{w}^{\Phi}_{k,i-1}}{r^{\Phi}_{i}}=\cos\theta^{\Phi}_{k,i}\mathbf{ n}^{\Phi}_{k,i-1}-\sin\theta^{\Phi}_{k,i}\mathbf{t}^{\Phi}_{k,i-1}.\] It follows that the rotation matrix \(\mathscr{L}^{\Phi}_{k,i}\) has the form given in the lemma with \(\theta^{\Phi}_{k,i}=\alpha^{\Phi}_{k,i}\cos(2\pi N_{k,i}\varpi_{i})\). 
By (35), we have \[\alpha^{\Phi}_{k,i}=J_{0}^{-1}\left(\frac{\|\Phi_{k,i-1}(u_{k,i})\|}{\sqrt{ \eta_{k,i}+\|\Phi_{k,i-1}(u_{k,i})\|^{2}}}\right)\] From Lemma 36 and since \(\Phi_{k,i-1}\) is \(\mu^{\Phi}_{k,i-1}\)-isometric we obtain \[\alpha^{\Phi}_{k,i}=J_{0}^{-1}\left(\sqrt{\frac{\mu^{\Phi}_{k,i-1}(u_{k,i},u_{ k,i})}{H_{i}(g_{k}-g_{k-1})+\mu^{\Phi}_{k,i-1}(u_{k,i},u_{k,i})}}\right).\] We have \(u_{k,i}=xw_{i-1}+yw_{i}\) for some real coefficients \(x,y\). Applying \(\ell_{i}\) on both sides of the decomposition we get \(x=1/\ell_{i}(w_{i-1})\). Using the fact that \(\Phi_{k,i-1}(u_{k,i})\) is perpendicular to \(\mathbf{w}_{k,i-1}\) and that \(\Phi_{k,i-1}\) is \(\mu^{\Phi}_{k,i-1}\)-isometric, we deduce \[y=-\frac{\mu^{\Phi}_{k,i-1}(w_{i-1},w_{i-1})}{\mu^{\Phi}_{k,i-1}(w_{i-1},w_{i}) \ell_{i}(w_{i-1})}\] We then have \[\mu^{\Phi}_{k,i-1}(u_{k,i},u_{k,i})=y\mu^{\Phi}_{k,i-1}(u_{k,i},w_{i}) =y\left(\frac{\mu^{\Phi}_{k,i-1}(w_{i-1},w_{i})}{\ell_{i}(w_{i-1})}+ y\mu^{\Phi}_{k,i-1}(w_{i},w_{i})\right).\] Replacing \(y\) by its above value, we get the expression for \(Z_{k,i}=\mu^{\Phi}_{k,i-1}(u_{k,i},u_{k,i})\) as in the lemma. ### Properties of the formal analogue The map \(\mathscr{M}^{\Phi}_{\infty}:C^{*}\to SO(3)\) has a natural factorization as a product of two maps that we now describe. In Formula (35) defining the formal corrugation process \(\Phi^{c}=FCP(\Phi,\eta,\varpi,N)\), the affine projection \(\varpi\) appears in the expression of the angle \(\theta=\alpha\cos(2\pi N\varpi)\) and in the definition of \(\ell:=d\varpi\). We can derive from \(FCP\) a new process \(\widetilde{FCP}\) by decoupling \(\varpi\) and \(\ell\). Precisely, the linear form \(\ell\) replaces \(\varpi\) as a parameter of the process and the projection \(\varpi\) is replaced by a variable \(t\). In particular, \(\theta\) is now considered as a function of two variables \((p,t)\to\theta(p,t)=\alpha(p)\cos(2\pi Nt)\). Similarly, the vector \(\mathbf{z}\) in Definition 35 is now given by \[\mathbf{z}(p,t)=r(p)\Big{(}\cos(\theta(p,t))\mathbf{t}(p)+\sin(\theta(p,t)) \mathbf{n}(p)\Big{)}.\] Consequently, the maps \(\widetilde{\Phi^{c}}:C\times\mathbb{R}\to Mono(\mathbb{R}^{2},\mathbb{E}^{3})\) defined by this new process also have two variables \[\widetilde{\Phi^{c}}(p,t):=\Phi(p)+(\mathbf{z}(p,t)-\Phi(u(p)))\otimes\ell.\] We denote by \(\widetilde{FCP}(\Phi,\eta,\ell,N)\) the formal corrugated map \(\widetilde{\Phi^{c}}\). Of course, if \(d\varpi=\ell\) then \[FCP(\Phi,\eta,\varpi,N)=\widetilde{FCP}(\Phi,\eta,\ell,N)\circ(Id,\varpi).\] Starting with \(\Phi_{0}\), the extended formal corrugation process produces a sequence of maps \((\widetilde{\Phi}_{k,i})_{k,i}\) such that \(\Phi_{k,i}=\widetilde{\Phi}_{k,i}\circ(Id,\varpi_{i})\) and a sequence of corrugation matrices such that \(\mathscr{M}^{\Phi}_{k,i}=\mathscr{M}(\widetilde{\Phi}_{k,i})\circ(Id,\varpi_{ i})\) for all \((k,i).\) We thus have \[\mathscr{M}^{\Phi}_{\infty}=\prod_{k=1}^{\infty}\left(\prod_{i=1}^{3}\mathscr{ M}(\widetilde{\Phi}_{k,i})\circ(Id,\varpi_{i})\right).\] This motivates the introduction of the following corrugation matrix \[\begin{array}{rcl}\widetilde{\mathscr{M}^{\Phi}_{\infty}}:&C^{*}\times \mathbb{R}^{3}&\longrightarrow&SO(3)\\ &(p,t_{1},t_{2},t_{3})&\longmapsto&\prod_{k=1}^{\infty}\left(\prod_{i=1}^{3} \mathscr{M}(\widetilde{\Phi}_{k,i})(p,t_{i})\right)\end{array}\] which is defined over \(C^{*}\times\mathbb{R}^{3}\) since the formal corrugation process converges over \(C^{*}\). 
We obviously have \[\mathscr{M}^{\Phi}_{\infty}=\widetilde{\mathscr{M}^{\Phi}_{\infty}}\circ(Id, \varpi)\] where \(\varpi:C^{*}\to\mathbb{R}^{3}\) is the affine map defined by \(\varpi(p):=(\varpi_{1}(p),\varpi_{2}(p),\varpi_{3}(p))\). **Definition 45**.: We call \(\widetilde{\mathscr{M}}_{\infty}^{\Phi}\) the _decoupled_ corrugation matrix of \(\Phi_{\infty}\). By ignoring the affine projections \(\varpi_{i}\), the decoupled corrugation matrix \(\widetilde{\mathscr{M}}_{\infty}^{\Phi}\) makes apparent some possible symmetries of the limit map \(\Phi_{\infty}\). **Lemma 46**.: _Let \(\Phi_{\infty}\) be the formal analogue of \(df_{\infty}\). The decoupled corrugation matrix \(\widetilde{\mathscr{M}}_{\infty}^{\Phi}\) does not depend on the angular parameter \(\varphi\)._ Proof.: The chosen initial map \(f_{0}\) is rotationaly invariant and its pull-back metric \(g_{0}=f_{0}^{*}\left\langle\cdot,\cdot\right\rangle\) only depends on \(\rho\), see 5.2. The sequence of metrics \((g_{k})_{k}\) also only depends on \(\rho\), see 2.2. From the analytical expression (40), the metrics \(\mu_{k,i}^{\Phi}\) also depends only on \(\rho.\) Obviously the angle \(\widetilde{\beta_{k,i}^{\Phi}}(p,t_{i})\) of the rotation matrix \(\widetilde{\mathscr{L}_{k,i}^{\Phi}}(p,t_{i})\) is equal to \(\beta_{k,i}^{\Phi}(p)\) and the amplitude \(\widetilde{\alpha_{k,i}^{\Phi}}(p,t_{i})\) is equal to \(\alpha_{k,i}^{\Phi}(p).\) By Lemma 44, the two functions \(\beta_{k,i}^{\Phi}\) and \(\alpha_{k,i}^{\Phi}\). can be expressed in terms of the metrics \(\mu_{k,i}^{\Phi}\) and consequently, only depends on \(\rho\). **Remark.-** The matrix \(\mathscr{M}_{\infty}^{\Phi}\) does depend on \(\varphi\) because of the presence of the projections \(\varpi_{i}\) whose values depend on both \(\rho\) and \(\varphi\). **Corollary 47**.: _The Holder regularity of \(\mathbf{n}_{\infty}^{\Phi}\) at a point \((\rho,\varphi)\) only depends on \(\rho\)._ Proof.: Since \(\mathscr{M}_{\infty}^{\Phi}\) and \(\widetilde{\mathscr{M}}_{\infty}^{\Phi}\) differ by an affine map, they share the same regularity. The following proposition enlights the link between \(\mathbf{n}_{\infty}^{\Phi}\) and the Weierstrass-like function defined by \[(\rho,\varphi)\longmapsto\sum_{k=1}^{\infty}\left(\sum_{i=1}^{3}\alpha_{k,i}^ {\Phi}(\rho)\cos(2\pi N_{k,i}\varpi_{i}(\rho,\varphi))\right).\] **Proposition 48**.: _Let \(p_{1}=(\rho,\varphi_{1})\) and \(p_{2}=(\rho,\varphi_{2})\) then_ \[\|\mathbf{n}_{\infty}^{\Phi}(p_{2})-\mathbf{n}_{\infty}^{\Phi}(p_{1})\|\leq \sqrt{2}\sum_{k=1,i\in\{1,2,3\}}^{\infty}\alpha_{k,i}^{\Phi}(\rho)\,|\Delta \cos(2\pi N_{k,i}\varpi_{i})|+\|\Delta\mathbf{F}_{0}\|_{F}.\] _where \(\Delta X\) denotes the difference \(X(p_{2})-X(p_{1}).\)_ In this proposition we have used the Frobenius norm: \[\|\mathbf{F}\|_{F}=\sqrt{\|\mathbf{F}.\mathbf{e}_{1}\|^{2}+\|\mathbf{F}. \mathbf{e}_{2}\|^{2}+\|\mathbf{F}.\mathbf{e}_{3}\|^{2}}.\] Note that this norm is invariant under the action of the orthogonal group. 
Proof.: We write the difference \(\Delta\mathbf{F}_{k,i}^{\Phi}\) as \[\Delta\mathbf{F}_{k,i}^{\Phi} = \mathbf{F}_{k,i-1}^{\Phi}(p_{2})\mathscr{M}_{k,i}^{\Phi}(p_{2})- \mathbf{F}_{k,i-1}^{\Phi}(p_{1})\mathscr{M}_{k,i}^{\Phi}(p_{1})\] \[= \mathbf{F}_{k,i-1}^{\Phi}(p_{2}).\Delta\mathscr{M}_{k,i}^{\Phi}+ \Delta\mathbf{F}_{k,i-1}^{\Phi}\mathscr{M}_{k,i}^{\Phi}(p_{1}).\] Since \(\mathscr{M}_{k,i}^{\Phi}\) is an orthogonal matrix we deduce \[\|\Delta\mathbf{F}_{k,i-1}^{\Phi}\mathscr{M}_{k,i}^{\Phi}(p_{1})\|_{F}=\| \Delta\mathbf{F}_{k,i-1}^{\Phi}\|_{F}.\] From the fact that \(\mathscr{M}_{k,i}^{\Phi}=\mathscr{L}_{k,i}^{\Phi}\mathscr{B}_{k,i}^{\Phi}\) we deduce \[\Delta\mathscr{M}_{k,i}^{\Phi}=\left(\Delta\mathscr{L}_{k,i}^{\Phi}\right) \mathscr{R}_{k,i}^{\Phi}+\mathscr{L}_{k,i}^{\Phi}\left(\Delta\mathscr{B}_{k,i }^{\Phi}\right).\] Since \(\beta_{k,i}^{\Phi}\) does not depend on \(\varphi\) we have \(\Delta\mathscr{B}_{k,i}^{\Phi}=0.\) Computing the difference \(\Delta\mathscr{L}_{k,i}^{\Phi}\) and taking the Frobenius norm we obtain \[\|\Delta\mathscr{L}_{k,i}^{\Phi}\|_{F}^{2} = 8\sin^{2}\left(\frac{\theta_{k,i}^{\Phi}(p_{2})-\theta_{k,i}^{ \Phi}(p_{1})}{2}\right).\] Thus \[\|\Delta\mathscr{L}_{k,i}^{\Phi}\|_{F}\leq\sqrt{2}\left|\theta_{k,i}^{\Phi}(p _{2})-\theta_{k,i}^{\Phi}(p_{1})\right|\] and \[\|\Delta\mathbf{F}_{k,i}^{\Phi}\|_{F}\leq\|\Delta\mathbf{F}_{k,i-1}^{\Phi}\|_ {F}+\sqrt{2}\left|\theta_{k,i}^{\Phi}(p_{2})-\theta_{k,i}^{\Phi}(p_{1})\right|.\] The proposition then follows easily. ### Proof of Theorem 6 Recall that we have defined the _normal pattern_\(\boldsymbol{\nu}_{\infty}\) and \(\boldsymbol{\nu}_{\infty}^{\Phi}\) of the map \(f_{\infty}\) and the formal analogue \(\Phi_{\infty}\) to be \[\boldsymbol{\nu}_{\infty}:=\mathscr{M}_{\infty}\cdot\mathbf{e}_{3}\quad\text {and}\quad\boldsymbol{\nu}_{\infty}^{\Phi}:=\mathscr{M}_{\infty}^{\Phi}\cdot \mathbf{e}_{3}.\] From (45) and (46) we deduce that \(\mathbf{n}_{\infty}=\mathbf{F}_{0}\cdot\boldsymbol{\nu}_{\infty}\) and \(\mathbf{n}_{\infty}^{\Phi}=\mathbf{F}_{0}\cdot\boldsymbol{\nu}_{\infty}^{\Phi}\). Thus the Gauss maps \(\mathbf{n}_{\infty}\) and \(\mathbf{n}_{\infty}^{\Phi}\) and their normal patterns \(\boldsymbol{\nu}_{\infty}\) and \(\boldsymbol{\nu}_{\infty}^{\Phi}\) only differ by the initial frame \(\mathbf{F}_{0}\). **Lemma 49**.: _For every \(\rho\in[0,1[\), the map_ \[\varphi\longmapsto\boldsymbol{\nu}_{\infty}^{\Phi}(N_{*})\left(\rho,\varphi\right)\] _is \(\frac{2\pi}{7L(N_{*})}\)-periodic. Moreover, for every integer \(m\) such that \(1\leq m\leq M-1\), we have_ \[\forall n\in\mathbb{N}^{*},\forall\varphi\in\mathbb{R}/(2\pi\mathbb{Z}), \qquad\boldsymbol{\nu}_{\infty}^{\Phi}(nN_{*})\left(\frac{m}{M},\varphi \right)=\boldsymbol{\nu}_{\infty}^{\Phi}(N_{*})\left(\frac{m}{M},n\varphi\right) \tag{47}\] _where \(nN_{*}\) is the sequence \((nN_{k,i})_{k,i}\)._ Proof.: The proof of Lemma 46 shows that \(\alpha_{k,i}^{\Phi}\) is not only independent of \(\varphi\) but also of the sequence \(N_{*}\) of corrugation numbers. We thus write \(\alpha_{k,i}^{\Phi}(\rho)\) instead of \(\alpha_{k,i}^{\Phi}(N^{*})(\rho,\varphi)\). From the definition of the wavefront forms (28) we have for every \(\varphi_{0}\in\mathbb{R}\) and \(i\in\{1,2,3\}\) \[\varpi_{i}(\rho,\varphi+\varphi_{0})=\varpi_{i}(\rho,\varphi)+\zeta a\varphi_{0}\] where \(a=\frac{7}{2\pi}\) and \(\zeta=0,-1\) or \(1\) depending on whether \(i\) is \(1\), \(2\) or \(3\). 
It follows that \[N_{k,i}\varpi_{i}\left(\rho,\varphi+\frac{2\pi}{7L(N_{*})}\right)=N_{k,i}\varpi_{i}(\rho,\varphi)+\zeta\frac{N_{k,i}}{L(N_{*})}=N_{k,i}\varpi_{i}(\rho,\varphi)\qquad\text{mod }1\] since by definition of \(L(N_{*})\) the quotient \(\frac{N_{k,i}}{L(N_{*})}\) is an integer if \(i=2,3\) and \(\zeta=0\) if \(i=1\). From the fact that \[\theta_{k,i}^{\Phi}(N_{*})\left(\rho,\varphi\right)=\alpha_{k,i}^{\Phi}(\rho)\cos\left(2\pi N_{k,i}\varpi_{i}\left(\rho,\varphi\right)\right)\] we deduce that for all \((k,i)\) \[\theta_{k,i}^{\Phi}(N_{*})\left(\rho,\varphi+\frac{2\pi}{7L(N_{*})}\right)=\theta_{k,i}^{\Phi}(N_{*})\left(\rho,\varphi\right).\] Since \(\beta_{k,i}^{\Phi}\) is independent of \(\varphi\) and of the sequence \(N_{*}\), it easily follows that \[\mathscr{M}_{\infty}^{\Phi}(N_{*})\left(\rho,\varphi+\frac{2\pi}{7L(N_{*})}\right)=\mathscr{M}_{\infty}^{\Phi}(N_{*})\left(\rho,\varphi\right). \tag{48}\] Hence, the \(\frac{2\pi}{7L(N_{*})}\)-periodicity of \(\boldsymbol{\nu}_{\infty}^{\Phi}\). We now assume that \(\rho=\frac{m}{M}\). We want to compare \(\boldsymbol{\nu}_{\infty}^{\Phi}(nN_{*})\left(\rho,\varphi\right)\) to \(\boldsymbol{\nu}_{\infty}^{\Phi}(N_{*})\left(\rho,n\varphi\right).\) For \(i=1\), and since \(\varpi_{1}(\rho,\varphi)=-\rho\), we have \[\theta_{k,1}^{\Phi}(nN_{*})\left(\frac{m}{M}\right)=\alpha_{k,1}^{\Phi}(\frac{m}{M})\cos\left(-2\pi nmR_{k,1}\right)=\alpha_{k,1}^{\Phi}(\frac{m}{M})=\theta_{k,1}^{\Phi}(N_{*})\left(\frac{m}{M}\right)\] where \(R_{k,i}:=N_{k,i}/M.\) If \(i=2\), we have \(\varpi_{2}\left(\frac{m}{M},\varphi\right)=\frac{m}{M}-a\varphi\) and therefore \[\theta_{k,2}^{\Phi}(nN_{*})\left(\frac{m}{M},\varphi\right) = \alpha_{k,2}^{\Phi}(\frac{m}{M})\cos\left(2\pi nmR_{k,2}-2\pi nN_{k,2}a\varphi\right)\] \[= \alpha_{k,2}^{\Phi}(\frac{m}{M})\cos\left(2\pi nN_{k,2}a\varphi\right)\] \[= \theta_{k,2}^{\Phi}(N_{*})\left(\frac{m}{M},n\varphi\right)\] and similarly for \(\theta_{k,3}^{\Phi}(nN_{*}).\) Since \(\beta_{k,i}^{\Phi}\) is independent of \(\varphi\) and of the sequence \(nN_{*}\), it easily follows that \[\mathscr{M}_{\infty}^{\Phi}(nN_{*})\left(\frac{m}{M},\varphi\right)=\mathscr{M}_{\infty}^{\Phi}(N_{*})\left(\frac{m}{M},n\varphi\right). \tag{49}\] This last relation proves (47). The proof of Theorem 6 follows from Lemma 49 and Theorem 4. ### Proof of Proposition 7. **Lemma 50**.: _We have_ \[\forall(\rho,\varphi)\in C,\qquad\mathbf{n}_{\infty}^{\Phi}(\rho,\varphi)=R_{\varphi}\cdot\mathbf{F}_{0}(\rho,0)\cdot\boldsymbol{\nu}_{\infty}^{\Phi}(\rho,\varphi)\] _where \(R_{\varphi}\) is the rotation of \(\mathbb{E}^{3}\) along the \(\mathbf{e}_{3}\)-axis and of angle \(\varphi\), acting componentwise on the frame \(\mathbf{F}_{0}(\rho,0)\)._ Proof.: The formal Gauss map and its formal pattern only differ by the initial frame \[\mathbf{n}_{\infty}^{\Phi}(\rho,\varphi)=\mathbf{F}_{0}(\rho,\varphi)\cdot\boldsymbol{\nu}_{\infty}^{\Phi}(\rho,\varphi).\] Since \(f_{0}\) is a surface of revolution, we have \[R_{\varphi}(f_{0}(\rho,0))=f_{0}(\rho,\varphi)\] and thus \[R_{\varphi}\cdot\mathbf{F}_{0}(\rho,\varphi_{0})=\mathbf{F}_{0}(\rho,\varphi+\varphi_{0}).\] Let \(K_{1}\subset K\) be a sub-arc \(\{(\rho,\varphi)\,|\,0\leq\varphi\leq\frac{2\pi}{7L}\}\). Since \(\boldsymbol{\nu}_{\infty}^{\Phi}\) is \(\frac{2\pi}{7L}\)-periodic, we have \(\Gamma_{K_{1}}^{\Phi}=\Gamma_{K}^{\Phi}\).
We claim that the Hausdorff distance between \(G_{K_{1}}^{\Phi}\) and \(\mathbf{F}_{0}(\rho,0)\cdot\Gamma_{K}^{\Phi}\), as subsets of \(\mathbb{S}^{2}\), is at most \(\frac{2\pi}{7L}\). Indeed, we have \[G_{K_{1}}^{\Phi}=\{\mathbf{n}_{\infty}^{\Phi}(\rho,\varphi)\}_{0\leq\varphi \leq\frac{2\pi}{7L}}=\{R_{\varphi}\cdot\mathbf{F}_{0}(\rho,0)\cdot\boldsymbol{ \nu}_{\infty}^{\Phi}(\rho,\varphi)\}_{0\leq\varphi\leq\frac{2\pi}{7L}}.\] and \[\mathbf{F}_{0}(\rho,0)\cdot\Gamma_{K_{1}}^{\Phi}=\{\mathbf{F}_{0}(\rho,0) \cdot\boldsymbol{\nu}_{\infty}^{\Phi}(\rho,\varphi)\}_{0\leq\varphi\leq\frac{ 2\pi}{7L}}.\] Hence, every point of \(G_{K_{1}}^{\Phi}\) is at spherical distance at most \(\frac{2\pi}{7L}\) from a point of \(\mathbf{F}_{0}(\rho,0)\cdot\Gamma_{K_{1}}^{\Phi}\), and reciprocally. Similarly, the Hausdorff distance between the image by \(\mathbf{n}_{\infty}^{\Phi}\) of the next sub-arc \(\{(\rho,\varphi),\,|\,\frac{2\pi}{7L}\leq\varphi\leq\frac{4\pi}{7L}\}\) and \(\mathbf{F}_{0}(\rho,\frac{2\pi}{7L})\cdot\Gamma_{K_{1}}^{\Phi}\) is at most \(\frac{2\pi}{7L}\). Iterating over the \(7L\) sub-arcs of \(K\), we obtain Proposition 7. ### Structure of the Gauss map Thanks to Proposition 7 we can express the fractal behavior of the formal Gauss map as follows. We consider the truncated normal pattern \[\boldsymbol{\nu}_{\infty}^{\Phi}(j)=\prod_{k=j}^{\infty}\left(\prod_{i=1}^{3} \mathscr{M}_{k,i}^{\Phi}\right)\cdot\mathbf{e}_{3}\] Let \(L_{j}\) be the greatest common divisor of the \(N_{k,2}\) and \(N_{k,3}\) for \(k\geq j\). Note that \(L=L_{1}\) in the above notation and that the sequence \((L_{j})_{j}\) is not decreasing. We also denote by \(\Gamma_{K}^{\Phi}(j)\) the image by \(\boldsymbol{\nu}_{\infty}^{\Phi}(j)\) of the sub-arc \(\{(\rho,\varphi)\,|\,0\leq\varphi\leq\frac{2\pi}{7L_{j}}\}\). With the above notation \(\Gamma_{K}^{\Phi}=\Gamma_{K}^{\Phi}(1)\). Arguing as in Proposition 7, we deduce that the Hausdorff distance between \(G_{K}^{\Phi}\) and \[\bigcup_{\ell=0}^{7L_{j}-1}\mathbf{F}_{j-1}(\rho,\frac{2\ell\pi}{7L_{j}})\cdot \Gamma_{K}^{\Phi}(j) \tag{50}\] is bounded by \(\frac{2\pi}{7L_{j}}\) plus a term proportional to the maximum of the moduli of continuity of \(\mathbf{F}_{j-1}=\mathbf{F}_{j-1,3}\) over each sub-arc \(\{(\rho,\varphi)\,|\,\frac{2\ell\pi}{7L_{j}}\leq\varphi\leq\frac{2(\ell+1)\pi} {7L_{j}}\}\). These moduli of continuity can be made explicit following the proof of Proposition 48. Decomposition (50) shows that \(G_{K}^{\Phi}\) is roughly the union of \(7L_{j}\) copies of the same pattern \(\Gamma_{K}^{\Phi}(j)\). Similarly, \(\Gamma_{K}^{\Phi}(j)\) is roughly the union of \(L_{j+1}/L_{j}\) copies of \(\Gamma_{K}^{\Phi}(j+1)\), showing that the image of the normal \(\mathbf{n}_{\infty}^{\Phi}\) exhibits a form of self-similarity.
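The self-similar structure described above is already visible on the scalar Weierstrass-like function of Proposition 48. The short numerical sketch below (Python) evaluates a truncated version of that sum and checks its \(\frac{2\pi}{7L}\)-periodicity in \(\varphi\). The amplitudes \(\alpha_{k,i}\), the corrugation numbers \(N_{k,i}\) and the wavefront forms \(\varpi_{i}\) used here are placeholder choices, not the sequences of the actual construction; they only retain the qualitative features assumed in the text (geometrically decaying amplitudes, \(\varpi_{1}=-\rho\), affine dependence on \(\varphi\) with slopes \(0,-a,a\), and a common divisor \(L\) of the numbers \(N_{k,2},N_{k,3}\)).

```python
import numpy as np

# Placeholder data (NOT the sequences of the actual construction): geometrically
# decaying amplitudes and corrugation numbers sharing a common divisor L.
K_MAX = 6                                                   # truncation order in k
ALPHA = [[0.5 ** k] * 3 for k in range(1, K_MAX + 1)]       # alpha_{k,i}(rho), constant in rho here
L = 4
N = [[7 * (k + 1), L * (k + 2), L * (k + 3)] for k in range(1, K_MAX + 1)]
A = 7.0 / (2.0 * np.pi)                                     # slope 'a' of the wavefront forms

def wavefronts(rho, phi):
    """Illustrative wavefront forms: w_1 = -rho, w_2 and w_3 affine in phi
    with slopes -a and +a (zeta = 0, -1, 1), as in the text."""
    return (-rho, rho - A * phi, rho + A * phi)

def weierstrass_like(rho, phi):
    """Partial sum of the Weierstrass-like function of Proposition 48."""
    w = wavefronts(rho, phi)
    total = 0.0
    for k in range(K_MAX):
        for i in range(3):
            total += ALPHA[k][i] * np.cos(2.0 * np.pi * N[k][i] * w[i])
    return total

if __name__ == "__main__":
    rho = 0.3
    phi = np.linspace(0.0, 2.0 * np.pi, 2000)
    values = np.array([weierstrass_like(rho, p) for p in phi])
    # Numerical counterpart of the 2*pi/(7L)-periodicity of Lemma 49.
    period = 2.0 * np.pi / (7 * L)
    shifted = np.array([weierstrass_like(rho, p + period) for p in phi])
    print("max |f(phi + 2*pi/(7L)) - f(phi)| =", np.abs(shifted - values).max())
```

With these placeholder choices the printed deviation is at machine precision, the scalar analogue of the periodicity stated in Lemma 49.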
2310.02762
Recurrences for values of the Hurwitz type poly-Bernoulli numbers and polynomials
The main object of this paper is to investigate a new class of the generalized Hurwitz type poly-Bernoulli numbers and polynomials from which we derive some algorithms for evaluating the Hurwitz type poly-Bernoulli numbers and polynomials. By introducing a new generalization of the Stirling numbers of the second kind, we succeed to establish some combinatorial formulas for the generalized Hurwitz type poly-Bernoulli numbers and polynomials with negative upper indices. Moreover, we give a connection between the generalized Stirling numbers of the second kind and graph theory.
Mohamed Amine Boutiche, Mohamed Mechacha, Mourad Rahmani
2023-10-04T12:19:11Z
http://arxiv.org/abs/2310.02762v1
# Recurrences for values of the Hurwitz type poly-Bernoulli numbers and polynomials ###### Abstract. The main object of this paper is to investigate a new class of the generalized Hurwitz type poly-Bernoulli numbers and polynomials from which we derive some algorithms for evaluating the Hurwitz type poly-Bernoulli numbers and polynomials. By introducing a new generalization of the Stirling numbers of the second kind, we succeed to establish some combinatorial formulas for the generalized Hurwitz type poly-Bernoulli numbers and polynomials with negative upper indices. Moreover, we give a connection between the generalized Stirling numbers of the second kind and graph theory. Key words and phrases:Chromatic polynomial of a graph, Poly-Bernoulli numbers, Hurwitz-Lerch zeta function, Recurrence relations, Stirling numbers 2010 Mathematics Subject Classification: 11B68, 11B73, 11M35 ## 1. Introduction An interesting extension of the well-known Riemann zeta function is the Hurwitz-Lerch zeta function \(\Phi\left(z,s,a\right)\) defined by [11] \[\Phi\left(z,s,a\right)=\underset{n\geq 0}{\sum}\frac{z^{n}}{\left(n+a\right)^ {s}},\] \[\left(s\in\mathbb{C}\text{ when }\left|z\right|<1;\operatorname{Re}\left(s \right)>1\text{ when }\left|z\right|=1\right),\] where \(a\in\mathbb{C}-\{0,-1,-2,\ldots\}\). Some important special cases of the Hurwitz-Lerch zeta function are Hurwitz zeta function \(\zeta(s,a)=\Phi\left(1,s,a\right)\), polylogarithm functions \(\operatorname{Li}_{s}\left(z\right)=z\Phi\left(z,s,1\right)\) and Dirichlet eta function \(\eta(s)=\Phi\left(-1,s,1\right)\). The Hurwitz type poly-Bernoulli numbers \(\mathcal{HB}_{n}^{\left(k\right)}\left(a\right)\) was introduced by Cenkci and Young in a recent paper [5] as a generalization of poly-Bernoulli numbers, which are defined by the following generating function \[\Phi\left(1-e^{-z},k,a\right)=\underset{n\geq 0}{\sum}\mathcal{HB}_{n}^{\left(k \right)}\left(a\right)\frac{z^{n}}{n!}.\] The poly-Bernoulli numbers \(\mathcal{B}_{n}^{\left(k\right)}\), given by \[\mathcal{B}_{n}^{\left(k\right)}:=\mathcal{HB}_{n}^{\left(k\right)}\left(1\right)\] are defined by the following generating function: \[\frac{\operatorname{Li}_{k}\left(1-e^{-z}\right)}{1-e^{-z}}=\underset{n\geq 0 }{\sum}\mathcal{B}_{n}^{\left(k\right)}\frac{z^{n}}{n!}.\] The numbers \[B_{n}:=\mathcal{B}_{n}^{\left(1\right)}\left(1\right)\] are the ordinary Bernoulli numbers with \(B_{1}=1/2\). For more details on these numbers, we refer the reader to [1, 8]. 
An explicit formula for \(\mathcal{HB}_{n}^{\left(k\right)}\left(a\right)\) is given by [5] \[\mathcal{HB}_{n}^{\left(k\right)}\left(a\right)=\underset{i=0}{\overset{n}{\sum}}\frac{\left(-1\right)^{n+i}i!S\left(n,i\right)}{\left(i+a\right)^{k}},\] where \(S\left(n,i\right)\) are the Stirling numbers of the second kind arising as coefficients in the following expansion: \[x^{n}=\underset{i=0}{\overset{n}{\sum}}i!\binom{x}{i}S\left(n,i\right).\] The Hurwitz type poly-Bernoulli polynomials \(\mathcal{HB}_{n}^{\left(k\right)}\left(x;a\right)\) are defined by the following generating function \[\Phi\left(1-e^{-z},k,a\right)e^{-xz}=\underset{n\geq 0}{\sum}\mathcal{HB}_{n}^{\left(k\right)}\left(x;a\right)\frac{z^{n}}{n!}.\] The coefficients in Cauchy's product series are given by \[\mathcal{HB}_{n}^{\left(k\right)}\left(x;a\right)=\underset{i=0}{\overset{n}{\sum}}\left(-1\right)^{n-i}\binom{n}{i}\mathcal{HB}_{i}^{\left(k\right)}\left(a\right)x^{n-i}.\] In this paper, we propose to investigate a new class of the generalized Hurwitz type poly-Bernoulli polynomials \(\mathbb{B}_{n,m}^{\left(k\right)}\left(x;a\right)\), which we call \(m\)-Hurwitz type poly-Bernoulli polynomials. We establish several properties of these polynomials. As a consequence, the study of \(\mathbb{B}_{n,m}^{\left(k\right)}\left(x;a\right)\) yields an interesting algorithm for calculating \(\mathcal{HB}_{n}^{\left(k\right)}\left(x;a\right).\) The idea is to construct an infinite matrix \(\left(\mathbb{B}_{n,m}^{\left(k\right)}\left(x;a\right)\right)_{n,m\geq 0}\), the first column of which gives the Hurwitz type poly-Bernoulli polynomials \(\mathbb{B}_{n,0}^{\left(k\right)}\left(x;a\right):=\mathcal{HB}_{n}^{\left(k\right)}\left(x;a\right).\) Furthermore, we introduce a new generalization of the Stirling numbers of the second kind, which aids us in proving explicit formulas for the \(m\)-Hurwitz type poly-Bernoulli numbers and polynomials with negative upper indices. We first recall some basic definitions and some results [6, 11] that will be useful in the rest of the paper. For \(\nu\in\mathbb{C}\), the Pochhammer symbol \(\left(\nu\right)_{n}\) is defined by \[\left(\nu\right)_{n}=\nu\left(\nu+1\right)\cdots\left(\nu+n-1\right)\qquad\text{and}\qquad\left(\nu\right)_{0}=1.\] The (signed) Stirling numbers \(s\left(n,i\right)\) of the first kind are the coefficients in the following expansion: \[x\left(x-1\right)\cdots\left(x-n+1\right)=\underset{i=0}{\overset{n}{\sum}}s\left(n,i\right)x^{i},\] and satisfy the recurrence relation given by \[s\left(n+1,i\right)=s\left(n,i-1\right)-ns\left(n,i\right)\qquad\left(1\leq i\leq n\right). \tag{1.1}\] The exponential generating functions for \(s(n,i)\) and \(S(n,i)\) are given by \[\frac{1}{i!}\left[\ln\left(1+z\right)\right]^{i}=\underset{n=i}{\overset{\infty}{\sum}}s\left(n,i\right)\frac{z^{n}}{n!}\] and \[\frac{1}{i!}\left(e^{z}-1\right)^{i}=\underset{n=i}{\overset{\infty}{\sum}}S\left(n,i\right)\frac{z^{n}}{n!},\] respectively. The weighted Stirling numbers \(\mathcal{S}_{n}^{i}\left(x\right)\) of the second kind are defined by (see [3, 4]) \[\mathcal{S}_{n}^{i}\left(x\right)=\frac{1}{i!}\Delta^{i}x^{n}=\frac{1}{i!}\underset{j=0}{\overset{i}{\sum}}\left(-1\right)^{i-j}\binom{i}{j}\left(x+j\right)^{n}, \tag{1.2}\] where \(\Delta\) denotes the forward difference operator. The exponential generating function of \(\mathcal{S}_{n}^{i}\left(x\right)\) is given by \[\frac{1}{i!}e^{xz}\left(e^{z}-1\right)^{i}=\underset{n=i}{\overset{\infty}{\sum}}\mathcal{S}_{n}^{i}\left(x\right)\frac{z^{n}}{n!} \tag{1.3}\] and the weighted Stirling numbers \(\mathcal{S}_{n}^{i}\left(x\right)\) satisfy the following recurrence relation: \[\mathcal{S}_{n+1}^{i}\left(x\right)=\mathcal{S}_{n}^{i-1}\left(x\right)+\left(x+i\right)\mathcal{S}_{n}^{i}\left(x\right).\]
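Before turning to the \(m\)-extension, note that the explicit formula above already gives a direct way of evaluating \(\mathcal{HB}_{n}^{(k)}(a)\). The following minimal sketch (Python, exact rational arithmetic) implements it, generating the Stirling numbers \(S(n,i)\) by their standard triangular recurrence \(S(n,i)=S(n-1,i-1)+i\,S(n-1,i)\):

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, i):
    """Stirling numbers of the second kind S(n, i), standard triangular recurrence."""
    if n == i:
        return 1
    if i == 0 or i > n:
        return 0
    return stirling2(n - 1, i - 1) + i * stirling2(n - 1, i)

def hurwitz_poly_bernoulli(n, k, a):
    """HB_n^{(k)}(a) from the explicit formula above (a rational, k an integer)."""
    a = Fraction(a)
    return sum(Fraction((-1) ** (n + i) * factorial(i) * stirling2(n, i))
               / (i + a) ** k
               for i in range(n + 1))

if __name__ == "__main__":
    # With k = 1 and a = 1 these are the ordinary Bernoulli numbers with B_1 = 1/2.
    print([str(hurwitz_poly_bernoulli(n, 1, 1)) for n in range(7)])
    # prints: 1, 1/2, 1/6, 0, -1/30, 0, 1/42
```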
The exponential generating function of \(\mathcal{S}_{n}^{k}\left(x\right)\) is given by \[\frac{1}{i!}e^{xz}\left(e^{z}-1\right)^{i}=\underset{n=i}{\overset{\infty}{ \sum}}\mathcal{S}_{n}^{i}\left(x\right)\text{ }\frac{z^{n}}{n!} \tag{1.3}\] and weighted Stirling numbers \(\mathcal{S}_{n}^{i}\left(x\right)\) satisfy the following recurrence relation: \[\mathcal{S}_{n+1}^{i}\left(x\right)=\mathcal{S}_{n}^{i-1}\left(x\right)+\left( x+i\right)\mathcal{S}_{n}^{i}\left(x\right)\text{ }\text{ **Proposition 2.1**.: _The \(m\)-Hurwitz type poly-Bernoulli numbers may be expressed in the form_ \[\mathbb{B}_{n,m}^{(k)}\left(a\right)=\frac{\left(m+a\right)^{k}}{m!a^{k}}{\sum \limits_{i=0}^{m}}\left(-1\right)^{m-i}s\left(m,i\right)\mathcal{H}\mathcal{B} _{n+i}^{(k)}\left(a\right). \tag{2.2}\] The following theorem contains the Rodrigues-type formula for the exponential generating function of \(m\)-Hurwitz type poly-Bernoulli numbers. **Theorem 2.2**.: _The exponential generating function for \(m\)-Hurwitz type poly-Bernoulli numbers is given by_ \[\frac{1}{m!}e^{-mz}\left(1+\frac{m}{a}\right)^{k}\left(e^{z}\frac{d}{dz} \right)^{m}\left[\left(1-e^{-z}\right)^{m}\Phi\left(1-e^{-z},k,m+a\right) \right]={\sum\limits_{n\geq 0}}\mathbb{B}_{n,m}^{(k)}\left(a\right)\frac{z^{n}}{n!}. \tag{2.3}\] Proof.: It follows from (2.1) and (1.3) that \[{\sum\limits_{n\geq 0}}\mathbb{B}_{n,m}^{(k)}\left(a\right)\frac{z^{n}}{n!} =\frac{\left(-1\right)^{m}\left(m+a\right)^{k}}{m!a^{k}}{\sum \limits_{i\geq 0}}\frac{\left(-1\right)^{m+i}\left(i+m\right)!}{\left(i+m+a \right)^{k}}{\sum\limits_{n\geq i}}\genfrac{\{}{\}}{0.0pt}{}{n+m}{i+m}_{m} \frac{\left(-z\right)^{n}}{n!}\] \[=\frac{\left(-1\right)^{m}\left(m+a\right)^{k}}{m!a^{k}}{\sum \limits_{i\geq 0}}\frac{\left(-1\right)^{m+i}\left(i+m\right)!}{\left(i+m+a \right)^{k}}\frac{1}{i!}e^{-mz}\left(e^{-z}-1\right)^{i}\] \[=e^{-mz}\left(1+\frac{m}{a}\right)^{k}{\sum\limits_{i\geq 0}} \binom{m+i}{i}\frac{\left(1-e^{-z}\right)^{i}}{\left(i+m+a\right)^{k}}.\] Since \[\binom{m+i}{i}\left(1-e^{-z}\right)^{i}=\frac{1}{m!}\left(e^{z}\frac{d}{dz} \right)^{m}\left(1-e^{-z}\right)^{m+i},\] we get \[{\sum\limits_{n\geq 0}}\mathbb{B}_{n,m}^{(k)}\left(a\right)\frac{z^{n}}{n!}=\frac{ 1}{m!}e^{-mz}\left(1+\frac{m}{a}\right)^{k}\left(e^{z}\frac{d}{dz}\right)^{m} \left[\left(1-e^{-z}\right)^{m}{\sum\limits_{i\geq 0}}\frac{\left(1-e^{-z} \right)^{i}}{\left(i+m+a\right)^{k}}\right],\] which is obviously equivalent to (2.3). We have thus completed the proof of the theorem. **Theorem 2.3**.: _The \(\mathbb{B}_{n,m}^{(k)}\left(a\right)\) satisfies the following three-term recurrence relation:_ \[\mathbb{B}_{n+1,m}^{(k)}\left(a\right)=\frac{\left(m+1\right)\left(m+a\right) ^{k}}{\left(m+a+1\right)^{k}}\mathbb{B}_{n,m+1}^{(k)}\left(a\right)-m\mathbb{ B}_{n,m}^{(k)}\left(a\right), \tag{2.4}\] _with the initial sequence given by_ \[\mathbb{B}_{0,m}^{(k)}\left(a\right)=\frac{1}{a^{k}}.\] Proof.: From (1.1) and (2.2), we have \[\mathbb{B}_{n,m+1}^{(k)}\left(a\right)=\frac{\left(m+a+1\right)^{k}}{\left(m+ 1\right)!a^{k}}\,\sum\limits_{i=0}^{m+1}\left(-1\right)^{m+1-i}\left(\left(s \left(m,i-1\right)-ms\left(m,i\right)\right)\right)\mathcal{H}\mathcal{B}_{n+i }^{(k)}\left(a\right).\] After some rearrangement, we find that \[\mathbb{B}_{n,m+1}^{(k)}\left(a\right)=\frac{\left(m+a+1\right)^{k}}{\left(m+1 \right)\left(m+a\right)^{k}}\left(\mathbb{B}_{n+1,m}^{(k)}\left(a\right)+m \mathbb{B}_{n,m}^{(k)}\left(a\right)\right).\] This evidently completes the proof of the theorem. 
**Remark 2.4**.: _By setting \(k=1\) and \(a=1\) in (2.4), we get_ \[\mathbb{B}_{0,m}=1,\ \ \mathbb{B}_{n+1,m}=\frac{\left(m+1\right)^{2}}{\left(m+2 \right)}\mathbb{B}_{n,m+1}\left(a\right)-m\mathbb{B}_{n,m}\left(a\right),\] _which coincides with Rahmani's algorithm for Bernoulli numbers [10] with \(B_{1}=1/2\)._ ## 3. The \(m\)-Stirling numbers of the second kind In this section, we introduce a new generalization of the familiar Stirling numbers \(S(n,k)\) of the second kind, which we call \(m-\)Stirling numbers of the second kind. We then derive several elementary properties. **Definition 3.1**.: _The \(m\)-Stirling numbers \(\mathcal{R}_{n}^{k}\left(m\right)\) of the second kind is defined by_ \[\mathcal{R}_{n}^{k}\left(m\right)=\frac{m!}{k!}\sum_{j=0}^{k}\left(-1\right)^ {k-j}\binom{k}{j}\binom{j+m-1}{m}j^{n}. \tag{3.1}\] Since \[\left(x\right)_{m}=m!\binom{x+m-1}{m},\] we can write \(\mathcal{R}_{n}^{k}\left(m\right)\) as \[\mathcal{R}_{n}^{k}\left(m\right)=\frac{1}{k!}{\sum_{j=0}^{k} \left(-1\right)^{k-j}\binom{k}{j}j^{n}\left(j\right)_{m}.} \tag{3.2}\] Substituting \(m=0\) into above equation, we have the Stirling numbers of the second kind \[\mathcal{R}_{n}^{k}\left(0\right)=S\left(n,k\right).\] **Theorem 3.2**.: _The generating function of \(\mathcal{R}_{n}^{k}\left(m\right)\) are given by_ \[e^{z}\left(e^{-z}\frac{d}{dz}\right)^{m}\left(\frac{1}{k!}e^{\left(m-1\right) z}\left(e^{z}-1\right)^{k}\right)={\sum_{n\geq k}}\mathcal{R}_{n}^{k}\left(m \right)\frac{z^{n}}{n!}. \tag{3.3}\] Proof.: By using (3.1), we obtain \[\underset{n\geq k}{\sum}\mathcal{R}_{n}^{k}\left(m\right)\frac{z^{n} }{n!} =\frac{m!}{k!}\underset{j=0}{\sum}\left(-1\right)^{k-j}\binom{k}{j} \binom{j+m-1}{m}\underset{n\geq 0}{\sum}j^{n}\frac{z^{n}}{n!}\] \[=\frac{m!}{k!}\underset{j=0}{\sum}\binom{j+m-1}{m}e^{jz}\left[ \left(-1\right)^{k-j}\binom{k}{j}\right]\] \[=\frac{m!}{k!}e^{z}\underset{j=0}{\sum}\binom{m+j-1}{j-1}e^{(j-1 )z}\left(-1\right)^{k-j}\binom{k}{j}.\] Since \[\binom{m+j}{j}t^{j}=\frac{1}{m!}\frac{d^{m}}{dt^{m}}t^{m+j},\] we have \[\underset{n\geq k}{\sum}\mathcal{R}_{n}^{k}\left(m\right)\frac{z^ {n}}{n!} =e^{z}\frac{m!}{k!}\left(e^{-z}\frac{d}{dz}\right)^{m}\left(\frac{ 1}{m!}e^{(m-1)z}\underset{j=0}{\sum}\left(-1\right)^{k-j}\binom{k}{j}e^{jz}\right)\] \[=e^{z}\left(e^{-z}\frac{d}{dz}\right)^{m}\left(\frac{1}{k!}e^{(m- 1)z}\left(e^{z}-1\right)^{k}\right).\] Next, we obtain the following explicit relationship between the \(m\)-Stirling numbers \(\mathcal{R}_{n}^{k}\left(m\right)\) of the second kind and \(r\)-Stirling numbers of the second kind. **Theorem 3.3**.: _The following formula holds true_ \[\mathcal{R}_{n}^{k}\left(m\right)=\underset{j=0}{\sum}\binom{n}{j}\left(1-m \right)^{n-j}\underset{i=0}{\sum}s\left(m,i\right)\mathcal{S}_{j+i}^{k}\left( m-1\right). 
\tag{3.4}\] Proof.: Since \[\left(e^{-z}\frac{d}{dz}\right)^{m}=e^{-mz}\underset{i=0}{\sum}s\left(m,i \right)\frac{d^{i}}{dz^{i}}\] and \[\frac{1}{k!}e^{(m-1)z}\left(e^{z}-1\right)^{k}=\underset{n\geq k}{\sum} \mathcal{S}_{n}^{k}\left(m-1\right)\frac{z^{n}}{n!},\] we can write (3.3) as \[\underset{n\geq 0}{\sum}\mathcal{R}_{n}^{k}\left(m\right)\frac{z^{n}}{n!} =e^{\left(1-m\right)z}\underset{i=0}{\sum}s\left(m,i\right)\frac{d^ {i}}{dz^{i}}\left(\underset{n\geq 0}{\sum}\mathcal{S}_{n}^{k}\left(m-1\right)\frac{z^{n}}{n!}\right)\] \[=e^{\left(1-m\right)z}\underset{i=0}{\sum}s\left(m,i\right) \underset{n\geq 0}{\sum}\mathcal{S}_{n}^{k}\left(m-1\right)\frac{d^{i}}{dz^{i}} \frac{z^{n}}{n!}\] \[=e^{\left(1-m\right)z}\underset{i=0}{\sum}s\left(m,i\right) \underset{n\geq 0}{\sum}\mathcal{S}_{n}^{k}\left(m-1\right)\frac{z^{n-i}}{ \left(n-i\right)!}\] \[=e^{\left(1-m\right)z}\underset{i=0}{\sum}s\left(m,i\right) \underset{l\geq 0}{\sum}\mathcal{S}_{l+i}^{k}\left(m-1\right)\frac{z^{l}}{l!}\] \[=\left(\underset{n\geq 0}{\sum}\left(1-m\right)^{n}\frac{z^{n}}{n!} \right)\underset{i=0}{\sum}s\left(m,i\right)\left(\underset{n\geq 0}{\sum} \mathcal{S}_{n+i}^{k}\left(m-1\right)\frac{z^{n}}{n!}\right)\] \[=\underset{n\geq 0}{\sum}\left(\underset{j=0}{\sum}\binom{n}{j} \left(1-m\right)^{n-j}\underset{i=0}{\sum}s\left(m,i\right)\mathcal{S}_{j+i}^{ k}\left(m-1\right)\right)\frac{z^{n}}{n!}.\] Now, by comparing the coefficients of \(\frac{z^{n}}{n!}\) on both sides we obtain (3.4). **Theorem 3.4**.: _The following explicit relationships hold true_ \[\mathcal{R}_{n}^{k}\left(m\right)=\underset{i=0}{\sum}^{m}\left(-1\right)^{m- i}s\left(m,i\right)S(n+i,k). \tag{3.5}\] Proof.: We have \[RHS =\underset{i=0}{\sum}^{m}\left(-1\right)^{m-i}s\left(m,i\right) \frac{1}{k!}\underset{j=0}{\sum}^{k}\left(-1\right)^{k-j}\binom{k}{j}j^{n+i}\] \[=\frac{1}{k!}\underset{j=0}{\sum}^{k}\left(\underset{i=0}{\sum} \left(-1\right)^{m-i}s\left(m,i\right)j^{i}\right)\left(-1\right)^{k-j}\binom{ k}{j}j^{n}\] \[=\frac{1}{k!}\underset{j=0}{\sum}^{k}\left(-1\right)^{m+k-j} \left(-j\right)\left(-j-1\right)\cdots\left(-j+m-1\right)\binom{k}{j}j^{n}\] \[=\frac{1}{k!}\underset{j=0}{\sum}^{k}\left(-1\right)^{k-j}\binom{ k}{j}j^{n}\left(j\right)_{m}\] \[=\mathcal{R}_{n}^{k}\left(m\right).\] **Remark 3.5**.: _By means of the formula (3.5), we can compute several values of \(\mathcal{R}_{n}^{k}\left(m\right)\) given by_ \[\mathcal{R}_{0}^{0}\left(0\right) =1,\] \[\mathcal{R}_{0}^{0}\left(m\right) =0\ (m\geq 1),\] \[\mathcal{R}_{0}^{k}\left(m\right) =\left\lfloor\begin{matrix}m\\ k\end{matrix}\right\rfloor\,(k>0,m>0),\] \[\mathcal{R}_{n}^{1}\left(m\right) =m!\,(n>0,m\geq 0),\] \[\mathcal{R}_{n}^{k}\left(0\right) =S(n,k),\] \[\mathcal{R}_{n}^{k}\left(1\right) =S(n+1,k),\] \[\mathcal{R}_{n}^{n+m}\left(m\right) =1,\] \[\mathcal{R}_{n}^{k}\left(m\right) =0\ (k>n+m),\] _where \(\left\lfloor\begin{matrix}m\\ k\end{matrix}\right\rfloor\) denotes the Lah numbers [6] given by_ \[\left\lfloor\begin{matrix}m\\ k\end{matrix}\right\rfloor=\frac{m!}{k!}\binom{m-1}{k-1}.\] **Theorem 3.6**.: _The \(m-\)Stirling numbers of the second kind \(\mathcal{R}_{n}^{k}\left(m\right)\) satisfy the triangular recurrence relation_ \[\mathcal{R}_{n+1}^{k}\left(m\right)=\mathcal{R}_{n}^{k-1}\left(m\right)+k \mathcal{R}_{n}^{k}\left(m\right) \tag{3.6}\] _for \(1\leq k\leq n+m\), with initial conditions:_ \[\mathcal{R}_{n}^{0}\left(m\right)=\delta_{0,n+m}\ \left(n,m\geq 0\right)\] _and_ \[\mathcal{R}_{0}^{k}\left(m\right)=\left\lfloor\begin{matrix}m\\ 
k\end{matrix}\right\rfloor\,(k>0,m\geq 0)\] _with \(\delta_{i,j}\) being the Kronecker delta defined by_ \[\delta_{i,j}=\left\{\begin{array}{ll}0&\left(i\neq j\right),\\ 1&\left(i=j\right).\end{array}\right.\] Proof.: The result follows directly from the formula (3.5) and the recurrence formula for the Stirling numbers of the second kind. In the special case when \(m=0\), the triangular recurrence relation (3.6) corresponds to the well-known triangular recurrence for the Stirling numbers \(S(n,k)\) of the second kind. For \(m=1,2,3\), we obtain the following tables, for \(0\leq n\leq 7\) and \(0\leq k\leq m+7\). In the next paragraph, we describe a connection between the \(m\)-Stirling numbers of the second kind and graph theory. **Remark 3.7**.: _Let \(G\) be a finite and simple graph with \(n\) vertices and \(P\) its chromatic polynomial. Stanley in [12, Theorem 1.2] proved for all non-negative integer \(x\) the following_ \[\overline{P}(x)=(-1)^{n}P(-x)\] _where \(\overline{P}(x)\) is the number of pairs (\(\sigma\), \(\mathcal{O}\)), with \(\sigma\) is any map \(\sigma:V\to\{1,2,\ldots,x\}\) and \(\mathcal{O}\) is an orientation of \(G\). We say that \(\sigma\) is compatible with \(\mathcal{O}\) if the following conditions are satisfied_ 1. _The orientation_ \(\mathcal{O}\) _is acyclic,_ 2. _If_ \(u\to v\) _in the orientation_ \(\mathcal{O}\)_, then_ \(\sigma(u)\geq\sigma(v)\)_._ _Moreover, we have the following properties of \(\overline{P}\)_ 1. \(\overline{P}(G_{0},x)=x\)_, where_ \(G_{0}\) _is the one-vertex graph,_ 2. \(\overline{P}(G+H,x)=\overline{P}(G,x)\overline{P}(H,x)\)_, where_ \(G+H\) _denotes the disjoint union of graphs_ \(G\) _and_ \(H\)_,_ 3. \(\overline{P}(G,x)=\overline{P}(G-e,x)+\overline{P}(G/e,x)\)_._ _Where \(G-e\) and \(G/e\) are graphs obtained from \(G\) by respectively deleting and contracting an edge \(e\)._ **Theorem 3.8**.: _We have_ \[\mathcal{R}_{n}^{k}(m)=\frac{1}{k!}\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j} \overline{P}(O_{n}+K_{m},j).\] Proof.: It is known from [7] that \[P(O_{n},x)=x^{n},\] and \[P(K_{n},x)=(-1)^{n}(-x)_{n},\] where \(O_{n}\) and \(K_{n}\) are the empty graph and the complete graph respectively. Then by making use of \((-1)^{n}(-x)_{n}=x(x-1)\ldots(x-n+1)\), we deduce that \[\overline{P}(O_{n},x)=x^{n},\] and \[\overline{P}(K_{n},x) =(-1)^{n}P(K_{n},-x)\] \[=(x)_{n}\] Finally, from (3.2) we get the desired result. ## 4. The \(m-\)Hurwitz type poly-Bernoulli numbers with negative upper indices In this section, we consider an explicit formula for \(m-\)Hurwitz type poly-Bernoulli numbers \(\mathbb{B}_{n,m}^{\left(-k\right)}\left(a\right)\) with negative upper indices involving \(m-\)Stirling numbers of the second kind \(\mathcal{R}_{n}^{k}\left(m\right).\) **Theorem 4.1**.: _We have_ \[\mathbb{B}_{n,m}^{\left(-k\right)}\left(a\right)=\frac{a^{k}}{m!\left(m+a \right)^{k}}\sum_{l=0}^{\min\left(n,k\right)}\left(l!\right)^{2}\mathcal{S}_{ k}^{l}\left(a\right)\mathcal{R}_{n+1}^{l+1}(m).\] Proof.: From (2.2) and theorem 2.2 on page 804 of [5], we get \[\mathbb{B}_{n,m}^{\left(-k\right)}\left(a\right) =\frac{\left(-1\right)^{m+n}a^{k}}{m!\left(m+a\right)^{k}}\!\sum _{i=0}^{m}\left(-1\right)^{n+i}s\left(m,i\right)\mathcal{H}\mathcal{B}_{n+i}^ {\left(-k\right)}\left(a\right)\] \[=\frac{\left(-1\right)^{m}a^{k}}{m!\left(m+a\right)^{k}}\!\sum _{l=0}^{\infty}\left(l!\right)^{2}\mathcal{S}_{k}^{l}\left(a\right)\sum_{i=0} ^{m}\left(-1\right)^{i}s\left(m,i\right)\mathcal{S}(n+i+1,l+1).\] The conclusion follows by Theorem 3.4. 
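The \(m\)-Stirling numbers entering Theorem 4.1 are easy to tabulate directly from (3.2). A minimal sketch (Python, exact integer arithmetic), together with a check of the triangular recurrence of Theorem 3.6 and of the specialization \(\mathcal{R}_{n}^{k}(0)=S(n,k)\):

```python
from math import comb, factorial

def rising_factorial(x, m):
    """Pochhammer symbol (x)_m = x (x+1) ... (x+m-1)."""
    out = 1
    for j in range(m):
        out *= x + j
    return out

def m_stirling2(n, k, m):
    """m-Stirling numbers R_n^k(m) of the second kind, formula (3.2);
    the division by k! is exact."""
    total = sum((-1) ** (k - j) * comb(k, j) * j ** n * rising_factorial(j, m)
                for j in range(k + 1))
    return total // factorial(k)

if __name__ == "__main__":
    # R_n^k(0) reduces to the ordinary Stirling numbers of the second kind.
    print([m_stirling2(4, k, 0) for k in range(5)])      # 0, 1, 7, 6, 1
    # Triangular recurrence of Theorem 3.6: R_{n+1}^k(m) = R_n^{k-1}(m) + k R_n^k(m).
    n, m = 5, 2
    ok = all(m_stirling2(n + 1, k, m)
             == m_stirling2(n, k - 1, m) + k * m_stirling2(n, k, m)
             for k in range(1, n + m + 1))
    print("recurrence check:", ok)
```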
As a consequence of Theorem 2.3, one can deduce a three-term recurrence relation for \(\mathbb{B}_{n,m}^{\left(-k\right)}\left(a\right)\). **Corollary 4.2**.: _The \(\mathbb{B}_{n,m}^{\left(-k\right)}\left(a\right)\) satisfies the following three-term recurrence relation:_ \[\mathbb{B}_{n+1,m}^{\left(-k\right)}\left(a\right)=\frac{\left(m+1\right) \left(m+a+1\right)^{k}}{\left(m+a\right)^{k}}\mathbb{B}_{n,m+1}^{\left(-k \right)}\left(a\right)-m\mathbb{B}_{n,m}^{\left(-k\right)}\left(a\right), \tag{4.1}\] _with the initial sequence given by_ \[\mathbb{B}_{0,m}^{\left(-k\right)}\left(a\right)=a^{k}.\] If \(m=0\) and \(a=1\), then Theorem 4.1 reduces to the duality property of poly-Bernoulli numbers [1]. **Corollary 4.3**.: _We have_ \[\mathcal{B}_{n}^{\left(-k\right)}=\mathcal{B}_{k}^{\left(-n\right)}.\] ## 5. The \(m\)-Hurwitz type poly-Bernoulli polynomials For \(m\geq 0\), let us consider the \(m\)-Hurwitz type poly-Bernoulli polynomials \(\mathbb{B}_{n,m}^{\left(k\right)}\left(x;a\right)\) as follows: \[\mathbb{B}_{n,m}^{\left(k\right)}\left(x;a\right)=\sum_{i=0}^{n}\left(-1\right) ^{n-i}\binom{n}{i}\mathbb{B}_{i,m}^{\left(k\right)}\left(a\right)x^{n-i}. \tag{5.1}\] It is easy to show that the generating function of \(\mathbb{B}_{n,m}^{(k)}\left(x;a\right)\) is given by \[\underset{n\geq 0}{\sum}\mathbb{B}_{n,m}^{(k)}\left(x;a\right) \frac{z^{n}}{n!} =e^{-xz}{\sum}_{n\geq 0}\mathbb{B}_{n,m}^{(k)}\left(a\right)\frac{z^{n}}{n!}\] \[=\frac{1}{m!}e^{-(m+x)z}\left(1+\frac{m}{a}\right)^{k}\left(e^{z} \frac{d}{dz}\right)^{m}\left(\left(1-e^{-z}\right)^{m}\Phi\left(1-e^{-z},k,m+a \right)\right).\] Next, we state an explicit formula for \(\mathbb{B}_{n,m}^{(k)}\left(x;a\right)\). **Theorem 5.1**.: _The following formula holds true_ \[\mathbb{B}_{n,m}^{(k)}\left(x;a\right)=\frac{\left(m+a\right)^{k}}{m!a^{k}} \sum_{i=0}^{n}\frac{\left(-1\right)^{n-i}\left(i+m\right)!}{\left(i+m+a\right) ^{k}}\mathcal{S}_{n}^{i}\left(x+m\right).\] Proof.: We have \[\sum_{n\geq 0}\left(\frac{\left(m+a\right)^{k}}{m!a^{k}}\sum_{i= 0}^{n}\frac{\left(-1\right)^{n-i}\left(i+m\right)!}{\left(i+m+a\right)^{k}} \mathcal{S}_{n}^{i}\left(x+m\right)\right)\frac{z^{n}}{n!}\] \[=\frac{\left(m+a\right)^{k}}{m!a^{k}}\sum_{i\geq 0}\frac{\left(-1 \right)^{i}\left(i+m\right)!}{\left(i+m+a\right)^{k}}{\sum}_{n\geq i} \mathcal{S}_{n}^{i}\left(x+m\right)\frac{\left(-z\right)^{n}}{n!}\] \[=\frac{\left(m+a\right)^{k}}{m!a^{k}}\sum_{i\geq 0}\frac{\left(-1 \right)^{i}\left(i+m\right)!}{\left(i+m+a\right)^{k}}\frac{1}{i!}e^{-\left(x+m \right)z}\left(e^{-z}-1\right)^{i}\] \[=e^{-\left(x+m\right)z}\frac{\left(m+a\right)^{k}}{a^{k}}\sum_{i \geq 0}\binom{m+i}{i}\frac{\left(1-e^{-z}\right)^{i}}{\left(i+m+a\right)^{k}}\] \[=e^{-xz}{\sum}_{n\geq 0}\mathbb{B}_{n,m}^{(k)}\left(a\right) \frac{z^{n}}{n!}\] \[={\sum}_{n\geq 0}\mathbb{B}_{n,m}^{(k)}\left(x;a\right) \frac{z^{n}}{n!}.\] Therefore, we get the desired result by comparing the coefficients of \(\frac{z^{n}}{n!}\) on both sides. 
**Theorem 5.2**.: _The polynomials \(\mathbb{B}_{n,m}^{(k)}\left(x;a\right)\) satisfy the following three-term recurrence relation:_ \[\mathbb{B}_{n+1,m}^{(k)}\left(x;a\right)=\frac{\left(m+1\right)\left(m+a \right)^{k}}{\left(m+a+1\right)^{k}}\mathbb{B}_{n,m+1}^{(k)}\left(x;a\right)+ \left(x-m\right)\mathbb{B}_{n,m}^{(k)}\left(x;a\right) \tag{5.2}\] _with the initial sequence given by_ \[\mathbb{B}_{n,0}^{(k)}\left(x;a\right)=\frac{1}{a^{k}}.\] Proof.: From (5.1), we get \[x\frac{d}{dx}\mathbb{B}_{n,m}^{(k)}\left(x;a\right) =n\underset{j=0}{\overset{n}{\sum}}\left(-1\right)^{n-j}\binom{n}{j }\mathbb{B}_{j,m}^{(k)}\left(a\right)x^{n-j}-n\underset{j=1}{\overset{n}{\sum}} \left(-1\right)^{n-j}\binom{n-1}{j-1}\mathbb{B}_{j,m}^{(k)}\left(a\right)x^{n-j}\] \[=n\mathbb{B}_{n,m}^{(k)}\left(x;a\right)-n\underset{j=0}{\overset{ n-1}{\sum}}\left(-1\right)^{n-j-1}\binom{n-1}{j}\mathbb{B}_{j+1,m}^{(k)}\left(a \right)x^{n-j-1}.\] Now, using (2.4), we have \[x\frac{d}{dx}\mathbb{B}_{n,m}^{(k)}\left(x;a\right)=n\mathbb{B}_ {n,m}^{(k)}\left(x;a\right)+nm\underset{j=0}{\overset{n-1}{\sum}}\left(-1 \right)^{n-j-1}\binom{n-1}{j}\mathbb{B}_{j,m}^{(k)}\left(a\right)x^{n-j-1}\\ -n\frac{\left(m+1\right)\left(m+a\right)^{k}}{\sum}_{j=0}^{n-1} \left(-1\right)^{n-j-1}\binom{n-1}{j}\mathbb{B}_{j,m+1}^{(k)}\left(a\right)x^{ n-j-1}\] which, after simplification, yields \[x\mathbb{B}_{n-1,m}^{(k)}\left(x;a\right)=\mathbb{B}_{n,m}^{(k)}\left(x;a \right)-\frac{\left(m+1\right)\left(m+a\right)^{k}}{\left(m+a+1\right)^{k}} \mathbb{B}_{n-1,m+1}^{(k)}\left(x;a\right)+m\mathbb{B}_{n-1,m}^{(k)}\left(x;a \right),\] which is obviously equivalent to (5.2) and the proof is complete. The next lemma is used in the proof of the Theorem 5.4. **Lemma 5.3**.: _We have_ \[\underset{i=0}{\overset{n}{\sum}}\left(-1\right)^{n-i}\binom{n}{i}\mathcal{ R}_{i+1}^{l+1}(m)x^{n-i}=\mathcal{R}_{n+1}^{l+1}(-x;m)+x\mathcal{R}_{n}^{l+1}(-x;m)\] _and for \(m=0,\) we have_ \[\underset{i=0}{\overset{n}{\sum}}\left(-1\right)^{n-i}\binom{n}{i}S(i+1,l+1) x^{n-i}=\mathcal{S}_{n}^{l}\left(1-x\right).\] Proof.: We have \[\sum_{i=0}^{n}\left(-1\right)^{n-i}\binom{n}{i}\mathcal{R}_{i+1}^{l+ 1}(m)x^{n-i}\] \[=\sum_{i=0}^{n}\left(-1\right)^{n-i}\binom{n}{i}x^{n-i}\left(\frac{ 1}{\left(l+1\right)!}\sum_{j=0}^{l+1}(-1)^{l+1-j}\binom{l+1}{j}j^{i+1}(j)_{m}\right)\] \[=\frac{1}{\left(l+1\right)!}\sum_{j=0}^{l+1}(-1)^{l+1-j}\binom{l+1 }{j}(j)_{m}\!\sum_{i=0}^{n}\left(-1\right)^{n-i}\binom{n}{i}j^{i+1}x^{n-i}\] \[=\frac{1}{\left(l+1\right)!}\sum_{j=0}^{l+1}(-1)^{l+1-j}\binom{l+1 }{j}(j)_{m}\!j\left(j-x\right)^{n}\] \[=\frac{1}{\left(l+1\right)!}\sum_{j=0}^{l+1}(-1)^{l+1-j}\binom{l+ 1}{j}(j)_{m}\left(\left(j-x\right)+x\right)\left(j-x\right)^{n}.\] This completes the proof of lemma. 
**Theorem 5.4**.: _We have_ \[\mathbb{B}_{n,m}^{(-k)}\left(x;a\right)=\frac{a^{k}}{m!\left(m+a\right)^{k}} \sum_{l=0}^{\min(n,k)}\left(l!\right)^{2}\mathcal{S}_{k}^{l}\left(a\right) \left(\mathcal{R}_{n+1}^{l+1}(-x;m)+x\mathcal{R}_{n}^{l+1}(-x;m)\right).\] Proof.: We have \[\mathbb{B}_{n,m}^{(-k)}\left(x;a\right) =\sum_{i=0}^{n}\left(-1\right)^{n-i}\binom{n}{i}\mathbb{B}_{i,m} ^{(-k)}\left(a\right)x^{n-i}\] \[=\sum_{i=0}^{n}\left(-1\right)^{n-i}\binom{n}{i}\left(\frac{a^{k }}{m!\left(m+a\right)^{k}}\sum_{l=0}^{\min(n,k)}\left(l!\right)^{2}\mathcal{S }_{k}^{l}\left(a\right)\mathcal{R}_{i+1}^{l+1}(m).\right)x^{n-i}\] \[=\frac{a^{k}}{m!\left(m+a\right)^{k}}\sum_{l=0}^{\min(n,k)}\left( l!\right)^{2}\mathcal{S}_{k}^{l}\left(a\right)\sum_{i=0}^{n}\left(-1\right)^{n-i} \binom{n}{i}\mathcal{R}_{i+1}^{l+1}(m)x^{n-i}.\] The result follows from the Lemma 5.3. **Corollary 5.5**.: _We have_ \[\mathbb{B}_{n}^{(-k)}\left(x\right)=\sum_{l=0}^{\min(n,k)}\left(l!\right)^{2} \mathcal{S}(k+1,l+1)\mathcal{S}_{n}^{l}\left(1-x\right). \tag{5.3}\] Proof.: The proof follows from Theorem 5.4 with \(m=0\) and \(a=1\). Note that the formula (5.3) was already given in [5].
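Formula (5.3) is also convenient numerically. In the sketch below (Python, exact integer arithmetic) the weighted Stirling numbers are computed from (1.2), and the symbol \(\mathcal{S}(k+1,l+1)\) in (5.3) is read as the ordinary Stirling number of the second kind \(S(k+1,l+1)=\mathcal{S}_{k+1}^{l+1}(0)\); taking \(x=0\) then recovers \(\mathcal{B}_{n}^{(-k)}\) and allows a quick check of the duality of Corollary 4.3.

```python
from math import comb, factorial

def weighted_stirling2(n, i, x):
    """Weighted Stirling numbers S_n^i(x) of the second kind, formula (1.2);
    the division by i! is exact for the non-negative integer weights used below."""
    total = sum((-1) ** (i - j) * comb(i, j) * (x + j) ** n for j in range(i + 1))
    return total // factorial(i)

def poly_bernoulli_neg(n, k, x=0):
    """B_n^{(-k)}(x) from formula (5.3), k >= 0 the absolute value of the upper index."""
    return sum(factorial(l) ** 2
               * weighted_stirling2(k + 1, l + 1, 0)      # read as S(k+1, l+1)
               * weighted_stirling2(n, l, 1 - x)
               for l in range(min(n, k) + 1))

if __name__ == "__main__":
    # Duality of Corollary 4.3: B_n^{(-k)} = B_k^{(-n)}  (take x = 0).
    print(all(poly_bernoulli_neg(n, k) == poly_bernoulli_neg(k, n)
              for n in range(8) for k in range(8)))
    # A small corner of the (symmetric) table of poly-Bernoulli numbers:
    for n in range(4):
        print([poly_bernoulli_neg(n, k) for k in range(4)])
```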
2301.05994
Min-Max-Jump distance and its applications
We explore three applications of Min-Max-Jump distance (MMJ distance). MMJ-based K-means revises K-means with MMJ distance. MMJ-based Silhouette coefficient revises Silhouette coefficient with MMJ distance. We also tested the Clustering with Neural Network and Index (CNNI) model with MMJ-based Silhouette coefficient. In the last application, we tested using Min-Max-Jump distance for predicting labels of new points, after a clustering analysis of data. Result shows Min-Max-Jump distance achieves good performances in all the three proposed applications. In addition, we devise several algorithms for calculating or estimating the distance.
Gangli Liu
2023-01-15T00:55:40Z
http://arxiv.org/abs/2301.05994v6
# Min-Max-Jump distance and its applications ###### Abstract We explore three applications of Min-Max-Jump distance (MMJ distance). MMJ-based K-means revises K-means with MMJ distance. MMJ-based Silhouette coefficient revises Silhouette coefficient with MMJ distance. We also tested the Clustering with Neural Network and Index (CNNI) model with MMJ-based Silhouette coefficient. In the last application, we tested using Min-Max-Jump distance for predicting labels of new points, after a clustering analysis of data. Result shows Min-Max-Jump distance achieves good performances in all the three proposed applications. distance; CNNI; Silhouette coefficient; SCOM; metric space; K-means; clustering
2308.01035
TS-RGBD Dataset: a Novel Dataset for Theatre Scenes Description for People with Visual Impairments
Computer vision was long a tool used for aiding visually impaired people to move around their environment and avoid obstacles and falls. Solutions are limited to either indoor or outdoor scenes, which limits the kind of places and scenes visually disabled people can be in, including entertainment places such as theatres. Furthermore, most of the proposed computer-vision-based methods rely on RGB benchmarks to train their models resulting in a limited performance due to the absence of the depth modality. In this paper, we propose a novel RGB-D dataset containing theatre scenes with ground truth human actions and dense captions annotations for image captioning and human action recognition: TS-RGBD dataset. It includes three types of data: RGB, depth, and skeleton sequences, captured by Microsoft Kinect. We test image captioning models on our dataset as well as some skeleton-based human action recognition models in order to extend the range of environment types where a visually disabled person can be, by detecting human actions and textually describing appearances of regions of interest in theatre scenes.
Leyla Benhamida, Khadidja Delloul, Slimane Larabi
2023-08-02T09:28:35Z
http://arxiv.org/abs/2308.01035v1
# TS-RGBD Dataset: a Novel Dataset for Theatre Scenes Description for People with Visual Impairments ###### Abstract Computer vision was long a tool used for aiding visually impaired people to move around their environment and avoid obstacles and falls. Solutions are limited to either indoor or outdoor scenes, which limits the kind of places and scenes visually disabled people can be in, including entertainment places such as theatres. Furthermore, most of the proposed computer-vision-based methods rely on RGB benchmarks to train their models resulting in a limited performance due to the absence of the depth modality. In this paper, we propose a novel RGB-D dataset containing theatre scenes with ground truth human actions and dense captions annotations for image captioning and human action recognition: TS-RGBD dataset. It includes three types of data: RGB, depth, and skeleton sequences, captured by Microsoft Kinect 1. Footnote 1: [https://github.com/khadidja-delloul/RGB-D-Theatre-Scenes-Dataset](https://github.com/khadidja-delloul/RGB-D-Theatre-Scenes-Dataset) We test image captioning models on our dataset as well as some skeleton-based human action recognition models in order to extend the range of environment types where a visually disabled person can be, by detecting human actions and textually describing appearances of regions of interest in theatre scenes. theatre, dataset, RGB-D, data collection, image captioning, egocentric description, human action recognition. ## I Introduction With the advancement known in deep learning technologies, uncountable are applications that emerged in this field. Among these researches, we can find multiple solutions that focus on helping make the life of blind and visually impaired people easier. Either by designing tools to help them move around their environment and detect obstacles and stairs, or by developing applications that help them in their daily life by identifying money bills or objects, reading for them, or offering them online assistance. While these applications offer them (blind and visually impaired people) help throughout their daily life transactions and issues, they remain limited when it comes to entertainment. For instance, there are no solutions that help them access and understand a theatre scene by providing a description of the scene and the actors' actions on stage. Even though works that revolve around describing paintings and aesthetics [1, 2] or reading books exist, there are -to our knowledge- no works that are interested in textual descriptions of theatre plays. Although these textual descriptions are sometimes written manually and read by people, they are not always available. In this work, we aim to provide blind and visually impaired people with a system that can not only describe a theatre scene for them but to give them the positions of every object or region present on the stage regarding them (left, right, front). To build such a system, we had to use the image captioning 'DenseCap' model to detect regions and generate captions for each one of them, while using depth information to determine their positions regarding the user. However, the first challenge that was encountered was the fact there are no theatre scenes in the images that models are trained on. The second challenge was the absence of depth information from that set of images. On the other hand, in order to fully comprehend a theatre scene, visually impaired persons need to have a description of the actors' actions performed on stage. 
This description can be provided after recognizing the actions based on state-of-the-art human action recognition methods. Various techniques have emerged to recognize human actions using a computer vision approach with deep learning models. The emergence of RGB-D sensors, such as the Microsoft Kinect, has revolutionized the field of HAR (human action recognition) by providing rich human action benchmarks [3, 4, 5] that contain RGB images as well as depth and skeleton information for more accurate action analysis. However, despite significant progress in RGB-D action datasets, there remains a scarcity of datasets specifically designed to capture human actions in theatrical settings. Theatre environments present unique challenges for action recognition due to their distinct characteristics and intricate stage designs. To address the cited challenges for theatre scene textual description and advance the state-of-the-art in RGB-D human action recognition in a theatre environment, we present a novel dataset specifically tailored for capturing scenes and human actions in theatrical settings containing three modalities: RGB, Depth, and skeleton data. Furthermore, we provide through this dataset two categories of data: trimmed sequences of human actions and untrimmed sequences that represent long continuous theatre scenes with temporal annotation. By in troducing our unique dataset with these two categories, we will promote the development of novel techniques capable of not only effectively recognizing actions in theatres but also localizing and detecting the boundaries of actions, using the second category of data, for real-time recognition. This paper is organized as follows: Section II reviews the current benchmarks of both image captioning and human action recognition as well as a small review of the existing approaches for human action recognition and used datasets. Section III introduces the proposed theatre dataset: TS-RGBD, its structure, annotation process, and detailed information. Then, section IV is devoted to presenting the proposed solution for egocentric captioning, followed by the experimental results of human action recognition models on the proposed dataset, detailed in section V. ## II Related Works #### Ii-1 Datasets Well-known computer vision datasets, even those of considerable acclaim, notably lack theatre images, let alone comprehensive RGB-D data specifically capturing theatre scenes. The following table gives a summary of available RGB datasets: As for depth datasets: From both tables, we conclude that there are no available datasets with theatre plays in them. ### _Image Captioning_ Image captioning consists of describing the content of any given image using text. The automatically generated captions are expected to be grammatically correct, with logical order. Image captioning relies on deep learning models that are based either on retrieval (auto-encoders or features extraction...), template (sentence generation after object detection and recognition), or end-to-end learning [6]. Generated captions can be a single sentence or multiple sentences that constitute a paragraph. There are various architectures for single sentence captioning models, from scene description graphs [7, 8] to attention mechanisms [9, 10, 11, 12, 13], transformers, and even CNN-LSTM and GANs networks [15, 16, 17]. Solutions for paragraph captioning are based on end-to-end dense captioning models. 
They are based on single-sentence captioning to generate a set of sentences that will be combined to form a coherent paragraph [6]. These solutions are built using encoder-decoder architectures and recurrent networks [13, 14, 19, 20]. Kong et al proposed in [21] a solution for RGB-D image captioning, but it only focuses on enriching descriptions by positional relationships between objects, while training their model on a dataset that does not include theatre images. Whether single sentence or paragraph, image captioning models achieved remarkable results regarding different metrics (BLEU, ROUGE, METEOR, CIDrE..etc). However, they do not generate detailed captions when it comes to complex scenes. Single sentence models focus on moving objects ignoring background, and paragraph captioning models do not consider positional descriptions. Giving blind and visually impaired people sentences that lack descriptions of static objects and background, or paragraphs that lack positional descriptions of said objects makes it difficult or even impossible for them to re-imagine and rebuild the scene in their minds. We highlight the fact that most models are trained only on indoor or outdoor scenes, which leads to bad captioning when the images are extracted from theatre scenes. ### _Human Action Recognition_ Human action recognition is a fundamental task in computer vision with numerous applications, ranging from surveillance and human-computer interaction to robotics and virtual reality. Due to its wide range of applications, many methods were proposed that succeeded at achieving considerable performance. The earliest methods were based on RGB sequences [22, 23] but their performance is relatively low due to different factors such as illumination and clothing colors. After the release of the Microsoft Kinect sensor, many RGB-D human action benchmarks emerged [3, 4, 24] presenting richer information by providing the depth modality resulting in more accurate action features. They mostly consist of three modalities: RGB, depth, and skeleton sequences. As a result, other methods were developed based on the RGB-D datasets that surpass the earliest approaches. Some methods considered the use of depth maps only [25, 26] which achieved better performance compared to RGB methods but they remain very sensitive to view-point variations. Recently, the skeleton-based approach is widely investigated using skeleton sequences and it achieved considerable performance compared to the other approaches, especially after the rise of Graph Convolution Networks (GCN) [27, 28, 29]. GCNs are designed to extract features from graph-based data such as skeleton sequences that can be modeled as graphs by linking different body joints. #### Ii-B1 RGB-D Datasets Some of the well-known RGB-D human action benchmarks include: * UWA3D Activity Dataset [4] contains 30 activities performed at different speeds by 10 people of varying heights in congested settings. This dataset has high inter-class similarity and contains frequent self-occlusions. * MSR Daily Activity3D dataset [24] includes 16 daily activities in the living room. This dataset can be used to assess the modeling of human-object interactions as well as the robustness of proposed algorithms to pose changes. * MSR Action Pairs [30] provides 6 pairs of actions in which two actions in a pair involve the interactions with the same object in distinct ways. This dataset can be used to evaluate the algorithms' ability to model the temporal structure of actions. 
* NTU-RGBD [3] was first containing 56880 sequences of 60 action classes. Then, the extended version [31] was introduced with 57367 additional sequences and 60 other action classes making it the largest action benchmark so far. Most of the proposed benchmarks, including the cited ones, focus only on offline action recognition task that consists of classifying segmented action sequences. However, in the case of real-life applications, temporal localization of actions in untrimmed sequences is very important in order to obtain real-time recognition. In order to elaborate online systems, a few benchmarks were proposed providing a set of untrimmed videos where most of them were collected from Media, TV shows, YouTube...etc, resulting in one modality datasets containing only RGB sequences [32, 33]. Some others were collected using depth sensors, providing multi-modal datasets such as: * G3D [34] is intended for real-time action recognition in games with a total of 210 videos. As the first activity detection dataset, the majority of G3D sequences involve several actions in a controlled indoor environment with a fixed camera, which is a typical setup for gesture-based gaming., * OAD [35] dataset focuses on both online action detection and prediction. It contains 59 videos of daily actions, and it proposes a set of new protocols for 3D action detection. * PKU-MMD [36] represents a large-scale dataset containing 1076 sequences with almost 20,000 action instances and 5,4 million frames in total. Besides the three modalities (RGB, Depth, and skeleton sequences), it also provides the corresponding Infrared Radiation data. All of these datasets were captured in either an outdoor environment or an indoor environment (e.g. kitchen, room, office...etc), none have considered a theatre environment. The task of recognizing human actions in a theatre environment can be very challenging due to its unique characteristics such as dynamic lighting conditions, special stage designs, and complex human interactions. Therefore, we collect a dataset of RGB-D theatre scenes that contains both trimmed and untrimmed action sequences in order to i) advance the performance of the proposed techniques for both offline and online action recognition in a theatre context, and ii) stimulate the development of novel algorithms and techniques capable of effectively handling the intricacies of theatre environment. In conclusion, in this work, we make the following contributions: * To the best of our knowledge, we are the first to collect and provide RGB-D sequences captured in a theatrical setting. * Our dataset provides RGB-D untrimmed theatre scenes with temporal annotations, that contains continuous actors' actions in order to help the development of theatre online action recognition systems. * Image Captions that contain the direction of each region, with captioning model retrained on our theatre scenes dataset. ## III TS-RGBD Dataset Description In this section, we describe the data collection process, and dataset statistics in detail as well as annotation and cleaning methodologies. ### _Setup_ In order to collect samples in a theatre environment, we sought cooperation with national theaters. Thus, we contacted the UK National Theater, but because of the terms of the actors' contracts, it was not possible to use their visual content. Our local National Theater on the other hand was open for a partnership with the laboratory to achieve the task. 
However, the limited range of the Kinect sensor hindered us from accurately capturing the depth information of actors situated at a distance beyond four meters. Finally, we opted to film various scenarios at the auditorium of the university (figure 1) where the distances are convenient for the Kinect sensor. Two Kinect v1 sensors were used and positioned at the same height in two different viewpoints (front view and side view) as shown in Figure 2. We also used more than 76 objects in total to vary the setups and the used/background objects. The use of two sensors at different positions and varying background setups results in the diversity of the collected samples. Fig. 1: Scene capturing in the university auditorium. ### _Subjects_ We enlisted a team of 8 students to interpret on stage the prepared scenarios. The students signed a legal document granting us permission to use and distribute their visual content among the scientific society. ### _Data Modalities_ The Microsoft Kinect v1 provides three data modalities: RGB images, depth, and skeleton data. The resolution of each captured RGB and depth sequence is \(640\times 480\), and each frame is saved in JPEG format. The sequences were captured at a rate of 25 frames per second. The skeleton data, on the other hand, consists of 3-dimensional positions of 20 body joints for each tracked human body, knowing that Kinect v1 can only detect and track at most two human bodies. Figure.3 illustrates the configuration of the 20 captured joints. ### _Data Classes_ Our dataset consists of two categories of data: _segmented theatre actions_ and _untrimmed theatre scenes_. #### Iv-D1 Segmented theatre actions This category contains 36 action classes that are more accurate in theatre scenes such as walking, sitting down, drinking, jumping, eating, and throwing. Each viewpoint comprises 230 sequences, with an average of 170 frames for each sequence. Each action was carried out by 3 males and was repeated at least 3 times at varying speeds. #### Iv-D2 Untrimmed theatre scenes This category includes 38 written theatre scene scenarios. It contains, in total, 75 sequences for each viewpoint, with a mean of 1119 frames per sequence. The scenes are divided into three types: * **Solo scenes** involve a single person performing different actions (figure 4). Each solo scene was interpreted by at least two individuals to ensure data diversity. * **Two-Person Scenes** involve interactions between two individuals, such as "two persons walking towards each other", "shaking hands", "one person handing an object to another one", and "hugging each other" as shown in figure 5. * **Group Scenes** involve three or more people engaged in an activity. Notably, skeleton data of this last type of scene is considered as a two-person interaction scene Fig. 4: Example of interpreted scenarios (Solo). Fig. 5: Example of interpreted scenarios (Two People). Fig. 3: Joints configuration provided by Kinect v1. Fig. 2: Illustration of the Kinects setup. because, as mentioned before, Kinect v1 can only track skeleton joints of at most two persons. Figure 6 shows an example of such scenes. Summarily, with 8 male actors (females were not available) we could gather 610 sequences with an average of 373 frames per sequence (25 frames per second), and a total of 123 149 frames. The table III presents a summary: Figure 7 shows the number of sequences per type of scenario. 
There are more solo scenes since the Kinect v1 range is limited to \(4\) meters and resolution (640\(\times\)480) which makes it impossible to fit a group of people into such a small frame due to their height differences. ### _Data Cleaning_ For the image captioning task, we created an application to manually select frames with smooth depth maps, that mark a transition in the video to avoid redundancies. In addition to that, we had to go over all selected frames to keep only the ones with smooth corresponding depth maps. In the end, 1480 key-frames were kept. ### _Data Annotation_ Many data annotation applications available today offer powerful functionality for annotating data, but they often come with a trade-off: either our data become publicly accessible, or these applications come at a cost and are not available for free. Even so, we could find a multi-platform desktop application developed by [37] available to download and install from GitHub. The developer was inspired by the original _"LabelMe"_ application that was created by MIT for manually annotating data for object detection/recognition and instance or semantic segmentation, with the possibility of drawing a box or a polygonal envelope and adding labels. We could annotate 50 images so far, resulting in the following: Figure 8 shows the interface of the _"LabelMe"_ application as well as the process of polygonal annotations: ## IV Egocentric Captioning ### _Proposed Solution_ In this paper, we propose an approach to offer the blind and visually impaired detailed descriptions of the environment they are in while giving them the opportunity to attend theatre plays. Those descriptions will be generated by the DenseCap module that outputs captions for both mobile and static objects and regions in a given scene. These generated captions are not enough for the users to re-imagine the scene, they will need to know where each object or region is situated regarding their own position (Egocentric Description). To give the users this information, we will need depth data alongside RGB image of the scenes, specifically theatre scenes. An example of the expected description is shown in figure 9. To do that, we had to retrain the DenseCap model on our dataset. Proposed in [14], it is a model based on Fully Convolutional Localization Networks (FCLN) that outputs Fig. 8: LabelMe Interface. Fig. 6: Example of interpreted scenarios (Group). Fig. 7: Pie chart for the number of sequences by type of scenario. boxes surrounding detected regions, each box with its caption and confidence. We chose DenseCap because it does not focus only on salient objects and provides background descriptions. After detecting regions and generating the corresponding captions, we applied the algorithm proposed in our precedent work to get the directions [38]. Since Depth information is not present for the VG dataset, we used AdaBins model to estimate depth maps for VG images. ### _Experiments and Results_ We modified the DenseCap code provided in GitHub to be trained on custom data and we applied transfer learning by reusing the models' weights provided by the authors to train it on our data for 10 more epochs. Table V shows evaluation results after using DenseCap on our data before and after retraining. We then chose 20 random images from VG and our dataset to manually annotate the direction of each generated region. The table VI summarizes results. Qualitative results are shown in the figures 10. 
### _Limitations_ * Captions are redundant because DenseCap generates \(k\) captions and \(k\) was set to 10. Sometimes a scene contains fewer than 10 regions, and sometimes more, which a visually impaired person has no way to determine. * The egocentric description lacks precision for some regions. * The final description does not mention that the image is about a theatre play. ## V Human Action Recognition: Experimental Evaluations with TS-RGBD We conducted experiments on the proposed theatre dataset using its skeleton sequences (Figure 11 illustrates some of the skeleton sequences of TS-RGBD), following a skeleton-based approach with three Graph Neural Networks: ST-GCN [27], 2S-AGCN [39], and MS-G3D [40]. We chose to test skeleton-based GCNs because of their high reported performance. All of the selected models are spatio-temporal models that can extract both spatial and temporal features from skeletal sequences. They were mostly trained on NTU-RGBD [3] and Kinetics [41], and the relevant results are shown in Table VII, demonstrating their high recognition performance on these very challenging benchmarks. Thus, we used the available pre-trained weights of each model (obtained after training on the NTU-RGBD dataset) and tested it on our dataset. We obtained the results shown in Table VIII. Fig. 9: Example of Egocentric Scene Description. Fig. 10: Multiple Examples from TS-RGBD dataset. Fig. 11: Examples of Skeleton data sequences from TS-RGBD dataset. Fig. 12: Confusion Matrix of MS-G3D with TS-RGBD. \(y=37\) represents actions of NTU-RGBD that are not included in our dataset. ### _Discussion_ We observe that the performance of the models on our dataset is relatively low. MS-G3D outperformed the other models, so we extracted more comprehensive data from its experiment by analyzing its confusion matrix and identifying the best-classified as well as the most misclassified action classes (Table IX and Figure 12). Based on Table IX and Figure 12, we observe that the model is somewhat weak at recognizing actions that require details about specific body parts, such as the hand shape, or about the involved object in the case of human-object interaction. For instance, the action "write" requires additional information on the hand form and the used object, which is not included in the skeleton representation; as a result, it was frequently confused with the action "play with phone" due to the similarity of their skeleton motion trajectories. The same is true for the action 'Drop,' which the model failed to recognize due to missing information about the dropped object and similarities in skeleton motion with other actions, making it difficult to differentiate them based solely on skeleton joint positions. In conclusion, there are two major elements that have a large impact on the recognition performance of the skeleton-based approach. The first factor is the precision of the provided joint positions: recognition performance can be low if the skeleton joints are not well captured or are cluttered. The second factor is the limited number of characteristics that can be extracted from the skeleton modality alone: it is not sufficient for recognizing actions that require details about specific body parts, such as the hands, or about the involved object in the case of human-object interaction.
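For reference, the evaluation protocol used above can be sketched as follows. This is a minimal illustration assuming skeleton sequences stored as (frames, 20, 3) arrays and a PyTorch model wrapper; the exact input layout and preprocessing expected by ST-GCN, 2S-AGCN and MS-G3D are defined by each model's own released code.

```python
import numpy as np
import torch
from sklearn.metrics import confusion_matrix

def evaluate_pretrained(model, sequences, labels, num_classes, device="cpu"):
    """Test a pretrained skeleton-based model on TS-RGBD-style sequences.

    sequences : list of arrays of shape (T, 20, 3), Kinect v1 joints for one body
                (a second body, if needed by the model, can be zero-padded).
    labels    : ground-truth class indices mapped to the label set the model was
                trained with (e.g. NTU-RGBD classes).
    The (C, T, V, M) layout below is the one commonly used by ST-GCN-style models.
    """
    model.eval()
    model.to(device)
    preds = []
    with torch.no_grad():
        for seq in sequences:
            x = torch.from_numpy(seq).float().permute(2, 0, 1)  # (C, T, V)
            x = x.unsqueeze(0).unsqueeze(-1)                     # (1, C, T, V, M=1)
            scores = model(x.to(device))
            preds.append(int(scores.argmax(dim=-1)))
    preds = np.array(preds)
    acc = float((preds == np.array(labels)).mean())
    cm = confusion_matrix(labels, preds, labels=list(range(num_classes)))
    return acc, cm
```

A confusion matrix produced this way is what allows the per-class analysis of well-classified and misclassified actions reported above.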
Future work on our dataset may consider combining skeleton modality with other modalities as a solution to the lack of information problem, which may aid in differentiating between some confusing actions with similar skeleton motions. ## VI Conclusion In conclusion, this paper presents the TS-RGBD dataset, a novel RGB-D dataset containing theatre scenes with ground truth human actions and dense captions annotations. The dataset includes RGB, depth, and skeleton sequences captured using the Microsoft Kinect sensor. The purpose of this dataset is to help address the limitations of existing computer vision solutions for aiding visually impaired individuals, which are often limited to either indoor or outdoor scenes, excluding certain environments like theatres. By incorporating depth information along with RGB data, the TS-RGBD dataset aims to improve the performance of image captioning and human action recognition models. The inclusion of depth modality allows for a more comprehensive understanding of the scenes and actions, enhancing the capabilities of computer vision models to describe the appearances of regions of interest and recognize human actions accurately. The results of testing image captioning models and skeleton-based human action recognition models on the TS-RGBD dataset demonstrate its potential to expand the range of environment types where visually disabled individuals can navigate with the aid of computer vision technology. The combination of accurate human action recognition and textual description of theatre scenes can provide valuable assistance to visually impaired individuals in accessing entertainment places and enjoying theatrical experiences. In summary, the TS-RGBD dataset and the discussed methods in this paper contribute to the advancement of computer vision applications for assisting visually impaired individuals, particularly in theatre settings. The dataset's availability and the performance of the tested models open up new possibilities for developing more inclusive and versatile assistive technologies, making entertainment venues and various other environments more accessible to visually disabled individuals. However, further research and development are required to optimize and generalize these methods for real-world applications and potentially adapt them to other challenging scenarios beyond theatre scenes.
2305.18598
A Method for Studying Semantic Construal in Grammatical Constructions with Interpretable Contextual Embedding Spaces
We study semantic construal in grammatical constructions using large language models. First, we project contextual word embeddings into three interpretable semantic spaces, each defined by a different set of psycholinguistic feature norms. We validate these interpretable spaces and then use them to automatically derive semantic characterizations of lexical items in two grammatical constructions: nouns in subject or object position within the same sentence, and the AANN construction (e.g., `a beautiful three days'). We show that a word in subject position is interpreted as more agentive than the very same word in object position, and that the nouns in the AANN construction are interpreted as more measurement-like than when in the canonical alternation. Our method can probe the distributional meaning of syntactic constructions at a templatic level, abstracted away from specific lexemes.
Gabriella Chronis, Kyle Mahowald, Katrin Erk
2023-05-29T20:30:38Z
http://arxiv.org/abs/2305.18598v1
A Method for Studying Semantic Construal in Grammatical Constructions with Interpretable Contextual Embedding Spaces ###### Abstract We study semantic construal in grammatical constructions using large language models. First, we project contextual word embeddings into three interpretable semantic spaces, each defined by a different set of psycholinguistic feature norms. We validate these interpretable spaces and then use them to automatically derive semantic characterizations of lexical items in two grammatical constructions: nouns in subject or object position within the same sentence, and the AANN construction (e.g., 'a beautiful three days'). We show that a word in subject position is interpreted as more agentive than the very same word in object position, and that the nouns in the AANN construction are interpreted as more measurement-like than when in the canonical alternation. Our method can probe the distributional meaning of syntactic constructions at a template level, abstracted away from specific lexemes. ## 1 Introduction There are now several paradigms for the linguistically oriented exploration of large neural language models. Major paradigms include treating the model as a linguistic test subject by measuring model output on test sentences (e.g., Linzen et al., 2016; Wilcox et al., 2018; Futrell et al., 2019) and building (often lightweight) probing classifiers on top of embeddings, to test whether the embeddings are sensitive to certain properties like dependency structure (Tenney et al., 2019; Hewitt and Manning, 2019; Rogers et al., 2020; Belinkov, 2022; Manning et al., 2020). 1 Footnote 1: Code and data for all experiments in this paper are available at [https://github.com/gchronis/features_in_context](https://github.com/gchronis/features_in_context). Here, we consider another approach: projecting contextual, token-level embeddings into interpretable feature spaces defined by psycholinguistic feature norms (Binder et al., 2016; Buchanan et al., 2019; McRae et al., 2005). By learning a mapping to these spaces, as illustrated in Figure 1, we attain context-sensitive, interpretable, real-valued lexical-semantic features. After experimenting to determine best practices for contextual-feature projection, we use these features to explore whether contextual embeddings are sensitive to subtle semantic _construals_ in different grammatical constructions. Specifically, we observe how even seemingly similar constructions can impart a different semantics on their component parts or'slot fillers' (Trott et al., 2020; Goldberg, 2019). Consider the Article + Advective + Num (AANN) construction: e.g., "a beautiful three days in London," where the normally singular "a" precedes a plural noun and the adjective precedes the numeral (Solt, 2007; Dalrymple and King, 2019; Keenan, 2013). This construction often occurs with units or measure phrases (e.g., _days_, _feet_), but can also occur with non-measure nouns (e.g., "a lucky three students"). Figure 1: **(top) Models are trained by using multi-prototype embeddings in LLM space to predict gold feature vectors derived from psycholinguistic feature norms. (bottom) These same models are used to project contextual word embeddings to interpretable contextual feature space (model=Buchanan-PLSR-MIL).** While it is tempting to think of "a lucky three students" as semantically equivalent to "three lucky students," it has a different _construal_. 
Specifically, the AANN construction is acceptable only when the noun behaves as a single collective unit and is, in effect, more semantically similar to a unit of measurement than it would be in the unmarked construction. Evidence for a difference in meaning between the two variants is seen in their divergent distributions. For example, the AANN construction is unavailable in contexts like (1) and (2) (#-ed cases; adapted from Solt, 2007). 1. The essay consisted of (a few eloquent paragraphs / # an eloquent few paragraphs) separated by pages of gibberish. 2. He played (five boring songs / # a boring five songs), but in between he played one really good one. The AANN construction cannot occur in contexts where the referent of the noun is split into non-contiguous parts. This distributional pattern is taken as evidence that the AANN construction construes its argument as a single, measure-like unit. In this paper, we study distributional evidence on a larger scale, using a contextualized large language model as a 'compressed corpus' that captures observed statistical regularities over utterances of many speakers. We analyze this compressed corpus by mapping embeddings to interpretable feature spaces based on psycholinguistic feature norms. When we do this for the embedding of the noun _days_ in "I spent a beautiful three days in London," we find the most salient difference with the "I spent three beautiful _days_ in London" to be **a higher value for features like _measure_ and _unit_ when it is in an AANN construction.** We argue that this is because human speakers construe the AANN construction as being "measure-ish", and that this construal is reflected in their language use in a way that the contextual language model can pick up. We conduct two case studies, one about AANNs and the other about grammatical subjecthood. Specifically, **we show that a word in subject position is interpreted as more agentive than the very same word in object position** (consistent with findings from psycholinguistics, e.g., Kako, 2006), and that **a noun in the AANN construction is interpreted as more measurement-like than when in the canonical alternation.** Our results demonstrate that construals can be inferred from statistical usage patterns. While we here use constructions with known construals, our positive results indicate that we may be able to analyze constructions where the construal is less clear in the theoretical literature. While feature norms have been used to _interpret_ distributional semantic models (Baroni and Lenci, 2010; Herbelot and Vecchi, 2015; Fagarasan et al., 2015; Rosenfeld and Erk, 2023), we emphasize the _linguistic_ value of reliable, reusable, interpretable semantic spaces, which we use The ability of our method to characterize subtle semantic differences using language models offers a point of connection between linguistically oriented deep neural network analysis (Baroni, 2021) and topics in formal linguistics. In particular, this work empirically demonstrates the potential alignment between LMs and feature-based theories of lexical semantics (as illustrated by Petersen and Potts, 2023). Our main goal is to use interpretable feature spaces for understanding the semantic construal of words in context, specifically the AANN construction and the transitive construction. In Section 2, we lay out our method for constructing interpretable feature spaces for tokens in context. 
Then, in Section 3, we evaluate the success of our method on a sense differentiation task, a homonym feature prediction task, and a qualitative analysis. The idea is that, if the method for mapping from embedding space to context-sensitive feature space is successful, we will predict unique semantic features for different senses. Having established and validated our method, we then turn to our key constructions in Section 4. ## 2 Methods The task is to learn a mapping from contextual word embedding space to an interpretable space defined by feature norms (Section 2.1), where every dimension corresponds to a semantic feature. We construct the training data by pairing feature norms with embeddings derived from contextual word vectors. We train models at the type-level, e.g., to map the embedding vectors for the word _ring_ to the set of feature norms for _ring_, as shown in the top half of Figure 1. But ultimately, we use the model to predict semantic features for individual tokens. That is, we project the token vector of a single occurrence of the word "ring" into the feature space learned at the type-level, as shown in the bottom half of Figure 1. ### Psycholinguistic feature norms We construct three semantic spaces, trained from three datasets of psycholinguistic feature norms. **The McRae et al. (2005) feature norms** comprise 541 concrete English nouns and 2,526 features. Participants were asked to list definitional properties of cue words. The features are full predicates; for example, a _brush_ 'has_bristles' and is 'used_on_hair'. **The Buchanan et al. (2019) feature norms** consist of over 4000 English words and 3,981 distinct features, from all open-class parts of speech, and include abstract words. The authors collect new norms and collate them with McRae norms and the Vinson and Vigliocco (2008) verb feature norms. The features are tokenized and lemmatized. If a participant said 'found in kitchens,' this yields the features 'found' and 'kitchen'. **The Binder et al. (2016) data** consists of 535 English words rated for the relevance of 65 predefined features. The features were chosen to correspond to known neural activation regions in the human brain, and to domains of cognition and perception; they are more coarse grained than the other norms. The word _song_ might have a high rating for 'Audition' but a lower rating for 'Vision'. Feature norms as feature spacesFeature norms can be interpreted as vectors, with a real-valued dimension for each feature in the dataset. The differences between the feature norm data sets lead to differences in the feature inference problems. For McRae and Buchanan, values along each feature-dimension correspond to the number of participants who named that feature--zero in the majority of cases. These spaces are thus sparse and high-dimensional. For these two spaces, we treat the output as a ranked list of features, where the lower ranks are not relevant. The Binder space is dense and low-dimensional, and the goal is to predict the value of each feature. Here, a low value on a feature does not indicate lack of relevance. The norms differ in what they say about a word. The McRae and Buchanan norms are fine-grained, and represent salient or prototypical meanings. McRae norms are limited in their applicability because they only cover concrete nouns. Buchanan norms have a coverage that is wider but still somewhat ad-hoc. The Binder norms are high-level and were designed to be comprehensive. 
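As a toy illustration of how such norms can be read as vectors (the numbers below are made up; only the data layout is meant to reflect the description above), one can fix a feature inventory and build one vector per word:

```python
import numpy as np

# Invented toy counts standing in for Buchanan/McRae-style production norms.
toy_norms = {
    "brush": {"bristles": 18, "hair": 12, "clean": 7},
    "ring":  {"finger": 20, "metal": 9, "circle": 6},
}

# Fix a feature inventory and build sparse count vectors (most entries are 0).
features = sorted({f for props in toy_norms.values() for f in props})

def norm_vector(word):
    return np.array([toy_norms[word].get(f, 0) for f in features], dtype=float)

gold = {w: norm_vector(w) for w in toy_norms}
# Binder-style norms would instead yield dense, low-dimensional rating vectors,
# e.g. {"song": {"Audition": 5.8, "Vision": 1.9, ...}}, built in the same way.
```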
Past and concurrent work on feature prediction has explored the utility of McRae (Fagarasan et al., 2015; Herbelot and Vecchi, 2015; Rosenfeld and Erk, 2023) and Binder (Utsumi, 2020; Turton et al., 2021) norms for probing distributional models and language models. ### Embeddings The feature norms serve as our gold feature labels that we map our type-level embeddings onto. For these type-level embeddings, we use embeddings derived from BERT (Devlin et al., 2019), either in a _vanilla_ variety (one vector representation per word) or using _multi-prototype embeddings_, which have multiple embedding clusters per word (roughly corresponding to distinct usages). Specifically, we use the embeddings from Chronis and Erk (2020), which are generated by performing K-means clustering on BERT embeddings of tokens from the British National Corpus (BNC). This procedure collects up to 200 occurrences of each cue word in the British National Corpus, and generates token vectors for each occurrence with the HuggingFace bert-base-uncased model. For multi-prototype embeddings, these representations are clustered using K-means, using their best-performing setting of K=5 clusters per word at Layer 8. For vanilla embeddings, we generate BERT vectors through the same procedure, but simply average the token vectors together (K=1) to get one vector per word. See Appendix A for more detail on the multi-prototype vectors. Though the mapping is _trained_ from type-level (or sense-level) embeddings, contextual word vectors at the token level can be _projected_ into the interpretable space using the resulting model. ### Mapping from embeddings to feature norms Though feature prediction is well explored for static embeddings (Baroni and Lenci, 2010; Herbelot and Vecchi, 2015; Fagarasan et al., 2015; Rosenfeld and Erk, 2023; Utsumi, 2020) and gaining popularity as a method to probe contextual embeddings (Chersoni et al., 2021; Turton et al., 2021; Apidianaki and Gari Soler, 2021; Proietti et al., 2022), there is no consensus as to which models work best for which datasets. We experiment with several mapping methods used previously for feature prediction. The first is a feed forward neural network (FFNN, with a single hidden layer, tanh activation, and dropout applied after the final output layer; Turton et al., 2020). The dropout parameter, hidden layer size, learning rate, and number of epochs were grid-searched, as described in Appendix B (which also includes implementation details for the other models described). The second is partial least squares regression (PLSR, using the scikit-learn implementation; Herbelot and Vecchi, 2015; Fagarasan et al., 2015; Utsumi, 2020), whereby we run a partial least squares regression that predicts the feature space from the (potentially multi-prototype) embeddings. The third is label propagation (PROP; Rosenfeld and Erk, 2023), which percolates labels through a graph from labels to unlabeled nodes. In all cases, the goal is to predict a real-valued semantic feature vector. Thus, the task is formulated as a multi-output regression problem. In the vanilla setting, the above methods can straightforwardly map from a particular word embedding into feature space. But, in order to map from a _multi-prototype_ embedding into feature space, the problem is trickier--especially since the multi-prototype embeddings may capture meanings that are entirely absent in interpretable feature space. 
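For concreteness, a minimal sketch of the vanilla PLSR variant is shown below, using synthetic stand-ins for the type-level BERT vectors and gold feature-norm vectors described above; the array sizes and number of components are illustrative rather than tuned values.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Stand-ins for the real data: X holds one type-level BERT vector per word
# (n_words x 768), Y the corresponding gold feature-norm vectors.
rng = np.random.default_rng(0)
n_words, n_features = 500, 300
X = rng.standard_normal((n_words, 768))
Y = rng.standard_normal((n_words, n_features))

pls = PLSRegression(n_components=50)   # illustrative choice
pls.fit(X, Y)

# A single *token* vector from the same BERT layer can then be projected into
# the interpretable space with the mapping learned at the type level:
token_vec = rng.standard_normal((1, 768))
predicted_features = pls.predict(token_vec)[0]
top_features = np.argsort(predicted_features)[::-1][:10]   # top-ranked feature indices
```

This vanilla pipeline assumes a single embedding per word, so it does not by itself address the multi-prototype case, in which several prototype vectors must be reconciled with a single gold feature vector.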
Therefore, we test versions of each model using techniques inspired by multi-instance learning (MIL; Dietterich et al., 1997). The implementation of these MIL-inspired models is different for each of the three methods. For the FFNN, we use an attention mechanism that allows the model to learn a weighted average over instances, as in Ilse et al. (2018). For PLSR and Label Propagation, we simply construct a separate training example for each prototype drawn from the multi-prototype embedding That is, for a 5-prototype vector, we construct 5 training examples, where each of the 5 examples consists of a (unique) single prototype vector paired with the same type-level feature vector. See Appendix C for more detail on adaptations for the multi-prototype setting. ## 3 Evaluating Contextual Feature Norms for Interpreting Semantic Space We first evaluated the models on their ability to fit the _type-level_ feature norms they are trained on. We do not go into detail here, as it is context-dependent meanings we are most interested in. See Appendix D for full results. Overall, BERT-derived models were comparable to those we trained with static GloVe (Pennington et al., 2014) embeddings, and to the best static models in the literature. This initial evaluation established that models using BERT-derived embeddings are just as good as static embeddings for predicting semantic features. To evaluate our models on _in-context_ feature prediction, we conduct two quantitative experiments: one on a sense differentiation task, one on a homonym disambiguation task, as well as a qualitative analysis for a representative word (_fire_). The goal of this section is to explore whether the contextual feature norm method successfully captures contextual modulation of word meaning. For these experiments, we select the hyperparameters for each model that performed the best at type-level feature prediction under 10-fold cross-validation (Appendix D). ### Exp. 1: Sense Differentiation Token-level evaluation is tricky because there are no existing datasets for in-context feature norms. Noting this obstacle, others utilize indirect methods like word-sense disambiguation and qualitative analysis, (Turton et al., 2020), or forego in-context evaluation (Chersoni et al., 2021). Turton et al. (2020) evaluate the Binder feature prediction model using the Words in Context Dataset (Pilehvar and Camacho-Collados, 2019), which only labels token pairs as'same meaning' or 'different meaning'. We devise a sense differentiation experiment using the SemCor corpus, (Miller et al., 1994), which lets us do a more fine-grained analysis in terms of close and distant polysemy. The logic of this experiment is that, if two senses of a word are semantically _distant_, we expect the feature vectors in projected space to also be distant. We test the quality of our predicted feature vectors by testing how well the cosine distance between vectors for polysemous words corresponds to the distance between their senses in WordNet (Fellbaum, 2010). 
To build this dataset, we collect examples of noun lemmas in the SemCor corpus, which is an \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**McRae**} & \multicolumn{2}{c}{**Buchanan**} & \multicolumn{2}{c}{**Binder**} \\ \cline{2-7} & MIL & Vanilla & MIL & Vanilla & MIL & Vanilla \\ \cline{2-7} PLSR &.41 &.39 &.41 &.42 &.28 &.26 \\ FFNN &.36 &.36 &.42 &.40 &.30 &.30 \\ PROP & -.03 & -.03 &.10 &.10 & -.03 & -.03 \\ \hline \hline \end{tabular} \end{table} Table 1: Results of Sense Differentiation experiment. Pearson correlation of cosine similarities of predicted features vectors with Wu-Palmer similarity between senses. Data: pairs of tokens of the same noun lemma in SemCor. # Lemmas = 8021, # Token-pairs = 1,045,966, p \(<\) 0.0001 in all cases. notated with WordNet senses for words in context. In SemCor, "Water is a human right," is labeled right.n.02, _an abstract idea due to a person_, while "He walked with a heavy list to the right," is labeled right.n.01, _the side to the south when facing east_. To counteract data imbalance, we collect only up to 30 instances of a particular word from any one WordNet sense. We determine degrees of similarity between WordNet senses using Wu-Palmer similarity Wu and Palmer (1994), which measures the degrees of separation between them. Then, each token in the dataset is projected into interpretable semantic space. We compute the cosine similarity between pairs of tokens and compare them to the Wu-Palmer similarity of their word senses. The key hypothesis is that we should see highly similar predicted features for tokens of the same sense, somewhat divergent features when the senses are different but related, and very different features for distant senses. Table 1 shows the results. Regardless of whether we use Multi-Instance Learning, both PLSR and FFNN models show a significant correlation between the sense similarity and similarity of predicted features. We interpret this to mean that PLSR and FFNN reflect _degree_ differences of similarity between word senses. Comparison to frozen BERT embeddingsThe results in Table 1 suggest that, at least to some extent, the projected semantic features capture information about different word senses. But to what extent? We take it as a given that the hidden layer embeddings of bert-base, because they are sensitive to context, reflect differences in word senses. Therefore, we run an additional baseline where we run the same correlational analysis using the frozen weights of bert-base, instead of the projected semantic feature. That is, we compute a correlation between the cosine distance between bert-base vectors from Layer 8 and the WordNet-derived Wu-Palmer similarity metric. The correlation between cosine distance and WordNet distance for plain BERT vectors is as high as our best models (Pearson's \(r=0.41\), \(p<.0001\)), which suggests that, even though the feature projection method is trained on word types, our training procedure does not lead to catastrophic information loss about word _tokens_. More precisely, for McRae and Buchanan datasets, PLSR learns a projection that is _as contextual_ as the original BERT space. Our best Binder space (FFNN) is less contextual than the original BERT space, though it still differentiates senses. This evaluation also demonstrates that Label Propagation, which is good at fitting norms at the type level (as shown in Appendix D and Rosenfeld and Erk, 2023) is not an effective method for generating contextual features. 
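The correlation underlying Table 1 can be sketched as follows; this is a minimal illustration using NLTK's WordNet interface and SciPy, with random toy vectors standing in for the projected feature vectors of two tokens of the same lemma.

```python
import numpy as np
from nltk.corpus import wordnet as wn      # assumes the nltk wordnet data is installed
from scipy.stats import pearsonr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sense_feature_correlation(pairs):
    """pairs: iterable of ((vec1, sense1), (vec2, sense2)) for token pairs of one lemma,
    where vec* are projected feature vectors and sense* are WordNet synset names."""
    feat_sims, sense_sims = [], []
    for (v1, s1), (v2, s2) in pairs:
        feat_sims.append(cosine(v1, v2))
        sense_sims.append(wn.synset(s1).wup_similarity(wn.synset(s2)))
    return pearsonr(feat_sims, sense_sims)

# Toy call with random stand-ins for the projected vectors:
rng = np.random.default_rng(0)
toy = [((rng.random(50), "right.n.01"), (rng.random(50), "right.n.02")),
       ((rng.random(50), "bank.n.01"), (rng.random(50), "bank.n.02")),
       ((rng.random(50), "dog.n.01"), (rng.random(50), "dog.n.03"))]
r, p = sense_feature_correlation(toy)
```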
Performance varies across wordsPerformance on this task is not necessarily uniform across all words. For instance, as discussed in Appendix E, performance on the sense differentiation task (using our interpretable feature projections _or_ the original BERT embeddings) is better for concrete words, relative to abstract words. We leave it to future work to further explore this, as well as other sources of heterogeneity in performance. ### Exp. 2: Homonym Disambiguation The previous experiment considered many lemmas, with widely distinct as well as closely related senses. However, it is an indirect evaluation: it does not let us directly compare our projected context-dependent features to _known_ context-dependent feature norms. But the McRae dataset offers a natural experiment, since it contains 20 homonymous words in disambiguated format. That is, separate norms exist in the McRae dataset (and per force the Buchanan dataset, which is a superset) for 'hose (water)' and 'hose (leggings)'. We treat these disambiguated norms as gold contextual features for tokens of these senses. That is, we treat the McRae features for 'hose (water)' as a gold label for the token "hose" in a sentence like "I watered my flowers with the hose." As SemCor only contains a few sense-annotated tokens for each of the relevant homonyms, we use CoCA Davies (2018), a large corpus that of largely American English news text, to collect a dataset of tokens for each homonym. See Appendix G for details. Models were re-trained on all words in the feature norm \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**McRae**} & \multicolumn{2}{c}{**Buchanan**} \\ \cline{2-5} & MIL & Vanilla & MIL & Vanilla \\ \cline{2-5} PLSR &.50 &.50 &.42 &.42 \\ FFNN &.50 &.50 &.33 &.25 \\ PROP &.30 &.30 &.58 &.25 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of Homonym Disambiguation Experiment. Performance on gold contextual feature prediction for homonyms (McRae and Buchanan only). Results reported are MAP@k. (n = 1093) dataset _except_ the held-out homonyms.2 Footnote 2: Because Binder norms do not contain any homonymous pairs, this evaluation is unavailable for Binder space. On this task, performance is measured as mean average precision (MAP@k) over the gold homonym features from McRae and Buchanan, where k is the number of gold features specific to each concept Derby et al. (2019); Rosenfeld and Erk (2023). Table 2 shows results. For both sets of norms, we see strong performance. The best-performing models achieve a precision of 0.50 (on McRae) and 0.42 (on Buchanan). Though we cannot directly compare performance, feature prediction is generally understood to be a very hard task, with SOTA performance for static McRae feature prediction at 0.36 Rosenfeld and Erk (2023). This is because models will often predict plausible features that aren't in the gold feature set, like has_teeth for _cat_Fagarasan et al. (2015). ### Qualitative Analysis In order to get a better sense of our in-context predictions, we now explore predicted features for clusters of token embeddings, extracted using the clustering procedure described in Erk and Chronis (2023) (which use the same kind of multi-prototype embeddings as described in Section 2.2), for the representative word _fire_. Focusing on a single, highly polysemous word allows us to build fine-grained intuition as to the kind of information each of our feature norms can offer. 
In addition, characterizing token embedding clusters may be useful in itself: Giulianelli et al. (2020) use the term _usage types_ (UTs) for clusters of token embeddings, and note that they reflect word senses and other regularities such as grammatical constructions. UTs have proven useful for the study of semantic change. However, while UTs are created automatically by clustering, researchers usually manually design labels for UTs to make their interpretation clear. An automatic labeling of token clusters with projected semantic features, as we demonstrate here, could hence be useful for studying UTs. Our goal in this section is to take 5 UTs for the word _fire_ from Erk and Chronis (2023) and project them into our interpretable semantic spaces (Binder, McRae, and Buchanan). These UTs are: _destructive_ fire (e.g., "There was a fire at Mr's store and they called it arson."), _cooking/cozy_ fire (e.g., "They all went over to the fire for plates of meat and bread."), _artillery_ fire (e.g., "a brief burst of machine-gun fire"), and _noun compounds_ (e.g., "fire brigade," "fire hydrant"). These UTs are represented as the centroids of K-means clusters of token vectors for the word _fire_. Then, we project these usage type vectors into interpretable semantic spaces, using PLSR+MIL for McRae and Buchanan, and FFNN+MIL for Binder. Predictably, the models predict similar features values in many cases, as the senses of _fire_ have a lot in common. For example, in Buchanan space, all UTs except _artillery_ have a high rating for 'hot' (Appendix F). To avoid this issue and get at how the usage types _differ_, for each UT we average over the features predicted for the other four embedding centroids and select the features with the greatest positive difference to the target UT. Table 3 shows the features that most distinguish each UT. The most distinctive features in Binder space are reasonable--destructive fire is indeed unpleasant, \begin{table} \begin{tabular}{l l} \hline \hline Buchanan & \\ \hline **1. figurative** & animal, color, light, fire, burn \\ **2. destructive** & destroy, build, cause, break, person \\ **3. artillery** & act, weapon, kill, loud, human \\ **4. cooking** & hot, food, wood, burn, heat \\ **5. N-N compounds** & person, place, work, office, law \\ \hline \hline McRae & \\ \hline **1. figurative** & has\_legs, is\_hard, different\_sizes, \\ & has\_4\_legs, is\_large \\ **2. destructive** & different\_colors, a\_ mammal, \\ & made\_of\_paper, made\_of\_cement, \\ & inbeh\_\_\_explodes \\ **3. artillery** & a\_weapon, used\_for\_killing, \\ & made\_of\_metal, is\_loud, \\ & used\_for\_war \\ **4. cooking** & found\_in\_kitchens, \\ & used\_for\_cooking, requires\_gas, \\ & an\_appliance, is\_hot \\ **5. N-N compounds** & has\_doors, \\ & used\_for\_transportation, a\_bird, \\ & has\_feathers, beh\_\_eats \\ \hline \hline Binder & \\ \hline **1. figurative** & Color, Needs, Harm, Cognition, \\ **2. destructive** & Temperature \\ **3. artillery** & Unpleasant, Fearful, Sad, Consequential, Harm \\ **4. cooking** & UpperLimb, Communication, Social, Audition, Head \\ **5. N-N compounds** & Pleasant, Needs, Happy, Near, Temperature \\ & Biomotion, Face, Speech, Body, Unpleasant \\ \hline \hline \end{tabular} \end{table} Table 3: The most distinctive features for each prototype of _fire_ multi-prototype embeddings, in each of the three interpretable semantic spaces. fearful, full of consequences, sad, and capable of causing harm. 
The McRae features are reasonable for the more concrete senses, which have synonyms that appear in the dataset (like 'gun' for 3 and 'oven' for 4). However, in contrast to Binder and Buchanan, the distinctive McRae features predicted for the more abstract UTs (1, 2, and 5) have no ready interpretation. ### Discussion Mapping methodLooking at both experiments, PLSR obtained the overall best results for predicting both Buchanan and McRae features. For Binder features, where the model must predict the best fit along _every_ dimension, FFNN does better. Based on these experiments, we recommend using PLSR to predict definitional features like McRae and Buchanan, and FFNN to predict comprehensive features like Binder. MilAside from a few instances, the multi-instance framework does not drastically improve model performance. Though the positive effect is marginal, we use MIL in the case studies below. Choice of feature normsThe experiments above also give us insight into which feature space to use when. Experiment 1 shows that different senses are very distinct in McRae (\(r=0.41\)) and Buchanan (\(r=0.41\)) space, but not as distinct in Binder space (\(r=0.28\)). The qualitative look at feature predictions indicates that Buchanan and Binder models produce reasonable features for the word _fire_ in different contexts, including when used in a more abstract sense. Though the best McRae model scores well overall on quantitative tasks, the qualitative analysis suggests that it does not extend well to abstract senses. This conclusion aligns with expectations, given that Buchanan and Binder norms contain features for verbs and abstract nouns, whereas the McRae norms only contains concrete nouns. Binder feature vectors are comprehensive and good for examining abstract meanings, but Buchanan feature vectors can pinpoint more precise meanings. The case studies that follow use these feature spaces according to their strengths. To get an idea of the overarching differences between two constructions, we use Binder (4.2). To generate specific descriptions of lexical meaning in context, we use Buchanan (4.1). ## 4 Evaluating Constructions in Context Having validated that our method works for extracting meaningful, context-dependent semantic information from large language models, we turn to two target constructions: the AANN construction (described in the Introduction) and the basic transitive construction. Crucially, in both studies, the word types are largely controlled between conditions (e.g., comparing "The family spent a beautiful three days in London." vs. "The family spent three beautiful days in London."), and so we compare context-dependent features derived from minimally different sentences. This design lets us study the effect of context in a highly controlled way, without being influenced just by the identity of the words in the sentences. ### Construction 1: 'A Beautiful Three Days' Method Using a 1,000 sentence sample from Mahowald (2023)'s dataset of sentences templatically constructed with varying nouns, adjectives, numerals, and templates from a variety of subtypes, we compared AANN head nouns to their equivalent "default" forms (e.g., "The family spent a lovely three _days_ in London." vs. "The family spent three lovely _days_ in London"). Crucially, these form a near minimal pair. We extracted the embeddings for the head noun token in each sentence. 
We projected the token embeddings into Buchanan space (using PLSR-MIL) and examined the delta between each feature, for each token, in the AANN construction vs. in the default construction. Results. The top 5 features associated with the AANN construction (relative to default) were: **measure**, **one**, green, **unit**, grow. The features most associated with default (relative to AANN) were: animal, leg, child, human, please. The bolded AANN features suggest that nouns in the AANN alternation are more measure-like, and treated as more singular. These are consistent with observations in the literature. Animacy-oriented words (e.g., animal, child, human) seem to be more associated with the default construction. Though this is not proposed outright in the literature, it has been observed that AANNs are more likely to be ungrammatical when the head noun is agentive (Solt, 2007). Focusing on a representative sentence pair that shows a particularly sharp difference, the word _meals_ in "They consumed an ugly five meals." is rated much higher on the measure (.18) and unit (.13) features than the word _meals_ in "They consumed five ugly meals." (.05 and .04, respectively). We interpret these results as evidence that projection into the Buchanan space detects a meaningful and attested semantic difference between the AANN construction and the default construction. Specifically, we can meaningfully detect that the construal associated with the AANN construction is more associated with measurement/units, compared to a non-AANN sentence matched on lexical content, even when the noun is not itself inherently a unit or measurement noun. ### Construction 2: Grammatical Roles Understanding grammatical roles like subject and object is crucial for natural language understanding. "The dog chased the cat." means something different from "The cat chased the dog." English relies largely on SVO word order for discriminating subjects vs. objects. Arguments that are animate, sentient, cause an event or a change of state in another participant, or move relative to another participant tend to be realized as subjects. Arguments that undergo a change of state, or are affected by another participant, tend to be realized as objects (Levin et al., 2005; Dowty, 1991). Most of the time, just knowing the two nouns in a transitive sentence is enough to know which is the subject and which is the object: If the nouns are "dog" and "bone", you can guess that "dog" is the subject and "bone" the object (Mahowald et al., 2022). There is evidence that contextual language models like BERT represent subjecthood (Linzen et al., 2016; Papadimitriou et al., 2021; Hewitt and Manning, 2019). But do these models actually represent abstract grammatical subject, or do they rely on lexical information? One way to tease this apart is to study sentences where grammatical context and lexical heuristics come apart. Papadimitriou et al. (2022) showed that BERT can reliably distinguish between grammatical subject and object, even for sentences with non-prototypical arguments like "The onion chopped the chef", but only in the higher levels of the model after more information has been shared. At lower layers, the model seems to rely on lexical information (e.g., it would classify "chef" as the subject and "onion" as the object). While prior work has explored the subject/object classification question by training bespoke probes, here we use projections into Binder space. We focus on the set of English sentences studied in Papadimitriou et al.
(2022), which are extracted from the Universal Dependencies Treebank (Nivre et al., 2016) and appear in two forms: the original form and a form in which the subject and object are swapped. For instance: compare the Natural, "Finally a chambermaid stuck her head around the corner" vs. the Swapped, "Finally a head stuck her chambermaid around the corner." The Treebank from which the sentences are sampled contains data from a number of different English corpora. We project the subject and object in each of the 486 Natural sentences into Binder space, using the FFNN-MIL method (which is best for token-level Binder prediction), and then do the same for each of their Swapped counterparts. We first ask whether naturally occurring subjects tend to be more animate than objects. But we then ask whether, merely by virtue of being a subject, the lexical item takes on a more animate construal. Such a result would be consistent with psycholinguistic findings in humans: Kako (2006) shows that, even with nonce sentences like "The rom checked the zarg," the subject word "rom" is rated as more animate. Figure 2: We plot the average predicted value of each feature for naturally occurring subjects and objects (points), and show how that probability shifts when we instead use swapped sentences (arrows). We show only those features which differ significantly for either overall subjectness vs. objectness (marked with a *), or for contextual swapping (caret). For example, Natural Objects have low values for the Biomotion feature; when swapped to subject position, their Biomotion value increases. Norms are centered but not normalized. **Words that tend to appear in subject position are associated with higher animacy ratings.** Given that there are known to be systematic differences between subjects and objects, will the Binder features for subjects and objects systematically differ in the Natural sentences? As can be seen in Figure 2, the answer is clearly yes. Animacy-associated features like Biomotion, Body, and Human are higher for naturally occurring subjects than for objects. We ran a linear regression predicting the Binder value from the subject/object status of the word, the Binder feature, and their interaction. The interaction term is the one we care about: how does the predicted value for that feature change when we are dealing with a subject or object? After Bonferroni correction for multiple comparisons, we find several features significantly correlated with subjecthood and a few with objecthood, starred in Figure 2. The _same token_ is construed as more animate when it appears in subject position. The preceding analysis could have been done using type-level Binder features: the upshot is that word _types_ that appear in subject position get animacy-associated features. The highest rated words in this data set, for the Biomotion category, are: _animals_, _reptiles_, _cat_, _dog_, and they all occur as subjects in the corpus. But merely knowing that naturally occurring subjects and objects differ in Binder features does not tell us the whole story. Using the contextual feature projections, we can explore whether two tokens of the same type are construed as differing in animacy, based on whether they appear as a subject. We can do this in a controlled way by comparing the same word in the natural sentences and the swapped ones. For instance, in the sentence above, "chambermaid" appears as a subject but is an object in the swapped version. How does its Binder rating change?
To assess that, we compare natural subjects vs. those same words moved to object position of the same verb in the same sentence. And we compare natural objects to those same words swapped to be subjects. Figure 2 shows that subject-oriented features like Biomotion, Body, and Human lose their large values and become more neutral. The caredet features in the figure show significant effects of being swapped, after Bonferroni correction. To assess whether our contextual feature predictions are sufficient for predicting whether a noun is a subject, no matter if natural or swapped, we run a forward-stepwise logistic regression on a portion of the data (300 sentences) to predict whether a particular token is a subject or an object based on its Binder ratings. The forward-stepwise part picks the set of Binder features that give the best prediction. We then test its k-fold cross-validation accuracy on the held-out test set. For Natural sentences, this method achieves 80% accuracy, compared to 73% accuracy for Swapped sentences. Thus, while natural sentences are easier, even the swapped sentences can be categorized better than chance using the feature norms--despite the fact that the words in question naturally occurred in the opposite roles. We then performed the same procedure, but instead predicted whether a particular token was from a Natural or Swapped sentence. We did this separately for subjects and objects. Performance was above chance, at 70% and 71% respectively. So a model can, with better than chance accuracy, use projected Binder features to identify which nouns are subjects in swapped sentences. But we can also predict which nouns are from swapped sentences. This result suggests that the predicted Binder features reflect contextual information, but also retain type-level information. The results of our study align with Lebani and Lenci (2021) who investigate semantic proto-roles using distributional models and with Proietti et al. (2022), who investigate semantic proto-roles by projecting BERT into an interpretable space (similar to our method). Both show that transitive verbs have more proto-agent properties than their intransitive counterparts. The present analysis confirms and expands on their finding that BERT captures semantic role information and that projecting into interpretable space is a fruitful way of gaining insight into grammatical and thematic roles. ## 5 Conclusion In this paper, we honed techniques for predicting semantic features for token embeddings. These projections are versatile. Once created, one and the same model can be used to study a wide array of phenomena. We explored their utility for studying semantic construal in syntactic constructions. We emphasize the potential of this method to answer linguistic questions about meaning differences in constructions that are less well-understood and well-theorized than the ones studied here. As such, we hope it will be possible to use this method to generate linguistic insight. ### Limitations One limitation of our study is that interpretable feature spaces are at times only semi-interpretable. We infer from patterns of model behavior that Buchanan features such as 'human', 'child', and 'animal' can be signals for animacy more broadly construed. The need to conjecture about what a feature means points to a weakness in our approach. 
Some interpretation will always be necessary, and with a more heavy-handed probing method like ours, it can't be certain what effects are coming from the model and which are coming from the probe. One way to get around this need for subjective interpretation is to train a separate classifier for animacy more broadly understood, and then use the feature prediction model to examine what features are most relevant to the classifier (Chersoni et al., 2021). However, this method is not foolproof either. The classification distinction is wholly determined by the labeled data used to train the animacy probe, and the judgments are subjective. Even for a seemingly straightforward feature, the correct label is not always clear. Is a clock that _sings_ the hour animate? What about a _stony face_? Subjective interpretation is an important and unavoidable component of both linguistic and neural language model analysis. The goal of data-driven research is to extend the sphere of concern beyond self-reflexive subjective judgments of the researcher to the shared subjectivities of a language community. Information about animacy reflected in an annotated dataset still reflects subjectivities, but shared ones. It is important to always be clear about where interpretation is happening, whose interpretations are taken into account, and how they affect what conclusions may be drawn. On that note, there are a few places where design decisions affect our analysis of lexical variation. Linguistic data enters the modeling pipeline in at least four places: BooksCorpus and Wikipedia data used to pre-train BERT, the BNC corpus which we use to derive multi-prototype embeddings, the feature norm datasets which tend to capture the subjectivities of American college students, and the texts we analyze in our case studies (both natural language text and constructed examples). These resources all cover English, but necessarily reflect different varieties of English, given that they were collected in different places at different times. For example, usage types in the BNC often differ from those derived from Wikipedia data. Not only do the corpora we use represent potentially disjoint varieties (English spoken by college students in Vermont, English in newswire and fiction genres, English in reference texts). They also all represent the semantics of the unmarked, _normative varieties_ of English. Normative English dominates all data collection contexts upon which our study rests. Consequently, to the extent that our model is a proxy for English semantic judgments, it is a proxy for dominant semantic associations among the composers of these texts and participants in the feature norm studies. Though it is interesting and useful to study the English language as a whole, care must be taken to ensure that the sample is representative of all speakers; and ideally, our approach supports linguistic approaches which aim to describe and explain the semantics of smaller language communities. This would require language models trained on corpora at the level of communities of practice, as well as feature norms specific to these communities. We are hopeful that the future of statistical methods in lexical semantic analysis moves in this direction. ### Ethics Statement Our models are developed and published in order to encourage academic research in descriptive linguistics. 
In the future, we plan to use our method to study the inherent non-neutrality of language models by examining the influence of training corpus composition on the semantic representation of social meanings, as represented by cultural keywords. Because they are built on top of an unpredictable language model, the feature prediction methods, as well as the models we publish, are recommended for descriptive research only. Researchers should take into account the potential for language models, like language, to reflect harmful ideologies such as sexism, racism, homophobia, and other forms of bigotry. ### Acknowledgements This work was made possible through funding from an NSF GRFP Grant to GC and NSF Grant 2139005 to KM. Thank you to the UT Austin Linguistics Computational Linguistics group for helpful comments and the SynSem group for their enthusiasm in considering how language modeling might inform their questions in semantics. For helpful discussions, thanks to Adele Goldberg and the Princeton language group, Richard Futrell, and Isabel Papadimitriou.
2310.04307
Mean left-right eigenvector self-overlap in the real Ginibre ensemble
We study analytically the Chalker-Mehlig mean diagonal overlap $\mathcal{O}(z)$ between left and right eigenvectors associated with a complex eigenvalue $z$ of $N\times N$ matrices in the real Ginibre ensemble (GinOE). We first derive a general finite $N$ expression for the mean overlap and then investigate several scaling regimes in the limit $N\rightarrow \infty$. While in the generic spectral bulk and edge of the GinOE the limiting expressions for $\mathcal{O}(z)$ are found to coincide with the known results for the complex Ginibre ensemble (GinUE), in the region of eigenvalue depletion close to the real axis the asymptotic for the GinOE is considerably different. We also study numerically the distribution of diagonal overlaps and conjecture that it is the same in the bulk and at the edge of both the GinOE and GinUE, but essentially different in the depletion region of the GinOE.
Tim R. Würfel, Mark J. Crumpton, Yan V. Fyodorov
2023-10-06T15:11:46Z
http://arxiv.org/abs/2310.04307v1
# Mean left-right eigenvector self-overlap in the real Ginibre ensemble ###### Abstract We study analytically the Chalker-Mehlig mean diagonal overlap \(\mathcal{O}(z)\) between left and right eigenvectors associated with a complex eigenvalue \(z\) of \(N\times N\) matrices in the real Ginibre ensemble (GinOE). We first derive a general finite \(N\) expression for the mean overlap and then investigate several scaling regimes in the limit \(N\to\infty\). While in the generic spectral bulk and edge of the GinOE the limiting expressions for \(\mathcal{O}(z)\) are found to coincide with the known results for the complex Ginibre ensemble (GinUE), in the region of eigenvalue depletion close to the real axis the asymptotic for the GinOE is considerably different. We also study numerically the distribution of diagonal overlaps and conjecture that it is the same in the bulk and at the edge of both the GinOE and GinUE, but essentially different in the depletion region of the GinOE. **Keywords:** non-Hermitian random matrices, real Ginibre ensemble, bi-orthogonal eigenvectors, eigenvector overlaps, eigenvalue depletion, bulk and edge statistics ## 1 Introduction Random matrices of finite size \(N\times N\) are often categorized in terms of their global symmetries, influencing statistical properties of their eigenvalues and eigenvectors. The two major categories to distinguish are self-adjoint (Hermitian) matrices and their non-self-adjoint (non-Hermitian) counterparts. The former are defined as satisfying the condition \(H=H^{\dagger}\equiv\bar{H}^{T}\), with \({}^{\dagger}\) standing for Hermitian conjugation, \({}^{T}\) denoting the transpose of the matrix and the bar standing for the complex conjugation of its entries. All Hermitian matrices are by definition _normal_, with vanishing commutator \([H,H^{\dagger}]=0\), which ensures that \(H\) is diagonalizable by a unitary transformation. The Hermiticity of \(H\) further ensures that all ensuing eigenvalues \(\lambda_{n}\) are necessarily real. On the other hand, _non-Hermitian_ matrices, which are denoted by \(X\), may or may not be normal, in the latter case \(XX^{\dagger}\neq X^{\dagger}X\). Such matrices generically have the majority of the eigenvalues \(z_{n}\) (\(n=1,\ldots,N\)) located in the full complex plane \(\mathbb{C}\), although some of them may still remain on the real line \(\mathbb{R}\). The non-normality of such matrices is well known to lead to important numerical issues when the matrix size \(N\) grows. For example, keeping a fixed precision of calculations might not be sufficient, as some eigenvalues can be 'ill-conditioned', i.e. they are much more sensitive to a perturbation of the matrix entries, in contrast to eigenvalues of normal matrices, which are relatively robust.
For a random matrix \(X\), we can safely assume that all its \(N\) eigenvalues have multiplicity one, ensuring that \(X\) is still diagonalizable by a transformation to its eigenbasis: \(X=S\,\mathrm{diag}(z_{1},\ldots,z_{N})\,S^{-1}\), where the similarity transformation \(S\) is, however, in general no longer unitary. The columns of \(S\) are the right eigenvectors \(\mathbf{x}_{R_{n}}\), satisfying \(X\mathbf{x}_{R_{n}}=z_{n}\mathbf{x}_{R_{n}}\), while the rows of \(S^{-1}\) provide the Hermitian conjugates \(\mathbf{x}_{L_{n}}^{\dagger}\) of the left eigenvectors, satisfying \(\mathbf{x}_{L_{n}}^{\dagger}X=z_{n}\mathbf{x}_{L_{n}}^{\dagger}\). The two sets are bi-orthogonal, \(\mathbf{x}_{L_{n}}^{\dagger}\mathbf{x}_{R_{m}}=\delta_{nm}\), but neither set needs to be orthogonal on its own. A convenient way of characterizing this non-orthogonality is via the overlap matrix with entries \(\mathcal{O}_{nm}=\left(\mathbf{x}_{L_{n}}^{\dagger}\mathbf{x}_{L_{m}}\right)\left(\mathbf{x}_{R_{m}}^{\dagger}\mathbf{x}_{R_{n}}\right)\). In the physics literature the diagonal entries, \(\mathcal{O}_{nn}\), are generally referred to as diagonal overlaps, or _self-overlaps_. In the numerical analysis literature their square roots are sometimes referred to as the _eigenvalue condition numbers_, as they characterize the sensitivity of the eigenvalue \(z_{n}\) to a perturbation of the entries of \(X\), see e.g. [1, 2]. To this end, consider a matrix \(X^{\prime}=X+\varepsilon P\), the second term reflecting an error made with respect to the entries of \(X\), which are stored with finite machine precision. \(P\) here can be chosen to have its 2-norm fixed to \(||P||_{2}=1\), such that the strength of such a perturbation is measured via the real parameter \(\varepsilon\). Then, the change in eigenvalue \(z_{n}\) of \(X\) incurred by a small \(\varepsilon\) can be characterized via \[\left.\frac{dz_{n}(\varepsilon)}{d\varepsilon}\right|_{\varepsilon=0}:=\dot{z}_{n}(0)=\mathbf{x}_{L_{n}}^{\dagger}P\mathbf{x}_{R_{n}}\, \tag{1.1}\] which can be derived using the bi-orthonormality of the corresponding left- and right-eigenvectors. By using the Cauchy-Schwarz inequality, the magnitude of the shift can be seen to satisfy the following bound: \[|\dot{z}_{n}(0)|=|\mathbf{x}_{L_{n}}^{\dagger}P\mathbf{x}_{R_{n}}|\leq|\mathbf{x}_{L_{n}}|\ ||P||_{2}\ |\mathbf{x}_{R_{n}}|=\sqrt{(\mathbf{x}_{L_{n}}^{\dagger}\mathbf{x}_{L_{n}})(\mathbf{x}_{R_{n}}^{\dagger}\mathbf{x}_{R_{n}})}=\mathcal{O}_{nn}^{1/2}\, \tag{1.2}\] where \(\mathcal{O}_{nn}\) are the diagonal entries of the overlap matrix. We indeed see that the magnitude of the resulting eigenvalue shift is essentially controlled by \(\mathcal{O}_{nn}^{1/2}\), which assumes the minimal value only if \(X\) is normal and all \(\mathcal{O}_{nn}=1\). In the realm of non-normal matrices one can find numerous examples where \(\mathcal{O}_{nn}\gg 1\), indicating that the associated eigenvalues are extremely sensitive to perturbations, see e.g. [1, 2]. In fact, random matrices can be used to provide a regularization of eigenvalue condition numbers of highly non-normal, non-random matrices, see for example [3, 4, 5] for more information. For \(X\) taken to be a random matrix from a specified probability measure, the statistics of the entries of the overlap matrix \(\mathcal{O}_{mn}\) become an important object of study. The simplest nontrivial choice is to assume that all entries \(X_{ij}\) are mean-zero, independent, identically distributed (i.i.d.) Gaussian numbers, which can be real, complex, or quaternion. This defines the three classical Ginibre ensembles [6]. Understanding of the statistics of the overlap matrix in this setting has been influenced heavily by the seminal papers of Chalker and Mehlig [7, 8] who treated the _complex Ginibre_ ensemble.
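The sensitivity interpretation of Eqs. (1.1)-(1.2) is easy to probe numerically. Below is a minimal sketch (an illustration, not taken from the paper): it draws a real random matrix, bi-orthonormalizes the left and right eigenvectors returned by `scipy.linalg.eig`, forms the diagonal overlaps \(\mathcal{O}_{nn}\), and checks that the first-order eigenvalue shift produced by a spectral-norm-one perturbation never exceeds \(\mathcal{O}_{nn}^{1/2}\). The matrix size, random seed and tolerance are arbitrary choices of this sketch.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
N = 50
X = rng.standard_normal((N, N))     # a non-normal real random matrix

# Right eigenvectors are the columns of VR; left eigenvectors the columns of VL,
# in the sense VL[:, n]^dag @ X = z_n * VL[:, n]^dag.
z, VL, VR = eig(X, left=True, right=True)

# Rescale so that the bi-orthonormality x_L_n^dag x_R_n = 1 holds exactly.
c = np.einsum('in,in->n', VL.conj(), VR)
VL = VL / c.conj()

# Diagonal overlaps O_nn = (x_L_n^dag x_L_n)(x_R_n^dag x_R_n); equal to 1 only for normal X.
O_diag = (np.einsum('in,in->n', VL.conj(), VL) *
          np.einsum('in,in->n', VR.conj(), VR)).real

# First-order eigenvalue shift under a perturbation P with ||P||_2 = 1, cf. Eqs. (1.1)-(1.2).
P = rng.standard_normal((N, N))
P /= np.linalg.norm(P, 2)
shift = np.abs(np.einsum('in,ij,jn->n', VL.conj(), P, VR))

assert np.all(shift <= np.sqrt(O_diag) + 1e-10)   # the bound of Eq. (1.2)
print("largest self-overlap:", O_diag.max())
print("largest first-order shift:", shift.max())
```

For a non-normal random matrix of this kind one typically finds \(\max_{n}\mathcal{O}_{nn}\) far above one, in line with the discussion above.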
We denote the latter ensemble as GinUE in the rest of the paper, to distinguish it from its real (GinOE) and quaternion (GinSE) counterparts, the nomenclature relating to the classical Dyson Hermitian ensembles - GUE, GOE and GSE. In particular, Chalker and Mehlig addressed the statistics of the overlap matrix \(\mathcal{O}_{mn}\) via considering the following single-point and two-point correlation functions \[\mathcal{O}(z)\equiv\left\langle\frac{1}{N}\sum_{n=1}^{N}\mathcal{O}_{nn}\ \delta(z-z_{n})\right\rangle\qquad\text{and}\qquad\mathcal{O}(z_{1},z_{2}) \equiv\left\langle\frac{1}{N^{2}}\sum_{\begin{subarray}{c}n,m=1\\ n\neq m\end{subarray}}^{N}\mathcal{O}_{nm}\ \delta(z_{1}-z_{n})\ \delta(z_{2}-z_{m})\right\rangle\,, \tag{1.3}\] where the angular brackets stand for the expectation value with respect to the probability measure associated with the ensemble in question, GinUE for the particular case studied by Chalker and Mehlig. Here \(\delta(z-z_{n})\) is the Dirac delta mass at the eigenvalue \(z_{n}\), so that the empirical density of eigenvalues in the complex plane \(z\) reads \(\rho_{N}^{\text{(emp)}}(z)=\frac{1}{N}\sum_{n=1}^{N}\delta(z-z_{n})\). It is evident that \(\mathcal{O}(z)\) describes the conditional expectations of \(\mathcal{O}_{nn}\) and we can define the mean conditional self-overlap as \[\mathbb{E}\left(z\right)\equiv\mathbb{E}\left(\mathcal{O}_{nn}\ |\ z=z_{n}\right)=\frac{ \mathcal{O}(z)}{\rho\left(z\right)}\, \tag{1.4}\] where \(\rho\left(z\right)\) is the mean spectral density, defined via \[\rho\left(z\right)\equiv\left\langle\ \rho_{N}^{\text{(emp)}}(z)\ \right\rangle=\left\langle\frac{1}{N}\sum_{n=1}^{N}\delta(z-z_{n})\right\rangle\,. \tag{1.5}\] Recall that, as \(N\rightarrow\infty\), the asymptotic mean eigenvalue density is nonvanishing and uniform only inside the unit circle in the complex plane: \(\rho(z)\approx 1/\pi\) for \(|z|^{2}<1\) and zero otherwise [9]. Chalker and Mehlig were able to extract the leading asymptotic behaviour of \(\mathcal{O}(z)\) and \(\mathcal{O}(z_{1},z_{2})\) as \(N\) tends to \(\infty\). Choosing the variance of the entries of \(X\) to be \(1/N\), they found that \(\mathcal{O}(z)\approx N\left(1-|z|^{2}\right)/\pi\) inside the unit disk \(|z|^{2}<1\) and zero otherwise. Consequently, one should expect \(\mathcal{O}_{nn}\sim N\) for eigenvalues inside the disk, which is thus parametrically larger than the value for normal matrices, where \(\mathcal{O}_{nn}=1\). Note that only the rescaled correlator \(\widetilde{\mathcal{O}}(z)=\mathcal{O}(z)/N\) is well-defined and finite in the limit \(N\to\infty\). The Chalker-Mehlig (CM) correlators and other aspects pertaining to eigenvector non-orthogonality in random matrix ensembles have attracted growing interest in the theoretical physics community over the past decades, see e.g. [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. The main interest being in extending CM correlators from the GinUE to more general classes of random matrix models. Apart from the important issues of eigenvalue stability, the eigenvector non-orthogonality is known to play an important role in describing transient behaviour in complex systems with classical dynamics and related questions [24, 25, 26, 27, 28, 29, 30], as well as in producing intriguing features in their quantum counterparts [31, 32, 33]. Another strong motivation comes from the field of quantum chaotic wave scattering, where non-Hermitian random matrix ensembles, different from Ginibre, play a prominent role, see e.g. 
[34, 35, 36, 37] for some background information. The overlap matrix \(\mathcal{O}_{mn}\) appears naturally in many scattering observables, for example in the derivation of decay laws [38], in the 'Petermann factors' describing excess noise in open laser resonators [11, 39], in issues of increased sensitivity of resonance widths to small perturbations [15, 16] and in the shape of reflected power profiles [23]. In that context both \(\mathcal{O}(z)\) and \(\mathcal{O}(z_{1},z_{2})\) have been studied theoretically [11, 12, 13, 15, 23] and experimentally [16, 40, 41]. In a not unrelated development, mathematically rigorous studies of CM correlators became a field of considerable activity more recently, see e.g. [42, 43, 44]. The usage of free probability techniques [10] to compute the self-overlap, Eq. (3), allowed extensions to invariant ensembles with non-Gaussian weights [20, 21]. Since \(\mathcal{O}(z)\) is known in the GinUE at finite matrix size [7, 8] it also became possible to compute it for products of small Ginibre matrices [19]. It was shown that one- and two-point functions of eigenvector overlaps, conditioned on an arbitrary number of eigenvalues in the GinUE, lead to determinantal structures [45]. Deep insights into Dysonian dynamics related to eigenvalues have been made, from different angles, in [17, 46, 47, 48]. Lower [4, 49] and upper [3, 5, 50] bounds on diagonal eigenvector overlaps have been provided for a fixed non-Hermitian matrix perturbed by random matrices with i.i.d. entries, even beyond Gaussian case. Some other properties of eigenvectors of non-normal random matrices have also been studied rigorously, see [51, 52, 53, 54]. Finally, let us mention that in addition to analysing the correlators in Eq. (3), Chalker and Mehlig put forward a conjecture for the far tail behaviour of the distribution of the random variable \(\mathcal{O}_{nn}\) for the GinUE as \(N\to\infty\). Based on both numerical evidence and simple eigenvalue repulsion arguments, illustrated by a solvable \(2\times 2\) GinUE matrix, they predicted that for large overlaps the probability density of \(\mathcal{O}_{nn}\) must exhibit a power law tail proportional to \(1/\mathcal{O}_{nn}^{3}\). Such tail would make all the positive integer moments beyond \(\mathcal{O}(z)\) divergent. This conjecture has been rigorously proved by two different approaches in Bourgade and Dubach [47] and Fyodorov [55], in fact recovering the full form of the probability density beyond the tail region. While the work of Bourgade and Dubach proceeded on studying the off-diagonal correlator \(\mathcal{O}(z_{1},z_{2})\) for the GinUE, the paper by Fyodorov revealed that for real eigenvalues of the GinOE the diagonal overlaps \(\mathcal{O}_{nn}\) are distributed with an even heavier probability density tail, decaying as \(1/\mathcal{O}_{nn}^{2}\). This implies that for real Ginibre matrices the mean self-overlap \(\mathcal{O}(z)\) is divergent on the real line. Understanding both the distribution of \(\mathcal{O}_{nn}\) and the CM mean \(\mathcal{O}(z)\) for complex eigenvalues of the GinOE remained however an outstanding problem. In the present paper we make the first step towards addressing the above issues, and present the results for the mean diagonal CM correlator \(\mathcal{O}(z)\) for the GinOE in the complex plane, first at finite \(N\) and then in various scaling regimes as \(N\gg 1\). 
We will also systematically compare our findings with both results for complex eigenvalues in the GinUE and for real eigenvalues in the GinOE. The rest of the paper is organized as follows. We present our main findings in Section 2; in particular, the mean self-overlap at finite matrix size \(N\) is given in Theorem 2.5. Asymptotic results for the mean self-overlap, as \(N\to\infty\), are given in the bulk, Corollary 2.6, at the spectral edge, Corollary 2.7, and in an eigenvalue depleted region of the droplet in Corollary 2.8. We then compare with similar results for the GinUE, and use numerical simulations, both to corroborate analytical results and to provide insights into the yet analytically unavailable distributions of \(\mathcal{O}_{nn}\) for the GinOE. Finally, the corresponding proofs of our findings are presented in Section 3 for finite \(N\) (Theorem 2.5) and in Section 4 for the asymptotic results (Corollaries 2.6, 2.7 and 2.8). ## 2 Statement and Discussion of Main Results In order to state our main results, we start by introducing the necessary notation and statements about random real Ginibre matrices in Section 2.1. We then present our main results for the mean self-overlap in Section 2.2 for both finite matrix size \(N\) and in several large \(N\) regimes of the complex plane. We follow up with a discussion about connections to previously known results, comparisons to numerical simulations and open problems. ### Remarks on real and complex Ginibre ensembles **Definition 2.1**.: Let \(G=\left(G_{ij}\right)_{i,j=1}^{N}\) be an \(N\times N\) matrix containing i.i.d. real Gaussian entries with mean zero and unit variance, such that the off-diagonal entries \(G_{ij}\) and \(G_{ji}\) are uncorrelated. The joint probability density function (JPDF) of matrices \(G\) is defined with respect to the flat Lebesgue measure, \(dG=\prod_{i,j=1}^{N}dG_{ij}\), via \[P_{\text{GinOE}}(G)\ dG=\frac{1}{C_{N}}\ \exp\left[-\frac{1}{2}\operatorname{Tr}\left(GG^{T}\right)\right]\ dG,\quad C_{N}=(2\pi)^{N^{2}/2}. \tag{2.1}\] The corresponding ensemble is called the _real Ginibre_ ensemble, and is denoted as GinOE. Its counterpart for matrices with complex i.i.d. entries is the _complex Ginibre_ ensemble (GinUE). For recent reviews of available results on the GinUE and GinOE see [56, 57]. The remarks below provide a discussion of the main facts on both ensembles which will be of direct relevance for the present paper. **Remark 2.2**.: Generically, the spectrum of GinOE matrices consists of real eigenvalues \(\lambda\in\mathbb{R}\) and complex eigenvalues \(z=x+iy,\,y\neq 0\). Complex eigenvalues of real matrices, \(G\), always appear in conjugated pairs, i.e. \(\bar{z}=x-iy\) is an eigenvalue of \(G\) iff \(z\) is an eigenvalue of \(G\). Correspondingly, the mean spectral density, Eq. (1.5), necessarily has the form \(\rho\left(z\right)=\rho^{(c)}(z)+\delta(y)\rho^{(r)}(x)\), where \(\rho^{(c/r)}(z)\) describes the mean density of complex/real eigenvalues, respectively. On the other hand, with probability one, the eigenvalues of GinUE matrices are complex without conjugate counterparts, due to the entries of GinUE matrices being complex. In Figure 1, we show examples of the spectra of real Ginibre matrices for three different values of \(N\). As is well known, the majority of complex eigenvalues lie inside a circle of radius \(\sqrt{N}\), with a subleading fraction (of the order of \(1/\sqrt{N}\) for \(N\gg 1\)) of purely real eigenvalues. As \(N\) tends to \(\infty\) the support of the spectrum approaches a uniform disc.
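As a quick numerical illustration of Definition 2.1 and Remark 2.2 (and of the spectra shown in Figure 1), the following sketch, which is not part of the paper, samples GinOE matrices, separates numerically real eigenvalues from complex-conjugate pairs, and confirms that only an \(O(\sqrt{N})\) number of eigenvalues are real while the complex ones essentially fill the disc of radius \(\sqrt{N}\). The classification tolerance and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples, tol = 200, 50, 1e-9

n_real, complex_radii = [], []
for _ in range(samples):
    G = rng.standard_normal((N, N))          # GinOE matrix with the JPDF of Eq. (2.1)
    z = np.linalg.eigvals(G)
    is_real = np.abs(z.imag) < tol           # numerically real eigenvalues
    n_real.append(is_real.sum())
    complex_radii.append(np.abs(z[~is_real]))

complex_radii = np.concatenate(complex_radii)
print("mean number of real eigenvalues:", np.mean(n_real))   # only of order sqrt(N)
print("sqrt(N) for comparison:        ", np.sqrt(N))
print("fraction of complex eigenvalues inside |z| < sqrt(N):",
      np.mean(complex_radii < np.sqrt(N)))
```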
**Remark 2.3**.: Results for the mean densities (both real and complex) are well known in the GinOE and GinUE at finite \(N\)[6, 58, 59, 60]. The mean density of complex eigenvalues is of particular interest in this work and is given for finite \(N\) in the GinOE by \[\rho_{N}^{\left(\text{GinOE,c}\right)}(z)=\sqrt{\frac{2}{\pi}}\ |y|\ e^{2y^{2}}\ \text{erfc}\left(\sqrt{2}\ |y|\right)\ \frac{\Gamma\left(N-1,|z|^{2}\right)}{\Gamma \left(N-1\right)}\, \tag{2.2}\] Figure 1: Spectra of real Ginibre matrices for three different values of \(N\), \(N=20\) (left), \(N=100\) (centre) and \(N=1000\) (right). Each plot contains 1,000 samples of eigenvalues and has a solid black line depicting a circle of radius \(\sqrt{N}\). Complex eigenvalues are shown in red and real eigenvalues are shown in blue. see e.g. [57, Eq. (2.46)] and [59, Theorem 6.2]. In the above, \(\Gamma(N)\) denotes the standard Euler \(\Gamma\)-function and \(\Gamma(N,|z|^{2})\) denotes the incomplete (upper) \(\Gamma\)-function defined as \[\Gamma\left(N,a\right)=\Gamma\left(N\right)\ e^{-a}\ \sum_{k=0}^{N-1}\frac{a^{k}}{k!}= \int_{a}^{\infty}du\ u^{N-1}\ e^{-u}. \tag{2.3}\] We have also made use of the complementary error function \(\mathrm{erfc}(x)=1-\mathrm{erf}(x)\), where \(\mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}dt\). In the GinUE, several equivalent representations for the mean density can be found in e.g. [6], [44, Eq. (20)], [56, Proposition 2.2] and [60, Eq. (18.2.11)], which we prefer to write as \[\rho_{N}^{(\mathrm{GinUE},c)}(z)=\frac{1}{\pi}\ \frac{\Gamma\left(N,|z|^{2} \right)}{\Gamma\left(N\right)}=\frac{1}{\pi}e^{-|z|^{2}}\sum_{n=0}^{N-1}\frac {|z|^{2n}}{n!}. \tag{2.4}\] The mean density of _real_ eigenvalues in the GinOE is also known at finite \(N\), see e.g. [58, 61], but is not needed for our purposes. **Remark 2.4**.: It is worth discussing the large \(N\) asymptotic behaviour of the mean eigenvalue density in more detail. A comparison between the densities of complex eigenvalues in the GinOE and GinUE at large \(N\) is shown in Figure 2 using heatmaps. The heatmap associated with the GinUE features two distinct scaling regimes: a spectral bulk inside the disk and edge along the disk circumference. Similar regimes are also seen in the plot for the GinOE. Firstly, in both ensembles it is easy to show that the limiting mean density of complex eigenvalues, after rescaling \(z=\sqrt{N}w\), converges to the uniform density: \[\rho_{\mathrm{bulk}}^{(\mathrm{GinUE},c)}(w)=\rho_{\mathrm{bulk}}^{(\mathrm{ GinOE},c)}(w)=\frac{1}{\pi}\, \tag{2.5}\] in the generic bulk of the unit disk, where \(|w|<1\). Similarly, at the edge of the disk, given by \(|z|=\sqrt{N}+\eta\), the density for both ensembles can be shown to be given by \[\rho_{\mathrm{edge}}^{(\mathrm{GinUE},c)}(\eta)=\rho_{\mathrm{edge}}^{( \mathrm{GinOE},c)}(\eta)=\frac{1}{2\pi}\mathrm{erfc}\left(\sqrt{2}\eta\right). \tag{2.6}\] Figure 2: Large \(N\) density of complex eigenvalues in the GinOE and GinUE with relevant scaling regions labelled. Left: GinOE. Right: GinUE. In both diagrams the solid black circle within the heatmap has radius \(\sqrt{N}\) and the density is depicted on a rainbow scale. This diagram explains schematically what is meant by the bulk, edge and depletion regimes of large real and complex Ginibre matrices. Each heatmap is plotted using the analytic expression for the density of complex eigenvalues in the associated ensembles when \(N=250\). 
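For readers who wish to reproduce such heatmaps, a minimal sketch (not from the paper) is given below: it evaluates the finite-\(N\) GinOE density of Eq. (2.2), written with the scaled complementary error function \(\mathrm{erfcx}(t)=e^{t^{2}}\mathrm{erfc}(t)\) and the regularised upper incomplete Gamma function for numerical stability, and compares it with box-counted eigenvalue densities from Monte Carlo samples. Note that Eq. (2.2), as written, tends to \(1/\pi\) in the bulk, i.e. it counts complex eigenvalues per unit area of the plane, so the comparison uses counts per unit area; the function name, test points, box size and sample count below are arbitrary choices of this sketch.

```python
import numpy as np
from scipy.special import erfcx, gammaincc

def rho_c_ginoe(z, N):
    """Finite-N mean density of complex GinOE eigenvalues, Eq. (2.2).
    erfcx(t) = exp(t^2) erfc(t); gammaincc is the regularised upper incomplete Gamma."""
    y = np.abs(np.imag(z))
    return np.sqrt(2 / np.pi) * y * erfcx(np.sqrt(2) * y) * gammaincc(N - 1, np.abs(z) ** 2)

rng = np.random.default_rng(2)
N, samples, h = 100, 2000, 0.5               # h = half-width of the counting box
test_points = np.array([3 + 4j, 0 + 2j, 6 + 1j])

counts = np.zeros(len(test_points))
for _ in range(samples):
    z = np.linalg.eigvals(rng.standard_normal((N, N)))
    for k, z0 in enumerate(test_points):
        counts[k] += np.sum((np.abs(z.real - z0.real) < h) & (np.abs(z.imag - z0.imag) < h))

# Box-averaged empirical density of complex eigenvalues per unit area, to be compared
# with Eq. (2.2) evaluated at the box centre (agreement is only approximate for finite h).
print("Monte Carlo:", counts / ((2 * h) ** 2 * samples))
print("Eq. (2.2):  ", rho_c_ginoe(test_points, N))
```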
In addition, the heatmap in the GinOE case shows yet another scaling regime close to the real axis, essentially for heights of the fraction \(\sim 1/\sqrt{N}\) of the disk radius. In that region the density of eigenvalues is reduced when compared to the spectral bulk value. The origin of such a depletion is clearly seen analytically from the presence of the factor \(\left|y\right|\) in Eq. (2.2). Henceforth the scaling associated with this region of the complex plane will be referred to as the _depletion_ regime. Inside the bulk depletion regime, i.e. \(z=\sqrt{N}x+iy\), with \(\left|x\right|<1\) and \(y\sim O(1)\), the mean GinOE density in the limit \(N\to\infty\) converges to \[\rho_{\text{depletion}}^{(\text{GinOE},c)}(y)=\sqrt{\frac{2}{\pi}}\ |y|\ e^{2y^{2}}\text{ erfc}\left(\sqrt{2}\ |y|\right)\, \tag{2.7}\] see e.g. [57, Eq. (2.42)], cf. Eq. (2.2). After the digressions on the mean densities, let us come back to our main objects of interest, the left and right eigenvectors. Given the complex eigenvalue \(z_{n}\) we denote the associated left-eigenvector by \(\mathbf{x}_{L_{n}}^{\dagger}\) and the right-eigenvector by \(\mathbf{x}_{R_{n}}\). The overlap matrix \(\mathcal{O}_{nm}=\left(\mathbf{x}_{L_{n}}^{\dagger}\mathbf{x}_{L_{m}}\right) \left(\mathbf{x}_{R_{m}}^{\dagger}\mathbf{x}_{R_{n}}\right)\) of left- and right-eigenvectors is used to define the mean self-overlap \(\mathcal{O}(z)\) as in Eq. (1.3) and its conditional companion \(\mathbb{E}(z)\) as in Eq. (1.4). Note that \(\mathbb{E}\left(z\right)\) is particularly useful when comparing theoretical predictions with numerical data. Results for the mean self-overlap have been established in the GinUE for the entire complex plane, see [7, 8, 44, 47, 55] for several equivalent forms. For our purposes we present it as \[\mathcal{O}_{N}^{(\text{GinUE},c)}(z)=\frac{1}{\pi}\left[\frac{\Gamma(N,|z|^{2 })}{(N-1)!}(N-|z|^{2})+\frac{|z|^{2N}}{(N-1)!}e^{-|z|^{2}}\right]. \tag{2.8}\] For convenience of the reader we give a derivation of Eq. (2.8) in the Appendix A, using the results in [55] as a starting point. One can further easily find the bulk asymptotics of this expression, which gives back the original Chalker-Mehlig result, as well as the corresponding edge asymptotics for the scaling \(\left|z\right|=\left(\sqrt{N}+\eta\right)\) given by \[\lim_{N\to\infty}\frac{1}{\sqrt{N}}\mathcal{O}_{\text{edge}}^{(\text{GinUE},c) }(z)=\frac{1}{\pi}\left(\frac{1}{\sqrt{2\pi}}\ e^{-2\eta^{2}}-\eta\ \text{erfc}\left(\sqrt{2}\ \eta\right)\right). \tag{2.9}\] In the GinOE, so far, results were limited to real eigenvalues only, i.e. when \(z=x\), see [55, 62]. In the next subsection, we provide results for the mean self-overlap \(\mathcal{O}(z)\) of eigenvectors associated with complex eigenvalues in the GinOE at finite \(N\) and in several large \(N\) scaling regions, dictated by the corresponding scalings of the mean eigenvalue density discussed above. ### Statement of Main Results for GinOE and comparison to GinUE Relegating the proofs and technical details to Section 3, we present our main findings below. The following theorem gives the mean self-overlap \(\mathcal{O}(z)\) of eigenvectors associated with a complex eigenvalue \(z\) for the real Ginibre ensemble at finite matrix size \(N\). **Theorem 2.5**.: _Let \(G\) be an \(N\times N\) random matrix drawn from the GinOE, distributed according to Eq. (2.1) in Definition 2.1. The mean self-overlap, Eq. 
(1.3), associated with a complex eigenvalue \(z\) at finite matrix size \(N\) is given by_ \[\mathcal{O}_{N}^{(\text{GinOE},c)}(z) =\left\langle\frac{1}{N}\sum_{n=1}^{N}\mathcal{O}_{nn}\ \delta(z-z_{n})\right\rangle_{\text{GinOE},N}=\frac{1}{\pi}\ \left(1+\sqrt{\frac{\pi}{2}}\ \exp\left[2y^{2}\right]\ \frac{1}{2|y|}\ \text{ erfc}\left(\sqrt{2}\ |y|\right)\right) \tag{2.10}\] \[\times\left[\ \frac{\Gamma\left(N-1,|z|^{2}\right)}{(N-2)!}\bigg{(}N-1- |z|^{2}\bigg{)}+\frac{|z|^{2(N-1)}}{(N-2)!}\ e^{-|z|^{2}}\right]\,.\] Comparing Eq. (2.10) with its GinUE counterpart Eq. (2.8), we see that they almost share the term inside the square brackets (with the change \(N\to N-1\)). The result in the GinUE is fully rotationally symmetric as there is no \(y-\)dependent factor present. In the GinOE however, the self-overlap is dependent on the distance to the real axis and so is not rotationally symmetric. In Figure 3, we compare the mean conditional self-overlap, Eq. (1.4), for eigenvectors associated with purely imaginary eigenvalues, \(z=iy\), in the GinOE and GinUE at finite \(N\), as predicted by our theory and as seen in numerical simulations. As is evident, in the region close to the real line, the mean conditional self-overlap is much larger in the GinOE than the GinUE, implying that the GinOE has a higher degree of non-normality. Next, we provide results for the large \(N\) asymptotic behaviour of the mean self-overlap in the bulk, at the spectral edge and in the depletion regime, as defined in the previous section. **Corollary 2.6**.: _For a complex eigenvalue \(z=\sqrt{N}w\), where \(w=x+iy\) with \(|w|<1\) while keeping \(|y|\gg N^{-1/2}\) as \(N\to\infty\), the limiting scaled mean self-overlap in the bulk is given by_ \[\mathcal{O}_{\rm bulk}^{\rm(GinOE,c)}(w)\equiv\lim_{N\to\infty}\frac{1}{N}~{ }\mathcal{O}_{N}^{\rm(GinOE,c)}\left(\sqrt{N}w\right)=\frac{1}{\pi}\left(1-|w |^{2}\right)~{}\Theta\left[1-|w|^{2}\right]~{}, \tag{2.11}\] _where \(\Theta\left[a\right]\) is the Heaviside function, which is equal to one, when \(a>0\) and zero otherwise._ This is nothing else but the original Chalker-Mehlig asymptotic for the mean self-overlap in the spectral bulk of the GinUE. This relation has been tested in both the GinOE and GinUE using numerical simulation, with the results shown in Figure 4. In the case of the GinOE, one can see that deep inside the bulk region the formula Eq. (2.11) is accurate for \(N\geq O(10^{2})\), indicating that the theory is accurate as \(N\to\infty\). Figure 4: Numerical simulation of the mean conditional self-overlap, \(\mathbb{E}_{\rm bulk}(w)\), within the bulk of the GinOE and GinUE for large \(N\) compared to the Chalker and Mehlig result (black lines). Left: \(\mathbb{E}_{\rm bulk}(w)\) in the bulk of GinOE as a function of \(|w|\) for a range of different values of \(N\) (coloured markers). We obtain results by considering the self-overlaps for eigenvalues within \(\pm 1/\sqrt{N}\) of each value of \(|z|\). Right: \(\mathbb{E}_{\rm bulk}(w)\) of eigenvectors of size \(N=250\) associated with purely imaginary eigenvalues in the GinOE (red triangles) and GinUE (blue circles). Numerical results for each value of \(N\) are taken from a sample with \(O(10^{6})\) GinOE and GinUE matrices. Figure 3: Mean conditional self-overlap \(\mathbb{E}_{N}(z)\) associated with purely imaginary eigenvalues in the GinOE and GinUE, for three different values of \(N\). Theoretical predictions are red dashed lines (GinOE) and blue solid lines (GinUE). 
Triangular (circular) markers represent numerical results for the GinOE (GinUE). The mean self-overlap is measured by averaging self-overlaps of the closest \(O(10^{3})\) eigenvalues with respect to the chosen value of \(y\). Numerically averaged values are taken from a data set containing \(O(10^{8})\) samples of the self-overlap. One observes a crucial difference between the two ensembles when considering eigenvalues close to the real axis. This can be seen when measuring the mean self-overlap of eigenvectors associated with purely imaginary eigenvalues in the bulk for all \(\mathrm{Im}(z)\in[-\sqrt{N},\sqrt{N}]\). In the case of the GinUE, the Chalker-Mehlig result holds for all \(\mathrm{Im}(z)\). In the GinOE however, the depletion of eigenvalues close to the real line leads to considerable deviations from the Chalker-Mehlig formula. This effect will be accounted for by treating this region more carefully in Corollary 2.8. However, before doing so, we consider the edge of the droplet and find the following Corollary, in agreement with its GinUE counterpart, given in Eq. (2.9). **Corollary 2.7**.: _For a complex eigenvalue \(z=\left(\sqrt{N}+\eta\right)e^{i\theta}\) satisfying \(|\sin\theta|\sim\mathcal{O}(1)\) and \(\eta>0\) the limiting scaled mean self-overlap at the edge reads_ \[\mathcal{O}^{\mathrm{(GinOE,c)}}_{\mathrm{edge}}(\eta)\equiv\lim_{N\to\infty }\frac{1}{\sqrt{N}}\ \mathcal{O}^{\mathrm{(GinOE,c)}}_{N}\left(\left(\sqrt{N}+\eta\right)e^{i \theta}\right)=\frac{1}{\pi}\left(\frac{1}{\sqrt{2\pi}}\ e^{-2\eta^{2}}-\eta \ \mathrm{erfc}\left(\sqrt{2}\ \eta\right)\right). \tag{2.12}\] We thus see that, away from the real axis, both the bulk asymptotic and the edge asymptotic of the mean diagonal overlap is shared between the GinOE and GinUE. Note that when approaching the boundary of the droplet, the mean self-overlap turns out to be parametrically weaker, which is reflected in rescaling with \(1/\sqrt{N}\) instead of \(1/N\) to obtain a non-trivial limit as \(N\to\infty\). The result for the mean self-overlap at the spectral edge of the GinOE has been considered in Figure 5. One can see from this figure that as \(N\) increases, the agreement between the theoretical and numerically observed mean self-overlaps becomes better. Finally, we present the asymptotic results for the depletion region of the droplet in the GinOE. **Corollary 2.8**.: _For a complex eigenvalue \(z=x+i\xi\), such that \(\xi\sim\mathcal{O}(1)\) the limiting scaled mean self-overlap in the depleted region close to the origin, i.e. \(x\sim\mathcal{O}(1)\), reads_ \[\mathcal{O}^{\mathrm{(GinOE,c)}}_{\mathrm{depletion,origin}}(\xi)\equiv\lim_ {N\to\infty}\frac{1}{N}\ \mathcal{O}^{\mathrm{(GinOE,c)}}_{N}\left(x+i\xi\right)=\frac{1}{\pi}\left(1+ \sqrt{\frac{\pi}{2}}\ \frac{1}{2|\xi|}\ e^{2\xi^{2}}\ \mathrm{erfc}\left(\sqrt{2}\ |\xi| \right)\right). \tag{2.13}\] _Rescaling instead \(x=\sqrt{N}\delta\), the limiting scaled mean self-overlap in the depleted region becomes_ \[\mathcal{O}^{\mathrm{(GinOE,c)}}_{\mathrm{depletion,strip}}(\delta,\xi) \equiv\lim_{N\to\infty}\frac{1}{N}\ \mathcal{O}^{\mathrm{(GinOE,c)}}_{\mathrm{Depletion,origin}}\left(\sqrt{N} \delta+i\xi\right)=\mathcal{O}^{\mathrm{(GinOE,c)}}_{\mathrm{depletion,origin }}(\xi)\left(1-\delta^{2}\right)\Theta\left[1-\delta^{2}\right]. \tag{2.14}\] In order to demonstrate numerically the appropriate scale on which the depletion regime should be studied, the mean density of purely imaginary eigenvalues in the GinOE is plotted in Figure 6. 
This illustrates a region of reduced eigenvalue density in the GinOE when \(O(10^{-1})<y<O(10^{1})\), before reaching an approximately constant value inside the bulk. Figure 5: Left: Limiting density of complex eigenvalues at the edge of the GinOE droplet. Right: Mean conditional self-overlap, \(\mathbb{E}_{\mathrm{edge}}(\eta)\), as a function of eigenvalue moduli, \(|z|=\sqrt{N}+\eta\), for a range of different values of \(N\). The limiting theoretical prediction of \(\mathbb{E}_{\mathrm{edge}}(\eta)\) for the GinOE (solid black line) is compared to numerical simulations for different values of \(N\) (coloured markers). For each value of \(\eta\), numerical averages are obtained by considering all eigenvalues with a modulus between \(\pm 1/\sqrt{N}\) of \((\sqrt{N}+\eta)\). Samples of the self-overlap are taken from a data set generated from \(O(10^{6})\) GinOE matrices of each used value of \(N\). Figure 6 also shows the mean conditional self-overlap, Eq. (1.4), close to the origin and in a small strip close to the real line. When considering the mean self-overlap in this region, we start by considering eigenvalues close to the origin, i.e. \(z=x+i\xi\) with \(x\) and \(\xi\sim O(1)\). Here, one can see that Eq. (2.13) is independent of \(x\) and depends purely on the imaginary component \(\xi\). On the other hand, when considering the mean self-overlap along a rectangular strip close to the real line, i.e. taking \(x\to\sqrt{N}\delta\), as in Eq. (2.14), there is now an explicit dependence on \(\delta\). Essentially, one can interpret this expression as a scaled version of the Chalker-Mehlig result, which describes the increased mean self-overlap in the region of eigenvalue depletion close to the real line. Note that as \(\xi\) becomes comparable to \(\sqrt{N}\), i.e. the eigenvalue is inside the bulk, \(\mathcal{O}^{\rm(GinOE,c)}_{\rm depletion,origin}(\xi)\to 1/\pi\) and we, unsurprisingly, recover the Chalker-Mehlig result. ### Numerical Simulations of the distribution of the diagonal overlap and discussion of open questions We have already seen that, in the limit of large \(N\), the mean self-overlap of eigenvectors in the spectral bulk and at the edge is the same for both the GinOE and GinUE. However, despite these similarities, there is a discernible difference in behaviour of the mean self-overlap in these two ensembles due to the existence of the depletion regime in the GinOE. It is natural to expect that a similar picture should hold not only for the first moment, but for the whole distribution of the diagonal overlaps. As we do not yet have the analytic expression for such a distribution, \(\mathcal{P}(t,z)\) where \(t=\mathcal{O}_{nn}-1\), for the GinOE in the complex plane, we proceed with briefly reviewing the results for complex eigenvalues of the GinUE and real eigenvalues of the GinOE, following the work [55]. The equation for the limiting JPDF of the eigenvector self-overlap in the bulk of the GinUE reads: \[\mathcal{P}^{\rm(GinUE,c)}_{\rm bulk}\left(s,w\right)=\frac{(1-|w|^{2})^{2}}{\pi s^{3}}e^{-\frac{1-|w|^{2}}{s}}\Theta[1-|w|^{2}]\;, \tag{2.15}\] with \(s=t/N\) and \(z=\sqrt{N}w\), whereas the distribution at the spectral edge is given by \[\mathcal{P}^{\rm(GinUE,c)}_{\rm edge}\left(\sigma,\eta\right) =\frac{1}{2\pi\sigma^{5}}e^{-\frac{\Delta^{2}}{2\sigma^{2}}} \left\{\frac{e^{-2\delta^{2}}}{\pi}\left(2\sigma^{2}-\Delta\right)-\frac{1}{\sqrt{2\pi}}\left(4\delta\sigma^{2}-\Delta(2\delta+\sigma)\right){\rm erfc}\left(\sqrt{2}\delta\right)\right.
\tag{2.16}\] \[\quad+\frac{e^{2\delta^{2}}}{2}\left(\Delta^{2}-\sigma^{2} \right){\rm erfc}^{2}\left(\sqrt{2}\delta\right)\bigg{\}}\;,\] Figure 6: Large \(N\) density of complex eigenvalues and mean conditional self-overlap \(\mathbb{E}_{\rm dep}(z)\) in the depletion regime of the GinOE. Left: Comparison of the limiting density of purely imaginary eigenvalues in the GinOE (red) and GinUE (blue). Centre: Theoretical prediction of \(\mathbb{E}_{\rm dep}^{\rm(origin)}(\xi)\) close to the origin (red line), compared to numerical simulations (red triangles). Right: Mean conditional self-overlap \(\mathbb{E}_{\rm dep}^{\rm(strip)}(\delta,\xi)\), with \(z=\sqrt{N}\delta+i\xi\), for three different fixed values of \(\xi\) in a strip close to the real line. In the centre and right hand plots the solid black line represents the Chalker and Mehlig result. Theoretical predictions (coloured lines) are compared to numerical simulations (coloured markers) of self-overlaps generated from \(O(10^{7})\) GinOE matrices of size \(N=500\). where \(\sigma=t/\sqrt{N}\) and \(\Delta=1-2\sigma\eta\). In [55] one also finds an expression for the density of the self-overlap of GinOE eigenvectors associated with purely real eigenvalues in the bulk, \(z=\sqrt{N}x\), which is given by \[\mathcal{P}_{\text{bulk}}^{\text{(GinOE,r)}}\left(s,x\right)=\frac{(1-x^{2})}{2 \sqrt{2\pi}}\frac{e^{-\frac{1-x^{2}}{2\pi}}}{s^{2}}\Theta[1-x^{2}]. \tag{2.17}\] To compare with numerical simulations, the above distributions must also be normalised with respect to the mean spectral density. We denote the normalised distribution as \(\widetilde{\mathcal{P}}(t,z)=\mathcal{P}(t,z)/\rho(z)\). In Figure 7, we compare distributions of the self-overlap of eigenvectors in the GinOE and GinUE. This is done in three different ways. Firstly, we consider the theoretical distributions in the complex bulk of the GinUE and real bulk of the GinOE (at \(x=0\)), in comparison to a numerically observed distribution in the depletion regime of the GinOE. We also compare the distribution of eigenvector self-overlaps in the complex bulk of the GinOE to the theoretical limiting distribution of self-overlaps in the bulk of the GinOE. Finally, we make a comparison between the distribution of the self-overlap at the edge of the GinUE and a numerically measured distribution at the edge of the GinOE. The data indeed seems to confirm that the GinUE distribution accurately describes the corresponding quantities for the GinOE away from the real axis. In the depletion regime around the real axis the GinOE is not described by the limiting distribution associated with either complex eigenvalues in the bulk of the GinUE, Eq. (2.15), or eigenvalues in the real bulk of the GinOE, Eq. (2.17). It seems that the tails of the distribution of self-overlaps in the depletion regime is described better by Eq. (2.17) than Eq. (2.15), indicating that the corresponding distribution may have a \(1/s^{2}\) tail (at least as an intermediate asymptotics) as opposed to \(1/s^{3}\). Next, considering the edge region it is apparent that there is a small discrepancy between theory and numerical simulations, however we attribute this to finite \(N\) effects and expect this deviation to approach zero as \(N\to\infty\). Rigorously proving equivalence of the distribution of the GinOE self-overlap for asymptotics within the bulk and at the edge for the GinOE remains an outstanding challenge. 
Characterizing the distribution of eigenvector diagonal overlaps in the depletion regime remains a completely open question. Another interesting extension of this work would be to study the off-diagonal averaged overlap, see Eq. (1.3), which is not yet known for the GinOE, neither for complex nor for real eigenvalues. In a separate paper, the results of this work will be extended to the mean self-overlap in the complex plane of the real elliptic Ginibre ensemble, interpolating between GinOE and GOE. In particular, this should allow us to study the depletion region in the new non-trivial scaling regime of weak non-Hermiticity [66, 67, 68]. Figure 7: Numerical simulation of the distribution of the eigenvector self-overlaps in the GinOE compared to theoretical distributions for the GinOE and GinUE. In each plot the solid black line is the limiting distribution of the self-overlap in a region of the GinUE and the red shaded area shows a normalised histogram of self-overlaps observed in numerical simulation of \(O(10^{7})\) GinOE matrices of size \(N=500\). Left: Depletion regime of the GinOE compared to the bulk of the GinUE and bulk real eigenvalues in the GinOE (blue line). Centre: Bulk of the GinOE compared to the bulk of the GinUE. Right: Edge region of the GinOE compared to the edge of the GinUE. ## 3 Derivation of Main Results Our strategy for the proof of the main results outlined in the previous section amounts to using the (incomplete) Schur decomposition with respect to pairs of complex eigenvalues of a real random matrix \(G\). In this way we reduce the problem of evaluating the mean self-overlap in the complex plane to calculating the expectation value of a certain determinant, which is eventually implemented using Grassmann integration. In the process, we adapt techniques used in similar circumstances in [55] and [62], which will help to prove the asymptotic results in Section 4. ### Eigenvectors of real matrices via incomplete Schur decomposition Let \(z=x+iy\) (with \(y\neq 0\)) be a complex eigenvalue of the \(N\times N\) real, non-Hermitian matrix \(G\). The associated left and right eigenvectors are denoted by \(\mathbf{x}_{L}^{\dagger}\) and \(\mathbf{x}_{R}\) respectively. Following the influential paper by Edelman [59], we employ the _incomplete Schur decomposition_ of \(G\) with respect to \(z\), which is \[G=Q\widetilde{G}Q^{T}\quad\text{with}\quad\widetilde{G}=\begin{pmatrix}\begin{pmatrix}x&b\\ -c&x\end{pmatrix}&W\\ 0&G_{2}\end{pmatrix},\quad bc>0,\,b\geq c,\,y=\sqrt{bc}, \tag{3.1}\] where the real symmetric \(Q\) is a so-called Householder reflection matrix such that \(Q^{2}=\mathds{1}_{N}\). The block matrix \(W\) is real and has size \(2\times(N-2)\), i.e. \(W=\left(\mathbf{w}_{1},\,\mathbf{w}_{2}\right)^{T}\), where \(\mathbf{w}_{1}\), \(\mathbf{w}_{2}\) are vectors with \(N-2\) real i.i.d. entries following standard normal distributions. The matrix \(G_{2}\) in this decomposition is essentially an \((N-2)\times(N-2)\) dimensional real Ginibre matrix. The Jacobian of the transformation is presented in [59] and the integration measure changes according to \[dG=2(b-c)\ \det\left[(x\mathds{1}_{N-2}-G_{2})^{2}+y^{2}\mathds{1}_{N-2}\right]\ dx\ db\ dc\ dW\ dG_{2}\ dS\, \tag{3.2}\] where \(dS\) denotes the volume element of the Stiefel manifold originating from the Householder reflection matrix \(Q\).
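The geometric content of Eq. (3.1) can be checked directly in a few lines. The sketch below (an illustration under our own conventions, not part of the derivation) builds an orthogonal matrix whose first two columns span the real invariant plane associated with the conjugate pair \((z,\bar z)\), and verifies that the transformed matrix is block upper triangular with a leading \(2\times 2\) block of trace \(2x\) and determinant \(|z|^{2}\). The additional planar rotation that brings this block to the specific parametrization \(\begin{pmatrix}x&b\\ -c&x\end{pmatrix}\) used in Eq. (3.1) is not performed here.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 12
G = rng.standard_normal((N, N))

# Pick a complex eigenvalue z = x + iy with y != 0 and its right eigenvector v.
w, V = np.linalg.eig(G)
n = np.argmax(np.abs(w.imag))
z, v = w[n], V[:, n]

# Re(v), Im(v) span a real G-invariant plane: G u1 = x u1 - y u2, G u2 = y u1 + x u2.
U = np.column_stack([v.real, v.imag])
Q, _ = np.linalg.qr(np.column_stack([U, rng.standard_normal((N, N - 2))]))

Gt = Q.T @ G @ Q                 # orthogonal change of basis, analogue of Eq. (3.1)
A = Gt[:2, :2]                   # leading 2x2 block, with eigenvalues z and conj(z)

print("lower-left block vanishes:", np.allclose(Gt[2:, :2], 0))
print("trace equals 2x          :", np.isclose(np.trace(A), 2 * z.real))
print("det equals |z|^2         :", np.isclose(np.linalg.det(A), np.abs(z) ** 2))
```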
After simple manipulations we find the probability measure, defined in terms of the new variables, reads: \[\begin{split} P_{\text{GinOE}}(G)dG=&\ C_{N}^{\prime} \det\left[(x\mathds{1}_{N-2}-G_{2})^{2}+y^{2}\mathds{1}_{N-2}\right]e^{- \frac{1}{2}\operatorname{Tr}G_{2}G_{2}^{T}}\\ &(b-c)e^{-\frac{1}{2}\left(2x^{2}+b^{2}+c^{2}+\operatorname{Tr}WW^ {T}\right)}dx\ db\ dc\ dW\ dG_{2}\,\end{split} \tag{3.3}\] where the constant \(C_{N}^{\prime}\) now reads \[C_{N}^{\prime}=2\ \frac{(2\pi)^{-\frac{1}{2}(N-1)^{2}}}{\sqrt{2\pi}\Gamma(N-1)}. \tag{3.4}\] The next step is to determine the left and right eigenvectors in terms of the Schur decomposition variables, which is done adapting the method of [55], where the case of real GinOE eigenvalues was considered. For \(z=x+iy\), the eigenvalue problems read \[G\mathbf{x}_{R}=z\mathbf{x}_{R}\quad\text{and}\quad\mathbf{x}_{L}^{\dagger}G= z\mathbf{x}_{L}^{\dagger}. \tag{3.5}\] Applying the incomplete Schur decomposition from Eq. (3.1), we introduce \(\widetilde{\mathbf{x}}_{L}^{\dagger}\equiv\mathbf{x}_{L}^{\dagger}Q\) and \(\widetilde{\mathbf{x}}_{R}\equiv Q\mathbf{x}_{R}\) and see that the eigenvalue problems for \(\widetilde{G}\) can be rewritten as \(\widetilde{\mathbf{x}}_{L}^{\dagger}\widetilde{G}=z\widetilde{\mathbf{x}}_{L} ^{\dagger}\) and \(\widetilde{G}\widetilde{\mathbf{x}}_{R}=z\widetilde{\mathbf{x}}_{R}\). The left-right diagonal overlap, corresponding to the eigenvalue \(z\) is obviously invariant under the incomplete Schur decomposition, i.e. \[\mathcal{O}_{z}=\left(\mathbf{x}_{L}^{\dagger}\mathbf{x}_{L}\right)\left( \mathbf{x}_{R}^{\dagger}\mathbf{x}_{R}\right)=\left(\widetilde{\mathbf{x}}_{L }^{\dagger}\widetilde{\mathbf{x}}_{L}\right)\left(\widetilde{\mathbf{x}}_{R }^{\dagger}\widetilde{\mathbf{x}}_{R}\right)\, \tag{3.6}\] hence we can continue the calculation of the mean self-overlap using \(\widetilde{\mathbf{x}}_{L}^{\dagger}\), \(\widetilde{\mathbf{x}}_{R}\) and \(\widetilde{G}\) instead of \(\mathbf{x}_{L}^{\dagger}\), \(\mathbf{x}_{R}\) and \(G\). The incomplete Schur decomposition gives us the forms of \(\widetilde{\mathbf{x}}_{R}\) and \(\widetilde{\mathbf{x}}_{L}^{\dagger}\). By construction, \(\widetilde{\mathbf{x}}_{L}^{\dagger}\widetilde{\mathbf{x}}_{R}=1\) and it is easy to check that the eigenvectors must have the following structure: \[\widetilde{\mathbf{x}}_{R}=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ i\sqrt{\frac{c}{b}}\\ \mathbf{0}_{N-2}\end{pmatrix}\quad\text{and}\quad\widetilde{\mathbf{x}}_{L}^{ \dagger}=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ i\sqrt{\frac{b}{c}}\\ \sqrt{2}\ \mathbf{b}_{N-2}\end{pmatrix}^{\dagger}\, \tag{3.7}\] with some \(\mathbf{b}_{N-2}\) yet to be determined. Substituting the above into the eigenvalue equation, \(\widetilde{\mathbf{x}}_{L}^{\dagger}\widetilde{G}=z\widetilde{\mathbf{x}}_{L}^ {\dagger}\), we find that \[z\mathbf{b}_{N-2}^{\dagger}\stackrel{{\dagger}}{{=}}\frac{1}{ \sqrt{2}}\left(\mathbf{w}_{1}^{T}-i\sqrt{\frac{b}{c}}\mathbf{w}_{2}^{T}\right) +\mathbf{b}_{N-2}^{\dagger}G_{2}\quad\Leftrightarrow\quad\mathbf{b}_{N-2}^{ \dagger}=\frac{1}{\sqrt{2}}\left(\mathbf{w}_{1}^{T}-i\sqrt{\frac{b}{c}} \mathbf{w}_{2}^{T}\right)\left(z\mathds{1}_{N-2}-G_{2}\right)^{-1}. 
\tag{3.8}\] Using the above relation, the self-overlap \(\mathcal{O}_{z}\) associated with a complex eigenvalue \(z\) in the GinOE becomes \[\mathcal{O}_{z}=\left(\widetilde{\mathbf{x}}_{L}^{\dagger}\widetilde{\mathbf{x }}_{L}\right)\left(\widetilde{\mathbf{x}}_{R}^{\dagger}\widetilde{\mathbf{x}}_ {R}\right)=\frac{1}{4}\left(2+\frac{c^{2}+b^{2}}{bc}\right)+\frac{1}{2}\left( 1+\frac{c}{b}\right)\left(\mathbf{b}_{N-2}^{\dagger}\mathbf{b}_{N-2}\right). \tag{3.9}\] To perform the ensemble average we follow [59] and change variables from \(b\) and \(c\) to \(y=\sqrt{bc}\) and \(\delta=b-c>0\), implying \[db\ dc=\frac{2y}{\sqrt{\delta^{2}+4y^{2}}}\ dy\ d\delta\,\quad b^{2}+c^{2}= \delta^{2}+2y^{2}\quad\text{and}\quad\frac{b}{c}=\exp\left[\ 2\ \text{arcsinh}\left(\frac{\delta}{2y}\right)\right]\, \tag{3.10}\] so that the relevant JPDF, Eq. (3.3), now takes the form \[\begin{split} P_{\text{GinOE}}(G)dG=&\ C_{N}^{ \prime}\det\left[(x\mathds{1}_{N-2}-G_{2})^{2}+y^{2}\mathds{1}_{N-2}\right] \exp\left[-\frac{1}{2}\operatorname{Tr}\left(G_{2}G_{2}^{T}+WW^{T}\right) \right]\\ &\times\exp\left[-x^{2}\right]\ \exp\left[-y^{2}\right]\ \exp\left[-\frac{1}{2}\delta^{2}\right]\ \frac{2y\delta}{\sqrt{\delta^{2}+4y^{2}}}\ dx\ dy\ d\delta\ dW\ dG_{2}\.\end{split} \tag{3.11}\] Defining the matrix \[B\equiv\left(z\mathds{1}_{N-2}-G_{2}\right)^{\dagger}\left(z\mathds{1}_{N-2}- G_{2}\right)\, \tag{3.12}\] we first express \(\left(\mathbf{b}_{N-2}^{\dagger}\mathbf{b}_{N-2}\right)\) as \[\begin{split}\left(\mathbf{b}_{N-2}^{\dagger}\mathbf{b}_{N-2} \right)=&\frac{1}{2}\mathbf{w}_{1}^{T}\ B^{-1}\ \mathbf{w}_{1}+\frac{1}{2}\exp\left[\ 2\ \text{arcsinh}\left(\frac{\delta}{2y}\right) \right]\mathbf{w}_{2}^{T}\ B^{-1}\ \mathbf{w}_{2}\\ &+\frac{1}{2}\ i\ \exp\left[\ \text{arcsinh}\left(\frac{ \delta}{2y}\right)\right]\left[\mathbf{w}_{1}^{T}\ B^{-1}\ \mathbf{w}_{2}-\mathbf{w}_{2}^{T}\ B^{-1}\ \mathbf{w}_{1}\right]\,,\end{split} \tag{3.13}\] which when substituted into Eq. (3.9) implies \[\mathcal{O}_{z}=\widetilde{c}_{1}+\widetilde{c}_{2}\ \left(\mathbf{b}_{N-2}^{ \dagger}\mathbf{b}_{N-2}\right)\, \tag{3.14}\] where the constants read \[\widetilde{c}_{1}=\frac{1}{4}\left(2+\frac{\delta^{2}+2y^{2}}{y^{2}}\right) \quad\text{and}\quad\widetilde{c}_{2}=\frac{1}{2}\left(1+\exp\left[-2\ \text{arcsinh}\left(\frac{\delta}{2y}\right)\right]\right). \tag{3.15}\] With these formulas in hand, we now proceed to proving our main theorem in the next section. ### Proof of Theorem 2.5 Using the equivalence of different indices \(n\) under the ensemble averaging, the mean self-overlap for the GinOE can be written as follows: \[\mathcal{O}_{N}^{(\text{GinOE,c})}(z)=\left\langle\frac{1}{N}\sum_{n=1}^{N} \mathcal{O}_{nn}\ \delta(z-z_{n})\right\rangle_{\text{GinOE,}N}=\left\langle\mathcal{O}_{\bar{z}} \ \delta(z-\widetilde{z})\right\rangle_{\text{GinOE,}N}\, \tag{3.16}\] where the two-dimensional \(\delta\)-function of complex argument, \(z=x+iy\), should be interpreted as the product of two one-dimensional \(\delta\)-functions, containing its real and imaginary parts respectively, i.e. \(\delta(z-\widetilde{z})=\delta(x-\widetilde{x})\delta(y-\widetilde{y})\). This allows us to use the incomplete Schur decomposition as presented in the previous section, in particular using results from Eq. (3.11) to Eq. (3.15). The average splits into integrations over \(G_{2}\), \(W\), \(x\), \(y\) and \(\delta\) according to Eq. (3.11). 
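Before the remaining integrals are carried out analytically, it is worth noting that the quantity in Eq. (3.16) can also be estimated by brute force, which provides an end-to-end check of the final answer. The sketch below is purely illustrative (the matrix size, window size, seed and sample count are arbitrary choices): it averages \(\mathcal{O}_{nn}\) over eigenvalues falling near a reference point \(z_{0}\) and compares the result with the conditional mean \(\mathbb{E}_{N}(z_{0})=\mathcal{O}_{N}^{(\mathrm{GinOE,c})}(z_{0})/\rho_{N}^{(\mathrm{GinOE,c})}(z_{0})\) built from Theorem 2.5 and Eq. (2.2); the Poisson probability mass function is used only as a numerically stable way of writing \(|z|^{2(N-1)}e^{-|z|^{2}}/(N-2)!\).

```python
import numpy as np
from scipy.linalg import eig
from scipy.special import erfcx, gammaincc
from scipy.stats import poisson

def E_theory(z, N):
    """Conditional mean self-overlap E_N(z) = O_N(z)/rho_N(z), from Eq. (2.10) and Eq. (2.2)."""
    y, a = abs(z.imag), abs(z) ** 2
    pref = 1 + np.sqrt(np.pi / 2) * erfcx(np.sqrt(2) * y) / (2 * y)
    # Gamma(N-1,a)/(N-2)! = gammaincc(N-1,a);  a^(N-1) e^(-a)/(N-2)! = (N-1)*poisson.pmf(N-1,a)
    bracket = gammaincc(N - 1, a) * (N - 1 - a) + (N - 1) * poisson.pmf(N - 1, a)
    O = pref * bracket / np.pi
    rho = np.sqrt(2 / np.pi) * y * erfcx(np.sqrt(2) * y) * gammaincc(N - 1, a)
    return O / rho

rng = np.random.default_rng(4)
N, samples, z0, h = 40, 4000, 2 + 2j, 0.4
collected = []
for _ in range(samples):
    G = rng.standard_normal((N, N))
    w, VL, VR = eig(G, left=True, right=True)
    VL = VL / np.einsum('in,in->n', VL.conj(), VR).conj()       # bi-orthonormalise
    O_diag = (np.einsum('in,in->n', VL.conj(), VL) *
              np.einsum('in,in->n', VR.conj(), VR)).real        # diagonal overlaps O_nn
    collected.extend(O_diag[np.abs(w - z0) < h])                 # keep eigenvalues near z0

print("Monte Carlo estimate of E_N(z0):", np.mean(collected))
print("Theorem 2.5 / Eq. (2.2):        ", E_theory(z0, N))
```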
The two \(\delta\)-functions make the integrations over \(x\) and \(y\) trivial, meaning that the next non-trivial task is to perform the integration with respect to the matrix \(W\). We have \(\operatorname{Tr}WW^{T}=\mathbf{w}_{1}^{T}\mathbf{w}_{1}+\mathbf{w}_{2}^{T} \mathbf{w}_{2}\) and \(dW=d\mathbf{w}_{1}d\mathbf{w}_{2}\), which allows us to perform the Gaussian averages with respect to vectors \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\) instead of \(W\). We define the average with respect to \(\mathbf{w}\) of an observable \(\mathcal{A}(\mathbf{w})\) as \[\left\langle\mathcal{A}(\mathbf{w})\right\rangle_{\mathbf{w}}\equiv\frac{1}{( 2\pi)^{\frac{N-2}{2}}}\int d\mathbf{w}\exp\left[-\frac{1}{2}\mathbf{w}^{T} \mathbf{w}\right]\mathcal{A}(\mathbf{w})\, \tag{3.17}\] normalized in such a way that \(\langle\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}\rangle_{\mathbf{w}}=1\). The mean self-overlap for the GinOE then can be written as \[\begin{split}&\left\langle\mathcal{O}_{\bar{z}}\ \delta(z-\widetilde{z})\ \right\rangle_{\text{GinOE,}N}=C_{N}^{\prime}\ (2\pi)^{N-2}\ \exp\left[-\left(x^{2}+y^{2}\right)\right]\int d\delta\ \frac{2y\delta}{\sqrt{\delta^{2}+4y^{2}}}\ \exp\left[-\frac{1}{2}\delta^{2}\right]\\ &\times\int dG_{2}\ \det\left[(x\leavevmode\hbox{\small 1 \kern-3.8pt\normalsize 1}_{N-2}-G_{2})^{2}+y^{2}\leavevmode\hbox{\small 1 \kern-3.8pt\normalsize 1}_{N-2}\right]\exp\left[-\frac{1}{2}\operatorname{Tr}G_{2}G_{2}^{T} \right]\ \left\langle\left\langle\widetilde{c}_{1}+\widetilde{c}_{2}\ \left(\mathbf{b}_{N-2}^{\dagger}\mathbf{b}_{N-2}\right)\right\rangle_{ \mathbf{w}_{2}}\right\rangle_{\mathbf{w}_{1}}\.\end{split} \tag{3.18}\] The computation of the double average over \(\mathbf{w}_{1}\), \(\mathbf{w}_{2}\) is performed in the next step, using the following Lemma. **Lemma 3.1**.: _Let \(\mathbf{w}_{1}\), \(\mathbf{w}_{2}\) be two vectors, each of length \(N-2\), with independent real variables as entries and \(X\) be an \(N-2\) dimensional matrix. Then_ \[\left\langle\mathbf{w}^{T}\ X\ \mathbf{w}\right\rangle_{\mathbf{w}}=\operatorname{ Tr}X\quad\text{and}\quad\left\langle\left\langle\mathbf{w}_{1}^{T}\ X\ \mathbf{w}_{2}\right\rangle_{\mathbf{w}_{2}}\right\rangle_{\mathbf{w}_{1}}= \left\langle\left\langle\mathbf{w}_{2}^{T}\ X\ \mathbf{w}_{1}\right\rangle_{\mathbf{w}_{2}}\right\rangle_{\mathbf{w}_{1}}=0\, \tag{3.19}\] _with the average defined in Eq. (3.17) and \(\mathbf{w}\in\{\mathbf{w}_{1},\mathbf{w}_{2}\}\)._ The verification of this Lemma is straightforward and can be done by exploiting the Gaussian nature of the integrals involved. This allows us to show the validity of the next Lemma. **Lemma 3.2**.: _The average over \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\) of the object \(\left(\mathbf{b}_{N-2}^{\dagger}\mathbf{b}_{N-2}\right)\) reads_ \[\left\langle\left\langle\widetilde{c}_{1}+\widetilde{c}_{2}\ \left(\mathbf{b}_{N-2}^{\dagger}\mathbf{b}_{N-2}\right)\right\rangle_{ \mathbf{w}_{2}}\right\rangle_{\mathbf{w}_{1}}=\frac{1}{2}\left(2+\frac{\delta^ {2}}{2y^{2}}\right)\left[1+\operatorname{Tr}\left[B^{-1}\right]\,\right]\,. \tag{3.20}\] Proof.: The verification amounts to using the expression of \(\left(\mathbf{b}_{N-2}^{\dagger}\mathbf{b}_{N-2}\right)\), in Eq. 
(3.13), in conjunction with Lemma 3.1, such that: \[\begin{split}\left\langle\ \left\langle\left(\mathbf{b}_{N-2}^{ \dagger}\mathbf{b}_{N-2}\right)\right\rangle_{\mathbf{w}_{2}}\right\rangle_{ \mathbf{w}_{1}}&=\frac{1}{2}\left\langle\mathbf{w}_{1}^{T}\ B^{-1}\ \mathbf{w}_{1}\right\rangle_{\mathbf{w}_{1}}+\frac{1}{2}\exp\left[\ 2\ \arcsinh\left(\frac{\delta}{2y}\right)\right]\left\langle\mathbf{w}_{2}^{T}\ B^{-1}\ \mathbf{w}_{2}\right\rangle_{\mathbf{w}_{2}}\\ &=\frac{1}{2}\left(1+\exp\left[\ 2\ \arcsinh\left(\frac{\delta}{2y} \right)\right]\right)\operatorname{Tr}B^{-1}\.\end{split} \tag{3.21}\] Then exploiting the fact that \(\cosh^{2}\left[\arcsinh\left(\frac{\delta}{2y}\right)\right]=1+\frac{\delta^{2} }{4y^{2}}\), alongside the definition of \(\tilde{c}_{2}\) in Eq. (3.15) this yields: \[\frac{\widetilde{c}_{2}}{2}\left(1+\exp\left[\ 2\ \arcsinh\left(\frac{\delta}{2y} \right)\right]\right)=\frac{1}{2}\bigg{[}1+\cosh\left[2\ \arcsinh\left(\frac{\delta}{2y}\right)\right]\bigg{]}=1+\frac{\delta^{2}}{4y^{2}}. \tag{3.22}\] Finally, applying the definition of \(\tilde{c}_{1}\) from Eq. (3.15), leads to \[\left\langle\left\langle\widetilde{c}_{1}+\widetilde{c}_{2}\ \left(\mathbf{b}_{N-2}^{\dagger} \mathbf{b}_{N-2}\right)\right\rangle_{\mathbf{w}_{2}}\right\rangle_{\mathbf{w}_ {1}}=\widetilde{c}_{1}+\frac{1}{2}\operatorname{Tr}\left[B^{-1}\right]\left(2 +\frac{\delta^{2}}{2y^{2}}\right)=\frac{1}{2}\left(2+\frac{\delta^{2}}{2y^{2} }\right)\left[1+\operatorname{Tr}\left[B^{-1}\right]\right]\,, \tag{3.23}\] which concludes the proof of this Lemma. With Lemma 3.2, we are able to express the mean self-overlap at finite \(N\) in such a way, that the remaining average over \(G_{2}\) becomes tractable. Starting from Eq. (3.18) and applying Lemma 3.2 we obtain \[\left\langle\mathcal{O}_{\widetilde{z}}\ \delta(z-\widetilde{z})\ \right\rangle_{\text{GinOE},N} =C_{N}^{\prime}\ (2\pi)^{N-2}\ \exp\left[-\left(x^{2}+y^{2}\right)\right]\int d\delta\ \frac{2y\delta}{\sqrt{\delta^{2}+4y^{2}}}\ \exp\left[-\frac{1}{2}\delta^{2}\right]\frac{1}{2}\left(2+\frac{\delta^{2}}{2 y^{2}}\right) \tag{3.24}\] The integrand, involving the matrix \(G_{2}\), can be written in terms of the matrix \(B\) from Eq. (3.12), noticing that \[\det B =\det\begin{bmatrix}\left(z\mathds{1}_{N-2}-G_{2}\right)^{ \dagger}\left(z\mathds{1}_{N-2}-G_{2}\right)\end{bmatrix}=\det\begin{bmatrix} \left(x\mathds{1}_{N-2}-G_{2}\right)^{2}+y^{2}\mathds{1}_{N-2}\end{bmatrix} \tag{3.25}\] \[=\det\begin{bmatrix}0&i(z\mathds{1}_{N-2}-G_{2})\\ i(\bar{z}\mathds{1}_{N-2}-G_{2}^{T})&0\end{bmatrix}\,\] as well as \[\frac{\partial}{\partial\mu}\det\left(\mu\mathds{1}_{N-2}+B\right)\bigg{|}_{ \mu=0}=\det B\ \operatorname{Tr}B^{-1}. \tag{3.26}\] In fact, it is more convenient to introduce a block-matrix \(M\) via \[M\equiv\begin{pmatrix}\sqrt{\mu}\mathds{1}_{N}&i\left(z\,\mathds{1}_{N}-G \right)\\ i\left(\bar{z}\mathds{1}_{N-2}-G^{T}\right)&\sqrt{\mu}\mathds{1}_{N}\end{pmatrix}\, \tag{3.27}\] where each block is of size \(N\times N\) and \(\mu\) is a real parameter. It is easy to see that for \(\mu=0\) we then have \[\det M=\det B\quad\text{and}\quad\frac{\partial}{\partial\mu}\det M\ \bigg{|}_{\mu=0}=\det B\ \operatorname{Tr}B^{-1}. \tag{3.28}\] Now we define the averaging over \(G_{2}\) as \[\left\langle\mathcal{A}(G_{2})\right\rangle_{G_{2}}\equiv C_{N-2}^{-1}\int dG _{2}\ \exp\left[-\frac{1}{2}\operatorname{Tr}\left(G_{2}G_{2}^{T}\right)\right]\ \mathcal{A}(G_{2})\, \tag{3.29}\] where the constant \(C_{N-2}\) is given in Eq. 
(2.1) to ensure the correct normalization: \(\langle\mathds{1}\rangle_{G_{2}}=1\). The expression in Eq. (3.24) therefore becomes \[\left\langle\mathcal{O}_{\widetilde{z}}\ \delta(z-\widetilde{z})\ \right\rangle_{\text{GinOE},N} =C_{N}^{\prime}\ (2\pi)^{N-2}\ C_{N-2}\ \exp\left[-\left(x^{2}+y^{2}\right)\right]\int d\delta\ \frac{y\delta}{\sqrt{\delta^{2}+4y^{2}}}\ \exp\left[-\frac{1}{2}\delta^{2}\right]\left(2+\frac{ \delta^{2}}{2y^{2}}\right) \tag{3.30}\] The analysis of the average of the determinant of the matrix \(M\) with respect to the GinOE of size \(N-2\) is done using the standard representation of the determinant via Berezin integration over anti-commuting Grassmann variables [63]: \[\int D(\boldsymbol{\Phi},\boldsymbol{\chi})\exp\left[-\boldsymbol{\Phi}^{T}M \boldsymbol{\chi}\right]=\det M\, \tag{3.31}\] where \(D(\boldsymbol{\Phi},\boldsymbol{\chi})=d\boldsymbol{\phi}_{1}d\boldsymbol{\chi }_{1}d\boldsymbol{\phi}_{2}d\boldsymbol{\chi}_{2}\) and \(\boldsymbol{\Phi}^{T}=\left(\boldsymbol{\phi}_{1},\boldsymbol{\phi}_{2} \right)^{T}\), \(\boldsymbol{\chi}^{T}=\left(\boldsymbol{\chi}_{1},\boldsymbol{\chi}_{2}\right) ^{T}\). The outcome is provided by the following Proposition. **Proposition 3.3**.: _Let \(M\) be the matrix defined in Eq. (3.27), \(z=x+iy\) is a complex number and \(G\) is an \(N\times N\) real Ginibre matrix. The average over the determinant of \(M\), with respect to the real Ginibre matrix \(G\), can be expressed in the following form:_ \[\left\langle\,\det M\right\rangle_{G}=\frac{1}{\pi}\int_{0}^{\infty }dr\ r\ e^{-r^{2}}\int_{0}^{2\pi}d\theta\bigg{[}\mu+|z|^{2}+2\sqrt{\mu}r\cos( \theta)+r^{2}\bigg{]}^{N} \tag{3.32}\] \[=\int_{0}^{\infty}dR\ e^{-R}\bigg{[}\left(R+|z|^{2}\right)^{2}+ \mu^{2}+2\mu(|z|^{2}-R)\bigg{]}^{\frac{N}{2}}\,P_{N}\left(\frac{\mu+R+|z|^{2}}{ \sqrt{\left(R+|z|^{2}\right)^{2}+\mu^{2}+2\mu(|z|^{2}-R)}}\right)\, \tag{3.33}\] _where \(P_{N}(t)\) is a Legendre polynomial defined via the identity [64, 8.913.3]_ \[P_{N}(t)=\frac{1}{\pi}\int_{0}^{\pi}d\theta\ \left(t+\sqrt{t^{2}-1}\cos \theta\right)^{N}. \tag{3.34}\] Proof.: Using the integration over vectors with \(N\) anticommuting components each, \(\mathbf{\phi}_{1}\), \(\mathbf{\phi}_{2}\), \(\mathbf{\chi}_{1}\), \(\mathbf{\chi}_{2}\), with entries \(\chi_{1,i}\) (\(i=1,2,...N\)) and so forth, we start with writing Eq. (3.31) in the explicit form for our particular choice: \[\left\langle\,\det M\right\rangle_{G}=(-1)^{N}\left\langle\int D\mathbf{\Phi}D\mathbf{ \chi}\exp\!\left\{-\begin{pmatrix}\mathbf{\phi}_{1}^{T}&\mathbf{\phi}_{2}^{T}\end{pmatrix} \begin{pmatrix}\sqrt{\mu}\mathds{1}_{N}&i\left(z\,\mathds{1}_{N}-G\right)\\ i\left(\bar{z}\mathds{1}_{N}-G^{T}\right)&\sqrt{\mu}\mathds{1}_{N}\end{pmatrix} \begin{pmatrix}\mathbf{\chi}_{1}\\ \mathbf{\chi}_{2}\end{pmatrix}\right\}\right\rangle_{G}\, \tag{3.35}\] which upon using \(\mathbf{\phi}_{1}^{T}G\mathbf{\chi}_{2}=-\operatorname{Tr}\!\left[G\mathbf{\chi}_{2}\otimes \phi_{1}^{T}\right]\) can be written as \[\left\langle\,\det M\right\rangle_{G}=(-1)^{N}\int D\mathbf{\Phi}D\mathbf{\chi}e^{- \sqrt{\mu}\phi_{1}^{T}\mathbf{\chi}_{1}-\sqrt{\mu}\phi_{2}^{T}\mathbf{\chi}_{2}-iz\bm {\phi}_{1}^{T}\mathbf{\chi}_{2}-iz\mathbf{\phi}_{2}^{T}\mathbf{\chi}_{1}}\left\langle e^{- i\operatorname{Tr}\!\left[G\mathbf{\chi}_{2}\otimes\mathbf{\phi}_{1}^{T}\right]-i \operatorname{Tr}\!\left[G^{T}\mathbf{\chi}_{1}\otimes\mathbf{\phi}_{2}^{T}\right]} \right\rangle_{G}. 
\tag{3.36}\] The expectation value in the above integrand can be evaluated using the following identity \[\left\langle e^{-\operatorname{Tr}\!\left[G\mathbf{A}\right]-\operatorname{Tr}\! \left[G^{T}\mathbf{B}\right]}\right\rangle_{G}=\exp\!\left\{\frac{1}{2}\operatorname {Tr}\!\left[\mathbf{A}^{T}\mathbf{A}\right]+\frac{1}{2}\operatorname{Tr}\!\left[\mathbf{ B}^{T}\mathbf{B}\right]+\operatorname{Tr}\!\left[\mathbf{A}\mathbf{B}\right]\right\}\,, \tag{3.37}\] which can be found in [55, Eq. (3.15)]. In our case we have that \(\mathbf{A}=i\mathbf{\chi}_{2}\otimes\mathbf{\phi}_{1}^{T}\) and \(\mathbf{B}=i\mathbf{\chi}_{1}\otimes\mathbf{\phi}_{2}^{T}\), so that \[\operatorname{Tr}\!\left[\mathbf{A}^{T}\mathbf{A}\right]=\operatorname{Tr}\!\left[ \mathbf{B}^{T}\mathbf{B}\right]=0\quad\text{and}\quad\operatorname{Tr}\!\left[\mathbf{A} \mathbf{B}\right]=\left(\mathbf{\phi}_{1}^{T}\mathbf{\chi}_{1}\right)\left(\mathbf{\phi}_{2}^ {T}\mathbf{\chi}_{2}\right)\, \tag{3.38}\] yielding \[\left\langle\,\det M\right\rangle_{G}=(-1)^{N}\int D\mathbf{\Phi}D\mathbf{\chi}\exp\! \left\{-\sqrt{\mu}\mathbf{\phi}_{1}^{T}\mathbf{\chi}_{1}-\sqrt{\mu}\mathbf{\phi}_{2}^{T} \mathbf{\chi}_{2}-iz\mathbf{\phi}_{1}^{T}\mathbf{\chi}_{2}-i\bar{z}\mathbf{\phi}_{2}^{T}\mathbf{ \chi}_{1}+\left(\mathbf{\phi}_{1}^{T}\mathbf{\chi}_{1}\right)\left(\mathbf{\phi}_{2}^{T} \mathbf{\chi}_{2}\right)\right\}\,. \tag{3.39}\] The exponential of the term quartic in anticommuting variables can be re-expressed using a Hubbard-Stratonovich transformation of the form \[e^{ab}=\frac{1}{2\pi}\int d\bar{q}dqe^{-|q|^{2}-(aq+b\bar{q})}\, \tag{3.40}\] so that after changing the order of integration and performing the Gaussian integrals over anticommuting vectors we arrive at \[\left\langle\,\det M\right\rangle_{G} =\frac{(-1)^{N}}{2\pi}\int d\bar{q}dqe^{-|q|^{2}}\int D\mathbf{\Phi}D \mathbf{\chi}e^{-\sqrt{\mu}\phi_{1}^{T}\mathbf{\chi}_{1}-\sqrt{\mu}\mathbf{\phi}_{2}^{T} \mathbf{\chi}_{2}-iz\mathbf{\phi}_{1}^{T}\mathbf{\chi}_{2}-i\bar{z}\mathbf{\phi}_{2}^{T}\mathbf{ \chi}_{1}-q\mathbf{\phi}_{1}^{T}\mathbf{\chi}_{1}-\bar{q}\mathbf{\phi}_{2}^{T}\mathbf{\chi}_{2}} \tag{3.41}\] \[=\ \frac{1}{2\pi}\int d\bar{q}dqe^{-|q|^{2}}\left[\left(\sqrt{\mu}+q \right)\left(\sqrt{\mu}+\bar{q}\right)+|z|^{2}\right]^{N}. \tag{3.42}\] Employing the change of variables to polar coordinates, \(q=re^{i\theta}\) and \(\bar{q}=re^{-i\theta}\), yields Eq. (3.32), then using \(r^{2}=R\) and the definition of a Legendre polynomial, Eq. (3.34), yields the second form, Eq. (3.33). **Remark 3.4**.: Proposition 3.3 allows one to compute the two terms in Eq. (3.30) which require averages with respect to \(G_{2}\). Note that for \(\mu=0\) the determinant of the matrix \(M\) can be written as a product of two characteristic polynomials of Ginibre matrices. The average of a product of two characteristic polynomials is proportional to the associated kernel at equal arguments for complex eigenvalues in the GinOE, and is well-known, see e.g. [60, Eq. (18.5.40)]. However, we need a slightly more general average involving the derivative over \(\mu\). We state the results we need in the next Corollary. **Corollary 3.5**.: _With the average taken as in Eq. 
(3.29) using the \(N-2\) sized GinOE matrix \(G_{2}\), we have_ \[\left\langle\,\det M\right\rangle_{G_{2}}\,\biggm{|}_{\mu=0}=e^{|z|^{2}}\,\, \Gamma\left(N-1,|z|^{2}\right) \tag{3.43}\] _and_ \[\frac{\partial}{\partial\mu}\bigg{\langle}\,\det M\bigg{\rangle}_{G_{2}}\, \biggm{|}_{\mu=0}=\left(N-2-|z|^{2}\right)\,e^{|z|^{2}}\,\,\Gamma(N-1,|z|^{2}) +|z|^{2(N-1)}. \tag{3.44}\] where we have used the incomplete Gamma-function defined in Eq. (2.3). Proof.: The proof of Eq. (3.43) is immediate from Eq. (3.33) after changing \(N\to N-2\), using \(P_{N-2}(1)=1\)[65, Table 18.6.1] and the identity [65, 8.6.5] \[\int_{0}^{\infty}\,e^{-R}(R+|z|^{2})^{N-2}\,dR=e^{|z|^{2}}\Gamma(N-1,|z|^{2}). \tag{3.45}\] In order to prove Eq. (3.44), we replace \(N\to N-2\) in Eq. (3.33), then differentiate over \(\mu\) using \[\frac{\partial}{\partial\mu}\left(\frac{\mu+R+|z|^{2}}{\sqrt{\left(R+|z|^{2} \right)^{2}+\mu^{2}+2\mu(|z|^{2}-R)}}\right)\biggm{|}_{\mu=0}=\frac{2R}{(R+|z |^{2})^{2}} \tag{3.46}\] and also \(P_{N-2}^{\prime}(1)=(N-2)(N-1)/2\), which follows per induction from [64, 8.939.6]. Collecting all terms gives \[\begin{split}&\frac{\partial}{\partial\mu}\bigg{\langle}\,\det M \bigg{\rangle}_{G_{2}}\,\biggm{|}_{\mu=0}=(N-2)\int_{0}^{\infty}\,dR\;e^{-R} \,\left(R+|z|^{2}\right)^{N-4}\,\left[|z|^{2}+R(N-2)\right]\\ &=(N-2)\left[(N-2)\int_{0}^{\infty}\,dR\;e^{-R}\left(R+|z|^{2} \right)^{N-3}-(N-3)\,\left|z\right|^{2}\,\int_{0}^{\infty}\,dR\;e^{-R}\left(R+ |z|^{2}\right)^{N-4}\right]\\ &=(N-2)\,\,e^{|z|^{2}}\bigg{[}(N-2)\,\,\Gamma(N-2,|z|^{2})-(N-3) \,\left|z\right|^{2}\,\Gamma(N-3,|z|^{2})\bigg{]}\,\end{split} \tag{3.47}\] where we have again utilised Eq. (3.45). By now substituting the following identity for incomplete \(\Gamma\)-functions, see e.g. [64, 8.356.2], \[m\,\,\Gamma\left(m,x\right)=\Gamma\left(m+1,x\right)-e^{-x}x^{m}\, \tag{3.48}\] in Eq. (3.47), both for \(m=N-2\) and \(m=N-3\), one finds that \[\frac{\partial}{\partial\mu}\bigg{\langle}\,\det M\bigg{\rangle}_{G_{2}}\, \biggm{|}_{\mu=0}=\left(N-2-|z|^{2}\right)\,e^{|z|^{2}}\,\,\Gamma(N-1,|z|^{2}) +|z|^{2(N-1)}\, \tag{3.49}\] which concludes our proof of Eq. (3.44). We can now finish the proof of Theorem 2.5. Going back to Eq. (3.30) and utilising the results from Corollary 3.5 for the averages over \(G_{2}\), we get the following expression: \[\begin{split}\left\langle\mathcal{O}_{\widetilde{z}}\,\,\delta(z -\widetilde{z})\right\rangle_{\text{GinOE},N}=&\ C_{N}^{\prime} \,\,(2\pi)^{N-2}\,\,C_{N-2}\,\,\,\,\frac{1}{2y}\int d\delta\,\,\delta\,\, \sqrt{\delta^{2}+4y^{2}}\,\,\exp\left[-\frac{1}{2}\delta^{2}\right]\\ &\times\left[\Gamma(N-1,|z|^{2})\,\left(N-1-|z|^{2}\right)+|z|^{2 (N-1)}e^{-|z|^{2}}\right]\,.\end{split} \tag{3.50}\] Finally, calculating the remaining integral over \(\delta\), using [64, 3.382.4], Eq. 
(3.48) and [65, 8.4.6], as \[\frac{1}{2y}\,\,\int_{0}^{\infty}d\delta\,\,\delta\,\,\sqrt{\delta^{2}+4y^{2}} \,\,e^{-\frac{1}{2}\delta^{2}}=1+\sqrt{\frac{\pi}{2}}\,\,e^{2y^{2}}\,\,\frac{1} {2|y|}\,\,\text{erfc}\left(\sqrt{2}\,\,|y|\right) \tag{3.51}\] and collecting all multiplicative constants together yields \[\begin{split}\mathcal{O}_{N}^{(\text{GinOE},c)}(z)&= \left\langle\frac{1}{N}\sum_{n=1}^{N}\mathcal{O}_{nn}\ \delta(z-z_{n})\right\rangle_{\text{GinOE},N}=\frac{1}{\pi}\ \left(1+\sqrt{\frac{\pi}{2}}\ \exp\left[2y^{2}\right]\ \frac{1}{2|y|}\ \text{ erfc}\left(\sqrt{2}\ |y|\right)\right)\\ &\times\left[\ \frac{\Gamma\left(N-1,|z|^{2}\right)}{(N-2)!}\left[N-1-|z |^{2}\right]+\frac{|z|^{2(N-1)}}{(N-2)!}\ e^{-|z|^{2}}\right]\,,\end{split} \tag{3.52}\] thus proving the Theorem 2.5. ## 4 Asymptotic Analysis for large matrix size In this Section, we prove the large \(N\gg 1\) asymptotic results in various scaling regimes, which are stated in Corollaries 2.6, 2.7 and 2.8. The starting point is always the finite \(N\) result in Theorem 2.5, from there we then perform our asymptotic analysis via Laplace's method. **Remark 4.1**.: The asymptotic analysis at \(N\to\infty\) becomes simpler, after defining the function \[\Theta_{N}^{(M)}(x)\equiv\frac{\Gamma(N-M+1,Nx)}{\Gamma(N-M+1)}\, \tag{4.1}\] which appears in the expression at finite \(N\) for the mean self-overlap in the GinOE. We observe that, for real \(x\) and fixed \(M\), this function is bounded for all \(N\) by \(1\), i.e. \(\Theta_{N}^{(M)}(x)\leq 1\). Furthermore, in the limit where \(N\to\infty\) for a fixed, real \(x\) and fixed non-negative integer \(M\), the following holds: \[\lim_{N\to\infty}\Theta_{N}^{(M)}(x)=\Theta[1-x]\, \tag{4.2}\] where \(\Theta[x]\) is the Heaviside step function. We will also need the following asymptotic formula, see e.g. [55, Eq. (2.9)]: \[\lim_{N\to\infty}\frac{\Gamma\left(N-1,N+2\delta N^{1/2}\right)}{\Gamma\left( N-1\right)}=\frac{1}{\sqrt{2\pi}}\int_{2\delta}^{\infty}dv\ \exp\left[-\frac{v^{2}}{2}\right]=\frac{1}{2}\text{erfc}\left(\sqrt{2}\delta \right). \tag{4.3}\] For a fixed, finite \(\delta\) we have from the definition of the incomplete \(\Gamma\)-function that: \[\lim_{N\to\infty}\frac{\Gamma\left(N-1,\delta\right)}{\Gamma\left(N-1\right)} =\ e^{-\delta}\ \sum_{k=0}^{\infty}\frac{\delta^{k}}{k!}=1. \tag{4.4}\] We will also use the following large \(N\) asymptotic behaviour of the complementary error function \(\text{erfc}(x)\), see e.g. [65, 7.12.1] \[\text{erfc}\left(\sqrt{2}y\right)=\frac{e^{-2y^{2}}}{y\sqrt{2\pi}}\sum_{m=0}^ {\infty}(-1)^{m}\frac{(2m-1)!!}{(4y^{2})^{m}}\approx\frac{e^{-2y^{2}}}{y\sqrt {2\pi}}\, \tag{4.5}\] in particular, the above implies that as \(y\to\sqrt{N}y\) and \(N\to\infty\), the following term vanishes \[e^{2y^{2}}\ \frac{1}{2y}\ \text{erfc}\left(\sqrt{2}y\right)\to e^{2 Ny^{2}}\ \frac{1}{2\sqrt{N}y}\ \frac{e^{-2 Ny^{2}}}{\sqrt{N}y\sqrt{2\pi}}=\frac{1}{2Ny^{2}\sqrt{2\pi}}\stackrel{{ N\gg 1}}{{\longrightarrow}}0. \tag{4.6}\] We now start with the asymptotic analysis in the bulk of the GinOE and give the proof of Corollary 2.6. _Proof._ Starting from Eq. (2.10), we introduce the scaling \(z\to\sqrt{N}w\), with \(w=x+iy\) such that \(|w|<1\) and keeping \(|y|>N^{1/2}\) to avoid the depletion regime. This transforms the Eq. 
(2.10) to \[\begin{split}\frac{1}{N}\mathcal{O}_{N}^{(\text{GinOE},c)}(\sqrt {N}w)&=\frac{1}{\pi\ N}\ \left(1+\sqrt{\frac{\pi}{2}}\ \exp\left[2\ N\ y^{2}\right]\ \frac{1}{2\ N\ |y|}\ \text{ erfc}\left(\sqrt{2}\ N\ |y|\right)\right)\\ &\times N\ \left[\ \frac{\Gamma\left(N-1,N|w|^{2}\right)}{\Gamma \left(N-1\right)}\left[1-\frac{1}{N}-|w|^{2}\right]+\frac{1}{N}\frac{N^{N-1} \ |w|^{2(N-1)}}{(N-2)!}\ e^{-N\ |w|^{2}}\right]\,.\end{split} \tag{4.7}\] The last term in the above expression vanishes, because \(|w|<1\) and \(N^{N-2}/(N-2)!\to 0\) as \(N\to\infty\) via Stirling's formula. From Remark 4.1 we deduce that the term containing the error function vanishes and the ratio of \(\Gamma\)-functions can be identified with \(\Theta^{(2)}_{N}(|z|^{2})\), which leaves us with an expression of the form \[\frac{1}{N}\mathcal{O}^{(\mathrm{GinOE},c)}_{N}(\sqrt{N}w)\to\frac{1}{\pi}\ \Theta^{(2)}_{N}(|w|^{2})\bigg{[}1-\frac{1}{N}-|w|^{2}\bigg{]}. \tag{4.8}\] Taking the limit straightforwardly results in \[\mathcal{O}^{(\mathrm{GinOE},c)}_{\mathrm{bulk}}(w)\equiv\lim_{N\to\infty} \frac{1}{N}\ \mathcal{O}^{(\mathrm{GinOE},c)}_{N}\left(\sqrt{N}w\right)=\frac{1}{\pi} \left(1-|w|^{2}\right)\ \Theta\left[1-|w|^{2}\right]\, \tag{4.9}\] in full accordance with the result of Chalker & Mehlig in the bulk of the GinUE [7]. We now proceed with calculating the large \(N\) asymptotic behaviour of the mean self-overlap at the edge, still keeping away from the depletion regime of the GinOE, providing a proof of Corollary 2.7. Proof.: For the edge of the spectrum we consider the scaling \[z=x+iy=\left(\sqrt{N}+\eta\right)e^{i\theta}=\left(\sqrt{N}+\eta\right)\cos \left(\theta\right)+i\left(\sqrt{N}+\eta\right)\sin\left(\theta\right)\, \tag{4.10}\] which implies that the imaginary part \(y\) is to be replaced by \[y=\left(\sqrt{N}+\eta\right)\sin\left(\theta\right)\stackrel{{ N\gg 1}}{{\approx}}\sqrt{N}\ \sin\left(\theta\right)\, \tag{4.11}\] with the condition that \(|\sin\left(\theta\right)|\sim\mathcal{O}(1)\) to avoid the depletion regime. Together with Remark 4.1, this implies immediately that the term containing the error function will again vanish in this scaling limit. Also, as \(N\) becomes large, the absolute value of \(z\) becomes \[|z|^{2}=\left(\sqrt{N}+\eta\right)^{2}=N+2\eta\ \sqrt{N}+\eta^{2}\stackrel{{ N\gg 1}}{{\approx}}N+2\eta\ \sqrt{N}. \tag{4.12}\] Recalling the finite \(N\) result in Eq. (2.10), the limiting mean self-overlap at the edge becomes \[\frac{1}{\sqrt{N}}\ \mathcal{O}^{(\mathrm{GinOE},c)}_{N}(z) =\frac{1}{\pi}\ \bigg{[}\ \frac{\Gamma\left(N-1,N+2\eta\ \sqrt{N}\right)}{\Gamma\left(N-1\right)}\bigg{(}-\frac{1}{\sqrt{N}}-2\eta \bigg{)} \tag{4.13}\] \[+\frac{1}{\sqrt{N}}\ \frac{1}{(N-2)!}\ \left(N+2\eta\ \sqrt{N}\right)^{N-1}\ e^{-N-2\eta\ \sqrt{N}}\bigg{]}\.\] Now, by invoking Eq. (4.3) in Remark 4.1 we see that the first term tends to \(-\eta\ \mathrm{erfc}\left(\sqrt{2}\eta\right)\). In the second term, after using the Stirling approximation \((N-2)!\sim\sqrt{2\pi}\ e^{-N}\ N^{N-3/2}\), we take the limit, straightforwardly arriving at \[\lim_{N\to\infty}\frac{1}{\sqrt{N}}\frac{(N+2\eta\sqrt{N})^{N-1}}{(N-2)!}e^{- N-2\eta\sqrt{N}}=\lim_{N\to\infty}\frac{1}{\sqrt{2\pi}}\left(1+\frac{2\eta}{ \sqrt{N}}\right)^{N-1}e^{-2\eta\sqrt{N}}=\frac{1}{\sqrt{2\pi}}e^{-2\eta^{2}}. 
\tag{4.14}\] Combining the two contributions thus gives the required result: \[\mathcal{O}^{(\mathrm{GinOE},c)}_{\mathrm{edge}}(\eta)=\frac{1}{\pi}\left( \frac{1}{\sqrt{2\pi}}\ e^{-2\eta^{2}}-\eta\ \mathrm{erfc}\left(\sqrt{2}\ \eta\right)\right). \tag{4.15}\] Finally we consider the depletion region covered by Corollary 2.8 and proceed to give its proof. Proof.: We consider the limiting behaviour of the mean self-overlap of eigenvectors associated with eigenvalues of the form \(z=x+i\xi\), where \(\xi\sim\mathcal{O}(1)\) as \(N\to\infty\). If \(x\sim\mathcal{O}(1)\), i.e. we stay close to the origin as \(N\to\infty\), then we have that \(|z|^{2}\sim\mathcal{O}(1)\). Recalling Eq. (2.10) and inserting into it that \(z=x+i\xi\), we find that \[\mathcal{O}_{N}^{(\text{GinOE},c)}(z) =\frac{1}{\pi}\ \left(1+\sqrt{\frac{\pi}{2}}\ \exp\left[2\xi^{2}\right]\ \frac{1}{2|\xi|}\ \text{erfc}\left(\sqrt{2}\ |\xi|\right)\right) \tag{4.16}\] \[\times\bigg{[}\ \frac{\Gamma\left(N-1,x^{2}+\xi^{2}\right)}{ \Gamma\left(N-1\right)}\bigg{[}N-1-x^{2}-\xi^{2}\bigg{]}+\frac{\left(x^{2}+ \xi^{2}\right)^{N-1}}{(N-2)!}\ e^{-x^{2}-\xi^{2}}\bigg{]}\.\] Multiplying with overall factor \(1/N\), noticing that the final term vanishes as \(N\to\infty\) and recalling Remark 4.1, one arrives at \[\mathcal{O}_{\text{depletion},\text{origin}}^{(\text{GinOE},c)}(\xi)=\frac{1 }{\pi}\left(1+\sqrt{\frac{\pi}{2}}\ \frac{1}{2|\xi|}\ e^{2\xi^{2}}\ \text{erfc}\left(\sqrt{2}\ |\xi|\right)\right)\, \tag{4.17}\] thus reproducing Eq. (2.13). If we scale the real part as \(x=\sqrt{N}\delta\) instead, the limiting self-overlap acquires an additional contribution from the \(\Gamma\)-function terms in the second line in Eq. (4.16). This can be seen starting from the finite \(N\) expression \[\mathcal{O}_{N}^{(\text{GinOE},c)}(z) =\frac{1}{\pi}\ \left(1+\sqrt{\frac{\pi}{2}}\ \exp\left[2\xi^{2}\right]\ \frac{1}{2|\xi|}\ \text{erfc}\left(\sqrt{2}\ |\xi| \right)\right) \tag{4.18}\] \[\times\bigg{[}\ \frac{\Gamma\left(N-1,N\delta^{2}+\xi^{2}\right)}{ \Gamma\left(N-1\right)}\bigg{[}N\left(1-\delta^{2}\right)-1-\xi^{2}\bigg{]}+ \frac{N^{N-1}\left(\delta^{2}+\xi^{2}\right)^{N-1}}{(N-2)!}\ e^{-N\ \delta^{2}-\xi^{2}}\bigg{]}\,\] then utilising Remark 4.1, which leads to an additional \(\Theta\left[1-\delta^{2}\right]\). Employing an overall scaling by \(1/N\) to ensure a non-trivial limit completes the proof and we get Eq. (2.14) for the strip in the depleted region, i.e. \[\mathcal{O}_{\text{depletion},\text{strip}}^{(\text{GinOE},c)}(\delta,\xi)= \mathcal{O}_{\text{depletion},\text{origin}}^{(\text{GinOE},c)}(\xi)\left(1 -\delta^{2}\right)\Theta\left[1-\delta^{2}\right]. \tag{4.19}\] ### Acknowledgements We would like to thank Gernot Akemann, Mario Kieburg and Wojciech Tarnowski for useful discussions. This research has been supported by the EPSRC Grant EP/V002473/1 "Random Hessians and Jacobians: theory and applications". ## Appendix A Consistency Check: \(\mathcal{O}(z)\) in the GinUE In the limit of \(N\to\infty\) it is natural to expect the statistics of complex eigenvalues and eigenvectors of the real and complex Ginibre ensembles to match in the bulk and at the edge of the disk. 
For the GinUE, the finite \(N\) joint probability density function of an eigenvalue \(z=x+iy\), with \(y\neq 0\), and the associated self-overlap of the eigenvectors, \(\mathcal{O}\) has been derived in a rather complicated form in [55], which reads: \[\mathcal{P}_{N}^{(\text{GinUE},c)}(\mathcal{O},z)=\frac{1}{\pi\Gamma(N)\Gamma (N-1)}\frac{e^{\frac{|z|^{2}}{\mathcal{O}}}}{\mathcal{O}^{3}}\left(\frac{ \mathcal{O}-1}{\mathcal{O}}\right)^{N-2}\left[D_{1}^{(N)}+|z|^{2}\frac{D_{2}^{ (N)}}{\mathcal{O}}+|z|^{4}\frac{d_{1}^{(N)}}{\mathcal{O}^{2}}\right]\,\] (A.1) where \[D_{1}^{(N)} =|z|^{4}(N-1)(N-2)d_{1}^{(N-1)}+\left[(N-1)N-2|z|^{2}(N+|z|^{2}) \right]d_{1}^{(N)}\] (A.2) \[\quad-|z|^{2}(N-2)(N-|z|^{2})d_{2}^{(N-1)}+|z|^{2}d_{2}^{(N)}\] \[D_{2}^{(N)} =2Nd_{1}^{(N)}-|z|^{2}(N-2)d_{2}^{(N-1)}\] (A.3) \[d_{1}^{(N)} =\Gamma\left(N-1,|z|^{2}\right)\Gamma\left(N+1,|z|^{2}\right)- \Gamma\left(N,|z|^{2}\right)\Gamma\left(N,|z|^{2}\right)\] (A.4) \[d_{2}^{(N)} =\Gamma\left(N-1,|z|^{2}\right)\Gamma\left(N+2,|z|^{2}\right)- \Gamma\left(N,|z|^{2}\right)\Gamma\left(N+1,|z|^{2}\right)\.\] (A.5) For completeness, we check below that its first moment produces the known mean self-overlap at finite \(N\). By definition, the mean self-overlap is the first moment of the above JPDF and therefore can be obtained by integration. Hence, we have \[\mathcal{O}_{N}^{(\text{GinUE,c})}(z)=\int_{1}^{\infty}d\mathcal{O}\ \mathcal{O}\int_{\mathbb{C}}d^{2}z\ \delta(z-\tilde{z})\ \mathcal{P}_{N}^{(\text{GinUE,c})}(\mathcal{O},z)\,\] (A.6) which when written explicitly has the following form: \[\mathcal{O}_{N}^{(\text{GinUE,c})}(z)= \frac{1}{\pi\Gamma(N)\Gamma(N-1)}\int_{1}^{\infty}d\mathcal{O}\ \frac{e^{\frac{|z|^{2}}{\mathcal{O}^{2}}}}{\mathcal{O}^{2}}\left(\frac{ \mathcal{O}-1}{\mathcal{O}}\right)^{N-2}\left[D_{1}^{(N)}+|z|^{2}\frac{D_{2}^ {(N)}}{\mathcal{O}}+|z|^{4}\frac{d_{1}^{(N)}}{\mathcal{O}^{2}}\right]\] (A.7) \[= \frac{1}{\pi\Gamma(N)\Gamma(N-1)}\left[D_{1}^{(N)}I_{1}+|z|^{2}D _{2}^{(N)}I_{2}+|z|^{4}d_{1}^{(N)}I_{3}\right]\,\] (A.8) where the integrals \(I_{1}\), \(I_{2}\) and \(I_{3}\) are defined below and can be readily evaluated in terms of the incomplete \(\Gamma\)-function: \[I_{1}= \int_{1}^{\infty}d\mathcal{O}\ e^{\frac{|z|^{2}}{\mathcal{O}}} \frac{(\mathcal{O}-1)^{N-2}}{\mathcal{O}^{N}}=\frac{e^{|z|^{2}}}{|z|^{2(N-1)} }\left(\Gamma(N-1)-\Gamma(N-1,|z|^{2})\right)\] (A.9) \[I_{2}= \int_{1}^{\infty}d\mathcal{O}\ e^{\frac{|z|^{2}}{\mathcal{O}}} \frac{(\mathcal{O}-1)^{N-2}}{\mathcal{O}^{N+1}}=\frac{\left(|z|^{2N}-e^{|z|^{ 2}}(N-|z|^{2}-1)\left[\Gamma(N)-\Gamma(N,|z|^{2})\right]\right)}{(N-1)|z|^{2N}}\] (A.10) \[I_{3}= \int_{1}^{\infty}d\mathcal{O}\ e^{\frac{|z|^{2}}{\mathcal{O}}} \frac{(\mathcal{O}-1)^{N-2}}{\mathcal{O}^{N+2}}=\frac{\Gamma(N-1)}{|z|^{2(N+1) }\Gamma(N+2)}\bigg{(}(N+2-N^{2})|z|^{2(N+1)}+(N+1)|z|^{2(N+2)}\] (A.11) \[+e^{|z|^{2}}\left(|z|^{4}-2(N-1)|z|^{2}+N(N-1)\right)\left[\Gamma (N+2)-(N+1)\Gamma(N+1,|z|^{2})\right]\bigg{)}\.\] To combine all the above results we utilised Mathematica software to manipulate the resulting expressions, finally finding that \[\mathcal{O}_{N}^{(\text{GinUE,c})}(z)=\frac{1}{\pi}\left[\frac{\Gamma(N,|z|^{2 })}{(N-1)!}(N-|z|^{2})+\frac{|z|^{2N}}{(N-1)!}e^{-|z|^{2}}\right]\,\] (A.12) which can be shown to be equivalent to the forms appearing in the literature, see e.g. [44]. The corresponding edge and bulk scaling limits can be straightforwardly recovered using essentially the same steps as employed for the GinOE case.
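A quick numerical cross-check of the finite-\(N\) GinOE result, Eq. (2.10), can complement the analytic consistency check above. The Python sketch below is purely illustrative and not part of the derivation: it assumes i.i.d. standard normal matrix entries (the normalization implicit in Eq. (3.37)), estimates the self-overlap-weighted density of complex eigenvalues in a small box around a test point by direct diagonalization, and compares it with the closed-form expression. All function names are ours, and only rough agreement should be expected, since the diagonal overlaps are heavy-tailed.

```python
import numpy as np
from scipy.special import erfc, gammaincc, gammaln

def overlap_formula(z, N):
    """Finite-N mean self-overlap O_N^{(GinOE,c)}(z) of Eq. (2.10)."""
    y, r2 = z.imag, abs(z) ** 2
    pref = (1.0 + np.sqrt(np.pi / 2) * np.exp(2 * y * y)
            * erfc(np.sqrt(2) * abs(y)) / (2 * abs(y))) / np.pi
    # Gamma(N-1, r2)/(N-2)! is the regularized upper incomplete gamma function
    bracket = gammaincc(N - 1, r2) * (N - 1 - r2) \
        + np.exp((N - 1) * np.log(r2) - r2 - gammaln(N - 1))
    return pref * bracket

def mc_estimate(z0, N, half=0.25, trials=20_000, seed=1):
    """Monte Carlo estimate of <(1/N) sum_n O_nn delta(z - z_n)> over a box."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(trials):
        G = rng.standard_normal((N, N))       # real Ginibre matrix, N(0,1) entries
        evals, vr = np.linalg.eig(G)          # columns of vr: right eigenvectors
        vl = np.linalg.inv(vr)                # rows of vl: biorthonormal left eigenvectors
        for n, zn in enumerate(evals):
            if abs(zn.imag) < 1e-8:           # keep complex eigenvalues only
                continue
            if abs(zn.real - z0.real) < half and abs(zn.imag - z0.imag) < half:
                # Chalker-Mehlig diagonal overlap O_nn = |L_n|^2 |R_n|^2
                acc += np.sum(np.abs(vr[:, n]) ** 2) * np.sum(np.abs(vl[n, :]) ** 2)
    return acc / (trials * N * (2 * half) ** 2)

if __name__ == "__main__":
    N, z0 = 12, 1.0 + 1.0j
    print("formula    :", overlap_formula(z0, N))
    print("Monte Carlo:", mc_estimate(z0, N))
```

Increasing `trials` or the matrix size tightens the comparison; the same routine with complex Gaussian entries can be used against the GinUE expression in Eq. (A.12).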
2303.13743
TEGLO: High Fidelity Canonical Texture Mapping from Single-View Images
Recent work in Neural Fields (NFs) learn 3D representations from class-specific single view image collections. However, they are unable to reconstruct the input data preserving high-frequency details. Further, these methods do not disentangle appearance from geometry and hence are not suitable for tasks such as texture transfer and editing. In this work, we propose TEGLO (Textured EG3D-GLO) for learning 3D representations from single view in-the-wild image collections for a given class of objects. We accomplish this by training a conditional Neural Radiance Field (NeRF) without any explicit 3D supervision. We equip our method with editing capabilities by creating a dense correspondence mapping to a 2D canonical space. We demonstrate that such mapping enables texture transfer and texture editing without requiring meshes with shared topology. Our key insight is that by mapping the input image pixels onto the texture space we can achieve near perfect reconstruction (>= 74 dB PSNR at 1024^2 resolution). Our formulation allows for high quality 3D consistent novel view synthesis with high-frequency details at megapixel image resolution.
Vishal Vinod, Tanmay Shah, Dmitry Lagun
2023-03-24T01:52:03Z
http://arxiv.org/abs/2303.13743v1
# TEGLO: High Fidelity Canonical Texture Mapping from Single-View Images ###### Abstract Recent work in Neural Fields (NFs) learn 3D representations from class-specific single view image collections. However, they are unable to reconstruct the input data preserving high-frequency details. Further, these methods do not disentangle appearance from geometry and hence are not suitable for tasks such as texture transfer and editing. In this work, we propose TEGLO (Textured EG3D-GLO) for learning 3D representations from single view in-the-wild image collections for a given class of objects. We accomplish this by training a conditional Neural Radiance Field (NeRF) without any explicit 3D supervision. We equip our method with editing capabilities by creating a dense correspondence mapping to a 2D canonical space. We demonstrate that such mapping enables texture transfer and texture editing without requiring meshes with shared topology. Our key insight is that by mapping the input image pixels onto the texture space we can achieve near perfect reconstruction (\(\geq 74\) dB PSNR at \(1024^{2}\) resolution). Our formulation allows for high quality 3D consistent novel view synthesis with high-frequency details at megapixel image resolution. ## 1 Introduction Reconstructing high-resolution and high-fidelity 3D consistent representations from single-view in-the-wild image collections is critical for applications in virtual reality, 3D content creation and telepresence systems. Recent work in Neural Radiance Fields (NeRFs) [7, 17, 6, 39] aim to address this by leveraging the inductive bias across a dataset of single-view images of class-specific objects for 3D consistent rendering. However, they are unable to preserve high frequency details while reconstructing the input data despite the use of SIREN [44] or positional encoding [33], in part due to the properties of MLPs that they use [10]. For arbitrary resolution 3D reconstruction from single-view images, these methods face several challenges such as image-space approximations that break multi-view consistency constraining the rendering resolution [6], requiring Pivotal Tuning Inversion (PTI) [41] or fine-tuning for reconstruction [17, 6, 45] and the inability to preserve high-frequency details [17, 6, 45, 39]. To address these limitations, we propose TEGLO (Textured EG3D-GLO) that uses a tri-plane representation [6] and Generative Latent Optimization (GLO) [4] based training to enable efficient and high-fidelity 3D reconstruction, and novel view synthesis at arbitrary image resolutions from single-view image collections of objects. Recent works disentangle texture from geometry [10, 55] Figure 1: **Teaser - Demonstrating TEGLO for high fidelity 3D reconstruction and multi-view consistent texture representation and texture editing from single-view image collections of objects.** and enable challenging tasks such as texture editing and texture transfer. However, they depend on large-scale textured mesh data for high-fidelity 3D reconstruction which is laborious, expensive and time intensive to capture. Further, the use of a capture environment may cause a dataset-shift leading to generalization issues in downstream tasks, and the data use may require custom licensing. All of these factors limit access from the broader research community. This motivates the need for a method to learn textured 3D representations from single-view in-the-wild images of objects. 
However, the task of disentangling texture and 3D geometry from in-the-wild image collections is a formidable challenge due to the presence of wide variations in poses, partial views, complex details in appearance, geometry, noise _etc._ in the given image collection. Inspired by surface fields [16], TEGLO leverages the 3D surface points of objects extracted from a NeRF to learn dense correspondences via a canonical coordinate space to enable texture transfer, texture editing and high-fidelity single-view 3D reconstruction. Our key insight is that by disentangling texture and geometry using the 3D surface points of objects to learn a dense correspondence mapping via a 2D canonical coordinate space, we can extract a texture for each object. Then, by using the learned correspondences to map the pixels from the input image of the object onto the texture, we enable preserving high-frequency details. As expected, copying the input image pixels onto the texture accurately allows near perfect reconstruction while preserving high-fidelity multi-view consistent representation with high-frequency details. In this work, we present TEGLO, consisting of a tri-plane and GLO-based conditional NeRF and a method to learn dense correspondences to enable challenging tasks such as texture transfer, texture editing and high-fidelity 3D reconstruction even at large megapixel resolutions. We also show that TEGLO enables single-view 3D reconstruction with no constraints on resolution by inverting the image into the latent table without requiring PTI [41] or model fine-tuning. We present an overview of our final model in Fig.(2): TEGLO takes a single-view image and its approximate camera pose to map the pixels onto a texture. Then, to render the object from a different view, we extract the 3D surface points from the trained NeRF and use the dense correspondences to obtain the color for each pixel from the mapped canonical texture. Optionally, TEGLO can take texture edits and transfer textures across objects. In summary, our contributions are: 1. A framework for effectively mapping the pixels from an in-the-wild single-view image onto a texture to enable high-fidelity 3D consistent representations preserving high-frequency details. 2. A method for extracting canonical textures from single-view images enabling tasks such as texture editing and texture transfer for NeRFs. 3. Demonstrating that we can effectively map the single-view image pixels to canonical texture space while preserving 3D consistency and so achieving near perfect reconstruction (\(\geq 74\) dB PSNR at \(1024^{2}\) resolution). ## 2 Related Work **3D-aware generative models.** Learning 3D representations from multi-view images with camera poses have been extensively studied since the explosion of Neural Radiance Fields (NeRFs) [33, 46, 58, 2, 59, 17]. However, these methods require several views and learn a radiance field for a single scene. RegNeRF [36] reduces the need from several views to only a handful, however, the results have several artifacts. Recently, several works learn 3D representations from single-view images [7, 6, 28, 45, 39, 60]. Further, [48, 47, 49, 24] enable multi-view consistent editing, however, they are limited by the rendering resolution. Recent work propose single image 3D consistent novel view synthesis [56, 29, 18, 51], however they are not yet suitable for texture representation. 
While point cloud based diffusion models [57, 35] enable learning 3D representations, they have limited applicability in textured 3D generation and high fidelity novel view synthesis. In this work, we show that TEGLO learns textured 3D representations from class-specific single-view image collections. **Texture representation.** Template based methods [38, 3, 11, 20] deform a template mesh prior for 3D representations and are hence restricted in the topology they can represent. Texture Fields [37] enable predicting textured 3D models given an image and a 3D shape, but are unable to represent high-frequency details. While NeuTex [52] enables texture representation, it does not allow multi-view Figure 2: **Overview** - TEGLO enables 3D reconstruction and texture representation from single-view image collections of objects. consistent texture editing at the desired locations due to a contorted UV mapping [55]. NeuMesh [55] learns mesh representations to enable texture transfer and texture editing using textured meshes. However, it performs mesh-guided texture transfer and requires spatial-aware fine-tuning for mesh-guided texture edits. While GET3D [15] learns textured 3D shapes by leveraging tri-plane based geometry and texture generators, it requires 2D silhouette supervision and is limited to synthetic data. AUVNet [10] represents textures from textured meshes by learning an aligned UV mapping and demonstrates texture transfer. However, it depends on textured mesh data and requires multiple networks to enable single-view 3D reconstruction. In contrast, TEGLO learns textured 3D consistent representations from single-view images by inverting the image into the latent table. **Dense correspondences.** Previous work in dense correspondence learning involve supervised [13, 27] or unsupervised [54, 53] learning methods. CoordGAN [34] learns dense correspondences by extracting each image as warped coordinate frames transformed from correspondence maps which is effective for 2D images. However, CoordGAN is unable to learn 3D correspondences. AUVNet [10] establishes dense correspondences across 3D meshes via a canonical UV mapping and asserts that methods that do not utilize color for dense correspondence learning [14, 30] may have sub-par performance in texture representation. ## 3 Proposed Method Given a collection of single-view in-the-wild images of objects and their approximate camera poses, TEGLO aims to learn a textured 3D representation of the data. TEGLO consists of two stages: 3D representation learning and dense correspondence learning. TEGLO Stage-1 consists of a conditional NeRF leveraging a Tri-Plane representation and an auto-decoder training regime based on generative latent optimization (GLO) [4] for 3D reconstruction of the image collection. TEGLO Stage-2 uses a dataset rendered using TEGLO Stage-1 consisting of the geometry from five views of an object and the optimized latent code. TEGLO Stage-2 uses the 3D surface points from the rendered dataset to learn dense pixel-level correspondences via a 2D canonical coordinate space. Then, the inference stage uses the learned dense correspondences to map the image pixels from the single-view input image onto a texture extracted from TEGLO-Stage 2. As a result, TEGLO effectively preserves high frequency details at an unprecedented level of accuracy even at large megapixel resolutions. 
TEGLO disentangles texture and geometry enabling texture transfer (Fig.(12)), texture editing (Fig.(11)) and single view 3D reconstruction without requiring fine-tuning or PTI (Fig.(9)). ### TEGLO Stage 1: 3D representation **Formulation.** We denote the single-view image collection (\(\mathcal{I}\)) with class specific objects as \(\{o_{0},o_{1},...,o_{n}\}\in\mathcal{I}\). For learning 3D representations, TEGLO employs a generative latent optimization (GLO) based auto-decoder training, where NeRF is conditioned on an image specific latent code \(\{w_{0},w_{1},...,w_{n}\}\in\mathcal{R}^{D}\) to effectively reconstruct the image without requiring a discriminator. **Network architecture.** The NeRF model \(\mathcal{N}\) is represented by TEGLO Stage-1 in Fig.(3). The model \(\mathcal{N}\) passes the input conditioning latent \(w_{i}\) to a set of CNN-based synthesis layers [23] whose output feature maps are used to construct a k-channel tri-plane. The sampled points on each ray are used to extract the tri-plane features and aggregate the k-channel features. Then the tri-plane decoder MLP outputs the scalar density \(\sigma\) and color which are alpha-composited by volume rendering to obtain the RGB image. Volume rendering along camera ray \(r(t)=O+td\) is: Figure 3: **Architecture** - TEGLO Stage-1 (left) uses a tri-plane and GLO based conditional NeRF to learn a per-object table of latents to reconstruct the single-view image collection. TEGLO Stage-2 (right) learns dense correspondences via a 2D canonical coordinate space. \[\mathcal{C}_{\text{NeRF}}(r,w)=\int_{b_{n}}^{b_{f}}T(t,w)\sigma(r(t),w)\textbf{c}(r (t),\textbf{d},w)dt \tag{1}\] \[\text{where}\;\;T(t,w)=\;\text{exp}\;\left(-\int_{b_{n}}^{b_{f}}\sigma(r(s),w) \right)ds\] Here, the radiance values can be replaced with the depth \(d(x)\) or pixel opacity to obtain the surface depth. During inference, the surface depth map and 2D pixel coordinates are used to obtain the 3D surface points via back-projection. The surface normals can be computed as the first derivative of the density \(\sigma\) with respect to the input as follows: \[\widehat{n}(r,w)=-\int_{b_{n}}^{b_{f}}T(t,w)\;\sigma(r(t),w)\;\nabla_{r(t)}( \sigma(r(t),w))dt\] \[n(r,w)=\frac{\widehat{n}(r,w)}{||\;\widehat{n}(r,w)\;||_{2}} \tag{2}\] Thus from an inference step, an RGB image, surface depth map, 3D surface points and the surface normals of the object instance can be obtained. In Fig.(4), we show the sample reconstruction results for \(\mathcal{N}\) on the CelebA-HQ, AFHQv2 and ShapeNet-Cars datasets. In Fig.(5) we show qualitative results for novel view synthesis with \(\mathcal{N}\) trained on SRN-Cars and evaluated on a held-out set of views. Since SRN-Cars is a multi-view dataset, we compare the rendered novel views with their corresponding ground-truth views. **Losses.**\(\mathcal{N}\) is trained by jointly reconstructing the image and simultaneously optimizing a latent (\(w_{i}\)). As noted in [39], this enables the training loss to be enforced on individual pixels enabling training and inference at arbitrary image resolutions. As depicted in TEGLO Stage-1 in Fig.(3), three losses are minimized to train \(\mathcal{N}\): \(\mathcal{L}_{\text{RGB}}\) is the \(\mathcal{L}_{1}\) reconstruction loss between the pixels from the rendered image and the corresponding pixels from the ground truth image for the object (\(o_{i}\)). 
The \(\mathcal{L}_{\text{Perceptual}}\) loss is the LPIPS (Learned Perceptual Image Patch Similarity) loss between rendered image and the ground truth image view. The \(\mathcal{L}_{\text{Camera}}\) is the camera prediction \(\mathcal{L}_{1}\) loss between the output of the light-weight camera encoder and the ground-truth camera parameters for the camera pose in order to learn 3D consistent representation of the object \((o_{i}\in\mathcal{I})\). \[\mathcal{L}_{\mathcal{N}}=\mathcal{L}_{\text{RGB}}+\mathcal{L}_{\text{ Perceptual}}+\mathcal{L}_{\text{Camera}} \tag{3}\] To train \(\mathcal{N}\), we use the single-view image dataset and the approximate pose for each \(o_{i}\in\mathcal{I}\) (Sec.(4)). We train the model for 500K steps using the Adam optimizer [25] on 8 NVIDIA V100 (16 GB) taking 36 hours to complete. **Design choices.** As noted in Sec.(1), EG3D [6] shows medium resolution (\(512^{2}\)) capacity while using image-space approximations in the super-resolution module which negatively affects the geometric fidelity [45]. While Epi-GRAF [45] uses a patch-based discriminator for pure 3D generation, it is still prone to issues in scaling and training with multi-resolution data. Moreover, adversarial training using discriminators leads to training instability. Different from EG3D and EpiGRAF that use an adversarial training paradigm, \(\mathcal{N}\) uses a GLO-based auto-decoder training paradigm which jointly optimizes a latent representation and reconstructs the image enabling arbitrary resolution synthesis - even at large megapixel resolutions - without the constraints of a discriminator. Hence, \(\mathcal{N}\) enables 3D representations with geometric fidelity while also benefiting from an efficient tri-plane based representation. EG3D [6] requires camera pose conditioning for the generator and discriminator to establish multi-view consistency. The limitation of a pose-conditioned generator is that it does not completely disentangle the pose from appearance which leads to artifacts such as degenerate solutions (2D billboards), or expressions such as the eye or smile following the camera. Since \(\mathcal{N}\) optimizes a latent representation of an object to reconstruct it, we observe that the generator does not require camera pose conditioning and simply using a light-weight camera predictor network and training with a camera prediction loss (\(\mathcal{L}_{\text{Camera}}\)) is sufficient to learn 3D consistent representations. ### TEGLO Stage 2: Dense correspondences **Formulation.** We render a multi-view dataset (\(\mathcal{D}\)) using \(\mathcal{N}\) trained on single-view image collections for the task of texture representation. We denote each object \(e_{i}\in\mathcal{D}\) Figure 4: **Rendering the dataset for TEGLO Stage-2** - Rendering multiple views of images, surface normals, depth maps and 3D surface points from CelebA-HQ, AFHQv2-Cats and ShapeNet-Cars for learning dense correspondences in TEGLO Stage-2. Figure 5: **Novel view synthesis** - Results for ShapeNet-Cars data. comprising of five views: \(e_{i}=\{v_{f},v_{l},v_{r},v_{t},v_{b}\}\) where \(v\) denotes the view, and the sub-scripts (\(j\) for all \(v_{j}\)) denote frontal, left, right, top and bottom poses respectively (refer Fig.(4)). 
In \(\mathcal{D}\), each view \(v_{j}\in e_{i}\) includes the depth map (\(\widehat{d_{j}}\)), RGB image (\(\widehat{r_{j}}\)), surface normals (\(\widehat{s_{j}}\)), 3D surface points (\(\widehat{p_{j}}\)), and the optimized latent, \(w_{i}\), which is identical for views of \(e_{i}\) as it is independent of camera pose (Fig.(4)). For TEGLO Stage 2, we use \(\{\{\widehat{r_{j}},\widehat{s_{j}},\widehat{p_{j}}\}\in v_{j},w_{i}\}\in e_{i}\}\). Learning dense pixel-level correspondences across multiple views of an object is the task of locating the same 3D coordinate point in a canonical coordinate space. Inspired by surface fields [16], we aim to learn dense correspondences using the 3D surface points extracted from \(\mathcal{N}\) by back-projecting the depth (\(\widehat{d_{j}}\)) and pixel coordinates. Inspired by CoordGAN [34] and AUVNet [10], we propose a dense correspondence learning network in TEGLO Stage-2 trained in an unsupervised manner learning an aligned canonical coordinate space to locate the same 3D surface point across different views (\(v_{j}\)) of the same object (\(e_{i}\)). **Network architecture.** TEGLO Stage-2 is represented in Fig.(3). The architecture consists of a latent mapping network (\(\mathcal{L}\)), a dense correspondence network (\(\mathcal{M}\)) and a basis network (\(\mathcal{C}\)) - all of which are MLP networks. The 3D surface points (\(\widehat{p_{j}}\)) from \(v_{j}\in e_{i}\)) are mapped to a 2D canonical coordinate space conditioned on a shape code mapped from the optimized latent \(w_{i}\) for \(e_{i}\). We use a Lipschitz regularization [31] for each MLP layer in the dense correspondence network (\(\mathcal{M}\)). The latent mapping network (\(\mathcal{L}\)) is a set of MLP layers that takes the \(w_{i}\)-latent for \(e_{i}\) as input and predicts a shape-code for conditioning the dense correspondence network \(\mathcal{M}\), and coefficients for the deformed basis. Previous work [50, 10] show that if the input is allowed to be represented as a weighted sum of basis images, _i.e_. to obtain a deformed basis before decomposition, then the 2D canonical coordinate space will be aligned. The basis network (\(\mathcal{C}\)) is similar to [10] and uses the predicted coefficients to decompose the deformed coordinate points. Thus, \(\mathcal{M}\) maps the 3D surface points to an aligned 2D canonical coordinate space, enabling the learning of dense correspondences using the \(p_{j}\in\mathcal{S}\) extracted from \(\mathcal{N}\). Next, the basis network takes the 2D canonical coordinates as input to predict the deformed basis \(\mathcal{B}\). Then, \(\mathcal{B}\) is weighted with the predicted coefficients to decompose the basis into the 3D surface points (\(p_{j}\)), surface normals (\(s_{j}\)) and color (\(r_{j}\)). **Losses.** TEGLO Stage-2 is trained using three \(\mathcal{L}_{2}\) reconstruction losses: the \(\mathcal{L}_{\text{RGB}}\) loss between the rendered RGB image \(\widehat{r_{j}}\) and the predicted RGB image \(r_{j}\); the \(\mathcal{L}_{\text{Normals}}\) loss between the rendered surface normals \(\widehat{s}_{j}\) and the predicted surface normals \(s_{j}\); \(\mathcal{L}_{\text{Coord}}\) loss between the extracted 3D surface points \(\widehat{p}_{j}\) and the predicted 3D surface points \(p_{j}\). 
Hence, the total training loss for TEGLO Stage-2 is: \[\mathcal{L}_{\text{Stage2}}=\mathcal{L}_{\text{RGB}}+\mathcal{L}_{\text{ Normals}}+\mathcal{L}_{\text{Coord}} \tag{4}\] To train TEGLO Stage-2, we use the rendered dataset \(\mathcal{D}\) consisting of 1000 objects with five views per object and the optimized latent for each identity. The networks are trained using \(\mathcal{L}_{\text{Stage2}}\) loss for 1000 epochs using the Adam [25] optimizer to learn dense correspondences across \(e_{i}\in\mathcal{D}\). **Design choices.** We use the optimized \(w\)-latent from \(\mathcal{N}\) for learning the shape code and coefficients for TEGLO Stage-2 because it represents the 3D geometry and appearance information for object (\(e_{i}\)) independent of camera pose. We observe that using a Lipschitz regularization for every MLP layer in \(\mathcal{M}\) suitably regularizes the network to deform the input surface points \(\widehat{s}_{j}\). Interestingly, our experiments show that simply reconstructing the 3D surface points instead of the color, surface points and surface normals also leads to learning reasonable dense pixel-level correspondences. We show qualitative results for TEGLO Stage-2 trained using only \(\mathcal{L}_{\text{Coord}}\) loss in Fig.(8) as TEGLO-3DP. ### Inference. **Extracting the texture.** After training TEGLO Stage-2, we use the learned dense correspondences to extract a texture map for every object \(o_{i}\in\mathcal{I}\). We use the pose of the target image \(o_{i}\) to extract the 3D surface points from \(\mathcal{N}\) and use it to map the image pixels to the 2D canonical coordinate space. We denote this as texture \(t_{GT}\). Similarly, we use \(\mathcal{M}\) to map the respective RGB values from Figure 6: **Inference** - TEGLO texture extraction for texture transfer and editing. Red arrows indicate the use of a K-d tree to store the texture. Blue arrows indicate the use of input image pixels. Figure 7: **Interpolating textures with sparse “holes” - Depicting the KD-Tree and Natural Neighbor Interpolation (NNI) to interpolate “holes” (if any) in the texture for novel view synthesis.** \(\{v_{f},v_{l},v_{r},v_{t},v_{b}\}\in e_{i}\) using the corresponding 3D surface points (\(s_{j}\)) from all five views to the 2D canonical coordinate space. We denote this as texture \(t_{\text{views}}\). Thus, textures \(t_{GT}\) and \(t_{\text{views}}\) store a mapping _i.e_. the canonical coordinate point and the corresponding RGB values. The procedure is represented in Fig.(6) and textures are depicted in Fig.(10) and Fig.(11). In Fig.(6) \(t_{O}\) represents the texture obtained by combining \(t_{GT}\) and \(t_{\text{views}}\). We store this mapping in a K-d tree which enables us to index into the textures using accurate floating point indices to obtain the RGB values. The K-d tree allows querying with canonical coordinate points to extract multiple neighbors and enables TEGLO to be robust to sparse "holes" in the texture as depicted in Fig.(7). **Novel view synthesis.** For rendering novel views of \(o_{i}\), we extract the 3D surface point for the pose from \(\mathcal{N}\) and obtain the canonical coordinates from \(\mathcal{M}\). For each 2D canonical coordinate point \(c_{k}\), we query the K-d tree for three natural neighbors and obtain indices for the neighbors which are used to obtain the respective RGB values. 
Natural Neighbor Interpolation (NNI) [43] enables fast and robust reconstruction of a surface based on a Dirichlet test-solation - unique for every set of query points - to provide an unambiguous interpolation result. We simplify the natural neighbor interpolation (NNI) based only on the distances of the points \(c_{k}\) in the 2D canonical coordinate space to obtain the RGB values from the stored texture. The robust and unambiguous interpolation enables TEGLO to effectively map the ground-truth image pixels from the input dataset \(\mathcal{I}\) onto the geometry for novel view synthesis. To extract the Surface Field \(\mathcal{S}\), we render \(e_{i}\) from five camera poses causing potential camera pose biases that may lead to sparse "holes" in the texture. Our formulation uses the K-d tree and NNI to interpolate and index into textures with sparse "holes". In Fig.(7), each cell in the 5x6 grid represents a discrete pixel in the texture space and the red dot represents a canonical coordinate point. There are three issues that may arise: 1. The canonical coordinate points may not be aligned to the pixel centers and storing them in the discretized texture space may lead to imprecision. 2. There may be multiple canonical coordinates mapped to a discrete integral pixel wherein some coordinates may need to be dropped for an unambiguous texture indexing - leading to loss of information. 3. Some pixels may not be mapped to by any canonical coordinates, creating a "hole" in discretized space. This is represented by "\(\mathbf{X}\)" in the grid in Fig.(7). K-d tree allows extracting multiple neighbors by querying with canonical coordinate points and also enables indexing the texture using floating point values. Hence, using a K-d tree to store the texture helps address (1) and (2). Further, using a K-d tree in conjunction with Natural Neighbor Interpolation (NNI) effectively addresses (3). We include more details in the supplementary material. **Texture editing.** Texture editing is represented by \(t_{\text{Edit}}\) in Fig.(6). We create the edits on a blank image the same size as that of \(t_{\text{O}}\) and denote it as \(r_{\text{edit}}\). The edit image \(r_{\text{edit}}\) is taken to be in the canonical coordinate space and hence directly indexed into the K-d tree to be overlay on \(t_{O}\). Note that the we do not apply any constraint on the texture space represented and hence the texture may be visually aligned to a non-frontal canonical pose as is the case in Fig.(10) and Fig.(11). The final texture with the edit \(t_{\text{Edit}}\) is created by combining \(t_{O}\) and \(r_{\text{edit}}\). Qualitative results for texture edits are depicted in Fig.(1) and Fig.(11). Figure 8: **Qualitative results** - Comparison with relevant 3D-aware generative baseline methods at \(256^{2}\) resolution for CelebA-HQ. ## 4 Experiments and Results **Datasets.** We train TEGLO with single-view image datasets such as FFHQ [23], CelebA-HQ [32, 21] and AFHQv2-Cats [22, 12]. To obtain the approximate camera pose, we follow [39] by first using an off-the-shelf face landmark predictor MediaPipe Face Mesh [1] to extract landmarks appearing at consistent locations. Then, we use a shape-matching least-squares optimization to align the landmarks with 3D canonical landmarks to obtain the approximate pose. We also use a multi-view image dataset - ShapeNet-Cars [9, 8] with results in Fig.(1) and Table.(3). tion on samples from CelebA-HQ are in Fig.(9). 
Previous work such as AUVNet [10] require additional training of a ResNet-18 [19] for the image encoder and IM-Net [11] for the shape decoder followed by ray marching to obtain the mesh to represent the image while methods such as EG3D [6] require PTI (Pivotal Tuning Inversion [41]) fine-tuning to represent the image. For single-view textured 3D representation in TEGLO, we simply invert the image into the latent and do not require any fine-tuning. Reconstructing single-view images at arbitrary resolutions while preserving 3D consistency is very desirable for many applications. However, EG3D [6] has a limitation in performing this task because its generator is conditioned on the camera intrinsic and extrinsic parameters, leading to a "baked-in" training image resolution. As TEGLO does not condition on the camera, it enables single-view 3D reconstruction and novel view synthesis at arbitrary resolutions without requiring re-training for different resolutions. **Texture editing.** In Sec.(3.3), we describe the procedure to edit textures using TEGLO. The qualitative results with texture editing for CelebA-HQ [32, 21] are depicted in Fig.(11) and for AFHQv2-Cats and ShapeNet-Cars in Fig.(1). Our edits are class-specific and target image agnostic because edits are performed in the canonical space. Previous work, NeuMesh [55] requires spatial-aware fine-tuning and mesh guided texture editing for precise transfer. However, TEGLO simply maps a texture edit image of the same size as the texture into the K-d tree with an overlay of the pixels in the earlier texture (obtaining \(t_{\text{Edit}}\)) - precisely transferring the edit without requiring any optimization strategies. Further results are in the supplementary. **Texture transfer.** As discussed in Sec.(3.3), the extracted textures are aligned in a canonical coordinate space and allows transferring textures across different geometries. We demonstrate texture transfer across different geometries in Fig.(12). Here, row-1 represents the target image from CelebA-HQ for the geometry learned by TEGLO Stage-1 and column-1 represents the textures (stored in a K-d tree) extracted after TEGLO Stage-2. We observe realistic texture transfer despite arbitrary camera biases in rendering \(\mathcal{D}\) mitigated by using the K-d tree and NNI. (Fig.(7)). ## 5 Discussion While TEGLO enables near perfect 3D reconstruction of objects from single-view image collections, it requires non-trivial computational resources to train TEGLO Stage-1, render a dataset with five views of the object, train TEGLO Stage-2 and then use the input image surface points to extract the texture. We hope that future work can simplify the framework with an elegant end-to-end formulation. A trivial next step would be to use StyleGANv2 [23] to generate high quality textures for texture transfer and editing. TEGLO could enable 3D full-body avatars from single views with an unprecedented amount of detail preservation extending methods such as PIFu [42]. Future work could explore representing light stage data via NeRFs with high frequency details across different camera capture angles in an illumination invariant manner using 3D surface points. ## 6 Conclusion In this work, we present TEGLO for high-fidelity canonical texture mapping from single-view images enabling textured 3D representations from class-specific single-view image collections. 
TEGLO consists of a conditional NeRF and a dense correspondence learning network that together enable texture editing and texture transfer. We show that by effectively mapping the input single-view image pixels onto the texture, we can achieve near perfect reconstruction (\(\geq 74\) dB PSNR at \(1024^{2}\) resolution). TEGLO allows single-view 3D reconstruction by inverting the single-view image into the latent table without requiring any PTI or fine-tuning. Figure 11: **Texture editing - Qualitative results for texture edits.** Figure 12: **Texture transfer - Qualitative results for texture transfer with CelebA-HQ. (Top row shows CelebA-HQ image targets).**
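To make the canonical-texture storage and lookup described in Sec. 3.3 concrete, the following minimal Python sketch illustrates the idea. It is not the authors' implementation: the function names, the inverse-distance blend (a simplified stand-in for the distance-based NNI used in the paper), and the toy data are all ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_texture(canon_uv, rgb):
    """Store scattered (canonical 2D coordinate -> RGB) samples in a K-d tree."""
    return cKDTree(np.asarray(canon_uv)), np.asarray(rgb)

def lookup(tree, rgb, query_uv, k=3, eps=1e-8):
    """Color novel-view surface points: query the k nearest stored samples and
    blend them with inverse-distance weights (simplified NNI stand-in)."""
    dist, idx = tree.query(np.asarray(query_uv), k=k)   # shapes (P, k)
    w = 1.0 / (dist + eps)
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('pk,pkc->pc', w, rgb[idx])          # (P, 3) blended colors

# Toy usage: 10k random "pixels" mapped into canonical space, then re-queried.
rng = np.random.default_rng(0)
uv = rng.uniform(size=(10_000, 2))       # canonical coordinates
col = rng.uniform(size=(10_000, 3))      # RGB values in [0, 1]
tree, col = build_texture(uv, col)
print(lookup(tree, col, uv[:5]).shape)   # (5, 3); near-exact colors at stored points
```

Because the tree is queried with floating-point canonical coordinates, no discretization of the texture is required, which is what allows sparse "holes" to be filled from neighboring samples rather than left empty.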
2306.11952
First-principles prediction of structural, magnetic properties of Cr-substituted strontium hexaferrite, and its site preference
To investigate the structural and magnetic properties of Cr-doped M-type strontium hexaferrite (SrFe$_{12}$O$_{19}$) with x = (0.0, 0.5, 1.0), we perform first-principles total-energy calculations relied on density functional theory. Based on the calculation of the substitution energy of Cr in strontium hexaferrite and formation probability analysis, we conclude that the doped Cr atoms prefer to occupy the 2a, 12k, and 4f$_{2}$ sites which is in good agreement with the experimental findings. Due to Cr$^{3+}$ ion moment, 3 {$\mu_B$}, smaller than that of Fe$^{3+}$ ion, 5 {$\mu_B$}, saturation magnetization (M$_{s}$) reduce rapidly as the concentration of Cr increases in strontium hexaferrite. The magnetic anisotropic field $\left(H_{a}\right)$ rises with an increasing fraction of Cr despite a significant reduction of magnetization and a slight increase of magnetocrystalline anisotropy $\left(K_{1}\right)$.The cause for the rise in magnetic anisotropy field $\left(H_{a}\right)$ with an increasing fraction of Cr is further emphasized by our formation probability study. Cr$^{3+}$ ions prefer to occupy the 2a sites at lower temperatures, but as the temperature rises, it is more likely that they will occupy the 12k site. Cr$^{3+}$ ions are more likely to occupy the 12k site than the 2a site at a specific annealing temperature (>700{\deg}C).
Binod Regmi, Dinesh Thapa, Bipin Lamichhane, Seong-Gon Kim
2023-06-21T00:40:51Z
http://arxiv.org/abs/2306.11952v1
First-principles prediction of structural, magnetic properties of Cr-substituted strontium hexaferrite, and its site preference ###### Abstract To investigate the structural and magnetic properties of Cr-doped M-type strontium hexaferrite (SrFe\({}_{12}\)O\({}_{19}\)) with x = (0.0, 0.5, 1.0), we perform first-principles total-energy calculations relied on density functional theory. Based on the calculation of the substitution energy of Cr in strontium hexaferrite and formation probability analysis, we conclude that the doped Cr atoms prefer to occupy the 2a, 12k, and 4f\({}_{2}\) sites which is in good agreement with the experimental findings. Due to Cr\({}^{3+}\) ion moment, 3 \(\mu_{B}\), smaller than that of Fe\({}^{3+}\) ion, 5 \(\mu_{B}\), saturation magnetization (M\({}_{s}\)) reduce rapidly as the concentration of Cr increases in strontium hexaferrite. The magnetic anisotropic field (\(H_{a}\)) rises with an increasing fraction of Cr despite a significant reduction of magnetization and a slight increase of magnetocrystalline anisotropy (\(K_{1}\)).The cause for the rise in magnetic anisotropy field (\(H_{a}\)) with an increasing fraction of Cr is further emphasized by our formation probability study. Cr\({}^{3+}\) ions prefer to occupy the 2a sites at lower temperatures, but as the temperature rises, it is more likely that they will occupy the 12k site. Cr\({}^{3+}\) ions are more likely to occupy the 12k site than the 2a site at a specific annealing temperature (>700degC). + Footnote †: preprint: AIP/1203 ## I Introduction Hexaferrites, also known as hexagonal ferrites or hexagonal ferrimagnets, are a class of magnetic materials that have been of great interest to researchers since their discovery in the 1950's. These hexaferrites are found in numerous types such as M, Y, Z, W, X, U-type commonly doped with zinc, strontium, nickel, aluminum, and magnesium. The most common properties to all hexaferrites include that all are ferrimagnetic, their properties of magnetism are based on the crystal structure, and they take different amounts of energy to magnetize in a specific direction within the crystal because of spin-orbit interaction [1]. Particularly, we are interested in M-type strontium hexaferrite (SrFe\({}_{12}\)O\({}_{19}\), SFO) that falls to space group \(P63/mmc\) which has a crystal structure of hexagonal magnetoplumbite. The unit cell of SFO having two formula units is presented in Fig(1). The iron ions in this structure are coupled in a tetrahedral, trigonal bipyramidal, and octahedral manner by oxygen ions.The magnetic property is retained in SFO mainly due to the occupancy of Fe\({}^{+3}\) ions in five inequivalent sites (namely 2a, 2b, 4f\({}_{1}\), 4f\({}_{2}\), and 12k), three octahedral sites (namely 2a, 12k, and 4f\({}_{2}\)), one trigonal bipyramidal site (2b), and one tetrahedral site (4f\({}_{1}\)). However, the range (degree) of magnetic properties will be influenced by the shape and size of material particles mainly in the context of thin films, nanoparticles [2; 3; 4; 5]. In SFO, there is an involvement of interactions between the moments or moments with lattice ions, tend to contribute anisotropic energy which is termed as magnetocrystalline anisotropy (MA). In a more explicit way, it is the dependence of the magnetic properties on the applied field direction relative to the crystal lattice. 
Magnetocrystalline anisotropy energy (MAE), an integral property of a ferromagnetic crystal, is the energy difference to magnetize a crystal along easy and hard direction of magnetization [2; 6]. The primary source of MA is spin-orbit coupling (SOC). Because of this coupling, orbitals of electrons are coupled with the spin of electrons and follow the spin direction no matter how the magnetization changes Figure 1: (a) One double formula unit cell of SrFe\({}_{12}\)O\({}_{19}\). Small maroon spheres are O atoms, and two huge gray spheres are Sr atoms. Fe\({}^{3+}\) ions are represented by colored spheres encircled by polyhedra made of O atoms in a variety of inequivalent sites: \(2a\) (red), \(2b\) (blue), \(4f_{1}\) (green), \(4f_{2}\) (yellow), and \(12k\) (purple).(b) A schematic representation of the Fe\({}^{3+}\) ions of SrFe\({}_{12}\)O\({}_{19}\) in their lowest energy spin configuration. The local magnetic moment at each atomic location is represented by the arrows. its direction in space [6]. The anisotropy that arises on a crystal are mainly due to the shape of a magnetic particle at the quantum scale, atomic diffusion at sufficiently high temperature, and the interaction between a ferromagnetic and an antiferromagnetic materials [6; 7; 8; 9]. SFO is one of the best candidates among hexex ferrite groups owing to its industrial and electronic implications. In the beginning years, Technologies were motivated about SFO to make permanent magnets, recording media, electric motors because of its high saturation magnetization, large coercivity, optimum Curie temperature, supreme magnetocrystalline anisotropy, and more chemical stability. Nowadays, due to technological advancement, it is equally growing interest in the development of nano fibres, electronic components for mobile and wireless communications. Recently, researchers have been characterized Y, M, U, and Z ferrites as multiferroics even at room temperature. These multiferroics have a wide range of practical implications like multi-state memory elements, memory media, and novel functional sensors [10; 11; 2]. Several successful investigations have been done on SFO to understand its electronic structure by computational and experimental approach. To uplift the strength of magnetic and electric properties, researchers are substituted ions or pair of ions in a different concentration mainly on Fe sites of SFO. Majority of researchers have followed the non-magnetic ions substitution into Fe sites to enhance further the value of saturation magnetization (M\({}_{s}\)). In case of Zr-Cd substitution (SrFe\({}_{12-2x}\)(ZrCd)\({}_{x}\)O\({}_{19}\)), the value of M\({}_{s}\) augmented in the limit of concentration \(x=0.2\), whereas the value of coercivity declined with increasing concentration of Zr-Cd [12]. Substitution of Er-Ni pair in SFO showed the continuous rise in the value of M\({}_{s}\) and coercivity in accordance with the concentration [13]. However, the substitution of certain pairs like Zn-Nb, [14] Zn-Sn, [15; 16; 17] and Sn-Mg [18; 19] showed the increasing trend of M\({}_{s}\) and decreasing pattern of coercivity. In this study, we performed the first-principles total-energy calculations to analyze the link between site occupation and magnetic properties of substituted strontium hexaferrite, SrFe\({}_{12-x}\)Cr\({}_{x}\)O\({}_{19}\) with \(x=0.5\) and \(x=1.0\). Every configuration of substituted SFO appears with a particular probability. 
To determine the formation probabilities of its various configurations at a typical annealing temperature (1000 K), we used the Boltzmann distribution function. We show that our calculation predicts a decrease of the saturation magnetization (\(M_{s}\)) as well as a decrease in the magnetic anisotropy energy (MAE) of SrFe\({}_{12-x}\)(Cr)\({}_{x}\)O\({}_{19}\) at \(x=0.5\) and \(1.0\) compared to pure M-type SFO. This result is in good agreement with the experimental observations of Ghasemi et al. (2009) [19]. ## II Computational details We performed first-principles total-energy calculations for SrFe\({}_{12-x}\)(Cr)\({}_{x}\)O\({}_{19}\) at \(x=0.5\) and \(1.0\). In our calculations, a unit cell containing two formula units of SFO is used. The structural optimizations, total energies, and forces were obtained with density functional theory using projector augmented wave (PAW) potentials as implemented in VASP. Consistent with the ground-state ferrimagnetic spin ordering of Fe, all calculations were spin-polarized [3; 20]. The wave functions were expanded in plane waves with a \(520\,eV\) energy cut-off for both pristine and Cr-substituted SFO. A \(7\times 7\times 1\) Monkhorst-Pack k-mesh was used to sample the Brillouin zone, with a Fermi-level smearing of \(0.2\,eV\) applied through the Methfessel-Paxton method [21; 22]. The electronic relaxation was continued until the changes in the free energy and the band-structure energy were less than \(10^{-7}\,eV\). In addition, we fully optimized the structures by relaxing the ionic positions and cell shape until the change in total energy between two ionic steps was less than \(10^{-4}\,eV\). We used the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) to describe the electron exchange-correlation effect [23]. Furthermore, we employed the \(GGA+U\) method in the simplified rotationally invariant approach of Dudarev et al. to treat the localized \(3d\) electrons of Fe [24]. We set U\({}_{eff}=3.7\,eV\) for Fe, based on a previous study, and U\({}_{eff}=0\) for all other elements [25]. To evaluate the magnetocrystalline anisotropy energy, we first carried out an accurate collinear calculation of the ground state and then performed spin-orbit coupling calculations for two different spin orientations within a non-collinear setup. The substitution of foreign atoms on the five crystallographically inequivalent Fe sites can change the magnetic characteristics of SFO. When foreign atoms are substituted into an SFO unit cell, a variety of energetically distinct configurations arise. Since SFO is ferrimagnetic, the magnetism of substituted SFO depends strongly on the site preference of the substituted atoms. Understanding this site preference is therefore crucial for studying how substitution affects the magnetic characteristics. The preferred site of a substituted atom can be found by calculating the substitution energy. The substitution energy \(E_{\rm sub}[i]\) for configuration \(i\) at 0 K is given by \[E_{\rm sub}[i]=E_{\rm CSFO}[i]-E_{\rm SFO}-\sum_{\beta}n_{\beta}\epsilon_{\beta} \tag{1}\] where \(E_{\rm CSFO}[i]\) is the total energy per unit cell of Cr-substituted SFO in configuration \(i\), \(E_{\rm SFO}\) is the total energy per unit cell of pure SFO, and \(\epsilon_{\beta}\) is the total energy per atom of element \(\beta\) (\(\beta\) = Cr, Fe) in its most stable crystal structure.
\(n_{\beta}\) is the number of atoms of type \(\beta\) added or removed; if one atom is added, \(n_{\beta}=+1\), and when one atom is withdrawn, \(n_{\beta}=-1\). The calculation of Magnetic Anisotropy Energy (MAE) is important for understanding the preferred magnetization directions in a material. Mathematically, MAE is defined as the difference between the two total energies where the spin quantization axes are oriented along two distinct directions:[26] \[E_{a}=E_{(100)}-E_{(001)} \tag{2}\] where \(E_{(100)}\) is the total energy with the spin quantization axis in the magnetically hard axis and \(E_{(001)}\) is the total energy with the spin quantization axis in the magnetically easy axis. The total energies in Eq.(2) are computed by the non-self-consistent calculations, where the spin densities are kept constant. With the help of MAE, the uniaxial magnetic anisotropy constant, \(K_{1}\), can be computed as[27; 28] \[K_{1}=\frac{E_{a}}{V\sin^{2}\theta} \tag{3}\] where \(V\) is the equilibrium volume of the unit cell and \(\theta\) is the angle between the two spin quantization axis orientations (90\({}^{\circ}\) in the present scenario). The anisotropy field, \(H_{a}\), which is related to the coercivity can be expressed as[29] \[H_{a}=\frac{2K_{1}}{M_{s}} \tag{4}\] where \(K_{1}\) is the magnetocrystalline anisotropy constant and \(M_{s}\) is the saturation magnetization. When the difference in substitution energies \(E_{\mathrm{sub}}\) between different configurations is relatively small compared to the thermal energy at high annealing temperatures (\(\gtrsim\) 1000 K), the site preference of substituted atoms in hexaferrite can change. This change in site occupation preference can be described using the Maxwell-Boltzmann distribution, which relates to the formation probability. The site occupation probability or the formation probability \(P_{i}(T)\) of configuration \(i\) at temperature \(T\) is given by \[P_{i}(T)=\frac{g_{i}\exp{(-\Delta G_{i}/k_{B}T)}}{\sum_{j}g_{j}\exp{(-\Delta G _{j}/k_{B}T)}}, \tag{5}\] \[\Delta G_{i}=\Delta E_{i}+P\Delta V_{i}-T\Delta S_{i}, \tag{6}\] \[\Delta S_{i}=k_{B}\ln{(g_{i})}-k_{B}\ln{(g_{0})}\,, \tag{7}\] where \(\Delta G_{i},\Delta E_{i},\Delta V_{i}\), and \(\Delta S_{i}\) are the change in free energy, substitution energy, unit cell volume, and entropy of the configuration \(i\) relative to the ground state configuration. \(P\), \(k_{\mathrm{B}}\), and \(g_{i}\) are the pressure, Boltzmann constant, and multiplicity of configuration i. \(g_{0}\) is the multiplicity of the ground state configuration. we considered \(\Delta S_{i}\) to be the same for all configurations based on prior literature[2]. Eq.(7) enhances the model through the explicit computation of the entropy change concerning the most stable configuration[30; 31]. Hence, when the probability of higher energy configurations becomes significant at the annealing temperature, it can be inferred that in a substituted SFO sample, multiple configurations exist rather than a single one. Consequently, any physical quantity of the SFO sample will be a weighted average of the corresponding properties in these different configurations. \[\langle Q\rangle=\sum_{i}P_{1000\text{ K}}(i)\cdot Q_{i} \tag{8}\] where \(P_{1000\text{ K}}(i)\) and \(Q_{i}\) are the formation probability at 1000 K and a physical quantity \(Q\) of the configuration \(i\). 
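As a concrete illustration of Eqs. (5)-(8), the minimal Python sketch below evaluates Boltzmann formation probabilities and a weighted-average property from a list of substitution energies. The substitution energies and moments in the example are taken from Table 2, but the multiplicities \(g_i\) are illustrative placeholders (the actual multiplicities are not listed here), so the printed numbers are not intended to reproduce the published probabilities.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K


def formation_probabilities(e_sub, g, temperature):
    """Eqs. (5)-(6): Boltzmann formation probabilities.

    Following the text, the P*dV term is neglected and Delta-S is taken to be
    the same for all configurations, so it cancels between numerator and
    denominator; only the multiplicity prefactor g_i and Delta-E_sub remain.
    """
    e_sub = np.asarray(e_sub, dtype=float)
    g = np.asarray(g, dtype=float)
    delta_e = e_sub - e_sub.min()            # energy relative to the ground-state configuration
    weights = g * np.exp(-delta_e / (K_B * temperature))
    return weights / weights.sum()


def weighted_average(probabilities, quantity):
    """Eq. (8): <Q> = sum_i P_i * Q_i."""
    return float(np.dot(probabilities, quantity))


# E_sub and m_tot for three low-energy x = 1.0 configurations (from Table 2);
# the multiplicities g are placeholders chosen only for illustration.
e_sub = [-6.84, -6.75, -6.69]   # eV: [2a,2a], [12k,2a], [12k,12k]
g     = [1, 12, 18]             # placeholder multiplicities
m_tot = [36.0, 30.0, 36.0]      # mu_B (from Table 2)

p_1000 = formation_probabilities(e_sub, g, temperature=1000.0)
print("P(1000 K) =", np.round(p_1000, 3))
print("<m_tot>   =", round(weighted_average(p_1000, m_tot), 2), "mu_B")
```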
The weighted average calculated by Eq. (8) represents the material's low-temperature property even though 1000 K is used in the computation, because the crystalline configurations of CSFO become distributed according to these probabilities during the annealing process. ## III Results and Discussion To examine the effect of Cr\({}^{3+}\) doping on the structural and magnetic properties of SFO, we replaced Fe\({}^{3+}\) at various lattice sites and found that it significantly affects both. We fully relaxed the volume, ionic positions, and cell shape of SFO and Cr-doped SFO; the crystal structure remains hexagonal in all cases. The optimized lattice parameters of pure SFO (a = 5.928 Å, c = 23.195 Å) agree with the experimental lattice constants (a = 5.890 Å, c = 23.182 Å) to within 1%. For x = 1.0, the calculated lattice parameters (a = 5.930 Å, c = 23.076 Å) are likewise very consistent with the experimental values (a = 5.902 Å, c = 23.024 Å) [32; 33]. Fig. 2 shows the variation of the lattice parameters from theory and experiment explicitly. Since no experimental lattice parameter is available at x = 0.5, we compare the experimental values at x = 0.6 with our calculated values at x = 0.5 in Fig. 2. The substitution of Cr in SFO has little effect on the lattice parameters or unit cell volume, which is expected because the radius of Cr\({}^{3+}\) (0.630 Å) is similar to that of Fe\({}^{3+}\) (0.645 Å). In this paper, we evaluated the various physical quantities as a function of the Cr concentration in the SFO unit cell. For x = 0.5, one Cr atom was substituted at one of the 24 Fe sites of the unit cell. Many of these Fe sites are equivalent under the crystallographic symmetry operations, leaving only five inequivalent structures. We label these inequivalent configurations [2a], [2b], [4\(f_{1}\)], [4\(f_{2}\)], and [12k] using the crystallographic name of the Fe site. These structures were obtained by fully optimizing the unit cell shape, volume, and ionic positions. To understand the site preference of the substituted Cr atom, we estimated the substitution energy \(E_{sub}\). Table 1 lists the results for each of the five inequivalent configurations in ascending order of substitution energy (\(E_{sub}\)). The configuration [2a] has the lowest \(E_{sub}\), followed by [12k] and [4\(f_{2}\)], which is consistent with the experimental outcomes [32; 34]. We therefore conclude that the [2a] site is the most preferred site for the Cr atom at 0 K. We used Eq. (5) to calculate the probability of forming each configuration as a function of temperature. Because the change in volume between different configurations is very small (less than 0.3 Å\({}^{3}\)), the \(P\Delta V\) term (of the order of \(10^{-7}\) eV at a standard pressure of 1 atm) is negligible compared with the \(\Delta E_{\rm sub}(i)\) term in Eq. (6). The entropy change \(\Delta S\) has two components: configurational, \(\Delta S_{c}\), and vibrational, \(\Delta S_{\rm vib}\) [31]. \(\Delta S_{\rm vib}\) is around 0.1-0.2 \(k_{\rm B}\)/atom for binary substitutional alloys such as the present system, and \(\Delta S_{c}\) is 0.1732 \(k_{\rm B}\)/atom. As a result, we assign \(\Delta S=0.3732\,k_{\rm B}\)/atom.
Fig. 3 shows the formation probability of different configurations of SrFe\({}_{12-x}\)Cr\({}_{x}\)O\({}_{19}\) with \(x=0.5\) at different temperatures. The foreign Cr\({}^{3+}\) ions prefer to replace host Fe\({}^{3+}\) ions at the \(2a\) and \(12k\) sites. The formation probabilities of \([2b]\), \([4f_{1}]\), and \([4f_{2}]\) are trivial and are not displayed in Fig. 3. A Cr\({}^{3+}\) ion has a 100% probability of occupying the \([2a]\) site at 0 K, and this value declines sharply as the temperature rises, while the probability of Cr\({}^{3+}\) occupying the \([12k]\) site is the largest (88%) at 1000 K. At a typical annealing temperature of 1000 K for CSFO, the site occupation probabilities of the \([2a]\) and \([12k]\) sites are 7% and 88.4%, respectively. In CSFO, the doped Cr\({}^{3+}\) ions are thus more likely to replace Fe\({}^{3+}\) ions at the \([12k]\) site than at the \([2a]\) site because of the higher multiplicity of the \([12k]\) site. For x = 1.0, two Cr atoms were substituted at two of the 24 Fe sites of the unit cell. Many of these Fe sites are equivalent under the crystallographic symmetry operations, leaving only 15 inequivalent structures. These structures were found by fully optimizing the unit cell shape, volume, and ionic positions. To understand the site preference of the substituted Cr atoms, we estimated the substitution energy \(E_{sub}\). Table 2 lists the results for each of the fifteen inequivalent configurations in ascending order of substitution energy (\(E_{sub}\)). The configuration [2a, 2a] has the lowest \(E_{sub}\), followed by [12k, 2a] and [12k, 12k]. We again used Eq. (5) to calculate the probability of forming each configuration as a function of temperature. Because the change in volume between different configurations is very small (less than 0.7 Å\({}^{3}\)), the \(P\Delta V\) term (of the order of \(10^{-7}\) eV at a standard pressure of 1 atm) is negligible compared with the \(\Delta E_{\rm sub}(i)\) term in Eq. (6). Fig. 4 shows the formation probability of different configurations of SrFe\({}_{12-x}\)Cr\({}_{x}\)O\({}_{19}\) with \(x=1.0\) at different temperatures. The foreign Cr\({}^{3+}\) ions prefer to replace host Fe\({}^{3+}\) ions in the \([2a,2a]\), \([12k,2a]\), and \([12k,12k]\) configurations. The formation probabilities of the other configurations are trivial and are not displayed in Fig. 4. The \([2a,2a]\) configuration has a 100% probability at 0 K, and this value declines sharply as the temperature rises, while the probability of the \([12k,12k]\) configuration is the largest (66.4%) at 1000 K. At a typical annealing temperature of 1000 K for CSFO, the occupation probability of the \([12k,2a]\) configuration is 17.8%. In CSFO, the doped Cr\({}^{3+}\) ions are therefore more likely to replace Fe\({}^{3+}\) ions in the \([12k,12k]\) configuration than in the \([2a,2a]\) configuration because of its higher multiplicity. We exclusively use the formation probabilities at the elevated annealing temperature for computing weighted averages, because the CSFO configurations become distributed according to these values during the annealing process. Table 3 displays the weighted averages of the corresponding quantities as the Cr\({}^{3+}\) concentration increases. The volume of CSFO decreases as the Cr\({}^{3+}\) concentration increases because of the smaller atomic radius of the Cr\({}^{3+}\) ion. Figure 3: Temperature dependence of the formation probability of different configurations of \(SrFe_{12-x}\)Cr\({}_{x}\)O\({}_{19}\) with \(x=0.5\). Configurations with trivial probabilities are not displayed.
Figure 2: Comparison of the predicted and experimental lattice constants of the unit cell as a function of the fraction of Cr (x). The magnetic moment of CSFO also shows a decreasing trend as the amount of doped Cr\({}^{3+}\) increases, owing to its smaller magnetic moment (\(3\mu_{B}\)) compared with Fe\({}^{3+}\) (\(5\mu_{B}\)). Similarly, the saturation magnetization decreases monotonically with increasing Cr\({}^{3+}\) concentration, which is consistent with K. Praveena et al. [35]. Although the magnetocrystalline anisotropy (\(K_{1}\)) increases slightly, the reduction in the saturation magnetization (\(M_{s}\)) is much more significant, so their combined effect causes the anisotropy field (\(H_{a}\)) to increase as the fraction of Cr is raised. In Table 4, we provide the atomic contribution from each sublattice to the overall magnetic moment of CSFO. The total magnetic moment of the unit cell is slightly different from the sum of the local magnetic moments; this disparity arises from the contribution of the interstitial region to the overall magnetic moment. ## IV Conclusions First-principles total-energy calculations based on density functional theory were used to study Cr-substituted SFO (SrFe\({}_{12-x}\)Cr\({}_{x}\)O\({}_{19}\)) with \(x=0.0,0.5,1.0\). The results showed that increasing the fraction of Cr atoms reduced the total magnetic moment of the SFO unit cell. \begin{table} \begin{tabular}{c c c c c c c c c c} x & config. & \(E_{sub}(eV)\) & \(m_{tot}(\mu_{B})\) & Volume (Å\({}^{3}\)) & \(M_{s}(emu/g)\) & \(E_{a}\)(meV) & \(K_{1}(KJ/m^{3})\) & \(H_{a}\)(kOe) & \(P_{1000K}\) \\ \hline 1.0 & \([2a,2a]\) & -6.84 & 36.00 & 703.73 & 95.00 & 1.50 & 341.39 & 14.36 & 0.001 \\ & \([12k,2a]\) & -6.75 & 30.00 & 703.03 & 79.20 & 1.10 & 250.34 & 12.64 & 0.178 \\ & \([12k,12k]\) & -6.69 & 36.00 & 703.24 & 95.00 & 0.80 & 182.01 & 7.66 & 0.664 \\ & \([2a,4f_{2}]\) & -6.68 & 40.00 & 701.63 & 106.00 & 1.30 & 296.45 & 11.20 & 0.009 \\ & \([12k,4f_{2}]\) & -6.62 & 34.00 & 701.89 & 89.70 & 0.90 & 205.16 & 9.13 & 0.145 \\ & \([4f_{2},4f_{2}]\) & -6.56 & 44.00 & 700.33 & 116.00 & 1.00 & 228.46 & 7.84 & 0.001 \\ & \([12k,2b]\) & -6.30 & 30.00 & 701.93 & 79.20 & 0.50 & 113.97 & 5.75 & 0.001 \\ & \([2b,4f_{2}]\) & -6.15 & 34.00 & 698.58 & 89.70 & 0.70 & 160.33 & 7.10 & 0.000 \\ & \([2a,2b]\) & -6.09 & 30.00 & 701.90 & 79.20 & 0.40 & 91.18 & 4.60 & 0.000 \\ & \([2b,2b]\) & -5.99 & 36.00 & 699.37 & 95.00 & 2.00 & 457.55 & 19.15 & 0.000 \\ & \([12k,4f_{1}]\) & -5.66 & 40.00 & 704.40 & 106.00 & 0.20 & 45.43 & 1.72 & 0.000 \\ & \([4f_{1},4f_{2}]\) & -5.61 & 44.00 & 703.98 & 116.00 & 0.20 & 45.46 & 1.57 & 0.000 \\ & \([2a,4f_{1}]\) & -5.47 & 40.00 & 704.37 & 106.00 & 0.90 & 204.44 & 7.76 & 0.000 \\ & \([2b,4f_{1}]\) & -5.15 & 38.91 & 702.14 & 103.00 & 0.10 & 22.79 & 0.88 & 0.000 \\ & \([4f_{1},4f_{1}]\) & -4.67 & 44.00 & 705.71 & 116.00 & 0.60 & 136.03 & 4.70 & 0.000 \\ \end{tabular} \end{table} Table 2: Physical properties of inequivalent configurations of \(SrFe_{12-x}\)(Cr)\({}_{x}\)O\({}_{19}\) with \(x=1.0\): doped amount (x), substitution energy (\(E_{sub}\)), total magnetic moment (\(m_{tot}\)), volume of the unit cell (\(V\)), saturation magnetization (\(M_{s}\)), magnetocrystalline anisotropy energy (\(E_{a}\)), uniaxial magnetic anisotropy constant (\(K_{1}\)), anisotropy field (\(H_{a}\)), and the formation probability at 1000 K (\(P_{1000K}\)). All values are for a double formula unit cell containing 64 atoms.
Table 3: Weighted averages of physical properties of pure and Cr-doped strontium hexaferrite: volume of the unit cell (\(V\)), total magnetic moment (\(m_{tot}\)), saturation magnetization (\(M_{s}\)), and magnetocrystalline anisotropy energy. This reduction in magnetization is obtained because low-moment Cr atoms replace Fe\({}^{3+}\) ions at two of the majority-spin sites, 2a and 12k, resulting in a negative contribution to the magnetization. Our substitution energy and formation probability analysis predicts that Cr atoms preferentially occupy the 2a, 12k, and 4f\({}_{2}\) sites, consistent with experimental observations. Increasing the fraction of Cr in SFO leads to a rise in the magnetic anisotropy field (\(H_{a}\)), driven by the decrease in magnetization together with a slight increase in the magnetocrystalline anisotropy (\(K_{1}\)). This increase in the anisotropy field (\(H_{a}\)) is supported by the formation probability study, which shows that at higher temperatures (>700\({}^{\circ}\)C) Cr\({}^{3+}\) ions are more likely to occupy the 12k site rather than the 2a site because of its higher multiplicity. ###### Acknowledgements. This work was supported by the Center for Computational Science (CCS) at Mississippi State University. Computer time allocation has been provided by the High-Performance Computing Collaboratory (\(HPC^{2}\)) at Mississippi State University.
2302.12774
Automated Lesion Segmentation in Whole-Body FDG-PET/CT with Multi-modality Deep Neural Networks
Recent progress in automated PET/CT lesion segmentation using deep learning methods has demonstrated the feasibility of this task. However, tumor lesion detection and segmentation in whole-body PET/CT is still a challenging task. To promote research on machine learning-based automated tumor lesion segmentation on whole-body FDG-PET/CT data, the Automated Lesion Segmentation in Whole-Body FDG-PET/CT (autoPET) challenge is held, and a large, publicly available training dataset is provided. In this report, we present our solution to the autoPET challenge. We employ a multi-modal residual U-Net with deep supervision. The experimental results for five preliminary test cases show that the Dice score is 0.79 +/- 0.21.
Satoshi Kondo, Satoshi Kasai
2023-02-16T01:05:54Z
http://arxiv.org/abs/2302.12774v1
# Automated Lesion Segmentation in Whole-Body FDG-PET/CT with Multi-modality Deep Neural Networks ###### Abstract Recent progress in automated PET/CT lesion segmentation using deep learning methods has demonstrated the feasibility of this task. However, tumor lesion detection and segmentation in whole-body PET/CT is still a challenging task. To promote research on machine learning-based automated tumor lesion segmentation on whole-body FDG-PET/CT data, the Automated Lesion Segmentation in Whole-Body FDG-PET/CT (autoPET) challenge is held, and a large, publicly available training dataset is provided. In this report, we present our solution to the autoPET challenge. We employ a multi-modal residual U-Net with deep supervision. The experimental results for five preliminary test cases show that the Dice score is 0.79 \(\pm\) 0.21. Keywords:FDG-PET/CT, Lesion segmentation, Multi-modality ## 1 Introduction Recent progress in automated PET/CT lesion segmentation using deep learning methods has demonstrated the feasibility of this task. However, tumor lesion detection and segmentation in whole-body PET/CT is still a challenging task. One bottleneck for progress in automated PET lesion segmentation is the limited availability of training data. To promote research on machine learning-based automated tumor lesion segmentation on whole-body FDG-PET/CT data, the Automated Lesion Segmentation in Whole-Body FDG-PET/CT (autoPET) challenge is held, and a large, publicly available training dataset is provided. In this report, we present our solution to the autoPET challenge. We employ a residual U-Net with deep supervision in a multi-modal fashion. ## 2 Proposed Method The input data for lesion segmentation are whole-body PET/CT volumes; two volumes, i.e., CT and SUV (Standardized Uptake Value, obtained from PET), are provided for each case. CT and PET volumes are acquired simultaneously on a single PET/CT scanner in one session, so the CT and PET (SUV) volumes are anatomically aligned up to minor shifts due to physiological motion. We use 3D encoder-decoder networks for the segmentation task. Our base model is a residual U-Net with deep supervision [2]. The two input volumes are first resampled to [2 mm, 2 mm, 3 mm] spacing in the x, y, and z directions, respectively. The CT and SUV volumes are then normalized: the minimum and maximum values are -100 and 250, respectively, for CT volumes, and 0 and 15, respectively, for SUV volumes. In the training phase, we randomly sample 3D patches of 48 x 48 x 32 voxels from the input volumes, 12 patches per volume. When a volume includes lesions, the ratio of positive to negative patches sampled from that volume is 3:1. We do not apply any augmentation. The CT and SUV patches are concatenated into two-channel patches, which are then fed into the segmentation network. The loss function is a weighted sum of the Dice loss and the cross-entropy loss, with weights of 1 and 0.5, respectively. We also employ deep supervision for the loss calculation: intermediate outputs from several layers in the decoder are up-sampled, a loss value is calculated for each up-sampled output, and the loss values are aggregated. The number of layers used for deep supervision is two. We train multiple models.
Each model is trained independently using a different combination of training and validation datasets, and the inference results are obtained by ensembling the outputs of the models: the final likelihood score is the average of the likelihood scores from the individual models. We use five models in our experiments. ## 3 Experiments The training dataset consists of 1014 studies of 900 patients acquired at a single site. The dataset for preliminary evaluation consists of 5 studies. Our method is implemented mainly using the PyTorch [3], PyTorch Lightning, and MONAI libraries. We use three Nvidia RTX3090 GPUs for training. For training the segmentation model, the optimizer is Adam [4] and the learning rate follows cosine annealing with an initial value of 0.001. The number of epochs is 300. The model with the lowest loss value on the validation dataset is selected as the final model. We evaluated our method with the evaluation system provided by the organizers of the autoPET challenge, which uses three evaluation metrics. The first is the foreground Dice score of the segmented lesions. The second is the volume of false positive connected components that do not overlap with positives, i.e., the false positive volume. The third is the volume of positive connected components in the ground truth that do not overlap with the estimated segmentation mask, i.e., the false negative volume. For our submission, the Dice score is 0.79 \(\pm\) 0.21, the false positive volume is 0.29 \(\pm\) 0.66, and the false negative volume is 14.27 \(\pm\) 17.31. ## 4 Conclusions In this report, we presented our solution for the autoPET challenge. We employ a multi-modal residual U-Net with deep supervision. The experimental results for five preliminary test cases show that the Dice score is 0.79 \(\pm\) 0.21.
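As a rough illustration of the preprocessing and loss described in Section 2, here is a minimal PyTorch sketch (not the authors' implementation). The clipping ranges, the Dice/cross-entropy weights of 1 and 0.5, and the patch size come from the text; the rescaling to [0, 1], the averaging over deep-supervision outputs, and the tensor shapes are assumptions made only for illustration.

```python
import torch
import torch.nn.functional as F


def normalize_ct_suv(ct, suv):
    """Clip CT to [-100, 250] and SUV to [0, 15] as in Sec. 2; the subsequent
    rescaling to [0, 1] and the channel stacking are assumptions."""
    ct = (ct.clamp(-100.0, 250.0) + 100.0) / 350.0
    suv = suv.clamp(0.0, 15.0) / 15.0
    return torch.stack([ct, suv], dim=1)   # (B, 2, D, H, W)


def dice_ce_loss(logits, target, w_dice=1.0, w_ce=0.5, eps=1e-5):
    """Weighted sum of soft Dice loss and binary cross-entropy (weights 1 and 0.5)."""
    prob = torch.sigmoid(logits)
    dims = tuple(range(1, prob.dim()))
    intersection = (prob * target).sum(dims)
    dice = (2.0 * intersection + eps) / (prob.sum(dims) + target.sum(dims) + eps)
    return w_dice * (1.0 - dice.mean()) + w_ce * F.binary_cross_entropy_with_logits(logits, target)


def deep_supervision_loss(outputs, target):
    """Aggregate the loss over the main output and up-sampled intermediate outputs
    (here aggregation is a simple mean, which is an assumption)."""
    total = 0.0
    for out in outputs:
        if out.shape[2:] != target.shape[2:]:
            out = F.interpolate(out, size=target.shape[2:], mode="trilinear", align_corners=False)
        total = total + dice_ce_loss(out, target)
    return total / len(outputs)


# Toy usage with random tensors shaped like the 48 x 48 x 32 training patches.
ct = torch.randn(2, 32, 48, 48) * 200.0
suv = torch.rand(2, 32, 48, 48) * 20.0
x = normalize_ct_suv(ct, suv)                       # (2, 2, 32, 48, 48)
target = (torch.rand(2, 1, 32, 48, 48) > 0.95).float()
logits = torch.randn(2, 1, 32, 48, 48, requires_grad=True)
print(deep_supervision_loss([logits], target))
```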
2308.07444
The Performance of Transferability Metrics does not Translate to Medical Tasks
Transfer learning boosts the performance of medical image analysis by enabling deep learning (DL) on small datasets through the knowledge acquired from large ones. As the number of DL architectures explodes, exhaustively attempting all candidates becomes unfeasible, motivating cheaper alternatives for choosing them. Transferability scoring methods emerge as an enticing solution, allowing to efficiently calculate a score that correlates with the architecture accuracy on any target dataset. However, since transferability scores have not been evaluated on medical datasets, their use in this context remains uncertain, preventing them from benefiting practitioners. We fill that gap in this work, thoroughly evaluating seven transferability scores in three medical applications, including out-of-distribution scenarios. Despite promising results in general-purpose datasets, our results show that no transferability score can reliably and consistently estimate target performance in medical contexts, inviting further work in that direction.
Levy Chaves, Alceu Bissoto, Eduardo Valle, Sandra Avila
2023-08-14T20:34:52Z
http://arxiv.org/abs/2308.07444v1
# The Performance of Transferability Metrics ###### Abstract Transfer learning boosts the performance of medical image analysis by enabling deep learning (DL) on small datasets through the knowledge acquired from large ones. As the number of DL architectures explodes, exhaustively attempting all candidates becomes unfeasible, motivating cheaper alternatives for choosing them. Transferability scoring methods emerge as an enticing solution, allowing to efficiently calculate a score that correlates with the architecture accuracy on any target dataset. However, since transferability scores have not been evaluated on medical datasets, their use in this context remains uncertain, preventing them from benefiting practitioners. We fill that gap in this work, thoroughly evaluating seven transferability scores in three medical applications, including out-of-distribution scenarios. Despite promising results in general-purpose datasets, our results show that no transferability score can reliably and consistently estimate target performance in medical contexts, inviting further work in that direction. Keywords:Transferability Estimation Transferability Metrics Image Classification Medical Applications Transfer Learning Deep Learning ## 1 Introduction Transfer learning allows, in data-limited scenarios, to leverage knowledge obtained from larger datasets. Due to its effectiveness, it is the preferred training method in medical image analysis [14]. Practitioners typically fine-tune a pre-trained model for the target task. However, selecting the most appropriate pre-trained model can significantly impact the final performance. The growing number of architectures and datasets has led to increasingly difficult decisions. While, with unlimited resources, it would be theoretically possible to compare all options empirically, that approach is too inefficient in practice. Sound empirical evaluation must often be tempered with the designer's experience and, often, not-so-sound intuition, prejudices, and hearsay. Transferability estimation promises to ease this burden, as shown in Fig. 1. Traditional empirical selection of architectures requires optimizing the hyper-parameters of each candidate to allow a fair comparison [7]. Transferability scoring methods, in contrast, allow efficiently selecting the most promising model for a given target dataset without fine-tuning each candidate. When the transferability score accurately measures the ability to transfer knowledge between arbitrary tasks, empirical comparison of models may be limited to a small subset of candidates. Transferability scoring methods have shown promising results, performing well when source and target datasets share strong similarities in classes and image characteristics [1, 11]. However, as we will see, their behavior is much different for target medical datasets, a situation in which the target dataset deviates much more intensely from the source dataset as depicted in Fig. 2. This work evaluates several transferability scoring methods in the medical domain, including skin lesions, brain tumors, and breast cancer. We define a comprehensive hyperparameter optimization to ensure that the fine-tuned models are evaluated on their best capabilities. Additionally, we extend the evaluation to investigate how transferability scores correlate with out-of-distribution performance. We include at least one dataset considered out-of-distribution from a source one for each medical application. 
In summary, the contributions of our paper are twofold: * We extensively evaluate seven transferability scoring methods for three distinct medical classification tasks, covering common imagery types in medical tasks; * We design a new methodology for the medical context to account for out-of-distribution evaluation of transferability scoring methods. Figure 1: Transferability estimation vs. traditional empirical search. The latter selects the best candidate model through empirical evaluation of the target metric, thus needing a costly hyperparameter search for each candidate model. Transferability computes instead a proxy score that correlates with the best expected fine-tuned performance. Only the selected model (highest score) will need hyperparameter tuning to obtain the optimized model. ## 2 Transferability Scores & Related Work An effective transferability scoring method exhibits computational efficiency while strongly correlating with the final performance metric of a fine-tuned model on the target dataset. Generally, the estimation of transferability involves extracting the embeddings or predictions from the target dataset. That extracted information is integrated with the target dataset's ground-truth labels to quantify the model's transferability. Transferability scoring methods can be categorized into feature-based (fb) and source-label-based (sb). Source-label-based scores assume access to the source classification head for calculating probability distribution or label predictions, whereas feature-based scores only require source models for feature extraction. Both methods require the true labels of the target dataset for computing the transferability score. We summarize the transferability scoring methods, sorted by publication date in Table 1. Ibrahim et al. [11] and Agostinelli et al. [1] evaluated transferability scores on general-purpose datasets for classification and segmentation tasks. Their findings suggest that these scores may be unstable, and minor variations in the experimental protocol could lead to different conclusions. N-LEEP and LogME deliver the best transferability estimation results depending on the experimental design of classification tasks. Our work focuses on classification tasks in scenarios where the dataset shift is significant. The experimental design of previous works assumes a lower dataset shift compared to what we consider in our paper. For Figure 2: Both our best transferability score (N-LEEP) and the ImageNet ACC@1 metric (baseline) on the source model suffice to predict performance on target general-purpose tasks (two left columns). For target medical tasks, the scenario is much different, with neither scores nor raw performances being strong predictors of transferability (two right columns). A good transferability score should capture how well a transferability score (x-axis) relates to a test performance metric (y-axis), i.e, higher values of transferability scores predict higher values of true performance. The red line showcases any linear trending between the scores and the accuracy on the task for a given source model. instance, transferring from ImageNet to CIFAR is expected to be easier than any medical dataset due to some overlap between target-source classes and features. Additionally, we perform thorough hyperparameter tuning, which is essential in these works. ## 3 Materials and Methods ### Datasets We assess three medical classification problems. We use ISIC2019 [5] for melanoma vs. 
benign classification task and PAD-UFES-20 [16] for out-of-distribution (OOD) evaluation. BreakHis [20] is used for histopathology breast cancer malign vs. benign sample classification and ICIAR2018 [18] for out-of-distribution assessment. For brain tumor classification, we use BrainTumor-Cheng [4], a four-class dataset of MRI images. We adopt the NINS [3] as the out-of-distribution test dataset. ### Methodology We aim to provide a fair and concise evaluation of each transferability scoring method described in Section 2. We restrict our analysis to pre-trained models on the ImageNet dataset. We focus exclusively on the _source model selection_ scenario, which aims to identify the most suitable pre-trained model for a given target dataset. Our methodology involves the following seven steps: 1. Choosing a target medical task \(T\). \begin{table} \begin{tabular}{l l l l} \hline Tr. scorer & Cat. & Scorer input & Details \\ \hline H-Score [2] & fb & source feature & transferability correlates to inter-class \\ & & extractor \& labels & variance and feature redundancy \\ NCE [22] & lb & source classification & negative conditional entropy between \\ & & head \& labels & source and target labels \\ LEEP [15] & lb & source classification & log-likelihood between target labels and \\ & & head \& labels & source model predictions \\ N-LEEP [13] & fb & source feature & log-likelihood between target labels and \\ & & extractor \& labels & Gaussian mixture model fit to target \\ & & & extracted features \\ LogME [24] & fb & source feature & probability of target labels conditioned on \\ & & extractor \& labels & target image embeddings \\ Regularized & fb & source feature & shrinkage estimators for stable covariance \\ H-Score [11] & & extractor \& labels & \\ GBC [17] & fb & source feature & Bhattacharyya coeff. between multivariate \\ & & extractor \& labels & Gaussians fit to each class’ feature \\ & & & estimating overlap with target task classes \\ \hline \end{tabular} \end{table} Table 1: Summary of transferability scoring methods (Tr. scorer), sorted by publication date. Cat.: category; fb: feature-based; lb: label-based. 2. Selecting a pre-trained model architecture \(A\). 3. Computing the in-distribution transferability score \(S_{\mathrm{id}}(M,T,A)\) for all transferability scoring methods \(M\), pre-trained model \(A\), and the training dataset of task \(T\). 4. Performing a traditional evaluation of architecture \(A\) for the target task \(T\), by first optimizing the model hyperparameters on \(T\)'s validation dataset using the target metric to obtain the best model \(A_{\mathrm{opt}}(T)\), and then evaluating that metric \(P_{\mathrm{id}}(T,A)\) on \(T\)'s in-distribution test dataset. 5. Computing the out-of-distribution transferability score \(S_{\mathrm{ood}}(M,T,A)\) for all transferability scoring methods \(M\), the fine-tuned model \(A_{\mathrm{opt}}(T)\) obtained in the previous step, and target task \(T\)'s out-of-distribution test dataset (as explained in the previous subsection). 6. Evaluating the target metric \(P_{\mathrm{ood}}(T,A)\) of \(A_{\mathrm{opt}}(T)\) on \(T\)'s out-of-distribution test dataset. 7. For a given dataset \(T\) and scoring method \(M\), once steps 1-6 have been performed for all architectures, we may compute the correlation index between the transferability scores \(S_{*}(M,T,A)\) and the traditional empirical performance metrics \(P_{*}(T,A)\) across all architectures \(A\). 
We showcase each one of those correlation analyses on a separate subplot of our results. In our experiments, the target metric is always the balanced accuracy, and the correlation index is always the Kendall's tau, which ranges between \(-1\) and \(1\), with positive correlations indicating higher-quality scoring methods. Zero correlations indicate that the scoring method has no ability to predict transferability. Negative correlations are harder to interpret: although they suggest predictive ability, they show the scoring method is working _against_ its expected design. We analyze separately the in-distribution and the out-of-distribution analyses. As far as we know, we are the first to attempt OOD analyses on transferability metrics. ## 4 Results _Models architectures & hyperparameter tuning_. We use 10 ImageNet pre-trained models: ResNet18 [9], ResNet34 [9], ResNet50 [9], MobileNetV2-0.5 [19], MobileNetV2-1.0 [19], DenseNet121 [10], DenseNet161 [10], DenseNet169 [10], EfficientNet-B0 [21], ViT-Small [6]. For hyperparameter tuning, we followed the Tuning Playbook [7] guidelines, using Halton sequences [8] to sample candidates for the hyperparameters of interest. In our search, we keep fixed the use of SGD as the optimizer, cosine scheduler, 100 epochs, and batch size of 128. We search over 75 quasi-random combinations of learning rate in range \([10^{-4},10^{-1}]\) and weight decay in range \([10^{-6},10^{-4}]\) for each model architecture, as those are the two most critical optimization hyperparameters [12]. We run the experiments on NVIDIA RTX 5000, RTX 8000. We select the best-performing model in the validation set for each architecture for test evaluation. In total, we trained 2250 models. The source code to reproduce our experiments is available at [https://github.com/VirtualSpaceman/transfer-estimation-medical](https://github.com/VirtualSpaceman/transfer-estimation-medical). _In-distribution._ Fig. 3 shows the results for each transferability scoring method and each model's architecture for all medical tasks. The red line indicates a regression line to show any tendency in the results. Table 2 shows all investigated transferability scores for brain tumor, histopathologic, and skin lesion classification tasks, respectively. Each row depicts one correlation index value for that transferability scoring methods (columns). We calculate each correlation index considering the test performance of the best-fine-tuned model and the estimated transferability score for each architecture. Methods such as LogME, N-LEEP, and NCE demonstrate varying degrees of correlation, indicating their potential as indicators of transferability within the same distribution. All transferability scoring methods exhibited an unstable behavior, as the correlation index consistently shifted across datasets. While LogME was one of the best methods for the BrainTumor-Cheng and BreakHis datasets, it exhibited negative correlations for ISIC2019. To our knowledge, such instability has not been observed in general-purpose computer vision datasets. Our main hypothesis for this phenomenon relates to the dissimilarity between source and target domains. Unlike general-purpose computer vision datasets, which often overlap in target and label sets or share similar features, medical transfer learning involves substantial domain differences. 
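As a minimal sketch of the correlation computation in step 7, assume the transferability scores and the hyperparameter-tuned balanced accuracies for the ten candidate architectures are already collected; SciPy then provides the Kendall's tau (and its weighted variant) reported in the tables. All numbers below are placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import kendalltau, weightedtau

# One entry per candidate architecture (10 in the paper); placeholder values only.
# transfer_scores would come from one scoring method (e.g. LogME or N-LEEP),
# balanced_acc from the fine-tuning of step 4 on the target test set.
transfer_scores = np.array([0.12, 0.35, 0.28, 0.41, 0.22, 0.30, 0.18, 0.44, 0.25, 0.38])
balanced_acc    = np.array([0.71, 0.78, 0.74, 0.80, 0.72, 0.77, 0.70, 0.79, 0.73, 0.76])

tau, p_value = kendalltau(transfer_scores, balanced_acc)
tau_w, _ = weightedtau(transfer_scores, balanced_acc)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f}), weighted tau = {tau_w:.3f}")
```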
_Out-of-distribution._ It is easy to come across out-of-distribution sets in the real world, as deep learning datasets often exhibit diversity and correlation Figure 3: Evaluation of all scores (columns) and medical datasets (rows), showcasing the correlation between transferability scores (x-axis) and best accuracy on test (y-axis). The linear regression lines (in red) are for illustration only, as the correlation index employed is the non necessarily linear Kendall’s tau, shown inside of each plot, on the top-left corner. shifts [23]. We conducted additional experiments to evaluate transferability scores' ability to predict out-of-distribution performance. Table 2 shows the transferability scores and correlation indexes, with interestingly high predictive capabilities observed for label-based transferability scoring methods. NCE and LEEP exhibited outstanding results for both ICIAR2018 and PAD-UFES-20 datasets across all correlations, with NCE being favored over other methods. We hypothesize that label-based methods are prone to provide better results than feature-based for binary tasks in out-of-distribution scenarios. As the number of classes of source dataset matches the target one, the probabilities distributions tend to concentrate on a single class, inflating the transferability score for binary cases. _Hypotheses why metrics failed_. Up to this point, our experiments revealed that all transferability scoring methods present unstable quality. For example, both NCE and LEEP excel at out-of-distribution but report poor results in in-distribution scenarios. We hypothesize two factors that may contribute to the failure of the methods followed by some preliminary experiments: 1) domain shift: the domain difference between source and target datasets might cause the failure. We fine-tuned each model on each medical dataset and evaluated their transferability score to the validation set. Our experiment indicates that only label-based methods excel in this scenario. So, domain shift helps to degrade the efficiency of such scores, but it is not the main reason. 2) number of classes: to measure the impact of the number of classes in the transferability scores, we take the OxfordPets dataset and map the original 37 classes (dogs and cats breeds) into a binary problem (cats vs. dogs). Our preliminary results suggest that all correlation indexes decrease, but all metrics still present high transferability estimation capabilities. ## 5 Conclusion Our work is the first to investigate the quality of transferability scoring methods for medical applications. We evaluated 7 different transferability scoring methods in 3 medical classification datasets, considering 10 different architectures. Despite promising results in our out-of-distribution experiment, the instability presented by the scores across datasets in the in-distribution scenario lead us to recommend to practitioners not yet to rely on transferability scores for source model selection \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Task & Dataset & H-Score & NCE & LEEP & N-LEEP & LogME & Reg. 
H-Score & GBC \\ \hline \multirow{2}{*}{Brain Tumor} & BrainTumor-cheng & 0.270 & -0.180 & 0.090 & 0.494 & 0.584 & 0.405 & 0.135 \\ & NINS & -0.333 & 0.156 & 0.200 & -0.333 & -0.289 & -0.422 & 0.200 \\ \hline \multirow{2}{*}{Histopathologic} & BreakHis & 0.600 & -0.156 & 0.200 & -0.244 & 0.378 & 0.200 & 0.022 \\ & ICIAR2018 & 0.333 & 0.778 & 0.778 & 0.289 & 0.289 & 0.378 & 0.156 \\ \hline \multirow{2}{*}{Skin Lesion} & ISIC2019 & -0.244 & 0.022 & 0.333 & -0.111 & -0.067 & -0.289 & 0.022 \\ & PAD-UFES-20 & -0.156 & 0.911 & 0.422 & -0.156 & -0.022 & -0.022 & 0.067 \\ \hline \end{tabular} \end{table} Table 2: Kendall’s tau (\(\tau_{w}\)) correlation index for each transferability score considering in and out-of-distribution scenarios for each medical task. in medical image analysis. Our work takes one step towards reducing the need for expensive training by selecting pre-trained models efficiently that empowers the final performance on the target task. Such efficiency positively diminishes the carbon footprint when performing a hyperparameter search using a subset of deep learning architectures instead of all available. Label-based methods shows superior results in out-of-distribution scenarios. Out-of-distribution scores might be inflated for binary tasks due to the distribution concentration on a single class, and the low number of classes benefits in favor of high transferability scores. Such an issue is absent in the available benchmarks because the general-purpose classification datasets present many classes and consider transferring from ImageNet as standard practice. For future work, the analysis can be expanded to other configurations, such as finding the most related target task for a given source model (target model's selection) or cross-dataset transfer evaluation. Finally, evaluating future transferability scorers should include contexts where the difference between source and target domains is high, such as medical. This brings opportunities to assess the robustness of transferability scores regarding a limited amount of samples, unbalanced labels, and low inter- and high intra-variation classes. Data Use.We use only publicity available medical datasets, including PAD-UFES-20 [16], ICIAR2018 [18], BreakHis [20], BrainTumor-Cheng [4], NINS [3], and ISIC2019 [5]. All of them are under CC BY 4.0 license, except ISIC2019 (CC BY-NC 4.0). The data collection process is described in the original papers. Acknowledgments.L. Chaves is funded by Becas Santander/Unicamp - HUB 2022, Google LARA 2021, in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001, and FAEPEX. A. Bissoto is funded by FAPESP (2019/19619-7, 2022/09606-8). S. Avila is funded by CNPq 315231/2020-3, FAPESP 2013/08293-7, 2020/09838-0, H.IAAC, Google LARA 2021 and Google AIR 2022.
2304.07949
Metrics for Bayesian Optimal Experiment Design under Model Misspecification
The conventional approach to Bayesian decision-theoretic experiment design involves searching over possible experiments to select a design that maximizes the expected value of a specified utility function. The expectation is over the joint distribution of all unknown variables implied by the statistical model that will be used to analyze the collected data. The utility function defines the objective of the experiment where a common utility function is the information gain. This article introduces an expanded framework for this process, where we go beyond the traditional Expected Information Gain criteria and introduce the Expected General Information Gain which measures robustness to the model discrepancy and Expected Discriminatory Information as a criterion to quantify how well an experiment can detect model discrepancy. The functionality of the framework is showcased through its application to a scenario involving a linearized spring mass damper system and an F-16 model where the model discrepancy is taken into account while doing Bayesian optimal experiment design.
Tommie A. Catanach, Niladri Das
2023-04-17T02:13:20Z
http://arxiv.org/abs/2304.07949v1
# Metrics for Bayesian Optimal Experiment Design under Model Misspecification ###### Abstract The conventional approach to Bayesian decision-theoretic experiment design involves searching over possible experiments to select a design that maximizes the expected value of a specified utility function. The expectation is over the joint distribution of all unknown variables implied by the statistical model that will be used to analyze the collected data. The utility function defines the objective of the experiment where a common utility function is the information gain. This article introduces an expanded framework for this process, where we go beyond the traditional Expected Information Gain criteria and introduce the Expected General Information Gain which measures robustness to the model discrepancy and Expected Discriminatory Information as a criterion to quantify how well an experiment can detect model discrepancy. The functionality of the framework is showcased through its application to a scenario involving a linearized spring mass damper system and an F-16 model where the model discrepancy is taken into account while doing Bayesian optimal experiment design. ## I Introduction For science and engineering systems there are often many choices of experiments to run or data to collect in order to infer information. Each of these choices has different costs in terms of time, money, or other constraints. One common solution to this problem stems from the field of Bayesian optimal experimental design (BOED). This approach uses the rigor of the Bayesian paradigm and information theory to formalize the design of experiments and treats it as an optimization problem. Concretely, the aim is to maximize a utility function that captures the worth of a particular experimental design. This utility function, typically the Expected Information Gain (EIG), depends on the posterior distribution sampled over many hypothetical realizations of plausible datasets from the experiment. However, for real applications, where there is the model discrepancy, EIG might not be the only relevant measure of information we should consider. In this work, we consider two additional criteria that measure notions of the robustness of the design. The first criterion, Expected Generalized Information Gain (EGIG), captures the expected information gained (or lost) when an experimenter uses a model with discrepancy. The second criterion, Expected Discriminatory Information (EDI) reflects whether the information gained from an experiment would be sufficient to discriminate between the model and an alternative. The EIGIG-based design seeks to mitigate discrepancy while the EDI-based seeks to only detect it. With these criteria, we aim to correct pathological issues in BOED and advance the BOED literature, which has a relatively few works concerning the robustness of BOED. In [2] a Bayesian linear regression example is shown where the system is analysed without considering model discrepancies. There not only is the parameter under-estimated but the posterior credible intervals are not even close to covering the true parameter value, which is alarming. In practice, despite the theoretical elegance and optimal performance for accurate models, BOED may encounter significant issues if our model is not properly specified. 
This means that there is no value of \(\boldsymbol{x}^{\star}\) for which \(p(\boldsymbol{y}|\boldsymbol{x}=\boldsymbol{x}^{\star},\boldsymbol{d})\) corresponds to the true distribution for \(p(\boldsymbol{y}|\boldsymbol{d})\), as noted in references [10][6]. Although model misspecification is a common problem in Bayesian settings, BOED methods are especially vulnerable because they not only use the model to fit data, but also to generate new data. The main issue is that Bayesian approaches only account for uncertainty in the model parameters, not in the model's correctness, which can lead to disastrous outcomes where BOED continuously queries similar designs and produces low-quality datasets. Eliminating misspecification entirely is unrealistic, particularly in a general BOED context. However, there is still a lot of work that can be done to improve our comprehension and management of it. Presently, there is only a limited amount of research that covers both the theoretical [5][7][9][12]and empirical implications of misspecification [14], and very little has been done to examine the specific mechanisms that can lead to failures. This is where our EIGIG and EDI metrics play an important role to evaluate the model robustness and identify modeling failures. Some Bayesian-adjacent approaches that call out the need for robustness and optimality in design are [11] and [13]. Most notably, [11] considers robust sensor placement for linear dynamical systems under asymptotic D-optimal design. Outline: Section II introduces the model and key concepts, Sections III presents the BOED criteria, Section IV studies EGIG and EDI for two examples systems, and Section VII provides discussions. ## II Modeling and Key Concepts ### _System Description_ Because of the difficulty of BOED, we will study this problem in the context of simplified models, specifically stationary discrete-time linear processes driven by Gaussian noise. We define the state vector as \(\mathbf{x}_{t}\in\mathbb{R}^{n}\), \[\mathbf{x}_{t}=\mathbf{A}\mathbf{x}_{t-1}+\mathbf{\eta}_{t},\qquad(t=1,2,...). \tag{1}\] \(\mathbf{A}\) is an \(n\times n\) transition matrix and \(\mathbf{\eta}_{t}\sim\mathcal{N}(\mathbf{0},\mathbf{Q})\) is the process noise where \(\mathbf{Q}\succeq 0\). We assume \(\mathbf{x}_{0}\sim\mathcal{N}(\mathbf{\mu}_{0},\mathbf{\Sigma}_{0})\). For simplicity, unless specified we will take \(\mathbf{\mu_{0}}=0\). The observation equation is \[\mathbf{y}_{t}=\mathbf{H}\mathbf{x}_{t}+\mathbf{v}_{t}, \tag{2}\] where the measurements are \(\mathbf{y}_{t}\in\mathbb{R}^{s}\), \(\mathbf{H}\) is the measurement matrix and \(\mathbf{v}_{t}\sim\mathcal{N}(\mathbf{0},\mathbf{R})\) where \(\mathbf{R}\succ\mathbf{0}\). The random vectors \(\{\mathbf{x}_{0},\mathbf{\eta}_{1},...,\mathbf{\eta}_{t},\mathbf{v}_{1},...,\mathbf{v}_{t}\}\) are assumed to all be independent. From this general case, we will study two simplifications. First, we consider a system without dynamics (or equivalently a single time step of the system), corresponding: \(\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{v}\). Second, we will study the system after it has converged to its stationary distribution, assuming that \(\mathbf{A}\) is asymptotically stable. In this case, if \(t\) is sufficiently large, we have that \(\mathbf{x}_{t}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{L})\), where \(\mathbf{\Sigma}_{L}\) is the solution to the discrete Lyapunov equation \(\mathbf{\Sigma}_{\mathbf{L}}=\mathbf{A}\mathbf{\Sigma}_{\mathbf{L}}\mathbf{A}^{\mathbf{T}}+\mathbf{Q}\). 
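For the stationary regime just described, the Lyapunov fixed point can be computed directly. A small numerical sketch with an assumed stable \(\mathbf{A}\) and noise covariance \(\mathbf{Q}\) (illustrative values only), using SciPy's discrete Lyapunov solver:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative 2-state system; A is asymptotically stable (eigenvalues inside the unit circle).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
Q = 0.05 * np.eye(2)

# Stationary state covariance: Sigma_L = A Sigma_L A^T + Q.
Sigma_L = solve_discrete_lyapunov(A, Q)
print("Sigma_L =\n", Sigma_L)
print("fixed-point residual =", np.max(np.abs(A @ Sigma_L @ A.T + Q - Sigma_L)))
```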
### _Bayesian Inference_ In Bayesian inference, to rigorously update our beliefs about \(\mathbf{X}\) with observation data \(\mathbf{Y}\), we apply Bayes' theorem, \[p\left(\mathbf{X}\mid\mathbf{Y}\right)=\frac{p\left(\mathbf{Y}\mid\mathbf{X}\right)p\left(\mathbf{X}\right)}{p\left(\mathbf{Y}\right)}. \tag{3}\] The prior \(p\left(\mathbf{X}\right)\) reflects our initial beliefs about \(\mathbf{X}\), while \(p\left(\mathbf{X}\mid\mathbf{Y}\right)\) is our posterior (after observations) belief. The likelihood \(p\left(\mathbf{Y}\mid\mathbf{X}\right)\) is the probability of observing \(\mathbf{Y}\) given a state \(\mathbf{X}\), while \(p\left(\mathbf{Y}\right)\) is the overall probability of observing the data given our prior (called the evidence). Often we are interested in measuring how informative the data is. To do this we measure our change in belief, i.e. the information gain, using the Kullback-Leibler (KL) divergence, \[\text{D}_{\text{KL}}\left[p\left(\mathbf{X}\mid\mathbf{Y}\right)\mid\mid p\left(\mathbf{X}\right)\right]=\int p\left(\mathbf{X}\mid\mathbf{Y}\right)\log\frac{p\left(\mathbf{X}\mid\mathbf{Y}\right)}{p\left(\mathbf{X}\right)}d\mathbf{X}. \tag{4}\] For the Gaussian case where \(p\left(\mathbf{X}\mid\mathbf{Y}\right)\sim\mathcal{N}\left(\mathbf{\mu}_{\mathbf{1}},\mathbf{\Sigma}_{\mathbf{1}}\right)\) and \(p\left(\mathbf{X}\right)\sim\mathcal{N}\left(\mathbf{\mu}_{\mathbf{0}},\mathbf{\Sigma}_{\mathbf{0}}\right)\), the KL divergence is \[\frac{1}{2}\left(\text{Tr}\left[\mathbf{\Sigma}_{\mathbf{0}}^{-1}\mathbf{\Sigma}_{\mathbf{1}}\right]-n+\left(\mathbf{\mu}_{\mathbf{1}}-\mathbf{\mu}_{\mathbf{0}}\right)^{T}\mathbf{\Sigma}_{\mathbf{0}}^{-1}\left(\mathbf{\mu}_{\mathbf{1}}-\mathbf{\mu}_{\mathbf{0}}\right)+\log\frac{\mid\mathbf{\Sigma}_{\mathbf{0}}\mid}{\mid\mathbf{\Sigma}_{\mathbf{1}}\mid}\right). \tag{5}\] The KL divergence can be generalized using a more expressive, yet still information-theoretically valid, measure of information [4] defined over three distributions \(r(\mathbf{X})\), \(p(\mathbf{X})\), and \(q(\mathbf{X})\), given by \[\mathcal{I}_{r(\mathbf{X})}[p(\mathbf{X})\mid\mid q(\mathbf{X})]=\int r\left(\mathbf{X}\right)\log\frac{p\left(\mathbf{X}\right)}{q\left(\mathbf{X}\right)}d\mathbf{X}. \tag{6}\] The interpretation of this form of information is that we measure a change in belief (e.g. information gained or lost) when updating from \(q\left(\mathbf{X}\right)\) to \(p\left(\mathbf{X}\right)\) in the view of \(r\left(\mathbf{X}\right)\). The view defines our reference frame for assessing changes in information. Typically, both \(r\left(\mathbf{X}\right)\) and \(p\left(\mathbf{X}\right)\) would be the posterior and \(q\left(\mathbf{X}\right)\) the prior, recovering the KL divergence. However, in the case where there is a model discrepancy, \(r\left(\mathbf{X}\right)\) could be the unknown posterior from the true model, while \(p\left(\mathbf{X}\right)\) could be the inferred posterior from the model with discrepancy. Therefore, we could measure whether inference with the model discrepancy still gets close to the correct result. We note that, unlike the KL divergence, this measure can be negative, meaning that \(q\left(\mathbf{X}\right)\) provides more information about \(r\left(\mathbf{X}\right)\) than \(p\left(\mathbf{X}\right)\) does.
For the case where \(r(\mathbf{X})\), \(p(\mathbf{X})\), and \(q(\mathbf{X})\) are all described by multivariate Gaussians, \[\begin{split}\mathcal{I}_{r(\mathbf{X})}[p(\mathbf{X})\mid\mid q(\mathbf{X})]=\frac{1}{2}\Big{(}&\text{Tr}\left[\mathbf{\Sigma}_{q}^{-1}\mathbf{\Sigma}_{r}\right]-\text{Tr}\left[\mathbf{\Sigma}_{p}^{-1}\mathbf{\Sigma}_{r}\right]-\left(\mathbf{\mu}_{r}-\mathbf{\mu}_{p}\right)^{T}\mathbf{\Sigma}_{p}^{-1}\left(\mathbf{\mu}_{r}-\mathbf{\mu}_{p}\right)\\ &+\left(\mathbf{\mu}_{r}-\mathbf{\mu}_{q}\right)^{T}\mathbf{\Sigma}_{q}^{-1}\left(\mathbf{\mu}_{r}-\mathbf{\mu}_{q}\right)+\log\frac{\mid\mathbf{\Sigma}_{q}\mid}{\mid\mathbf{\Sigma}_{p}\mid}\Big{)}.\end{split} \tag{7}\] This uses the fact that Eq. 6 can be expressed as the difference of two KL divergences, and then employs Eq. 5. ### _Bayesian Filtering_ For a Markov process where the state \(\mathbf{x_{t}}\) only depends on \(\mathbf{x_{t-1}}\) and the observation \(\mathbf{y_{t}}\) only depends on \(\mathbf{x_{t}}\), we can simplify the inference problem for the state \(\mathbf{x_{t}}\) given a time series of observations \(\mathbf{Y_{t}}=\{\mathbf{y_{0}}\ldots\mathbf{y_{t}}\}\) as \[p\left(\mathbf{x_{t}}\mid\mathbf{Y_{t}}\right)=\frac{p\left(\mathbf{y_{t}}\mid\mathbf{x_{t}}\right)p\left(\mathbf{x_{t}}\mid\mathbf{Y_{t-1}}\right)}{p\left(\mathbf{y_{t}}\mid\mathbf{Y_{t-1}}\right)}. \tag{8}\] Using this, the Bayesian filter for the system described by Eq. (1)-(2) is the Kalman filter, \[\mathbf{\mu}_{t\mid t-1} =\mathbf{A}\mathbf{\mu}_{t-1\mid t-1} \tag{9}\] \[\mathbf{\Sigma}_{t\mid t-1} =\mathbf{A}\mathbf{\Sigma}_{t-1\mid t-1}\mathbf{A}^{T}+\mathbf{Q}\] (10) \[\mathbf{\mu}_{t\mid t} =\mathbf{\mu}_{t\mid t-1}+\mathbf{K}_{t}(\mathbf{y_{t}}-\mathbf{H}\mathbf{\mu}_{t\mid t-1})\] (11) \[\mathbf{\Sigma}_{t\mid t} =(\mathbf{I}-\mathbf{K}_{t}\mathbf{H})\mathbf{\Sigma}_{t\mid t-1}, \tag{12}\] where \(\mathbf{K}_{t}=\mathbf{\Sigma}_{t\mid t-1}\mathbf{H}^{T}\mathbf{S}_{t}^{-1}\) is the Kalman gain matrix and \(\mathbf{S_{t}}=\mathbf{H}\mathbf{\Sigma}_{t\mid t-1}\mathbf{H}^{T}+\mathbf{R}\) is the predictive uncertainty. Considering a single time step, the _a-priori_ estimator of \(\mathbf{x}_{t}\) is \(\mathbf{\mu}_{t\mid t-1}\) with covariance \(\mathbf{\Sigma}_{t\mid t-1}\). The _a-posteriori_ estimator of \(\mathbf{x}_{t}\) is \(\mathbf{\mu}_{t\mid t}\) with covariance \(\mathbf{\Sigma}_{t\mid t}\). Therefore, the prior, posterior, and evidence are \[p(\mathbf{x}_{t}) \sim\mathcal{N}(\mathbf{\mu}_{t\mid t-1},\mathbf{\Sigma}_{t\mid t-1}), \tag{13}\] \[p(\mathbf{x}_{t}\mid\mathbf{y}_{t},\mathbf{d}) \sim\mathcal{N}(\mathbf{\mu}_{t\mid t},\mathbf{\Sigma}_{t\mid t}),\] (14) \[p(\mathbf{y}_{t}\mid\mathbf{d}) \sim\mathcal{N}(\mathbf{H}\mathbf{\mu}_{t\mid t-1},\mathbf{S_{t}}). \tag{15}\] As we can see from Eq. 9 - 12, only the means \(\mathbf{\mu}\) depend on the observations \(\mathbf{y}\). Thus, when \(\mathbf{A}\) is asymptotically stable we can find the stationary values of the covariances. Here we define \(\mathbf{\Sigma}_{t\mid t-1}\rightarrow\mathbf{\Gamma}\) and \(\mathbf{\Sigma}_{t\mid t}\rightarrow\mathbf{\Sigma}_{D}\) as \(t\rightarrow\infty\).
We do this by first using the discrete time algebraic Riccati equation (DARE) given by \[\boldsymbol{\Gamma}=\boldsymbol{A}\boldsymbol{\Gamma}\boldsymbol{A}^{T}+\boldsymbol{Q}-\boldsymbol{A}\boldsymbol{\Gamma}\boldsymbol{H}^{T}(\boldsymbol{H}\boldsymbol{\Gamma}\boldsymbol{H}^{T}+\boldsymbol{R})^{-1}\boldsymbol{H}\boldsymbol{\Gamma}\boldsymbol{A}^{T} \tag{16}\] and then solve for \(\boldsymbol{\Sigma}_{D}\) via \[\boldsymbol{\Sigma}_{D}=\boldsymbol{\Gamma}-\boldsymbol{\Gamma}\boldsymbol{H}^{T}(\boldsymbol{H}\boldsymbol{\Gamma}\boldsymbol{H}^{T}+\boldsymbol{R})^{-1}\boldsymbol{H}\boldsymbol{\Gamma}. \tag{17}\] ### _Bayesian Optimal Experimental Design_ In BOED, the first step to modeling the problem is to define a utility function \(U(\boldsymbol{d})\) that gives the value of performing an experiment at \(\boldsymbol{d}\in\mathcal{D}\). The set \(\mathcal{D}\) spans the space of possible designs. In Bayesian design, the utility is a function of the posterior distribution \(p(\boldsymbol{X}\mid\boldsymbol{d},\boldsymbol{Y})\). The utility function is maximized to find the optimal design \(\boldsymbol{d}^{*}\), i.e. \(\boldsymbol{d}^{*}=\operatorname*{argmax}_{\boldsymbol{d}\in\mathcal{D}}U(\boldsymbol{d})\). The choice of the utility function \(U(\boldsymbol{d})\) is crucial, as different functions will usually lead to different optimal designs [8]. One of the most principled choices often used in BOED is the mutual information. This is the information gained about \(\boldsymbol{X}\) by taking measurements, \(\boldsymbol{Y}\), according to design \(\boldsymbol{d}\). This is just the KL divergence from the prior to the posterior, \(\text{D}_{\text{KL}}[p(\boldsymbol{X}\mid\boldsymbol{Y},\boldsymbol{d})||p(\boldsymbol{X})]\), Eq. 4. However, at the point of choosing \(\boldsymbol{d}\) we do not have the measurements. Thus, in order to assess the effectiveness of the design \(\boldsymbol{d}\), we take the expected KL divergence over plausible data sets \(p(\boldsymbol{Y}\mid\boldsymbol{d})\). This utility function is known as the Expected Information Gain (EIG) and is defined as \[\text{EIG}(\boldsymbol{d})=\mathbb{E}_{p(\boldsymbol{Y}\mid\boldsymbol{d})}[\text{D}_{\text{KL}}[p(\boldsymbol{X}\mid\boldsymbol{Y},\boldsymbol{d})||p(\boldsymbol{X})]]=\int p(\boldsymbol{X},\boldsymbol{Y}\mid\boldsymbol{d})\log\frac{p(\boldsymbol{X}\mid\boldsymbol{Y},\boldsymbol{d})}{p(\boldsymbol{X})}\,d\boldsymbol{X}\,d\boldsymbol{Y}. \tag{18}\] ## III Bayesian Optimal Experimental Design Criteria ### _Expected Information Gain_ For the linear Gaussian model given by Eq.(1)-(2), we can derive expressions for the EIG. _Single Step Update_: First, for the case of a single update step (or equivalently, when no dynamics are present), we begin by substituting the values from Eq. 9 - 12 into the Gaussian KL divergence expression, Eq. 5.
Rearranging terms with the matrix inversion lemma and the cyclic property of the trace, the information gain from the prior to the posterior is \[\begin{split}\text{D}_{\text{KL}}(p(\boldsymbol{x}_{t}\mid\boldsymbol{y}_{t},\boldsymbol{d})||p(\boldsymbol{x}_{t}))=\frac{1}{2}\Big{[}&\log|\boldsymbol{I}+\boldsymbol{H}^{T}\boldsymbol{R}^{-1}\boldsymbol{H}\boldsymbol{\Sigma}_{t|t-1}|-\text{tr}[\boldsymbol{S}_{t}^{-1}\boldsymbol{H}\boldsymbol{\Sigma}_{t|t-1}\boldsymbol{H}^{T}]\\ &+(\boldsymbol{y}_{t}-\boldsymbol{H}\boldsymbol{\mu}_{t|t-1})^{T}\boldsymbol{S}_{t}^{-1}\boldsymbol{H}\boldsymbol{\Sigma}_{t|t-1}\boldsymbol{H}^{T}\boldsymbol{S}_{t}^{-1}(\boldsymbol{y}_{t}-\boldsymbol{H}\boldsymbol{\mu}_{t|t-1})\Big{]}.\end{split} \tag{19}\] Only the last term depends on \(\boldsymbol{y}_{t}\), so for the EIG we just need to find the expectation of the quadratic term, which is \[\begin{split}\mathbb{E}_{p(\boldsymbol{y}_{t}|\boldsymbol{d})}[(\boldsymbol{y}_{t}-\boldsymbol{H}\boldsymbol{\mu}_{t|t-1})^{T}\boldsymbol{S}_{t}^{-1}\boldsymbol{H}\boldsymbol{\Sigma}_{t|t-1}\boldsymbol{H}^{T}\boldsymbol{S}_{t}^{-1}(\boldsymbol{y}_{t}-\boldsymbol{H}\boldsymbol{\mu}_{t|t-1})]&=\text{Tr}[\boldsymbol{S}_{t}^{-1}\boldsymbol{H}\boldsymbol{\Sigma}_{t|t-1}\boldsymbol{H}^{T}\boldsymbol{S}_{t}^{-1}\,\text{Cov}(\boldsymbol{y}_{t}-\boldsymbol{H}\boldsymbol{\mu}_{t|t-1})]\\ &=\text{Tr}[\boldsymbol{S}_{t}^{-1}\boldsymbol{H}\boldsymbol{\Sigma}_{t|t-1}\boldsymbol{H}^{T}].\end{split} \tag{20}\] Here we recall Eq. 15, so \(\boldsymbol{y}_{t}-\boldsymbol{H}\boldsymbol{\mu}_{t|t-1}\) has mean \(\boldsymbol{0}\) and covariance \(\boldsymbol{S}_{t}\). Therefore, noting the cancellation of the trace terms, the EIG of the single step of the Kalman filter is \[\text{EIG}(\boldsymbol{d})=\mathbb{E}_{p(\boldsymbol{y}_{t}\mid\boldsymbol{d})}[\text{D}_{\text{KL}}(p(\boldsymbol{x}_{t}\mid\boldsymbol{y}_{t},\boldsymbol{d})||p(\boldsymbol{x}_{t}))]=\frac{1}{2}\log\frac{|\boldsymbol{\Sigma}_{t|t-1}|}{|\boldsymbol{\Sigma}_{t|t}|}=\frac{1}{2}\log|\boldsymbol{I}+\boldsymbol{H}^{T}\boldsymbol{R}^{-1}\boldsymbol{H}\boldsymbol{\Sigma}_{t|t-1}|. \tag{21}\]
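A minimal numerical check of Eq. (21) (Python/NumPy, continuing the illustrative matrices assumed earlier): one Kalman predict/update cycle per Eqs. (9)-(12), comparing the determinant-ratio and the closed-form expressions of the single-step EIG.

```
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
Q = 0.05 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])

def kalman_step(mu, Sigma, y):
    """One predict/update cycle, Eqs. (9)-(12)."""
    mu_pred = A @ mu
    Sig_pred = A @ Sigma @ A.T + Q
    S = H @ Sig_pred @ H.T + R
    K = Sig_pred @ H.T @ np.linalg.inv(S)
    mu_post = mu_pred + K @ (y - H @ mu_pred)
    Sig_post = (np.eye(len(mu)) - K @ H) @ Sig_pred
    return mu_post, Sig_post, Sig_pred

mu, Sigma = np.zeros(2), np.eye(2)
mu, Sig_post, Sig_pred = kalman_step(mu, Sigma, np.array([0.3]))

eig_ratio  = 0.5 * np.log(np.linalg.det(Sig_pred) / np.linalg.det(Sig_post))
eig_closed = 0.5 * np.log(np.linalg.det(np.eye(2) + H.T @ np.linalg.inv(R) @ H @ Sig_pred))
print(eig_ratio, eig_closed)   # the two expressions in Eq. (21) agree
```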
_Infinite Horizon_: We may also be interested in assessing the EIG about a state \(\boldsymbol{x}_{t}\) when the system and filters have converged to their stationary distributions. For this we define our prior knowledge about \(\boldsymbol{x}_{t}\) as the solution to the Lyapunov equation, i.e., \(p(\boldsymbol{x}_{t}|\boldsymbol{d})=\mathcal{N}(\boldsymbol{0},\boldsymbol{\Sigma}_{L})\) when \(t\) is sufficiently large to be in the asymptotic regime. Similarly, when we have a sufficiently large set of observations, \(\boldsymbol{Y}_{t}\), we know the posterior belief about \(\boldsymbol{x}_{t}\) will have the form \(p(\boldsymbol{x}_{t}|\boldsymbol{Y}_{t},\boldsymbol{d})=\mathcal{N}(\boldsymbol{\mu}_{t}(\boldsymbol{Y}_{t}),\boldsymbol{\Sigma}_{D})\). Here we express \(\boldsymbol{\mu}_{t}\) as a function of \(\boldsymbol{Y}_{t}\) to emphasize that \(\boldsymbol{\mu}_{t}\) is a random variable defined by \(\boldsymbol{Y}_{t}\). Therefore, the information gain from observing \(\boldsymbol{Y}_{t}\) is \[\text{D}_{\text{KL}}(p(\boldsymbol{x}_{t}|\boldsymbol{Y}_{t},\boldsymbol{d})||p(\boldsymbol{x}_{t}))=\frac{1}{2}\left(\text{Tr}\left[\boldsymbol{\Sigma}_{L}^{-1}\boldsymbol{\Sigma}_{D}\right]-n+\boldsymbol{\mu}_{t}(\boldsymbol{Y}_{t})^{T}\boldsymbol{\Sigma}_{L}^{-1}\boldsymbol{\mu}_{t}(\boldsymbol{Y}_{t})+\log\frac{|\boldsymbol{\Sigma}_{L}|}{|\boldsymbol{\Sigma}_{D}|}\right). \tag{22}\] Again, the only term that depends on the observations is the quadratic term. Therefore, to compute the EIG we first derive the expectation, \[\begin{split}\mathbb{E}_{p(\boldsymbol{Y}_{t}|\boldsymbol{d})}[\boldsymbol{\mu}_{t}(\boldsymbol{Y}_{t})^{T}\boldsymbol{\Sigma}_{L}^{-1}\boldsymbol{\mu}_{t}(\boldsymbol{Y}_{t})]&=\text{Tr}[\boldsymbol{\Sigma}_{L}^{-1}\,\mathbb{E}[\boldsymbol{\mu}_{t}(\boldsymbol{Y}_{t})\boldsymbol{\mu}_{t}(\boldsymbol{Y}_{t})^{T}]]\\ &=\text{Tr}[\boldsymbol{\Sigma}_{L}^{-1}(\boldsymbol{\Sigma}_{L}-\boldsymbol{\Sigma}_{D})]=n-\text{Tr}[\boldsymbol{\Sigma}_{L}^{-1}\boldsymbol{\Sigma}_{D}].\end{split} \tag{23}\] Here we use that \(\mathbb{E}_{p(\boldsymbol{Y}_{t}|\boldsymbol{d})}[\boldsymbol{\mu}_{t}(\boldsymbol{Y}_{t})]=\boldsymbol{0}\) and \(\mathbb{E}[\boldsymbol{\mu}_{t}(\boldsymbol{Y}_{t})\boldsymbol{\mu}_{t}(\boldsymbol{Y}_{t})^{T}]=\boldsymbol{\Sigma}_{L}-\boldsymbol{\Sigma}_{D}\). This is shown as Eq. 57 in Appendix VI-A. Therefore, taking the expectation of Eq. 22 over \(\boldsymbol{Y}_{t}\) and substituting in the result of Eq. 23, which cancels the trace terms, we find similarly to Eq. 21 that \[\text{EIG}(\boldsymbol{d})=\mathbb{E}_{p(\boldsymbol{Y}_{t}\mid\boldsymbol{d})}[\text{D}_{\text{KL}}(p(\boldsymbol{x}_{t}\mid\boldsymbol{Y}_{t},\boldsymbol{d})||p(\boldsymbol{x}_{t}))]\to\frac{1}{2}\log\frac{|\boldsymbol{\Sigma}_{L}|}{|\boldsymbol{\Sigma}_{D}|},\text{ as }t\to\infty. \tag{24}\] ### _Expected Generalized Information Gain_ Using the generalized measure of information in Eq. 6, we can assess how much information is expected to be gained or lost by an experiment \(\boldsymbol{d}\) when there is a model discrepancy. We define the true model as \(\mathcal{M}^{*}\) and the model with discrepancy as \(\mathcal{M}\), both of which have the same unknown states, \(\mathbf{X}\), which we seek to infer. The expectation is taken over data that is generated according to \(p(\mathbf{Y}\mid\mathbf{d},\mathcal{M}^{*})\). This leads to the Expected Generalized Information Gain (EGIG) given by \[\text{EGIG}(\mathbf{d},\mathcal{M},\mathcal{M}^{*})=\int p(\mathbf{X},\mathbf{Y}\mid\mathbf{d},\mathcal{M}^{*})\log\frac{p(\mathbf{X}\mid\mathbf{Y},\mathbf{d},\mathcal{M})}{p(\mathbf{X}\mid\mathcal{M})}d\mathbf{X}d\mathbf{Y} \tag{25}\] \[=\int p(\mathbf{X},\mathbf{Y}\mid\mathbf{d},\mathcal{M}^{*})\log\frac{p(\mathbf{Y}\mid\mathbf{X},\mathbf{d},\mathcal{M})}{p(\mathbf{Y}\mid\mathbf{d},\mathcal{M})}d\mathbf{X}d\mathbf{Y}. \tag{26}\] Note that Eq. 26 is a simple rearrangement using Bayes' theorem, which can be easier to compute for some problems. Typically we do not know the model \(\mathcal{M}^{*}\), so in practice we should either define a set of plausible models we want to be robust to, or we can assess the sensitivity to perturbations away from \(\mathcal{M}\) by computing derivatives of the EGIG using either automatic differentiation or numerical derivatives.
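As a sanity check on the definition in Eqs. (25)-(26), the EGIG can be estimated by Monte Carlo for the static case \(\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{v}\): draw \((\mathbf{x},\mathbf{y})\) from an assumed true model \(\mathcal{M}^{*}\) and average the log posterior-to-prior ratio computed under the (misspecified) inference model \(\mathcal{M}\). The sketch below (Python with NumPy/SciPy) is illustrative only; the prior, the two measurement models, and the sample size are all assumptions.

```
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(1)

Sigma0 = np.eye(2)                       # shared prior covariance for x (zero mean)
H      = np.array([[1.0, 0.0]])          # inference model M
R      = np.array([[0.1]])
H_true = np.array([[1.0, 0.2]])          # assumed true model M* (slightly different H)
R_true = np.array([[0.1]])

# Posterior under M for the static model y = H x + v.
S = H @ Sigma0 @ H.T + R
K = Sigma0 @ H.T @ np.linalg.inv(S)
Sigma_post = Sigma0 - K @ H @ Sigma0

def egig_mc(n_samples=5000):
    total = 0.0
    for _ in range(n_samples):
        x = rng.multivariate_normal(np.zeros(2), Sigma0)                 # x ~ p(x)
        y = H_true @ x + rng.multivariate_normal(np.zeros(1), R_true)    # y ~ p(y|x, M*)
        mu_post = K @ y                                                  # posterior mean under M
        total += (mvn.logpdf(x, mean=mu_post, cov=Sigma_post)
                  - mvn.logpdf(x, mean=np.zeros(2), cov=Sigma0))
    return total / n_samples

print(egig_mc())
```

Such a Monte Carlo estimate is mainly useful for verifying closed-form expressions like the one derived next.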
In the context of inferring \(\mathbf{x}_{t}\) with a system defined by Eq. 1 - 2, we define the true model \(\mathcal{M}^{*}=\{\mathbf{A}^{*},\mathbf{H}^{*},\mathbf{Q}^{*},\mathbf{R}^{*}\}\) and the model we use for inference as \(\mathcal{M}=\{\mathbf{A},\mathbf{H},\mathbf{Q},\mathbf{R}\}\). _Single Step Update_: We start with the EGIG form of Eq. 26. We define \(\mathbf{\mu}_{t|t-1}=\mathbf{A}\mathbf{\mu}_{t-1|t-1}\), \(\mathbf{\mu}^{*}_{t|t-1}=\mathbf{A}^{*}\mathbf{\mu}^{*}_{t-1|t-1}\), \(\mathbf{\Sigma}_{t|t-1}=\mathbf{A}\mathbf{\Sigma}_{t-1|t-1}\mathbf{A}^{T}+\mathbf{Q}\), and \(\mathbf{\Sigma}^{*}_{t|t-1}=\mathbf{A}^{*}\mathbf{\Sigma}^{*}_{t-1|t-1}\mathbf{A}^{*T}+\mathbf{Q}^{*}\). We then note the distributions, \[p(\mathbf{x}_{t},\mathbf{y}_{t}\mid d,\mathcal{M}^{*})=\mathcal{N}\Bigg{(}\begin{pmatrix}\mathbf{\mu}^{*}_{t|t-1}\\ \mathbf{H}^{*}\mathbf{\mu}^{*}_{t|t-1}\end{pmatrix},\begin{pmatrix}\mathbf{\Sigma}^{*}_{t|t-1}&\mathbf{\Sigma}^{*}_{t|t-1}\mathbf{H}^{*T}\\ \mathbf{H}^{*}\mathbf{\Sigma}^{*}_{t|t-1}&\mathbf{S}^{*}_{t}\end{pmatrix}\Bigg{)} \tag{27}\] \[p(\mathbf{y}_{t}\mid\mathbf{x}_{t},d,\mathcal{M})=\mathcal{N}\left(\mathbf{H}\mathbf{x}_{t},\mathbf{R}\right) \tag{28}\] \[p(\mathbf{y}_{t}\mid d,\mathcal{M})=\mathcal{N}\left(\mathbf{H}\mathbf{\mu}_{t|t-1},\mathbf{S}_{t}\right). \tag{29}\] Recall that \(\mathbf{S}_{t}=\mathbf{H}\mathbf{\Sigma}_{t|t-1}\mathbf{H}^{T}+\mathbf{R}\) and \(\mathbf{S}^{*}_{t}=\mathbf{H}^{*}\mathbf{\Sigma}^{*}_{t|t-1}\mathbf{H}^{*T}+\mathbf{R}^{*}\). Substituting these distributions into Eq. 26, we arrive at \[\begin{split}\text{EGIG}(\mathbf{d},\mathcal{M},\mathcal{M}^{*})&=\mathbb{E}_{p(\mathbf{x}_{t},\mathbf{y}_{t}|\mathcal{M}^{*})}\left[\log\frac{p(\mathbf{x}_{t}\mid\mathbf{y}_{t},d,\mathcal{M})}{p(\mathbf{x}_{t}\mid d,\mathcal{M})}\right]\\ &=\frac{1}{2}\bigg{(}\log\frac{\mid\mathbf{S}_{t}\mid}{\mid\mathbf{R}\mid}-\mathbb{E}[(\mathbf{y}_{t}-\mathbf{H}\mathbf{x}_{t})^{T}\mathbf{R}^{-1}(\mathbf{y}_{t}-\mathbf{H}\mathbf{x}_{t})]+\mathbb{E}[(\mathbf{y}_{t}-\mathbf{H}\mathbf{\mu}_{t|t-1})^{T}\mathbf{S}^{-1}_{t}(\mathbf{y}_{t}-\mathbf{H}\mathbf{\mu}_{t|t-1})]\bigg{)}.\end{split} \tag{30}\] Using Eq. 27 - 29, it is straightforward to compute the means and covariances of \((\mathbf{y}_{t}-\mathbf{H}\mathbf{x}_{t})\) and \((\mathbf{y}_{t}-\mathbf{H}\mathbf{\mu}_{t|t-1})\), \[(\mathbf{y}_{t}-\mathbf{H}\mathbf{x}_{t})\sim\mathcal{N}\big{(}(\mathbf{H}^{*}-\mathbf{H})\mathbf{\mu}^{*}_{t|t-1},\;(\mathbf{H}^{*}-\mathbf{H})\mathbf{\Sigma}^{*}_{t|t-1}(\mathbf{H}^{*}-\mathbf{H})^{T}+\mathbf{R}^{*}\big{)} \tag{31}\] \[(\mathbf{y}_{t}-\mathbf{H}\mathbf{\mu}_{t|t-1})\sim\mathcal{N}(\mathbf{H}^{*}\mathbf{\mu}^{*}_{t|t-1}-\mathbf{H}\mathbf{\mu}_{t|t-1},\mathbf{S}^{*}_{t}). \tag{32}\] Given these distributions, it is useful to define the variables \(\mathbf{\Delta}_{H}=\mathbf{H}^{*}-\mathbf{H}\) and \(\mathbf{\Delta}_{y}=(\mathbf{H}^{*}\mathbf{\mu}^{*}_{t|t-1}-\mathbf{H}\mathbf{\mu}_{t|t-1})\). Therefore, we can write the EGIG as \[\begin{split}\text{EGIG}(\mathbf{d},\mathcal{M},\mathcal{M}^{*})=\frac{1}{2}\bigg{(}&\log\frac{\mid\mathbf{S}_{t}\mid}{\mid\mathbf{R}\mid}-\text{Tr}[\mathbf{R}^{-1}\mathbf{\Delta}_{H}\mathbf{\Sigma}^{*}_{t|t-1}\mathbf{\Delta}^{T}_{H}]-\text{Tr}[\mathbf{R}^{-1}\mathbf{R}^{*}]+\text{Tr}[\mathbf{S}^{-1}_{t}\mathbf{S}^{*}_{t}]\\ &-\mathbf{\mu}^{*T}_{t|t-1}\mathbf{\Delta}^{T}_{H}\mathbf{R}^{-1}\mathbf{\Delta}_{H}\mathbf{\mu}^{*}_{t|t-1}+\mathbf{\Delta}^{T}_{y}\mathbf{S}^{-1}_{t}\mathbf{\Delta}_{y}\bigg{)}.\end{split} \tag{33}\]
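The closed form in Eq. (33) is easy to evaluate numerically; the sketch below (Python/NumPy, with assumed illustrative matrices) computes the single-step EGIG given predicted means and covariances under \(\mathcal{M}\) and \(\mathcal{M}^{*}\).

```
import numpy as np

def egig_single_step(H, R, H_s, R_s, mu_pred, mu_pred_s, Sig_pred, Sig_pred_s):
    """Single-step EGIG, Eq. (33). Starred (_s) arguments belong to the true model M*."""
    S   = H   @ Sig_pred   @ H.T   + R          # predictive covariance under M
    S_s = H_s @ Sig_pred_s @ H_s.T + R_s        # predictive covariance under M*
    dH = H_s - H
    dy = H_s @ mu_pred_s - H @ mu_pred
    R_inv, S_inv = np.linalg.inv(R), np.linalg.inv(S)
    return 0.5 * (np.log(np.linalg.det(S) / np.linalg.det(R))
                  - np.trace(R_inv @ dH @ Sig_pred_s @ dH.T)
                  - np.trace(R_inv @ R_s)
                  + np.trace(S_inv @ S_s)
                  - mu_pred_s @ dH.T @ R_inv @ dH @ mu_pred_s
                  + dy @ S_inv @ dy)

# Illustrative values (assumptions): identical predictions, slightly different H.
Sig = 0.5 * np.eye(2)
print(egig_single_step(np.array([[1.0, 0.0]]), np.array([[0.1]]),
                       np.array([[1.0, 0.2]]), np.array([[0.1]]),
                       np.zeros(2), np.zeros(2), Sig, Sig))
```

A Monte Carlo estimate such as the one sketched earlier can be used to cross-check this expression for small examples.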
_Infinite Horizon_: For the infinite horizon case of inferring \(\mathbf{x}_{t}\), we know that our prior and posterior are Gaussian. Therefore, when computing the EGIG we can use the expression in Eq. 7 and then compute the expectation over observations \(\mathbf{Y}_{t}\sim p(\mathbf{Y}_{t}|\mathcal{M}^{*})\). Here, \(r(\mathbf{X})\) is \(p(\mathbf{x}_{t}\mid\mathbf{Y}_{t},d,\mathcal{M}^{*})\), \(p(\mathbf{X})\) is \(p(\mathbf{x}_{t}\mid\mathbf{Y}_{t},d,\mathcal{M})\), and \(q(\mathbf{X})\) is \(p(\mathbf{x}_{t}\mid\mathcal{M})\). By inspection, we see again that the only terms that depend on \(\mathbf{Y}_{t}\) are the quadratic terms. Therefore, we begin with those terms. First, we note the asymptotic results: \(\mathbf{\Sigma}_{t|t}\to\mathbf{\Sigma}_{D}\), \(\mathbf{\Sigma}^{*}_{t|t}\to\mathbf{\Sigma}^{*}_{D}\), \(\mathbf{\Sigma}_{t|0}\to\mathbf{\Sigma}_{L}\), \(\mathbf{\mu}_{t|0}=\mathbf{0}\), and \(\mathbf{\mu}^{*}_{t|t}\overset{t\to\infty}{\sim}\mathcal{N}(\mathbf{0},\mathbf{\Sigma}^{*}_{L}-\mathbf{\Sigma}^{*}_{D})\). This gives us that \[\mathbb{E}_{p(\mathbf{Y}_{t}|\mathcal{M}^{*})}\left[\left(\mathbf{\mu}^{*}_{t|t}-\mathbf{\mu}_{t|0}\right)^{T}\mathbf{\Sigma}^{-1}_{L}\left(\mathbf{\mu}^{*}_{t|t}-\mathbf{\mu}_{t|0}\right)\right]=\text{Tr}[\mathbf{\Sigma}^{-1}_{L}\left(\mathbf{\Sigma}^{*}_{L}-\mathbf{\Sigma}^{*}_{D}\right)]. \tag{34}\] For the second expectation, we again rely on results presented in detail in Appendix VI-A. First, \(\mathbb{E}_{p(\mathbf{Y}_{t}|\mathcal{M}^{*})}[\mathbf{\mu}^{*}_{t|t}-\mathbf{\mu}_{t|t}]=\mathbf{0}\). Therefore, \[\mathbb{E}_{p(\mathbf{Y}_{t}|\mathcal{M}^{*})}\left[\left(\mathbf{\mu}^{*}_{t|t}-\mathbf{\mu}_{t|t}\right)^{T}\mathbf{\Sigma}^{-1}_{D}\left(\mathbf{\mu}^{*}_{t|t}-\mathbf{\mu}_{t|t}\right)\right]=\text{Tr}[\mathbf{\Sigma}^{-1}_{D}\mathbf{M}_{\Delta}], \tag{35}\] where \(\mathbf{M}_{\Delta}\) denotes the asymptotic covariance of \(\mathbf{\mu}^{*}_{t|t}-\mathbf{\mu}_{t|t}\). It is obtained from the joint asymptotic covariance \(\mathbf{M}\) of the two filter means, which satisfies a discrete Lyapunov equation (detailed in Appendix VI-A) whose driving-noise covariance is \[\mathbf{\mathcal{Q}}=\begin{pmatrix}\mathbf{K}\mathbf{S}^{*}\mathbf{K}^{T}&\mathbf{K}\mathbf{S}^{*}\mathbf{K}^{*T}\\ \mathbf{K}^{*}\mathbf{S}^{*}\mathbf{K}^{T}&\mathbf{K}^{*}\mathbf{S}^{*}\mathbf{K}^{*T}\end{pmatrix}, \tag{39}\] with \(\mathbf{K}\) and \(\mathbf{K}^{*}\) the stationary Kalman gains under \(\mathcal{M}\) and \(\mathcal{M}^{*}\). Therefore, using these two expectations we arrive at the EGIG for the infinite horizon system, \[\text{EGIG}(\mathbf{d},\mathcal{M},\mathcal{M}^{*})=\mathbb{E}_{p(\mathbf{Y}_{t}|\mathcal{M}^{*})}\big{[}\mathcal{I}_{p(\mathbf{x}_{t}|\mathbf{Y}_{t},d,\mathcal{M}^{*})}[p(\mathbf{x}_{t}\mid\mathbf{Y}_{t},d,\mathcal{M})\mid\mid p(\mathbf{x}_{t}\mid\mathcal{M})]\big{]}\rightarrow\frac{1}{2}\bigg{(}\text{Tr}[\mathbf{\Sigma}_{L}^{-1}\mathbf{\Sigma}_{L}^{*}]-\text{Tr}[\mathbf{\Sigma}_{D}^{-1}(\mathbf{\Sigma}_{D}^{*}+\mathbf{M}_{\Delta})]+\log\frac{\mid\mathbf{\Sigma}_{L}\mid}{\mid\mathbf{\Sigma}_{D}\mid}\bigg{)}\ \text{as }t\rightarrow\infty. \tag{40}\] ### _Expected Discriminatory Information_ While EIG measures efficiency and EGIG measures robustness, we introduce the Expected Discriminatory Information (EDI) criterion to quantify how well an experiment can identify modeling failures. As such, unlike EGIG, which compares the Bayesian inference solutions in the domain of the states \(\mathbf{x}\), EDI compares them in the data domain, \(\mathbf{y}\). Therefore, we can compare models that have different states and forms, e.g. a different number of states. The EDI takes inspiration from the use of Bayes factors to compare models. Therefore, we define the EDI as the expected log Bayes factor given data from a true model \(\mathcal{M}^{*}\):
\[\text{EDI}(\mathbf{d},\mathcal{M},\mathcal{M}^{*})=\text{D}_{\text{KL}}\left[p\left(\mathbf{Y}\mid d,\mathcal{M}^{*}\right)\,\middle\|\,p\left(\mathbf{Y}\mid d,\mathcal{M}\right)\right]=\int p\left(\mathbf{Y}\mid d,\mathcal{M}^{*}\right)\log\frac{p\left(\mathbf{Y}\mid d,\mathcal{M}^{*}\right)}{p\left(\mathbf{Y}\mid d,\mathcal{M}\right)}d\mathbf{Y}. \tag{41}\] For the Bayesian filtering context where \(\mathbf{Y}_{t}=\{\mathbf{y}_{0}\ldots\mathbf{y}_{t}\}\), we can express the EDI through an iterative update, leveraging a strategy similar to computing the model evidence with a Bayesian filter, \[\begin{split}\text{EDI}(\mathbf{d},\mathcal{M},\mathcal{M}^{*},t)&=\int p\left(\mathbf{Y}_{t}\mid d,\mathcal{M}^{*}\right)\log\frac{p\left(\mathbf{Y}_{t}\mid d,\mathcal{M}^{*}\right)}{p\left(\mathbf{Y}_{t}\mid d,\mathcal{M}\right)}d\mathbf{Y}_{t}\\ &=\mathbb{E}_{p(\mathbf{y}_{t},\mathbf{Y}_{t-1}|d,\mathcal{M}^{*})}\left[\log\frac{p\left(\mathbf{y}_{t},\mathbf{Y}_{t-1}\mid d,\mathcal{M}^{*}\right)}{p\left(\mathbf{y}_{t},\mathbf{Y}_{t-1}\mid d,\mathcal{M}\right)}\right]\\ &=\mathbb{E}_{p(\mathbf{y}_{t},\mathbf{Y}_{t-1}|d,\mathcal{M}^{*})}\left[\log\frac{p\left(\mathbf{y}_{t}\mid\mathbf{Y}_{t-1},d,\mathcal{M}^{*}\right)}{p\left(\mathbf{y}_{t}\mid\mathbf{Y}_{t-1},d,\mathcal{M}\right)}\right]+\text{EDI}(\mathbf{d},\mathcal{M},\mathcal{M}^{*},t-1)\\ &=\mathbb{E}_{p(\mathbf{Y}_{t-1}|d,\mathcal{M}^{*})}\Big{[}\text{D}_{\text{KL}}\Big{[}p\big{(}\mathbf{y}_{t}\mid\mathbf{Y}_{t-1},d,\mathcal{M}^{*}\big{)}\,\big{\|}\,p\big{(}\mathbf{y}_{t}\mid\mathbf{Y}_{t-1},d,\mathcal{M}\big{)}\Big{]}\Big{]}+\text{EDI}(\mathbf{d},\mathcal{M},\mathcal{M}^{*},t-1).\end{split} \tag{42}\] Since the EDI is just a KL divergence, for the linear systems we have been studying in this paper it is fairly straightforward to express with the various quantities we have already derived. Therefore, we will state the main results without tedious algebraic manipulation. _Single Step Update_: For a single time step where the data is generated by the true process model \(p(\mathbf{y}_{t}\mid d,\mathcal{M}^{*})\) (see the \(\mathbf{y}_{t}\) marginal of Eq. 27), but we are evaluating \(\mathcal{M}\) according to \(p(\mathbf{y}_{t}\mid d,\mathcal{M})\) (see Eq. 29), we can compute the KL divergence for these Gaussian distributions using Eq. 5. This gives us \[\text{EDI}(\mathbf{d},\mathcal{M},\mathcal{M}^{*})=\frac{1}{2}\bigg{(}\text{Tr}[\mathbf{S}_{t}^{-1}\mathbf{S}_{t}^{*}]+\log\frac{\mid\mathbf{S}_{t}\mid}{\mid\mathbf{S}_{t}^{*}\mid}+\mathbf{\Delta}_{y}^{T}\mathbf{S}_{t}^{-1}\mathbf{\Delta}_{y}-s\bigg{)}, \tag{43}\] where \(s\) is the dimension of the observations, e.g. the number of sensors. Here we recall that \(\mathbf{\Delta}_{y}=(\mathbf{H}^{*}\mathbf{\mu}^{*}_{t|t-1}-\mathbf{H}\mathbf{\mu}_{t|t-1})\) and emphasize that \(\mathbf{\mu}^{*}_{t|t-1}\) and \(\mathbf{\mu}_{t|t-1}\) need not be of the same dimension, since the comparison is happening in the data space. For the special case where \(\mathbf{H}^{*}=[\mathbf{H},\mathbf{\Delta}]\), the state of the model \(\mathcal{M}^{*}\) is \(\mathbf{x}_{t}^{*}=[\mathbf{x}_{t},\mathbf{\delta}_{t}]^{T}\), \(\mathbf{\mu}_{\delta,t|t-1}=\mathbb{E}[\mathbf{\delta}_{t|t-1}]\), and \(\text{Cov}(\mathbf{x}_{t}^{*})=\text{Diag}[\mathbf{\Sigma}_{t|t-1},\mathbf{\Gamma}_{t|t-1}]\), i.e. the augmented state is independent of the other states. Then \[\begin{split}\text{EDI}(\mathbf{d},\mathcal{M},\mathcal{M}^{*})=\frac{1}{2}\bigg{(}&\text{Tr}[\mathbf{S}_{t}^{-1}\mathbf{\Delta}\mathbf{\Gamma}_{t|t-1}\mathbf{\Delta}^{T}]-\log|\mathbf{I}+\mathbf{S}_{t}^{-1}\mathbf{\Delta}\mathbf{\Gamma}_{t|t-1}\mathbf{\Delta}^{T}|\\ &+\mathbf{\mu}_{\delta,t|t-1}^{T}\mathbf{\Delta}^{T}\mathbf{S}_{t}^{-1}\mathbf{\Delta}\mathbf{\mu}_{\delta,t|t-1}\bigg{)}.\end{split} \tag{44}\]
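A small sketch of the single-step EDI in Eq. (43) (Python/NumPy, illustrative matrices assumed): it is simply the Gaussian KL divergence between the two predictive distributions in the data space, and the two models may have different state dimensions.

```
import numpy as np

def edi_single_step(H, R, mu_pred, H_s, R_s, mu_pred_s, Sig_pred, Sig_pred_s):
    """Single-step EDI, Eq. (43): KL between the predictive distributions of M* and M."""
    S   = H   @ Sig_pred   @ H.T   + R
    S_s = H_s @ Sig_pred_s @ H_s.T + R_s
    dy = H_s @ mu_pred_s - H @ mu_pred
    S_inv = np.linalg.inv(S)
    s = H.shape[0]                      # number of observations (sensors)
    return 0.5 * (np.trace(S_inv @ S_s)
                  + np.log(np.linalg.det(S) / np.linalg.det(S_s))
                  + dy @ S_inv @ dy - s)

# M* has one extra (unmodelled) state feeding the sensor; M ignores it.
H   = np.array([[1.0, 0.0]]);        Sig   = 0.5 * np.eye(2)
H_s = np.array([[1.0, 0.0, 0.5]]);   Sig_s = 0.5 * np.eye(3)
print(edi_single_step(H, np.array([[0.1]]), np.zeros(2),
                      H_s, np.array([[0.1]]), np.zeros(3), Sig, Sig_s))
```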
_Infinite Horizon_: For the asymptotic case we may choose to ask a slightly different question when assessing the EDI. Instead of asking about a single \(\mathbf{y}_{t}\) we can ask about the full trajectory \(\mathbf{Y}_{t}=\{\mathbf{y}_{0}\ldots\mathbf{y}_{t}\}\). Therefore, to compute the EDI we look to Eq. 42. Under the previous assumptions of asymptotic stability, since we know that the predictive distributions converge and become independent of the observations \(\mathbf{Y}_{t}\), we can expect the first term in Eq. 42 to converge to a constant, which we call \(\Delta_{\text{EDI}}\). Therefore we expect \(\text{EDI}(\mathbf{d},\mathcal{M},\mathcal{M}^{*},t)\to t\Delta_{\text{EDI}}\) as \(t\rightarrow\infty\), unless \(\Delta_{\text{EDI}}=0\), in which case there is only a finite amount of information to discriminate between the models based on the experiment, even in the infinite horizon case. Therefore, \(\Delta_{\text{EDI}}\) is the critical quantity for understanding the asymptotic EDI. Using the expression for the Gaussian KL divergence, Eq. 5, and taking the expectation using the asymptotic results found in Appendix VI-A, we find \[\Delta_{\text{EDI}}=\lim_{t\rightarrow\infty}\mathbb{E}_{p(\mathbf{Y}_{t-1}|d,\mathcal{M}^{*})}\Big{[}\text{D}_{\text{KL}}\Big{[}p\big{(}\mathbf{y}_{t}\mid\mathbf{Y}_{t-1},d,\mathcal{M}^{*}\big{)}\,\big{\|}\,p\big{(}\mathbf{y}_{t}\mid\mathbf{Y}_{t-1},d,\mathcal{M}\big{)}\Big{]}\Big{]}=\frac{1}{2}\bigg{(}\text{Tr}[\mathbf{S}^{-1}(\mathbf{S}^{*}+\mathbf{M}_{S})]+\log\frac{\mid\mathbf{S}\mid}{\mid\mathbf{S}^{*}\mid}-s\bigg{)}, \tag{45}\] where \(\mathbf{S}\) and \(\mathbf{S}^{*}\) are the asymptotic predictive covariances under \(\mathcal{M}\) and \(\mathcal{M}^{*}\) respectively. The matrix \(\mathbf{M}_{S}\) is given by \[\mathbf{M}_{S}=\text{Cov}(\mathbf{H}^{*}\mathbf{\mu}_{t|t-1}^{*}-\mathbf{H}\mathbf{\mu}_{t|t-1})=[-\mathbf{H}\mathbf{A}\quad\mathbf{H}^{*}\mathbf{A}^{*}]\,\mathbf{M}\,[-\mathbf{H}\mathbf{A}\quad\mathbf{H}^{*}\mathbf{A}^{*}]^{T}, \tag{46}\] where \(\mathbf{M}\) is the joint asymptotic covariance matrix of \(\mathbf{\mu}_{t|t}\) and \(\mathbf{\mu}_{t|t}^{*}\) and is the solution to the previously specified Lyapunov equation (see Appendix VI-A). ## IV Examples ### _Spring Mass Damper System_ Fig. 1 shows a damped spring-mass system. The equations of motion for this system are \[\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&m_{1}&0\\ 0&0&0&m_{2}\end{bmatrix}\frac{d}{dt}\begin{bmatrix}x_{1}\\ x_{2}\\ v_{1}\\ v_{2}\end{bmatrix}=\begin{bmatrix}0&0&1&0\\ 0&0&0&1\\ -(k_{1}+k_{2})&k_{2}&-(b_{1}+b_{2})&b_{2}\\ k_{2}&-(k_{2}+k_{3})&b_{2}&-(b_{2}+b_{3})\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\\ v_{1}\\ v_{2}\end{bmatrix}, \tag{47}\] where \(x_{1},x_{2}\) denote the positions of the masses relative to their rest locations and \(v_{1},v_{2}\) denote their linear velocities. The spring constants are \(k_{1},k_{2},k_{3}\) and the damping coefficients are \(b_{1},b_{2},b_{3}\).
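A sketch (Python with NumPy/SciPy; all parameter values and the sampling time are assumptions for illustration) of how the state-space matrices of Eq. (47) can be assembled and discretized with a matrix exponential.

```
import numpy as np
from scipy.linalg import expm

# Illustrative (assumed) parameters.
m1, m2 = 1.0, 0.5
k1, k2, k3 = 2.0, 1.0, 4.0
b1, b2, b3 = 0.1, 0.1, 0.2
dt = 0.05

# Continuous-time dynamics d/dt [x1, x2, v1, v2] = A_c [x1, x2, v1, v2] (Eq. 47).
M = np.diag([1.0, 1.0, m1, m2])
K = np.array([[0.0,       0.0,        1.0,       0.0],
              [0.0,       0.0,        0.0,       1.0],
              [-(k1+k2),  k2,        -(b1+b2),   b2 ],
              [ k2,      -(k2+k3),    b2,       -(b2+b3)]])
A_c = np.linalg.solve(M, K)

# Discretize the homogeneous dynamics: x_t = A x_{t-1} with A = exp(A_c * dt).
A = expm(A_c * dt)
print(np.max(np.abs(np.linalg.eigvals(A))))   # < 1 indicates asymptotic stability
```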
This continuous time linear system (CTLS) is then discretized for our analysis. By analyzing the system we can see that under the conditions of high stiffness \(k_{3}\), low mass \(m_{2}\), or high damping \(b_{3}\), the two-mass system should behave closely to a single-mass system. Therefore, under these conditions, we would expect the \(\Delta\)EDI criterion to become small when \(\mathcal{M}\) is the one-mass system and \(\mathcal{M}^{*}\) is the two-mass system. We see in panel A of Fig. 2 that \(\Delta\)EDI indeed decreases as we increase the stiffness \(k_{3}\). We now consider choosing an observer design \(d\in[0,\pi/2]\) to observe the position and velocity of the known mass, \(m_{1}\), while balancing \(\Delta\)EDI and EIG. Our, admittedly arbitrary, observer measures the position and velocity of \(m_{1}\) with weights \(\cos(d)\) and \(\sin(d)\) respectively. The asymptotic EIG objective seeks to maximize information about the position and velocity of \(m_{1}\) according to \(\mathcal{M}\). The EDI objective seeks to maximize our ability to asymptotically detect whether \(\mathcal{M}\) is plausible versus \(\mathcal{M}^{*}\). Of course we don't know \(\mathcal{M}^{*}\) during the design phase, so instead we average \(\Delta\)EDI over a prior range of stiffnesses from panel A of Fig. 2. We see how EIG varies over the designs as the orange curve in Fig. 2, panel B, while the mean \(\Delta\)EDI is shown as the navy curve. The trade off between these quantities is shown in panel C. Depending on the importance of discrimination versus performance, we may choose either to observe only the velocity (maximizing EIG) or to sacrifice some EIG to gain better discrimination power by choosing a mixed sensor design. Fig. 1: Spring-Mass-Damper System with unknown 2\({}^{\text{nd}}\) mass. Fig. 2: Observer design and analysis for the spring mass system. Here, the true model \(\mathcal{M}^{*}\) is a two mass system while the inference model \(\mathcal{M}\) is the single mass system. Panel A shows how increasing the stiffness decreases our ability to distinguish between the models. Panels B and C show the trade off between EIG and \(\Delta\)EDI over our design variable. ### _F-16 Model_ We use an F-16 aircraft model based on [15][1]. This system originally has 12 states, of which we pull out the longitudinal dynamics with states: \(\theta\): pitch angle, \(V\): velocity, \(\alpha\): attack angle, and \(\dot{\theta}\); and controls: \(T\): thrust, \(\delta_{elc}\): elevator angle (see Fig. 3). Fig. 3: F-16 model aircraft with specified states. We form a reduced-order CTLS using the closed loop system, which is then discretized. For this model, we seek to add an additional output to the observer. This new output has the arbitrary form \(y_{new}=d_{1}\theta+d_{2}\alpha+d_{3}\dot{\theta}\), where \(d_{1}^{2}+d_{2}^{2}+d_{3}^{2}=1\). When considering these designs, we seek to balance maximizing the asymptotic EIG while minimizing the sensitivity of the asymptotic EGIG to model discrepancy. The inference model, \(\mathcal{M}\), is the F-16 model with dynamics \(\mathbf{A}\), but the true model, \(\mathcal{M}^{*}\), has dynamics \(\mathbf{A}^{*}=\mathbf{A}+\Delta\odot\mathbf{A}\). So, \(\mathbf{A}^{*}\) has perturbations scaled relative to \(\mathbf{A}\). Because \(\Delta\) is unknown, we instead minimize the sensitivity of EGIG to changes of \(\Delta\). Therefore, our metric is the norm \(||\nabla_{\Delta}\text{EGIG}(d_{1},d_{2})||\).
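This sensitivity metric can be approximated with simple finite differences, as mentioned earlier. The sketch below (Python/NumPy) is illustrative only: it assumes a hypothetical `egig(A_true, design)` helper that returns the EGIG for given true dynamics (for example, implementing Eq. (33)), and the perturbation pattern and step size are assumptions.

```
import numpy as np

def egig_sensitivity_norm(egig, A, design, eps=1e-4):
    """Finite-difference estimate of || grad_Delta EGIG || at Delta = 0.

    `egig(A_true, design)` is an assumed helper returning the EGIG when the
    true dynamics are A_true = A + Delta * A (elementwise) and the inference
    model uses A.
    """
    n = A.shape[0]
    grad = np.zeros((n, n))
    base = egig(A, design)                  # Delta = 0: no discrepancy
    for i in range(n):
        for j in range(n):
            Delta = np.zeros((n, n))
            Delta[i, j] = eps
            A_true = A + Delta * A          # perturbation scaled relative to A
            grad[i, j] = (egig(A_true, design) - base) / eps
    return np.linalg.norm(grad)
```

A design sweep can then trade off the asymptotic EIG against this sensitivity norm over the admissible unit-norm weights \((d_{1},d_{2},d_{3})\).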
The result is summarized in Fig. 4. Panel A shows the trade off of different designs between \(\text{EIG}(d_{1},d_{2})\) and \(||\nabla_{\Delta}\text{EGIG}(d_{1},d_{2})||\), and the Pareto front of optimal designs (purple). We see that the EGIG is much more sensitive to the design than the EIG, i.e. the EGIG varies by about a factor of 4. Therefore, for a robust design we may sacrifice a little asymptotic EIG for a meaningful improvement in robustness. Panels B and C show the EIG and the EGIG sensitivity projected on the design space along with the corresponding Pareto set. We have made the code for these examples available on GitHub [3]. ## V Conclusion Maximizing the value of data for inference and prediction requires the careful selection of experimental conditions by modeling the experiment. These models are prone to misspecification. We propose an information theoretic framework that extends the notion of Expected Information Gain (EIG), typically used in Bayesian experiment design, to address the model mismatch issue. The proposed Expected Generalized Information Gain (EGIG) captures the information gained or lost with respect to a true model when the experiment is designed based on a model with discrepancy. On the other hand, the proposed Expected Discriminatory Information (EDI) discriminates between models based upon the data generated, which further aids in model refinement. These three metrics are complementary: EIG emphasizes data-efficient experiments, EGIG emphasizes experiments that lead to results that are robust to model discrepancy, and EDI emphasizes experiments that would detect modeling failures. ## Acknowledgment This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing conducted at Sandia National Laboratories. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award DOE-ERCAPm3876.
2308.06982
Discrete Conditional Diffusion for Reranking in Recommendation
Reranking plays a crucial role in modern multi-stage recommender systems by rearranging the initial ranking list to model interplay between items. Considering the inherent challenges of reranking such as combinatorial searching space, some previous studies have adopted the evaluator-generator paradigm, with a generator producing feasible sequences and an evaluator selecting the best one based on estimated listwise utility. Inspired by the remarkable success of diffusion generative models, this paper explores the potential of diffusion models for generating high-quality sequences in reranking. However, we argue that it is nontrivial to take diffusion models as the generator in the context of recommendation. Firstly, diffusion models primarily operate in continuous data space, differing from the discrete data space of item permutations. Secondly, the recommendation task is different from conventional generation tasks as the purpose of recommender systems is to fulfill user interests. Lastly, real-life recommender systems require efficiency, posing challenges for the inference of diffusion models. To overcome these challenges, we propose a novel Discrete Conditional Diffusion Reranking (DCDR) framework for recommendation. DCDR extends traditional diffusion models by introducing a discrete forward process with tractable posteriors, which adds noise to item sequences through step-wise discrete operations (e.g., swapping). Additionally, DCDR incorporates a conditional reverse process that generates item sequences conditioned on expected user responses. Extensive offline experiments conducted on public datasets demonstrate that DCDR outperforms state-of-the-art reranking methods. Furthermore, DCDR has been deployed in a real-world video app with over 300 million daily active users, significantly enhancing online recommendation quality.
Xiao Lin, Xiaokai Chen, Chenyang Wang, Hantao Shu, Linfeng Song, Biao Li, Peng jiang
2023-08-14T07:35:14Z
http://arxiv.org/abs/2308.06982v1
# Discrete Conditional Diffusion for Reranking in Recommendation ###### Abstract. Reranking plays a crucial role in modern multi-stage recommender systems by rearranging the initial ranking list to model interplay between items. Considering the inherent challenges of reranking such as combinatorial searching space, some previous studies have adopted the _evaluator-generator paradigm_, with a generator producing feasible sequences and an evaluator selecting the best one based on estimated listwise utility. Inspired by the remarkable success of diffusion generative models, this paper explores the potential of diffusion models for generating high-quality sequences in reranking. However, we argue that it is nontrivial to take diffusion models as the generator in the context of recommendation. Firstly, diffusion models primarily operate in continuous data space, differing from the discrete data space of item permutations. Secondly, the recommendation task is different from conventional generation tasks as the purpose of recommender systems is to fulfill user interests. Lastly, real-life recommender systems require efficiency, posing challenges for the inference of diffusion models. To overcome these challenges, we propose a novel Discrete Conditional Diffusion Reranking (DCDR) framework for recommendation. DCDR extends traditional diffusion models by introducing a discrete forward process with tractable posteriors, which adds noise to item sequences through step-wise discrete operations (e.g., swapping). Additionally, DCDR incorporates a conditional reverse process that generates item sequences conditioned on expected user responses. For efficient and robust inference, we propose several optimizations to enable the deployment of DCDR in real-life recommender systems. Extensive offline experiments conducted on public datasets demonstrate that DCDR outperforms state-of-the-art reranking methods. Furthermore, DCDR has been deployed in a real-world video app with over 300 million daily active users, significantly enhancing online recommendation quality. Diffusion models typically consist of a forward process that corrupts the input with noises in a step-wise manner, and a reverse process that iteratively generates the original input from the corrupted one with a denoising model. In light of the success of diffusion models, this paper aims to explore the potential of diffusion models for generating high-quality sequences in reranking. However, we find it is nontrivial to take diffusion models as the generator due to the following challenges: * Firstly, most diffusion models (Han et al., 2017; Wang et al., 2018; Wang et al., 2018) are designed for continuous data domains, but the item permutations in recommender systems are operated in the discrete data space. The inherent discrete nature of item sequences in recommender systems brings challenges to the application of diffusion models. * Secondly, the recommendation task is different from conventional generation tasks as the purpose of recommender systems is to fulfill user interests. The generated sequence is expected to achieve positive user feedback, and hence the diffusion model should be controllable in terms of user feedback. * Thirdly, real-life recommender systems serve a huge number of users and the inference procedure of the reranking model is expected to be efficient. Since the generation process of diffusion models works in a step-wise manner, it poses challenges to the inference efficiency of diffusion models.
To tackle the aforementioned challenges, we propose a novel Discrete Conditional Diffusion Reranking (DCDR) framework, which extends traditional diffusion models with a discrete forward process and a conditional reverse process for sequence generation. Specifically, in each step of the forward process, DCDR uses a discrete operation to add noises to the input sequence. We propose two discrete operations, namely the permutation-level operation and the token-level operation. In the reverse process, DCDR introduces user feedback into the denoising model as conditions for generation. In each step, the denoising model takes conditions and the noisy sequence as input and estimates the distribution of the denoised sequence at the next step. This enables the reverse process to generate sequences with expected feedback during inference. To train the denoising model, we derive the formal objective function by introducing carefully designed sequence encodings and transition matrices for both discrete operations. Moreover, for efficient and robust inference, we propose a series of techniques to enable the deployment of DCDR in real-life recommender systems. We conduct extensive offline and online A/B experiments; the comparison between DCDR and other state-of-the-art reranking methods demonstrates the superiority of DCDR. DCDR actually serves as a general framework to leverage diffusion in reranking. The discrete operation and model architecture in DCDR are not exhaustive, which can vary according to specific application scenarios. We believe that DCDR will provide valuable insights for future investigations on diffusion-based multi-stage recommender systems. The contributions of this paper can be summarized as follows: * To the best of our knowledge, this is the first attempt to introduce diffusion models into the reranking stage in real-life multi-stage recommender systems. * A novel Discrete Conditional Diffusion Reranking (DCDR) framework is presented. We carefully design two discrete operations as the forward process and introduce user feedback as conditions to guide the reverse generation process. * We provide practical approaches for deploying DCDR in a popular video app, Kuaishou, which serves over 300 million users daily, and online A/B experiments demonstrate the effectiveness and efficiency of DCDR. ## 2. Related Work ### Reranking in Recommendation Reranking in recommendation focuses on rearranging the input item sequence considering correlations between items to achieve optimal user feedback. Therefore, reranking models (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) take the whole list of items (listwise context) as input and generate a reordered list as output. This is different from ranking models (Beng et al., 2017; Wang et al., 2018) in preliminary stages (e.g., matching, ranking) that consider a single candidate item at a time. Existing studies on reranking can be roughly categorized into two directions: the first line of research (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018) focuses on modeling item relations and directly learns a single ranking function that ranks items greedily by the ranking score; the other line of work (Wang et al., 2018; Wang et al., 2018) divides the reranking task into two components, sequence generation and sequence evaluation, with a generator producing feasible sequences and an evaluator selecting the best one based on estimated listwise utility.
Our work adopts the evaluator-generator paradigm and endeavors to explore the potential of diffusion models as the generator for producing high-quality item sequences, thereby enhancing the performance of reranking models. ### Diffusion Models Diffusion models (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) have achieved significant success in generation tasks of continuous data domains, such as image synthesis and audio generation. Some studies (Wang et al., 2018; Wang et al., 2018) attempt to apply diffusion models on tasks of discrete data domains like text generation. One line of research (Wang et al., 2018; Wang et al., 2018) designs corruption processes and reconstruction processes on discrete data. Another line of research (Wang et al., 2018; Wang et al., 2018) attempts to apply continuous diffusion models on discrete data domains by adding noises to the embedding spaces of the data or real-number vector spaces. DiffusionLM (Wang et al., 2018) is one of the state-of-the-art methods, which generates text sequences with continuous diffusion models. While diffusion models have achieved success, their potential for generating high-quality item sequences in recommendation remains under-explored. Recently, some studies have attempted to apply diffusion models on sequential recommendation (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), where the focus is to generate the next item based on the user's historical interactions. However, it is important to note that the reranking task addressed in this paper is distinct from typical sequential recommendation. Specifically, reranking aims to generate feasible item permutations rather than focusing on the next item embedding (Wang et al., 2018) or user vector (Wang et al., 2018), which poses significant challenges for the application of diffusion models in the reranking stage. ## 3. Preliminary ### Reranking Task Denote the ordered sequence from the previous stage as \(\mathcal{R}=[i_{1},i_{2},\cdots,i_{l_{s}}]\), where \(l_{s}\) is the input sequence length. The reranking problem aims to find a reordered item sequence \(\mathcal{R}^{*}=[i_{1}^{*},i_{2}^{*},\cdots,i_{l_{0}}^{*}]\) with optimal user feedback (e.g., likes, clicks), where \(l_{0}\) is the length of the final recommendation list. In practice, \(l_{0}\) is usually less than 10 and \(l_{s}\) can either be equal to or larger than \(l_{0}\). ### Diffusion Model Before we go deep into the details of DCDR, we first provide a brief introduction to diffusion models. The typical diffusion models consist of two processes: a forward process and a reverse process, which are illustrated in Fig. 1. During the forward phase, a sample is corrupted by adding noise in a step-wise manner, where the noise-adding process forms a Markov chain. Conversely, the reverse phase recovers the corrupted data at each step with a denoising model. _Forward Process:_ Given a sample \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\), the forward Markov process \(q(\mathbf{x}_{1:T}|\mathbf{x}_{0})=\prod_{t}q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) corrupts the sample into a sequence of increasingly noisy samples: \(\mathbf{x}_{1},\dots,\mathbf{x}_{T}\), where \(t\) refers to the diffusion step. Generally the noise follows a Gaussian distribution: \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\,\mathbf{x}_{t-1},\beta_{t}\mathbf{I})\), where \(\beta_{t}\) is the scale of the added noise at step \(t\).
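As a minimal illustration of the forward process just described (Python/NumPy; the sample and the noise schedule are assumptions, not taken from the paper), one can corrupt a continuous sample step by step using \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\,\mathbf{x}_{t-1},\beta_{t}\mathbf{I})\).

```
import numpy as np

rng = np.random.default_rng(0)

def forward_noising(x0, betas):
    """Return the trajectory x_1, ..., x_T of the Gaussian forward process."""
    xs, x = [], np.asarray(x0, dtype=float)
    for beta in betas:
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
        xs.append(x.copy())
    return xs

x0 = np.array([1.0, -2.0, 0.5])          # an arbitrary "clean" sample
betas = np.linspace(1e-3, 0.2, 50)       # an assumed noise schedule
trajectory = forward_noising(x0, betas)
print(trajectory[-1])                     # after many steps, close to pure noise
```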
_Reverse Process:_ The reverse Markov process attempts to recover the last-step sample with a parameterized denoising model \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\). When the noise follows Gaussian distribution, the parameterized distribution becomes \[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1}|\mu_{ \theta}(\mathbf{x}_{t},t),\sigma_{\theta}(\mathbf{x}_{t},t)), \tag{1}\] where \(\mu_{\theta}(\mathbf{x}_{t},t)\) and \(\sigma_{\theta}(\mathbf{x}_{t},t)\) are modeled with neural networks. _Training:_ The canonical objective function (Bishop, 2004) is the variational lower bound: \[\begin{split}\mathcal{L}=&\underbrace{D_{KL}(q( \mathbf{x}_{T}|\mathbf{x}_{0})\|p(\mathbf{x}_{T}))}_{\mathcal{L}_{T}}- \underbrace{\mathbb{E}_{q(\mathbf{x}_{1}|\mathbf{x}_{0})}\left[logp_{\theta}( \mathbf{x}_{0}|\mathbf{x}_{1})\right)}_{\mathcal{L}_{0}}\\ &+\underbrace{\sum_{t=2}^{T}E_{q(\mathbf{x}_{t}|\mathbf{x}_{0})} \left[D_{KL}(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\|p_{\theta}( \mathbf{x}_{t-1}|\mathbf{x}_{t})\right]}_{\mathcal{L}_{t}}.\end{split} \tag{2}\] Since \(\mathcal{L}_{T}\) is a constant, it is usually removed from the loss function. \(\mathcal{L}_{0}\) represents the reconstruction error and \(\mathcal{L}_{t}\) represents the denoising error between denoised data at each step and the corresponding corrupted data in the forward phase. _Inference:_ During inference, diffusion models start from a noisy sample \(\mathbf{x}_{T}\) and draw denoising samples with \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) step by step. After \(T\) steps, the generation process runs from \(\mathbf{x}_{T}\) to \(\mathbf{x}_{0}\) i.e. the final generated sample. ## 4. Discrete Conditional Diffusion Reranking Framework In this section, we provide a detailed introduction to DCDR. First, we introduce the overall framework in Section 4.1. Then, we elaborate on each component from Section 4.2 to Section 4.5. The characteristics of DCDR are discussed in Section 4.6. ### Overview of DCDR The framework of DCDR is illustrated in Fig. 2, which mainly consists of: 1) discrete forward process, 2) conditional reverse process. Specifically, the discrete forward process defines a discrete operation with tractable posteriors to add noises to the input sequence. Here we introduce permutation-level / token-level operation as an example, while other discrete operations are also feasible to be incorporated. The conditional backward process contains a denoising model that tries to recover the input from a noisy sequence at each step. Different from traditional diffusion models, we introduce the expected feedback of the original sequence as the condition to generate the last-step sequence, which is more consistent with the recommendation task. During training, DCDR first adds noises to the user impression list with the discrete forward process in a step-wise manner. Then, the denoising model in the reverse process is trained to recover the last-step sequence conditioned on user responses to the original impression list. During inference, an initial ranked item list is fed as input, and we set an expected feedback of each item as the condition. Then, the conditional reverse process is able to generate sequences step by step. To further improve the generation quality, we maintain \(K\) sequences with top probabilities at each step and introduce an extra sequence evaluator to select the optimal sequence for the final recommendation. 
### Discrete Forward Process The forward process in vanilla diffusion models adds Gaussian noises to continuous signals like images according to a specified schedule. However, it is sub-optimal for sequence generation tasks, as explained in the aforementioned sections. Therefore, we propose to use a discrete forward process for sequence generation in the reranking task. Remember that to train a vanilla diffusion model, the variational lower bound in Eq.(2) involves posteriors \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\) in \(\mathcal{L}_{t}\), which can be rewritten by applying Bayes' theorem: \[\begin{split} q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})&=\frac{q(\mathbf{x}_{t}|\mathbf{x}_{t-1},\mathbf{x}_{0})q(\mathbf{x}_{t-1}|\mathbf{x}_{0})}{q(\mathbf{x}_{t}|\mathbf{x}_{0})}\\ &=\frac{q(\mathbf{x}_{t}|\mathbf{x}_{t-1})q(\mathbf{x}_{t-1}|\mathbf{x}_{0})}{q(\mathbf{x}_{t}|\mathbf{x}_{0})}\end{split} \tag{3}\] In the continuous setting with Gaussian noises, \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) and \(q(\mathbf{x}_{t}|\mathbf{x}_{0})\) are easy to compute and the posterior follows a Gaussian distribution (Bishop, 2004). To enable a discrete forward process, we need to design discrete operations that also yield tractable forward-process posteriors \(q(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathcal{R}_{0})\). Figure 1. A graphical model illustrating the forward (noising) and reverse (denoising) processes in diffusion models. Here we introduce two examples of the discrete operation, i.e., the permutation-level operation and the token-level operation (shown in Fig. 3). Note that the DCDR framework is not limited to these two operations. Other discrete operations are also feasible to be incorporated in the future. #### 4.2.1. **Permutation-level Operation** We first propose to encode the item sequence in the permutation space. In this setting, each sequence \(\mathcal{R}_{t}\) is mapped to an integer \(o(\mathcal{R}_{t})\in[0,l_{0}!)\) and represented as \(\mathbf{s}_{t}\in\mathbb{R}^{1\times l_{0}!}\), which is a one-hot vector with length \(l_{0}!\). Here \(l_{0}\) is the length of the final recommendation list. Then we design a simple yet effective discrete operation that forms a Markov chain in the permutation space. For each step in the forward process, we either keep the sequence the same as in the last step or randomly swap a pair of items in the sequence. This corresponds to a transition matrix \(\mathbf{Q}_{t}\in\mathbb{R}^{l_{0}!\times l_{0}!}\) at step \(t\) as follows: \[[\mathbf{Q}_{t}]_{ij}=\begin{cases}1-\beta_{t},&\text{if }d(o^{-1}(i),o^{-1}(j))=0\\ \beta_{t}/C_{l_{0}}^{2},&\text{if }d(o^{-1}(i),o^{-1}(j))=2\\ 0,&\text{otherwise},\end{cases} \tag{4}\] where \(o^{-1}(\cdot)\) derives the counterpart permutation, \(C_{l_{0}}^{2}=l_{0}(l_{0}-1)/2\) is the number of possible swap candidates, and \(\beta_{t}\in[0,1]\) controls the noise strength at each step1. \(d(o^{-1}(i),o^{-1}(j))\) denotes the difference between two sequences. For example, given \(\mathcal{R}_{t-1}=ABC\) and \(\mathcal{R}_{t}=ACB\), we have \(d(\mathcal{R}_{t},\mathcal{R}_{t-1})=2\). Footnote 1: In this work, we set \(\forall t:\beta_{t}=\beta\) as a single hyper-parameter and leave advanced noise schedule mechanisms as future work. We use an example in Fig. 3(a) to illustrate the permutation-level operation. Notice that since \(d(\mathcal{R}_{t},\mathcal{R}_{t-1})=d(\mathcal{R}_{t-1},\mathcal{R}_{t})\), the transition matrix is a symmetric matrix.
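A minimal sketch (Python/NumPy; \(l_{0}=3\) and \(\beta\) are illustrative choices) that materializes the transition matrix of Eq. (4) over all \(l_{0}!\) permutations and checks that it is row-stochastic and symmetric.

```
import numpy as np
from itertools import permutations

l0 = 3
beta = 0.2
perms = list(permutations(range(l0)))           # o^{-1}: index -> permutation
n_perm = len(perms)                              # l0! = 6
n_swaps = l0 * (l0 - 1) // 2                     # C_{l0}^2

def diff(p, q):
    """Number of positions at which two permutations differ."""
    return sum(a != b for a, b in zip(p, q))

Q = np.zeros((n_perm, n_perm))
for i, pi in enumerate(perms):
    for j, pj in enumerate(perms):
        d = diff(pi, pj)
        if d == 0:
            Q[i, j] = 1.0 - beta                 # keep the sequence unchanged
        elif d == 2:
            Q[i, j] = beta / n_swaps             # a single pairwise swap
        # d > 2 would require more than one swap: probability 0

assert np.allclose(Q.sum(axis=1), 1.0)           # row-stochastic
assert np.allclose(Q, Q.T)                       # symmetric, as noted above
```

For realistic list lengths this matrix is large but, as discussed next, it is fixed once \(l_{0}\) and \(\beta\) are chosen and can therefore be precomputed.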
Moreover, we show that this transition matrix induces a Markov chain with a uniform stationary distribution, which means the corrupted sequence would eventually become a fully random sequence. The detailed proof can be found in the Appendix. With the sequence encoding \(\mathbf{s}_{t}\) and the transition matrix \(\mathbf{Q}_{t}\), the discrete forward process at each step can be formulated as: \[q(\mathcal{R}_{t}|\mathcal{R}_{t-1})=cat(\mathbf{s}_{t};\mathbf{s}_{t-1}\mathbf{Q}_{t}),\] where \(cat(\cdot;\mathfrak{p})\) means a categorical distribution with probability \(\mathfrak{p}\). After \(t\) steps of noise adding, \(\mathcal{R}_{t}\) given \(\mathcal{R}_{0}\) can be represented as: \[q(\mathcal{R}_{t}|\mathcal{R}_{0})=cat(\mathbf{s}_{t};\mathbf{s}_{0}\overline{\mathbf{Q}}_{t}),\] where \(\overline{\mathbf{Q}}_{t}=\mathbf{Q}_{1}\mathbf{Q}_{2}\dots\mathbf{Q}_{t}\). As a result, we can compute the posterior \(q(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathcal{R}_{0})\) as follows: \[\begin{split} q(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathcal{R}_{0})&=\frac{q(\mathcal{R}_{t}|\mathcal{R}_{t-1})q(\mathcal{R}_{t-1}|\mathcal{R}_{0})}{q(\mathcal{R}_{t}|\mathcal{R}_{0})}\\ &=cat(\mathbf{s}_{t-1};\frac{\mathbf{s}_{t}\mathbf{Q}_{t}^{T}\odot\mathbf{s}_{0}\overline{\mathbf{Q}}_{t-1}}{\mathbf{s}_{0}\overline{\mathbf{Q}}_{t}\mathbf{s}_{t}^{T}}).\end{split} \tag{5}\] This enables us to calculate the KL-divergence in \(\mathcal{L}_{t}\) during training. Note that the computation of the posterior at time \(t\) requires a computation of \(\overline{\mathbf{Q}}_{t}\); this can be time-consuming if we are to compute a matrix multiplication of two matrices with a size of \(l_{0}!\times l_{0}!\). However, the matrix is fixed once the sequence length \(l_{0}\) and \(\beta_{t}\) are determined. Since both \(l_{0}\) and \(\beta_{t}\) are determined before training, \(\overline{\mathbf{Q}}_{t}\) can be computed and stored in advance. Meanwhile, as \(\mathbf{s}_{0}\) is a one-hot vector, the calculation of \(\mathbf{s}_{0}\overline{\mathbf{Q}}_{t-1}\) is to select one row of \(\overline{\mathbf{Q}}_{t-1}\), and \(\mathbf{s}_{0}\overline{\mathbf{Q}}_{t}\mathbf{s}_{t}^{T}\) is to select an entry of the matrix. Thus the computation of the posterior is essentially very efficient during training and inference. Figure 3. Illustrations of permutation-level (left) and token-level (right) operations in the discrete forward process. Figure 2. An illustration of the DCDR framework, which mainly consists of: 1) discrete forward process, and 2) conditional reverse process. In the forward process, two discrete operations with tractable posteriors are introduced; while in the reverse process, user feedback is introduced as the condition to control generation. #### 4.2.2. **Token-level Operation** Besides the permutation-level operation, we also propose a token-level operation beyond the permutation space. Here, we use a different way to encode an item sequence \(\mathcal{R}_{t}\) as multiple token-level vectors \(\left[\mathbf{z}_{t}^{(1)};\mathbf{z}_{t}^{(2)};\ \dots;\mathbf{z}_{t}^{(l_{0})}\right]\), where each \(\mathbf{z}_{t}^{(k)}\in\mathbb{R}^{1\times l_{s}}\) is a one-hot vector representing the item at the \(k\)-th position. For each step in the forward process, we either keep the item unchanged or substitute the item with another item from the input sequence. This corresponds to a transition matrix \(\mathbf{O}_{t}\in\mathbb{R}^{l_{s}\times l_{s}}\) as
follows: \[[\mathbf{O}_{t}]_{ij}=\begin{cases}1-\beta,&\text{if }i=j\\ \frac{\beta}{l_{s}-1},&\text{otherwise}\end{cases} \tag{6}\] where \(\beta\) controls the noise strength at each step. This forward process also induces a uniform stationary distribution, and the detailed proof can be found in the Appendix. Then, we can formulate the discrete forward process as: \[q(\mathcal{R}_{t}|\mathcal{R}_{t-1})=\left\{cat(\mathbf{z}_{t}^{(k)};\mathbf{z }_{t-1}^{(k)}\mathbf{O}_{t})\right\}_{k=1}^{l_{0}}. \tag{7}\] Similarly, we can define \(\overline{\mathbf{O}}_{t}=\mathbf{O}_{1}\mathbf{O}_{2}\ldots\mathbf{O}_{t}\), and the probability of \(\mathcal{R}_{t}\) given \(\mathcal{R}_{0}\) is represented as: \[q(\mathcal{R}_{t}|\mathcal{R}_{0})=\left\{cat(\mathbf{z}_{t}^{(k)};\mathbf{z }_{0}^{(k)}\overline{\mathbf{O}}_{t})\right\}_{k=1}^{l_{0}}. \tag{8}\] This leads to the posterior as follows: \[q(\mathcal{R}_{t-1}|\mathcal{R}_{0},\mathcal{R}_{0})=\left\{cat(\mathbf{z}_{ t-1}^{(k)};\frac{\mathbf{z}_{t}^{(k)}\mathbf{O}_{t}^{T}\odot\mathbf{z}_{0}^{(k)} \overline{\mathbf{O}}_{t-1}}{\mathbf{z}_{0}^{(k)}\overline{\mathbf{O}}_{t} \mathbf{z}_{t}^{(k)}T})\right\}_{k=1}^{l_{0}}. \tag{9}\] Note that the computation of posterior also requires a calculation of \(\overline{\mathbf{O}}_{t}\). Similarly this can be calculated and stored in advance. ### Conditional Reverse Process The reverse process in diffusion models attempts to recover the original sample from the noisy sample. For most diffusion models, this is parameterized as \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) (i.e., denoising model). However, the recommendation task is different from conventional generation tasks as the purpose of recommender systems is to fulfill user interests. Besides, users only response to the displayed item sequence. The utilities of other sequences are unknown. It may be problematic if we only train the denoising model to recover towards the impression list. As a result, we expect the reverse process to be conditioned on the user feedback (e.g., like, effective view). In this way, the denoising model attempts to model \(p_{\theta}(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathbf{c})\), where \(\mathbf{c}\) is a response sequence representing the expected feedback of sequence \(\mathcal{R}_{0}\). During training, we can use the real feedback as condition; while at the inference stage, we can set \(\mathbf{c}\) according to application scenarios (e.g., all positive feedback). #### 4.3.1. **Denoising Model Architecture** Note that DCDR framework does not restrict the concrete architecture of the denoising model \(p_{\theta}(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathbf{c})\). Here we give an instantiation based on attention mechanism (Zhu et al., 2017), shown in Fig. 4(a). Each item is first mapped into an embedding vector. Then the sequence \(\mathcal{R}_{t}\) goes through a contextual encoding layer, which consists of a self attention layer among items in the sequence and a history attention layer that uses items in the sequence as queries to aggregate the user history. The output of these two layers are concatenated position-wisely as the contextual-encoded sequence representations. To introduce user feedback as condition, we map the feedback at each position as condition embeddings. Then we use these condition embeddings as queries to aggregate the contextual-encoded sequence representations of \(\mathcal{R}_{t}\). The outputs serve as the expected sequence representations of \(\mathcal{R}_{t-1}\). 
Then the probability of drawing \(\mathcal{R}_{t-1}\) is computed position-wise as the cosine similarity between the expected representations of \(\mathcal{R}_{t-1}\) and the contextual-encoded sequence representations of the candidate \(\mathcal{R}_{t-1}\). The set of possible next sequences \(\mathcal{R}_{t-1}\) depends on the discrete operation chosen in the forward process. #### 4.3.2. **Denoising Model Training** The training objective function of the denoising model \(p_{\theta}(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathbf{c})\) in the reverse process is the typical training loss of diffusion models: \[\mathcal{L}_{t}=E_{q(\mathcal{R}_{t}|\mathcal{R}_{0})}\left[D_{KL}\left(q(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathcal{R}_{0})\,\|\,p_{\theta}(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathbf{c})\right)\right]. \tag{10}\] During the training stage, we randomly sample a time step \(t\in\{1,\ldots,T\}\) and get the noisy sequence \(\mathcal{R}_{t}\) via the discrete forward process (i.e., sample based on \(q(\mathcal{R}_{t}|\mathcal{R}_{0})\)). Then the denoising model is optimized with \(\mathcal{L}_{t}\) accordingly. The training algorithm is presented in Alg. 1. Figure 4. Illustrative instantiations of the denoising model in the conditional reverse process and the sequence evaluator model. Note that the model architectures provided are not exhaustive and can vary across application scenarios. ### Inference of DCDR The inference procedure in vanilla diffusion models starts with pure Gaussian noise \(\mathbf{x}_{T}\) and samples \(\mathbf{x}_{t-1}\) step by step. To accommodate the recommendation reranking scenario, we make some changes to enable the deployment of DCDR in real-life recommender systems. * **Condition setting**: To fulfill user interests as much as possible, we set the expected condition \(\mathbf{c}_{e}\) as a sequence with all positive feedback during inference. * **Starting sequence**: In traditional diffusion models, the inference starts with pure Gaussian noise. However, pure Gaussian noise may eliminate important information such as user preferences, thus hurting recommendation quality. More importantly, starting from pure noise demands far more steps for high-quality generation. Therefore, we use the ordered sequence from the previous stage as the starting sequence for generation. * **Beam search**: To improve the robustness of the sequence generation process, we adopt beam search to generate a specific number of sequences and further adopt a sequence evaluator model (detailed in Section 4.5) to select the final recommended sequence. Starting from the first noisy sequence, we use the diffusion model to generate \(K\) sequences for each candidate and keep the \(K\) sequences with the highest probabilities at each step2. Footnote 2: Notice that when the token-level operation is adopted, the reverse process may generate sequences containing duplicated items. We manually filter out such sequences and only retain sequences without duplicated items at each step. * **Early stop**: Despite the inference optimizations such as the starting sequence and beam search, the multi-step reverse process may still be costly in time. Therefore, we further introduce an early-stop mechanism into the inference procedure. At each step, we compute the likelihood that the generated sequences match the expected conditions. As the diffusion steps increase, this likelihood is expected to increase accordingly. The diffusion steps terminate when the likelihood stops increasing or the increase becomes marginal.
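Before turning to the detailed inference algorithm, here is a rough sketch of one training iteration for the permutation-level variant (Section 4.3.2, Eq. (10) and Alg. 1). It is schematic only: `model_probs` stands in for the conditional denoising model, the gradient step is omitted, and all names are ours rather than the paper's.

```python
import numpy as np

def kl_categorical(q, p, eps=1e-12):
    """KL(q || p) for two categorical distributions given as probability vectors."""
    q, p = np.clip(q, eps, 1.0), np.clip(p, eps, 1.0)
    return float(np.sum(q * np.log(q / p)))

def training_step(s_0, c, Q, Q_bar, model_probs, T, rng):
    """One iteration of Alg. 1: sample t, corrupt R_0, regress the posterior.

    s_0   : one-hot vector of the observed sequence R_0
    c     : the real user feedback used as the condition during training
    Q     : list of one-step matrices Q_1..Q_T (index 0 unused)
    Q_bar : cumulative products, with Q_bar[0] = identity
    """
    t = int(rng.integers(1, T + 1))                 # sample a diffusion step
    q_t = s_0 @ Q_bar[t]                            # q(R_t | R_0)
    s_t = np.zeros_like(q_t)
    s_t[rng.choice(len(q_t), p=q_t)] = 1.0          # draw the noisy sequence R_t
    target = (s_t @ Q[t].T) * (s_0 @ Q_bar[t - 1])  # posterior of Eq. (5), unnormalized
    target = target / target.sum()
    pred = model_probs(s_t, t, c)                   # p_theta(R_{t-1} | R_t, c)
    return kl_categorical(target, pred)             # the L_t of Eq. (10)
```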
The detailed inference algorithm of DCDR is described in Alg.2. ``` 0:\(\mathcal{R}_{T}\): initial item sequence; \(\mathbf{c}_{e}\): expected user feedback; \(p_{\theta}(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathbf{c})\): conditional denoising model; \(T\): maximal step; \(K\): number of candidate sequences; \(S_{c}\): sequence candidates set; \(f_{\phi}(\mathcal{R})\): sequence evaluator model 1:\(S_{c}=\{\mathcal{R}_{T}\}\); 2:for\(t=\{T,T-1,\dots,1\}\)do 3:for\(\mathcal{R}_{t}\in S_{c}\)do 4: Compute \(p_{\theta}(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathbf{c}_{e})\) given \(\mathcal{R}_{t}\) and \(\mathbf{c}_{e}\); 5: Sample \(K\) item sequences according to \(p_{\theta}(\mathcal{R}_{t-1}|\mathcal{R}_{t},\mathbf{c}_{e})\) with highest probabilities; 6:endfor 7: Merge the sampled item sequences into \(S_{c}\) and keep \(K\) item sequences with the overall highest probabilities; 8:if meet early stop criterion then 9: break; 10:endif 11:endfor 12: Select the final sequence with \(f_{\phi}(\mathcal{R})\) for recommendation; ``` **Algorithm 2** Inference Algorithm for DCDR ### Sequence Evaluator The sequence evaluator model \(f_{\phi}(\mathcal{R})\) mainly focuses on estimating the overall utility of a given sequence. Note that many existing methods for the sequence evaluator (Golovolov et al., 2015; Golovolov et al., 2015) can be incorporated in our DCDR framework. To center the contribution of this paper to the overall framework, we adopt an intuitive design of the sequence evaluator. The architecture of the evaluator model is depicted in Fig. 4(c). Specifically, the input sequence is encoded by the contextual encoding layer as that in the denoising model. Then, the representation at each position is fed into a MLP to predict the score of a given feedback label. The overall utility of the sequence is measured by the rank-weighted sum of scores at each position. Notice that the feedback label may vary across different recommendation tasks, such as clicks and purchases in e-commerce services, likes and forwards in online social media. Without loss of generality, we utilize the same feedback label as that in the conditional reverse process, and the objective function is a binary cross-entropy loss. ### Discussion The proposed DCDR provides a general and flexible framework to utilize diffusion models in recommendation. Various discrete operations applied in the forward process yield different diffusion models. For instance, the proposed permutation-level operation is well-suited for scenarios where the set of items to be displayed has been fixed (i.e., \(l_{s}=l_{o}\)). The diffusion model learns how to generate the optimal permutation by iteratively swapping items conditioned on the expected feedback. Conversely, the token-level operation is suitable when the final displayed item list is a subset of the initial sequence (i.e., \(l_{s}>l_{o}\)). The diffusion model learns how to generate the best sequence by step-wise substituting each item with a candidate item. Researchers can also develop other discrete operations tailored to specific application scenarios. Moreover, the architectures of the denoising model and the sequence evaluator model are also flexible to incorporate specific contextual features. Consequently, we believe that DCDR will provide valuable insights for future investigations on diffusion-based multi-stage recommender systems. ## 5. Experiments We conduct extensive experiments in both offline and online environments to demonstrate the effectiveness of DCDR. 
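To illustrate how Algorithm 2 and the sequence evaluator of Section 4.5 fit together, the following is a compact, hedged sketch. `denoise_topk`, `evaluate`, and `condition_likelihood` are placeholders for the conditional denoising model, the evaluator \(f_{\phi}\), and the condition-matching likelihood used for early stopping; the DCG-style rank weights are our own assumption, since the paper only specifies a rank-weighted sum.

```python
import math

def rank_weighted_utility(position_scores):
    """Sequence evaluator f_phi: rank-weighted sum of per-position feedback scores."""
    return sum(s / math.log2(k + 2) for k, s in enumerate(position_scores))

def dcdr_rerank(R_T, c_e, denoise_topk, evaluate, condition_likelihood, T, K, tol=1e-3):
    """Skeleton of Algorithm 2: conditional reverse process with beam search."""
    beam = [(R_T, 0.0)]                              # (sequence, cumulative log-prob)
    prev_best = -float("inf")
    for t in range(T, 0, -1):
        candidates = []
        for seq, logp in beam:
            # K most probable next sequences and log-probs under p_theta(. | seq, c_e)
            candidates += [(nxt, logp + lp) for nxt, lp in denoise_topk(seq, t, c_e, K)]
        beam = sorted(candidates, key=lambda x: x[1], reverse=True)[:K]
        cur = max(condition_likelihood(seq, c_e) for seq, _ in beam)
        if cur - prev_best < tol:                    # early stop: only marginal improvement
            break
        prev_best = cur
    return max((seq for seq, _ in beam), key=evaluate)   # final choice by the evaluator
```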
Three research questions are investigated in the experiments: * **RQ1**: How does DCDR perform in comparison with state-of-the-art methods for reranking in terms of recommendation accuracy and generation quality? * **RQ2**: How do different settings of DCDR affect the performance? * **RQ3**: How does DCDR perform in real-life recommender systems? ### Experiment Setup #### 5.1.1. **Dataset** For the consistency of dataset and the problem setup, we expect that each sample of the dataset is a real item sequence displayed in a complete session rather than a list constructed manually. Therefore we collect two datasets for offline experiments: * **Avito3**: this is a public dataset consisting of user search logs. Each sample corresponds to a search page with multiple ads and thus is a real impressed list with feedback. The data contains over 53M lists with 1.3M users and 36M ads. The impressions from first 21 days are used as training and the impressions in last 7 days are used as testing. And each list has a length of 5. Footnote 3: [https://www.kaggle.com/c/avito-context-ad-clicks/data](https://www.kaggle.com/c/avito-context-ad-clicks/data). #### 5.1.2. **Baselines** We compare the proposed DCDR with serveral state-of-the-art reranking methods and a modified discrete diffusion method for text generation. * **PRM (Wang et al., 2017)**: PRM is one of the state-of-the-art models for reranking tasks, which uses self attention to capture the relations between items. And it has been used for reranking tasks in Taobao recommender systems. * **DLCM (Beng et al., 2017)**: DLCM adopts gated recurrent units to model the cross-item relations sequentially. Meanwhile DLCM is optimized with an attention-based loss function, which also contributes to its predictive accuracy. * **SetRank (Zhou et al., 2018)**: SetRank tries to learn a permutation-invariant model for reranking by removing positional encodings and generates the sequence with greedy selection of the items. * **EGRerank (Zhou et al., 2018)**: EGRerank consists of a sequence generator and a sequence discriminator. And the generator is optimized with reinforcement learning to maximize the expected user feedbacks under the guidance of evaluator. It is worth noticing that EGRerank has been used for reranking tasks in AliExpress. * **DiffusionLM-R (Zhou et al., 2018)**: DiffusionLM is originally proposed for generating text sequences, which is similar to the reranking task. We treat items as words and modify DiffusionLM to take the same conditions with DCDR as inputs for controllable generation. #### 5.1.3. **Implementation Details** For different datasets, we use different user feedback as condition in the reverse process of DCDR. For Avito, we use the click behavior as feedback. For VideoRerank, we set the feedback condition as a binary value indicating whether a video has been completed watched. The settings for hyper-parameters of DCDR can be found in Section 5.2.2. As for other baselines, we carefully tune corresponding hyper-parameters to achieve the best performance. #### 5.1.4. **Metrics** We use two widely-adopted metrics, namely AUC and NDCG, to evaluate the performance of different methods in the offline experiments. ### Offline Experiment Results For all offline experiments, we first pretrain a ranking model with Lambdamart and use the ranked list as the initial sequence for the reranking task. 
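For completeness, here is a minimal sketch of the NDCG@k metric used in the offline evaluation above, assuming binary per-position relevance labels (e.g., clicks); this is the standard definition rather than code from the paper.

```python
import math

def ndcg_at_k(relevances, k):
    """relevances: per-position labels of the ranked list, in display order."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)[:k]
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# e.g. a list of five ads where the 2nd and 3rd were clicked:
# ndcg_at_k([0, 1, 1, 0, 0], k=3)  -> ~0.69
```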
For our DCDR, the variant using permutation-level operation is denoted as **DCDR-P**, while the variant using token-level operation is denoted as **DCDR-T**. #### 5.2.1. **Performance Comparison (RQ1)** The performances of different approaches are listed in Table 1. Notice that DCDR achieves the best performances over other approaches, this verifies the effectiveness of DCDR in item sequence generation quality. Moreover the comparison between DCDR and DiffusionLM-R indicates that the generation quality by discrete diffusion models in DCDR outperforms DiffusionLM-R significantly. This verifies the benefits of discrete diffusion models for discrete data domains. #### 5.2.2. **Analysis of DCDR (RQ2)** In this section, we provide a detailed analysis of DCDR from multiple aspects. _Beam size K:_ We alter the beam size in the reverse process for inference and the results are presented in Fig. 5. Note that the result is best when beam size is set to 6, this makes sense since a proper number of beam size improves the recommendation robustness without involving too many sequences to evaluate. _Reverse steps T:_ We alter the number of reverse steps from 1 to 5 and list the performances in Fig. 5. As shown in the figure, the \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Avito} & \multicolumn{2}{c}{VideoRerank} \\ \cline{2-5} & AUC & NDCG@3 & AUC & NDCG@3 \\ \hline PRM & 0.8651 & 0.3559 & 0.6174 & 0.6317 \\ EGRerank & 0.8724 & 0.3570 & 0.6171 & 0.6325 \\ SetRank & 0.8663 & 0.3564 & 0.6037 & 0.6235 \\ DLCM & 0.8982 & 0.3578 & 0.6054 & 0.6251 \\ DiffusionLM-R & 0.9031 & 0.3766 & 0.6302 & 0.6422 \\ \hline DCDR-T & \(\mathbf{0.9120^{*}}\) & \(\mathbf{0.3818^{*}}\) & \(\mathbf{0.6340^{*}}\) & \(\mathbf{0.6522^{*}}\) \\ DCDR-P & \(\mathbf{0.9172^{*}}\) & \(\mathbf{0.3901^{*}}\) & \(\mathbf{0.6361^{*}}\) & \(\mathbf{0.6576^{*}}\) \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison between DCDR and other baselines. \({}^{*}\) means significant improvements with \(p<0.05\). performance gets improved when the number of diffusion steps increases. However the improvement becomes marginal when the number of steps reaches a certain value. This coincides with the intuition of adopting early-stop for efficiency. _Noise scale \(\beta\)_: We alter the the swapping probability from 0.1 to 0.5 during training and present the results in Fig. 5. The result indicates that too much noise may add difficulty to the learning process and a proper amount of noise leads to satisfactory performances. ### Online Experiment Results (RQ3) We conduct online A/B experiments on KuaiShou APP. #### 5.3.1. **Experiment Setup** In online A/B experiments, the traffic of the whole app is split into ten buckets uniformly. 20% of traffic is assigned to the online baseline PRM while another 10% is assigned to DCDR-P. As revealed in (Xiao et al., 2018), KuaiShou serves over 320 million users daily, and the results collected from 10% of traffic for a whole week is very convincing. The initial video sequence comes from the early stage of the recommender system, which greedily ranks the items with point-wise ranking scores. And we directly use the initial sequence as the noisy sequence \(\mathcal{R}_{T}\) for generation. To enable the controllable video sequence generation, we set two conditions for diffusion models: first, we expect the users to finish watching each video in the sequence; second, we expect the positive feedback from users (for example like the video) #### 5.3.2. 
**Experiment Results** The experiments ran on the system for a week, and the results are listed in Table 2. The metrics for online experiments include views, likes, follows (subscriptions of authors), collects (adding videos to collections), and downloads. Notice that DCDR-P outperforms the baseline in all the related metrics by a large margin. This verifies the quality of the video sequences recommended by DCDR-P. #### 5.3.3. **Efficiency of DCDR** As diffusion models generate samples in a step-wise manner, computation cost and latency become a challenge for deployment in real-life recommender systems. The computation costs and service latency are listed in Table 3. The step-wise generation causes inevitable latency costs of the recommendation service. However, the cost is acceptable in the system since it does not jeopardize the user experience. ## 6. Conclusion In this paper, we make the first attempt to enhance the reranking stage in recommendation by leveraging diffusion models, which faces many challenges such as the discrete data space, user feedback incorporation, and efficiency requirements. To address these challenges, we propose a novel framework called Discrete Conditional Diffusion Reranking (DCDR), involving a discrete forward process with tractable posteriors and a conditional reverse process that incorporates user feedback. Moreover, we propose several optimizations for efficient and robust inference of DCDR, enabling its deployment in a real-life recommender system with over 300 million daily active users. Extensive offline evaluations and online experiments demonstrate the effectiveness of DCDR. The proposed DCDR also serves as a general framework: various discrete operations and contextual features can be flexibly incorporated to suit different application scenarios. We believe DCDR will provide valuable insights for future investigations on diffusion-based multi-stage recommender systems. In the future, we plan to study the impact of the noise schedule and explore methods to enhance the efficiency and controllability of the generation process. \begin{table} \begin{tabular}{c c c c c} \hline \hline Views & Likes & Follows & Collects & Downloads \\ \hline +0.341\%* & +0.884\%* & +1.100\%* & +1.299\%* & +1.358\%* \\ \hline \hline \end{tabular} \end{table} Table 2. Online experiment results. All values are the relative improvements of DCDR-P over the baseline PRM. For online A/B tests in KuaiShou, an improvement of over \(0.5\%\) in positive interactions (Like, Follow, Collect, Download) and \(0.3\%\) in Views is very significant. \begin{table} \begin{tabular}{c c c c} \hline \hline Avg (Comp) & Max (Comp) & Avg (Latency) & Max (Latency) \\ \hline +0.055\% & +0.198\% & +0.610\% & +0.696\% \\ \hline \hline \end{tabular} \end{table} Table 3. The additional computation cost to the system and the additional latency of the reranking service. The computation is measured by CPU time. Avg (Comp)/Max (Comp) measure the average/maximum computation costs during the launch time of the experiments, respectively. Figure 5. The performances of DCDR-P and DCDR-T on the Avito dataset with different hyper-parameter settings, including the beam size \(K\) in the reverse process for inference, the number of diffusion steps \(T\), and the noise scale \(\beta\) in the forward process. ## 7. Appendix Lemma 7.1 ().: _Let \(\mathbf{P}\) be the transition matrix of a Markov chain.
If \(\mathbf{P}\) is a doubly-stochastic matrix, then the Markov chain defined by \(\mathbf{P}\) has a uniform stationary distribution._ Proof.: Given a transition matrix \(\mathbf{P}\), consider its eigenvalues and eigenvectors. As \(\mathbf{P}\) is doubly-stochastic (every row sums to \(1\) and every column sums to \(1\)), it is easy to verify that \(1\) is an eigenvalue of \(\mathbf{P}\) and that the uniform row vector \(\mathbf{e}/n\) is a corresponding left eigenvector \(((\mathbf{e}/n)\mathbf{P}=\mathbf{e}/n)\). Therefore, we have \(\pi=\mathbf{e}/n\) and \(\pi=\pi\mathbf{P}\), indicating that the uniform distribution is a stationary distribution of the Markov chain. Lemma 7.2 ().: _A Markov chain is ergodic if and only if the process satisfies 1) connectivity: \(\forall i,j:\mathbf{P}^{k}(i,j)>0\) for some \(k\), and 2) aperiodicity: \(\forall i:\gcd\{k:\mathbf{P}^{k}(i,i)>0\}=1\). Any ergodic Markov chain has a unique stationary distribution._ Proof.: The details can be found in the reference [18]. Theorem 7.3 ().: _The Markov chain for the discrete forward process with permutation-level operation or token-level operation has a unique uniform stationary distribution._ Proof.: First, it is easy to verify that both transition matrices are doubly stochastic: \(\forall j:\sum_{i}[\mathbf{Q}]_{ij}=1\) and \(\forall i:\sum_{j}[\mathbf{Q}]_{ij}=1\), and likewise for \(\mathbf{O}\). Therefore, the uniform distribution is a stationary distribution of both processes according to Lemma 7.1. Meanwhile, we can show that both Markov chains are ergodic. For the permutation-level operation, any permutation can be reached through a finite number of swaps; for the token-level operation, each item has a chance to appear at each position. Therefore, both forward processes satisfy the connectivity condition. Besides, notice that \([\mathbf{Q}]_{ii}>0\) and \([\mathbf{O}]_{ii}>0\), thus the sets \(\{k:\mathbf{Q}^{k}(i,i)>0\}\) and \(\{k:\mathbf{O}^{k}(i,i)>0\}\) contain \(1\). This implies \(\gcd\{k:\mathbf{Q}^{k}(i,i)>0\}=1\) and \(\gcd\{k:\mathbf{O}^{k}(i,i)>0\}=1\), which satisfies the aperiodicity condition. Therefore, both forward processes are ergodic, and hence only one stationary distribution exists according to Lemma 7.2. Combining the above conclusions, both processes have a unique uniform stationary distribution, which means that the sequences become essentially randomly arranged after a sufficient number of steps.
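As a quick numerical illustration of Theorem 7.3 (not part of the original paper), one can verify for a small token-level matrix that it is doubly stochastic and that repeated application converges to the uniform distribution; the values of `l_s`, `beta`, and `T` below are arbitrary.

```python
import numpy as np

l_s, beta, T = 6, 0.3, 50
O = np.full((l_s, l_s), beta / (l_s - 1))
np.fill_diagonal(O, 1.0 - beta)                      # Eq. (6)

# doubly stochastic: every row and every column sums to 1
assert np.allclose(O.sum(axis=0), 1.0) and np.allclose(O.sum(axis=1), 1.0)

# the T-step matrix approaches the rank-one matrix with all entries 1/l_s,
# i.e. the uniform stationary distribution ("fully random" limit of the forward process)
O_bar = np.linalg.matrix_power(O, T)
print(np.abs(O_bar - 1.0 / l_s).max())               # ~1e-10 for these settings
```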
2303.01347
Letz Translate: Low-Resource Machine Translation for Luxembourgish
Natural language processing of Low-Resource Languages (LRL) is often challenged by the lack of data. Therefore, achieving accurate machine translation (MT) in a low-resource environment is a real problem that requires practical solutions. Research in multilingual models has shown that some LRLs can be handled with such models. However, their large size and computational needs make their use in constrained environments (e.g., mobile/IoT devices or limited/old servers) impractical. In this paper, we address this problem by leveraging the power of large multilingual MT models using knowledge distillation. Knowledge distillation can transfer knowledge from a large and complex teacher model to a simpler and smaller student model without losing much in performance. We also make use of high-resource languages that are related to or share the same linguistic root as the target LRL. For our evaluation, we consider Luxembourgish as the LRL that shares some roots and properties with German. We build multiple resource-efficient models based on German, knowledge distillation from the multilingual No Language Left Behind (NLLB) model, and pseudo-translation. We find that our efficient models are more than 30\% faster and perform only 4\% lower compared to the large state-of-the-art NLLB model.
Yewei Song, Saad Ezzini, Jacques Klein, Tegawende Bissyande, Clément Lefebvre, Anne Goujon
2023-03-02T15:26:46Z
http://arxiv.org/abs/2303.01347v1
# Letz Translate: Low-Resource Machine Translation for Luxembourgish ###### Abstract Natural language processing of Low-Resource Languages (LRL) is often challenged by the lack of data. Therefore, achieving accurate machine translation (MT) in a low-resource environment is a real problem that requires practical solutions. Research in multilingual models have shown that some LRLs can be handled with such models. However, their large size and computational needs make their use in constrained environments (e.g., mobile/IoT devices or limited/old servers) impractical. In this paper, we address this problem by leveraging the power of large multilingual MT models using knowledge distillation. Knowledge distillation can transfer knowledge from a large and complex teacher model to a simpler and smaller student model without losing much in performance. We also make use of high-resource languages that are related or share the same linguistic root as the target LRL. For our evaluation, we consider Luxembourgish as the LRL that shares some roots and properties with German. We build multiple resource-efficient models based on German, knowledge distillation from the multilingual No Language Left Behind (NLLB) model, and pseudo-translation. We find that our efficient models are more than 30% faster and perform only 4% lower compared to the large state-of-the-art NLLB model. Neural Machine Translation, Low-resource Languages, Low-resource Translation, Knowledge distillation, Luxembourgish ## I Introduction Natural language is a valuable heritage of mankind. It needs to be preserved, developed, and bridged with other languages. Natural language processing (NLP) has been shown to have a direct impact on the development and maintenance of natural languages, such as enabling the Neural Machine Translation of Hokkien, an unwritten language, to English [1]. Machine Translation (MT) is one of the most useful day-to-day applications of NLP that improves inter-cultural human interactions. Neural Machine Translation (NMT) has revolutionized the MT task in recent years. Recent advances in language models and transfer learning have allowed the MT task to scale up to a large extent by taking advantage of existing large parallel corpora. However, Low-Resource Languages (LRLs) lack such large corpora. Thus, making classic NMT models for LRLs is not possible. Although the recent Meta's No Language is Left Behind (NLLB) [2] multi-language translation model can handle 200 languages, its large size and computational needs make running the model in low resource environment (e.g., online services, mobile/IoT devices) impractical. Motivated by addressing the discussed limitations, we propose resource-efficient translation models for LRLs using knowledge distillation and language-approaching techniques. Our proposed technique aims at achieving near state-of-the-art performance with minimal computational power. Knowledge distillation is concerned with using a large-scale multilingual teacher model (i.e., NLLB) to teach a student model the related task to achieve approximately the same performance with a low inference cost. The language-approaching technique adapts the resources of a high-resource language (HRL) that is related to the target LRL by performing token and expression replacement. Our approach aims to provide accurate translation models with low computational cost, few parallel data, and fast inference time. In our case study, our objective is to efficiently translate Luxembourgish (LB) to English (EN). 
So we consider LB as the LRL that is related to German (DE), HRL in our case. In our experiments, we found that our data-efficient distillation-based approach achieves better results compared to the use of large models either with larger parallel datasets for training or when augmenting the existing LB data using pseudo-translation from DE. ## II Related Work In this section, we position our work to the existing literature on low-resource NMT that uses models or data from another HRL. In their article [3], Ko et al. proposed an adaptation method that combines large parallel HRL and monolingual LRL data to create an unsupervised translation model that can generalize across language groups. The authors found that their approach can achieve up to 36% bilingual evaluation understudy (BLEU) score (24% improvement compared to using the Spanish model) in translating Portuguese (an LRL) from and to English. Their approach relies on back-translation and adversarial domain adaptation techniques prior to a final fine-tuning step. The paper does not discuss the computational efficiency of the created models. Similar to our article, there are existing works that utilize an HRL that is similar to the target LRL (e.g., share the same root) to improve the NMT of the LRL from or to English. Neubig et al. [4] among others [5, 6] used the technique of mixing parallel data from both LRL and HRL to train and improve the NMT of the LRL of interest. This technique is based on either using the HRL for pre training and the LRL data for fine-tuning, or by mixing both datasets after balancing and augmenting the minority language data to result in one training set. The former proved to have a better performance. In contrast to the existing literature, in our work, we produce resource- and data-efficient models using knowledge distillation from a large multilingual model. We also compare the knowledge distillation to using a related HRL model, fine-tuning it on an LRL parallel set, and on a synthetic set created using pseudo-translation by transforming a large HRL parallel dataset. ## III Approach This section describes the main pipelines and various techniques that are used in this work. ### _Problem Definition_ Our aim is to obtain a **fast** and **high-performing** translation model 1 using a few data from an LRL and 2 in an environment with limited computing power. We will consider a translation model as **fast** if the model's inference time per instance is significantly smaller than the inference time required by state-of-the-art approaches such as NLLB. As an example, the large NLLB model (with 3.3B parameters) has an inference time of 17 seconds per instance on a 48-core CPU server, which makes it relatively slow and impractical. A translation model can be considered as **high-performing** if its translation scores are close to the state-of-the-art larger models. As a first constraint, we will use limited LRL data to imitate the low resource case. For example, using a small **single-source corpus** of Luxembourgish (LB). As a second constraint, we will use our models in scenarios where computing resources are very restricted, such as mobile devices, intranets without GPU servers, etc. Our goal is to achieve a per-sentence inference time of less than 1 second on a CPU-only server. ### _Two Ways for Getting Parallel Data_ In this paper, we compare the performance of the following two techniques: 1 pseudo-translation and 2 knowledge distillation, for LRL-to-EN translation models. 
We also conduct comparative experiments using our existing parallel corpus. **Pseudo-Translation:** As in some previous work on low-resource language models such as LuxemBERT [7], and Translating Translationese [8], they use pseudo-translation to obtain acceptable quality sentences from another HRL that has a close linguistic or semantic relationship to the LRL. In our case, Luxembourgish is very similar to German in grammar and French (FR) in some vocabulary. **Knowledge Distillation:** In a NISP'2014 paper [9], Hinton et al. introduced the concept of Knowledge Distillation, which aims to migrate the knowledge learned from one large model or multiple models to another lightweight single model for easy deployment. In some cases, calculating the loss function using Soft-Target can produce a completely trained model without using any ground truth data. The concept of knowledge distillation is one of the original inspirations for this work. ### _Pipeline_ Figure 1 outlines the pipeline for obtaining our **pseudo models**. In order to create our models from pseudo parallel data, we first need existing HRL-to-EN parallel data that can be used alongside a proper dictionary to perform the pseudo translation task and transform HRL sentences into pseudo LRL ones. Once the pseudo-parallel data is available, we use it to fine-tune existing pre-trained NMT models on the pseudo-LRL to the English set. Both HRL-to-EN and Multi-to-EN models can be used for this task. In Figure 2, we show the pipeline for creating our **distilled models**. For knowledge distillation, we first need an LRL corpus (not parallel) that can be fed to the large multilingual NLLB model to generate the soft target for HRL-to-EN or Multi-to-EN models with fewer parameters. The distillation process consists of passing the knowledge from a large complex teacher model to a smaller and simpler student model without losing much on the performance. Soft-target is used to minimize the error difference between both models. Once our models are ready, an additional fine-tuning step can be performed on tiny but high-quality parallel data (LRL-to-EN). We will compare the two dimensions of using different first-round fine-tuning datasets and whether or not to use second-round fine-tuning. We also use the Ground-Truth fine-tuning set as a baseline. Finally, we expect to obtain a Luxembourgish-to-English translation model with high computational speed, low computational resource consumption and performance close to that of a very large multilingual model. The best conclusion would be if our approach could be extended to all LRL-HRL translations. We will evaluate our created models in the next section. ## IV Evaluation ### _Experiment Design_ We rely on two datasets for our experiments. The first set is a pseudo-translated LB from DE-to-EN parallel data, obtained using the first pipeline in figure 1, and the second is the LB corpus to the generated soft target EN using the NLLB-Distilled model (600M). We needed to compare the translation performance on several different models. Many-to-one machine translation models have recently become increasingly popular, typically Romance to English, some of which include Luxembourgish and similarly other Frankish languages. We want to test the performance of our data on different pre-trained versions of similar machine learning models. ### _Research Questions_ **RQ1. 
What is the most accurate solution for low-resource machine translation?** For RQ1, we want to assess the performance of the models that are based on pseudo-translation and knowledge distillation for our Luxembourgish use case, and compare them to our baselines. **RQ2. What is the best NMT model for LRL in terms of computational efficiency and inference time?** For RQ2, we evaluate all models on the same hardware, to determine the best trade-off between model size, inference time, and translation accuracy. ### _Implementation details and Availability_ The main libraries we used to create our models are Transformers, PyTorch, and Redis. We used the following pre-Trained Models: OPUS-MT-DE-EN, OPUS-MT-MUL-EN, NLLB-200-3.3B (NLLB-large), and NLLB-200-DISTILLED-600M (NLLB-Distilled). We performed our training tasks on 1 Nvidia TESLA V100 32GB with Intel Xeon(R) CPU E5-2698, and run the inference on AMD EPYC 7552. The created model (Distilled training, 2-round fine-tuning, based on OPUS-MT-DE-EN) is available on the HuggingFace platform1. Footnote 1: [https://huggingface.co/etamin/OPUS-TruX-Ltz-EN](https://huggingface.co/etamin/OPUS-TruX-Ltz-EN) ### _Data Collection_ In our evaluation, we use multiple datasets for different purposes, such as fine-tuning, pseudo-translation, and soft-target calculation. We consider the dataset provided by Gierschek in [10] as our parallel dataset of **LB-DE-EN** sentences. She collected them from RTL Luxembourg2, a Luxembourgish news website. We removed all sentence lengths shorter than 50 alphabets and longer than 500. It remains 110,720 parallel sentence triples. We call this dataset the **Ground-Truth**. Footnote 2: www.rtl.lu The **pseudo-translation** dataset is built by applying our collected bilingual dictionary on the ground-truth German to English set provided by Gierschek in [10]. We collect a 51,617 words dictionary3, and apply Eifeler Regel (a special rule on words with suffix 'n' and 'nn') to German sentences [11]. Footnote 3: github.com/Etamin/Ltz_dictionary The **soft target** for distilled fine-tuning, is generated by the NLLB-Distilled model [12]. For this purpose, we use the Luxembourgish sentences from the ground truth (i.e., the dataset from Gierschek in [10]), and no English sentences are used for mixing loss. It took us around 157 hours to generate the soft target. In our experiments, we used the smallest NLLB Fig. 1: The pipeline for obtaining our pseudo models Fig. 2: The pipeline for obtaining our distilled models model in terms of size (NLLB-Distilled). Note that if we had used the largest version (NLLB-large), the experiments would require 350 additional hours processing time. We use the _Flores-2004_ train set for fine-tuning, and both the _Flores-200_ test set and the _Tatoeba5_ dataset testing. Flores-200 is a widely used multilingual translation dataset between English and low-resource languages. It is an extended version of Flores101 released by Meta. Flores-200 includes 997 pairs in the train set(for fine-tuning), 1017 pairs in the test set for test [12]. Tatoeba is an open community-based collection of parallel sentences for 420 languages [13]. However, Tatoeba only has 306 Luxembourgish to English pairs. 
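Referring back to the pseudo-translation dataset described above, the construction can be pictured with the following rough sketch: token-wise dictionary replacement from German into pseudo-Luxembourgish, followed by a heavily simplified version of the Eifeler Regel (word-final n-deletion). The trigger-letter set, the toy dictionary entries, and all names are illustrative assumptions, not the authors' actual implementation.

```python
import re

# simplified assumption: keep a final 'n'/'nn' when the next word starts with
# n, d, t, z, h or a vowel; otherwise drop it
KEEP_N_BEFORE = set("ndtzhaeiouäëéöü")

def apply_n_rule(tokens):
    out = []
    for i, tok in enumerate(tokens):
        nxt = tokens[i + 1][:1].lower() if i + 1 < len(tokens) else ""
        if re.search(r"nn?$", tok) and nxt not in KEEP_N_BEFORE:
            tok = re.sub(r"nn?$", "", tok)   # drop the word-final n / nn
        out.append(tok)
    return out

def pseudo_translate(de_sentence, de2lb):
    """Token-wise dictionary replacement German -> pseudo-Luxembourgish."""
    tokens = [de2lb.get(tok.lower(), tok) for tok in de_sentence.split()]
    return " ".join(apply_n_rule(tokens))

# toy example with invented dictionary entries:
# pseudo_translate("wir haben Zeit", {"wir": "mir", "haben": "hunn", "zeit": "Zäit"})
```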
Footnote 5: [https://huggingface.co/datasets/facebook/flores](https://huggingface.co/datasets/facebook/flores) ### _Pre-trained model and fine-tuning_ To build computationally efficient models, we have chosen two different pre-trained versions of OPUS-MT due to their cheap training cost, **OPUS-MT-DE-EN** and **OPUS-MT-MUL-EN**[14]. For all fine-tuning tasks, we use the same training settings as follows. Max steps of 250,000, the learning rate is set to 2e-5 on AdamW, the weight decay is 0.01, and the batch size is 16. We also use the Cross-Entropy loss function. To verify the effect of pseudo-translation or knowledge distillation on the generalization ability of the models, we conduct a **second round** of fine-tuning in less than 500 steps using a subset of high-quality Flores-200. This process was designed to explore whether our data had the correct gradient for the model's ability to project from Luxembourgish to English. ### _Evaluation Metrics_ In our evaluation procedure, we use _SacreBLEU_ and _ChrF++_ to measure the performance of our models and the quality of the resulting translation compared to the ground truth. **SacreBLEU** is an implementation variant of BLEU (Bilingual Evaluation Understudy) with the same objective of evaluating the quality of text translation by measuring the distance between the translated and ground-truth sentences. For simplicity, we will use the "BLEU" notation instead of SacreBLEU. **ChrF++** is an improved version of ChrF which uses the F-score statistic for character n-gram matches. ChrF++ improves ChrF by adding n-grams of words to its computation. ### _Experiments_ In this section, we outline the experiments we conduct to answer our research questions. #### Iv-G1 Baseline models We consider three multilingual machine translation models that support Luxembourgish to English translation as baselines: NLLB-large (3.3B), NLLB-Distilled (600M), and M2M-100 (1.2M). In addition to these models, we consider two versions of OPUS-MT, one is multilingual as well (OPUS-MT-MUL-EN) that supports Luxembourgish, and the second is the German-to-English OPUS-MT-DE-EN model. For each version, we evaluate the models before and after fine-tuning the small Flores-200 Train Set. The selected baseline models are listed in Table I which fine-tuned with Flores train set has an "_FT_" suffix. In Table I, we can see that NLLB-large takes around 17 seconds to translate one sentence. It has the largest number of parameters (3.3B) and therefore requires the most computational power. Obviously, this largest model has the best performance compared to other baselines. The M2M-100 [15] is the second largest model with around 7 seconds per sentence execution time and a size of 1.2B parameters. However, it is only the fifth model in terms of accuracy, meaning that model size does not always correlate with performance. On the other hand, despite NLLB-Distilled being a lightweight version of NLLB and the second most accurate in our list, it still needs more than three seconds per sentence and has 600M parameters, which makes it a good baseline to compare against. OPUS-MT-based models can translate two to three sentences per second and have the smallest size (8 times smaller than NLLB-Distilled and 44 times than NLLB-large). These lightweight models are suitable candidates for our use case (online services that require fast inference time with low-computational resources). 
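Referring back to the evaluation metrics, both scores can be computed with the sacrebleu Python package (assuming version 2.x of the library); setting `word_order=2` turns chrF into chrF++.

```python
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["the weather is nice today"]        # system translations
references = [["the weather is good today"]]      # one reference stream

bleu = BLEU()                                     # SacreBLEU
chrf = CHRF(word_order=2)                         # chrF++ (character n-grams + word bi-grams)

print(bleu.corpus_score(hypotheses, references).score)
print(chrf.corpus_score(hypotheses, references).score)
```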
#### Iv-G2 Our models As discussed earlier, for our evaluation, we consider two main approaches (i.e., Pseudo-translation and Knowledge distillation) applied to two OPUS-MT-based models, MUL-EN and DE-EN, as listed in Table II. In this experiment, we compared the performance of our OPUS-MT-based models that are fine-tuned on the pseudo-translated set, the distilled set (obtained from distillation), and the whole ground truth. We also evaluate these models after a second fine-tuning on the Flores-200 train set. As shown in Table II, the pseudo-translation models perform better than M2M-100 and OPUS-MT-DE-EN models after the second fine-tuning, but have poor overall results as there was a large discrepancy between the pseudo-Luxembourgish sentences and the actual ground-truth, with BLUE scores lower than 35 points. These models are outperformed by OPUS-MT-MUL-EN and NLLB models on all datasets in Table I. The performance of the models obtained using distillation learning was very close to the models that use the 'ground-truth' parallel data for fine-tuning. On average, the difference is less than 2% BLEU points. After the second round of fine-tuning with Flores-200, our model gained some improvement in some scenarios. In a cross-column comparison in Table II, the DE-EN model achieves better results after knowledge transfer to Luxembourgish, with an average advantage of around 1% BLEU score. This may be due to the fact that some of the sentences in Luxembourgish are very similar to German, or it may be that some of the German knowledge has been retained in the fine-tuning. Based on the advantage of the DE-EN model over the MUL EN model, we can guess that linguistically close bilingual models are more useful than multilingual models in transferring knowledge. However, to prove this, we need more experiments, such as testing the Limburgish (an LRL from the Netherlands) on Dutch (HRL) model and Romanish (an LRL from Switzerland) on the Italian (HRL) model. Regarding time performance, as the inference time is not influenced by fine-tuning, this means that knowledge distillation brings the lightweight and fast inference benefits of OPUS-MT models with improved performance. According to Table I, our model inference speed (i.e., the inference time of OPUS-MT-MUL-EN) was 30 times faster than NLLB-large and 6 times faster than NLLB-Distilled. ### _Answers to the RQs_ **RQ1.** What is the most accurate solution for low-resource machine translation? The NLLB-large model is clearly ahead in terms of translation accuracy compared to other models or baselines. Among our solutions, the distilled models perform much better compared to the pseudo-models and the non-NLLB baselines (i.e., OPUS-MT and M2M in Table I). Moreover, they have a close performance to the same base models using large parallel data for fine-tuning. So, to answer this RQ, the best technique to provide accurate LRL translation using lightweight models is **knowledge distillation**. **RQ2.** What is the best NMT model for LRL in terms of computational efficiency and inference time? Given that the **OPUS-MT-DE-EN** model obtained using knowledge distillation, has the lowest inference time and model size, and has a slight performance edge over _OPUS-MT-MUL-EN_, we can safely say that **OPUS-MT-DE-EN** is the best solution in this paper. 
### _Discussion_ Firstly, our research demonstrates that using knowledge distillation can produce high-performance bilingual machine translation models using mega-multilingual models with only a single-side Luxembourgish corpus (RTL news or Wikipedia). Second, we found that it is difficult to train effective unsupervised translation models even for using pseudo-translation from German to Luxembourgish (which are linguistically similar) when a lexicon and grammar adjustment is applied. We also found that models of related languages have an advantage over multilingual models as a basis for transfer learning. To validate this observation, more comparative experiments on other languages are needed, e.g. Limburger, Lithuanian, Suomi, etc. Additionally, after a manual check by a Luxembourgish native speaker, we realized that the Tatoeba dataset is too simple and lacks complexity. Flores-200, on the other hand, lacks noun diversity. The low quality of Tatoeba is expected as it is a community-based dataset. A better evaluation dataset would have been welcomed. Finally, it should be noted that until a late stage of writing this article we were able to find out that the original OPUS-MT-MUL-EN model uses some Tatoeba data sets in its pre-pretraining, we are not sure the LB set was used. However, we do not observe an advantage of this model compared to OPUS-MT-DE-EN when evaluated on this dataset. ## V Conclusion In this paper, we proposed two techniques to produce lightweight models for low-resource language (LRL) translation. Our research demonstrates that high-performing low-resource language mini-models can be obtained using distillation learning based on large models. Our models are smaller, faster, and perform nearly as well as large multilingual NLLB models. For future work, we plan to improve the pseudo-translation technique and test the knowledge distillation in other low-resource languages that have sparse parallel data. We also want to build and evaluate English-to-LRL translation models.
2301.00620
Dynamically Modular and Sparse General Continual Learning
Real-world applications often require learning continuously from a stream of data under ever-changing conditions. When trying to learn from such non-stationary data, deep neural networks (DNNs) undergo catastrophic forgetting of previously learned information. Among the common approaches to avoid catastrophic forgetting, rehearsal-based methods have proven effective. However, they are still prone to forgetting due to task-interference as all parameters respond to all tasks. To counter this, we take inspiration from sparse coding in the brain and introduce dynamic modularity and sparsity (Dynamos) for rehearsal-based general continual learning. In this setup, the DNN learns to respond to stimuli by activating relevant subsets of neurons. We demonstrate the effectiveness of Dynamos on multiple datasets under challenging continual learning evaluation protocols. Finally, we show that our method learns representations that are modular and specialized, while maintaining reusability by activating subsets of neurons with overlaps corresponding to the similarity of stimuli.
Arnav Varma, Elahe Arani, Bahram Zonooz
2023-01-02T12:24:24Z
http://arxiv.org/abs/2301.00620v1
# Dynamically Modular and Sparse General Continual Learning ###### Abstract Real-world applications often require learning continuously from a stream of data under ever-changing conditions. When trying to learn from such non-stationary data, deep neural networks (DNNs) undergo catastrophic forgetting of previously learned information. Among the common approaches to avoid catastrophic forgetting, rehearsal-based methods have proven effective. However, they are still prone to forgetting due to task-interference as all parameters respond to all tasks. To counter this, we take inspiration from sparse coding in the brain and introduce dynamic modularity and sparsity (_Dynamos_) for rehearsal-based general continual learning. In this setup, the DNN learns to respond to stimuli by activating relevant subsets of neurons. We demonstrate the effectiveness of _Dynamos_ on multiple datasets under challenging continual learning evaluation protocols. Finally, we show that our method learns representations that are modular and specialized, while maintaining reusability by activating subsets of neurons with overlaps corresponding to the similarity of stimuli. The code is available at [https://github.com/NeurAI-Lab/DynamicContinualLearning](https://github.com/NeurAI-Lab/DynamicContinualLearning). Dynamic Neural Networks, Policy Gradients, Lifelong Learning. ## 1 Introduction Deep neural networks (DNNs) have achieved human-level performance in several applications (Greenwald et al., 2021; Taigman et al., 2014). These networks are trained on the multiple tasks within an application with the data being received under an independent and identically distributed (i.i.d.) assumption. This assumption is satisfied by shuffling the data from all tasks and balancing and normalizing the samples from each task in the application (Hadsell et al., 2020). Consequently, DNNs can achieve human-level performance on all tasks in these applications by modeling the joint distribution of the data as a stationary process. Humans, on the other hand, can model the world from inherently non-stationary and sequential observations (French, 1999). Learning continually from the more realistic sequential and non-stationary data is crucial for many applications such as lifelong learning robots (Thrun and Mitchell, 1995) and self-driving cars (Nose et al., 2019). However, vanilla gradient-based training for such continual learning setups with a continuous stream of tasks and data leads to task interference in the DNN's parameters, and consequently, catastrophic forgetting on old tasks (McCloskey and Cohen, 1989; Kirkpatrick et al., 2017). Therefore, there is a need for methods to alleviate catastrophic forgetting in continual learning. Previous works have aimed to address these challenges in continual learning. These can be broadly classified into three categories. First, regularization-based methods (Kirkpatrick et al., 2017; Schwarz et al., 2018; Zenke et al., 2017) that penalize changes to the parameters of DNNs to reduce task interference. Second, parameter isolation methods (Adel et al., 2020) that assign distinct subsets of parameters to different tasks. Finally, rehearsal-based methods (Chaudhry et al., 2019) that co-train on current and stored previous samples. Among these, regularization-based and parameter isolation-based methods often require additional information (such as task-identity at test time and task-boundaries during training), or unconstrained growth of networks. 
These requirements fail to meet general continual learning (GCL) desiderata (Delange et al., 2021; Farquhar and Gal, 2018), making these methods unsuitable for GCL. Although rehearsal-based methods improve over other categories and meet GCL desiderata, they still suffer from catastrophic forgetting through task interference in the DNN parameters, as all parameters respond to all examples and tasks. This could be re solved by inculating task or example specific parameter isolation in the rehearsal-based methods. However, it is worth noting that unlike parameter isolation methods, modularity and sparsity in the brain is not static. There is evidence that the brain responds to stimuli in a dynamic and sparse manner, with different modules or subsets of neurons responding "dynamically" to different stimuli (Graham and Field, 2006). The advantages of a dynamic and sparse response to stimuli have been explored in deep learning in stationary settings through mechanisms such as gating of modules (Veit and Belongie, 2018), early-exiting (Li et al., 2017; Hu et al., 2020), and dynamic routing (Wang et al., 2018), along with training losses that incentivize sparsity of neural activations (Wu et al., 2018). These studies observed that DNNs trained to predict dynamically also learn to respond differently to different inputs. Furthermore, the learned DNNs demonstrate clustering of parameters in terms of tasks such as similarity, difficulty, and resolution of inputs (Wang et al., 2018; Veit and Belongie, 2018), indicating dynamic modularity. Hence, we hypothesize that combining rehearsal-based methods with dynamic sparsity and modularity could help further mitigate catastrophic forgetting in a more biologically plausible fashion while adhering to GCL desiderata. To this end, we propose Dynamic Modularity and Sparsity (_Dynamos_), a general continual learning algorithm that combines rehearsal-based methods with dynamic modularity and sparsity. Concretely, we seek to achieve three objectives: dynamic and sparse response to inputs with specialized modules, competent performance, and reducing catastrophic forgetting. To achieve dynamic and sparse responses to inputs, we define multiple agents in our DNN, each responsible for dynamically zeroing out filter activations of a convolutional layer based on the input to that layer. The agents are rewarded for choosing actions that remove activations (sparse responses) if the network predictions are accurate, but are penalized heavily for choosing actions that lead to inaccurate predictions. Agents also rely on prototype losses to learn specialized features. To reduce forgetting and achieve competent performance, we maintain a constant-size memory buffer in which we store previously seen examples. The network is retrained on previous examples alongside current examples to both maintain performance on current and previous tasks, as well as to enforce consistency between current and previous responses to stimuli. _Dynamos_ demonstrates competent performance on multiple continual learning datasets under multiple evaluation protocols, including general continual learning. Additionally, our method demonstrates similar and overlapping responses for similar inputs and disparate responses for dissimilar inputs. Finally, we demonstrate that our method can simulate the trial-to-trial variability observed in humans (Faisal et al., 2008; Werner and Mountcastle, 1963). 
## 2 Related Work Research in deep learning has approached the dynamic compositionality and sparsity observed in the human brain through dynamic neural networks, where different subsets of neurons or different subnetworks are activated for different stimuli (Bengio et al., 2015; Bolukbasi et al., 2017). This can be achieved through early exiting (Hu et al., 2020), dynamic routing through mixtures of experts or multiple branches (Collier et al., 2020; Wang et al., 2022), and through gating of modules (Wang et al., 2018). Early-exiting might force the DNN to learn specific features in its earlier layers and consequently hurt performance (Wu et al., 2018) as the earlier layers of DNNs are known to learn general purpose features (Yosinski et al., 2014). Dynamic routing, on the other hand, would require the growth of new experts in response to new tasks that risk unconstrained growth, or the initialization of a larger DNN with branches corresponding to the expected number of tasks (Chen et al., 2020). Dynamic networks with gating mechanisms, meanwhile, have been shown to achieve competent performance in i.i.d. training with standard DNNs embedded with small gating networks (Veit and Belongie, 2018; Wu et al., 2018; Wang et al., 2018). These gating networks emit a discrete keep/drop decision for each module, depending on the input to the module or the DNN. As this operation is non-differentiable, a Gumbel Softmax approximation (Veit and Belongie, 2018; Wang et al., 2018), or an agent trained with policy gradients (Wu et al., 2018; Sutton and Barto, 2018) is commonly used in each module to enable backpropagation. However, unlike the latter, the Gumbel-Softmax approximation induces an asymmetry between the forward pass activations at inference and training (Wang et al., 2018). Furthermore, these methods are not applicable to continual learning. Recent works have attempted to build dynamic networks for continual learning setups (Chen et al., 2020; Abati et al., 2020), where data arrive in a more realistic sequential manner. InstAParam (Chen et al., 2020), Random Path Selection (RPS) (Rajasegaran et al., 2019), and MoE (Collier et al., 2020) start with multiple parallel blocks at each layer, finding input-specific or task-specific paths within this large network. Nevertheless, this requires knowledge of the number of tasks to be learned ahead of training. More importantly, initializing a large network might be unnecessary as indicated by the competent performance of dynamic networks with gating mechanisms in i.i.d training. In contrast to this, MNTDP (Veniat et al., 2021), LMC (Ostapenko et al., 2021), and CCGN (Abati et al., 2020) start with a standard architecture and grow units to respond to new data or tasks. Of these, MNTDP and LMC develop task-specific networks where all inputs from the same task elicit the same response and therefore do not show a truly dynamic response to stimuli. CCGN, however, composes convolutional filters dynamically to respond to stimuli, using a task-specific vector for every convolutional filter, and task boundaries to freeze frequently active filters. However, this leads to unrestrained growth and fails in the absence of task-boundaries, which makes it unsuitable for general continual learning. Therefore, we propose a general continual learning method with dynamic modularity and sparsity (Dynamos) induced through reinforcement learning agents trained with policy gradients. 
## 3 Methodology Humans learn continually from inherently non-stationary and sequential observations of the world without catastrophic forgetting, even without supervision about tasks to be performed or the arrival of new tasks, maintaining a bounded memory throughout. This involves, among other things, making multi-scale associations between current and previous observations (Goyal and Bengio, 2020) and responding sparsely and dynamically to stimuli (Graham and Field, 2006). The former concerns consolidation of previous experiences and ensuring that learned experiences evoke a similar response. The latter concern dynamically composing a subset of the specialized neural modules available to respond to stimuli, reusing only the relevant previously learned information. This also avoids erasure of information irrelevant to current stimuli but relevant to previous experiences. We now formulate an approach for dynamic sparse and modular general continual learning that mimics these procedures with DNNs. ### Dynamic, Modular, and Sparse response to stimuli To achieve a dynamic, modular, and sparse response to inputs, we use a DNN \(F\) with a policy to compose a subset of the available modules in each layer to respond to the input to that layer. More specifically, we use a CNN which is incentivized to drop some channels in its activations adaptively using policy gradients (Sutton and Barto, 2018; Williams, 1992). Let us consider the \(l^{\text{th}}\) convolutional layer with \(c_{l}\) output channels \(\forall l\in\{1,2,...L\}\), where \(L\) is the total number of convolutional layers in the network. The input to the convolutional layer is processed using an agent module with actions \(a_{l}\in\{0,1\}^{c_{l}}\) as output, where each action represents the decision to drop (action = 0) or keep (action = 1) the corresponding channel of the output of the convolutional layer. The agent module uses a self-attention network to obtain a channel-wise attention vector \(v_{l}\) of dimension \(c_{l}\), which is converted into "action probabilities" using a probability layer. The policy for choosing actions is then sampled from a \(c_{l}\)-dimensional Bernoulli distribution; \[\begin{split} p_{l}&=\sigma(v_{l})\\ \pi_{l}(a_{l})&=\prod_{i=1}^{c_{l}}p_{l,i}^{a_{l,i}} (1-p_{l,i})^{(1-a_{l,i})},\end{split} \tag{1}\] where \(p_{l}\in(0,1)^{c_{l}}\) is the output of the probability layer \(\sigma\), and \(\pi_{l}\) is the policy function. The final output of the convolutional layer is the channel-wise product of the actions with the output of the convolution. This policy formulation is used at each convolutional layer in the CNN, leading to \(L\) agents in total. The overall structure of an agent for a convolutional layer is shown in Figure 1. These agents are rewarded for dropping channels while making accurate predictions through a reward function. For an input to the DNN \(X\) applied to classification with label \(Y\): \[\begin{split} Z,V&=F(X)\text{, }V=[v_{1}|v_{2},...|v_{L}]\\ \hat{Y}&=\arg\max Z,\end{split} \tag{2}\] where \(Z\) refers to the logits. Now, the ratio of activations or channels that were retained in the layer \(l\) is determined by \(\frac{1}{c_{l}}\sum_{i=1}^{c_{l}}a_{l,i}\). 
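The gating mechanism just described can be sketched in PyTorch as follows (our own illustrative reading of Eqs. (1)-(2) and the agent in Fig. 1; the hidden width and all names are arbitrary). The sampled actions mask the convolutional output channel-wise, and the log-policy term is kept for the REINFORCE loss discussed next.

```python
import torch
import torch.nn as nn

class ChannelGateAgent(nn.Module):
    """One Dynamos-style agent for a convolutional layer with c_l output channels."""
    def __init__(self, in_channels, c_l, hidden=16):
        super().__init__()
        self.match = nn.Conv2d(in_channels, c_l, kernel_size=1)   # pointwise conv
        self.mlp = nn.Sequential(nn.Linear(c_l, hidden), nn.ReLU(),
                                 nn.Linear(hidden, c_l), nn.Sigmoid())

    def forward(self, x):
        u = self.match(x).mean(dim=(2, 3))       # global average pooling -> (B, c_l)
        v = self.mlp(u) * u                      # channel-wise self-attention vector v_l
        p = torch.sigmoid(v)                     # action probabilities, Eq. (1)
        a = torch.bernoulli(p)                   # sampled keep/drop actions a_l
        log_pi = (a * torch.log(p + 1e-8)
                  + (1 - a) * torch.log(1 - p + 1e-8)).sum(dim=1)
        return a, v, log_pi

# usage inside a layer: y = conv(x) * a.unsqueeze(-1).unsqueeze(-1)
```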
For a target activation retention rate per layer, or "keep ratio" \(kr\), the reward function is as follows: \[R_{l}(X,Y)=\begin{cases}-(kr-\frac{1}{c_{l}}\sum_{i=1}^{c_{l}}a_{l,i})^{2},& \text{if }\hat{Y}=Y\\ -\lambda(kr-\frac{1}{c_{l}}\sum_{i=1}^{c_{l}}a_{l,i})^{2},&\text{ otherwise.}\end{cases} \tag{3}\] Therefore, when the DNN's predictions are correct, each agent is rewarded for dropping enough activations to match the "keep ratio" of its corresponding convolutional layer. However, when the prediction is incorrect, each agent is penalized for the same, scaled by a constant penalty factor \(\lambda\). The global nature of the reward function, achieved through dependence on the correctness of the prediction, also enforces coordination between agents. Following REINFORCE (Williams, 1992), the loss from all agents \(l=1,2,...,L\) is: \[\begin{split}L_{R}(X,Y)&=\mathbb{E}_{l}\mathbb{E}_{\pi}[-R_{l}(X,Y)\log\pi_{l}(a_{l})]\\ &=\mathbb{E}_{l}\mathbb{E}_{\pi}\Big{[}-R_{l}(X,Y)\log\prod_{i=1}^{c_{l}}\big{[}p_{l,i}a_{l,i}+(1-p_{l,i})(1-a_{l,i})\big{]}\Big{]}\\ &=\mathbb{E}_{l}\mathbb{E}_{\pi}\Big{[}-R_{l}(X,Y)\sum_{i=1}^{c_{l}}\log\big{[}p_{l,i}a_{l,i}+(1-p_{l,i})(1-a_{l,i})\big{]}\Big{]}.\end{split} \tag{4}\] Although the agents along with this loss ensure sparse and dynamic responses from the DNN, they do not explicitly impose any specialization of compositional neural modules seen in humans. As the channel-wise "modules" activated in the DNN are directly dependent on the channel-wise attention vectors, we finally apply a specialization loss, which we call the prototype loss, to them. Concretely, for classification, in any batch of inputs, we pull the vectors belonging to the same class together while pushing those from different classes away. This causes different subsets of channel-wise modules to be used for inputs of different classes. When combined with a sufficiently high "keep ratio", this encourages overlap and, therefore, reuse of relevant previously learned information (for example, reusing channels corresponding to a learned class for a newly observed class) and, consequently, learning of general-purpose features by the modules. For an input batch \(X\) with corresponding labels \(Y\), and the corresponding batch of concatenated channel-wise attention vectors \(V\) (Equation 2), the prototype loss is given by: \[L_{P}(X,Y)=\frac{1+\Sigma_{(V_{1},V_{2})\in V^{2}:Y_{1}=Y_{2}}MSE(V_{1},V_{2}) }{1+\Sigma_{(V_{1},V_{2})\in V^{2},Y_{1}\neq Y_{2}}MSE(V_{1},V_{2})}, \tag{5}\] where \(MSE\) refers to the Mean Squared Error estimator. Note that we only apply this loss to samples for which the predictions were correct. ### Multi-Scale associations As discussed earlier, one of the mechanisms employed by humans to mitigate forgetting is multi-scale associations between current and previous experiences. With this goal in mind, we follow recent rehearsal-based approaches (Buzzega et al., 2020; Riemer et al., 2019) that comply with GCL and use a memory buffer during training to store previously seen examples and responses. The buffer is updated using reservoir sampling (Vitter, 1985), which helps to approximate the distribution of the samples seen so far (Isele and Cosgun, 2018). However, we only consider the subset of batch samples on which the prediction was made correctly for addition to the memory buffer.
Figure 1: An overview of _Dynamos_' dynamic and sparse response mechanism at the \(l^{\text{th}}\) convolutional layer. Blacked-out activations are removed. The agent (bottom path) self-attention network uses a pointwise convolution to match output channels and global average pooling to get a channel-length flattened vector. This is sent through an MLP with one hidden layer and Sigmoid activation, and multiplied with the original channel-length representation to get the channel-wise self-attention vector.
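The objectives in Equations 3-5 are short enough to sketch directly. The PyTorch-style functions below are a hedged illustration: the tensor shapes, helper names, and the exact reduction over layers and batch are our assumptions, not the released implementation.

```python
import torch

def agent_reward(actions, keep_ratio, correct, penalty):
    """Per-layer reward of Eq. 3. actions: (B, c_l) sampled keep/drop decisions;
    correct: (B,) boolean prediction-correctness; penalty: the factor lambda."""
    kept = actions.float().mean(dim=1)               # fraction of channels kept per sample
    r = -(keep_ratio - kept) ** 2
    return torch.where(correct, r, penalty * r)      # scaled penalty when the prediction is wrong

def reinforce_loss(rewards, log_pis):
    """REINFORCE loss of Eq. 4, averaged over agents and the batch.
    rewards / log_pis: lists with one (B,) tensor per agent (layer)."""
    per_layer = [-(r.detach() * lp).mean() for r, lp in zip(rewards, log_pis)]
    return torch.stack(per_layer).mean()

def prototype_loss(V, Y):
    """Prototype loss of Eq. 5 on concatenated attention vectors V (B, d) with labels Y (B,)."""
    mse = torch.cdist(V, V).pow(2) / V.shape[1]                   # pairwise MSE between vectors
    off_diag = ~torch.eye(len(Y), dtype=torch.bool, device=Y.device)
    same = (Y[:, None] == Y[None, :]) & off_diag
    diff = (Y[:, None] != Y[None, :]) & off_diag
    return (1 + mse[same].sum()) / (1 + mse[diff].sum())
```

Note that the reward is detached, so gradients reach the agents only through \(\log\pi_{l}\), as in standard REINFORCE.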
These buffer samples are replayed through the DNN alongside new samples with losses that associate the current response with the stored previous response, resulting in consistent responses over time. Let \(M\) denote the memory buffer and \(D_{T}\) denote the current task stream, from which we sample batches \((X_{M},Y_{M},Z_{M},V_{M})\) and \((X_{B},Y_{B})\), respectively. Here, \(Z_{M}\) and \(V_{M}\) are the saved logits and channel-wise attention vectors corresponding to \(X_{M}\) when it was initially observed. The consistency losses associating current and previous responses are obtained during task \(T\) as follows: \[Z^{{}^{\prime}}_{M},V^{{}^{\prime}}_{M} =F(X_{M}) \tag{6}\] \[L_{C}(Z_{M},Z^{{}^{\prime}}_{M}) =\mathbb{E}_{X_{M}}[\|Z_{M}-Z^{{}^{\prime}}_{M}\|_{2}^{2}]\] \[L_{C}(V_{M},V^{{}^{\prime}}_{M}) =\mathbb{E}_{X_{M}}[\|V_{M}-V^{{}^{\prime}}_{M}\|_{2}^{2}].\] In addition to consistency losses, we also enforce accuracy, and dynamic sparsity and modularity, on the memory samples. Therefore, we have four sets of losses: * Task performance loss on current and memory samples to ensure correctness on current and previous tasks. For classification, we use cross-entropy loss (\(L_{CE}\)). * Reward losses (Equation 4) on current and memory samples to ensure dynamic modularity and sparsity on current and previous tasks. * Prototype losses (Equation 5) on current and memory samples to ensure the specialization of modules on current and previous tasks. * Consistency losses (Equation 6) for multi-scale associations between current and previous samples. Putting everything together, the total loss becomes: \[\begin{split} L_{\text{total}}&=L_{CE}(X_{B},Y_{B} )+\gamma L_{R}(X_{B},Y_{B})\\ &+\beta[L_{CE}(X_{M},Y_{M})+\gamma L_{R}(X_{M},Y_{M})]\\ &+\alpha L_{C}(Z_{M},Z^{{}^{\prime}}_{M})+\alpha_{p}L_{C}(V_{M},V ^{{}^{\prime}}_{M})\\ &+w_{p}[L_{P}(X_{B},Y_{B})+L_{P}(X_{M},Y_{M})].\end{split} \tag{7}\] The weights given to the losses (\(\alpha\), \(\alpha_{p}\), \(\beta\), \(w_{p}\), and \(\gamma\)), the penalty for misclassification (\(\lambda\)), and the keep ratio (\(kr\)) in Equation 3 are hyperparameters. Note that we employ a warm-up stage at the beginning of training, where neither the memory buffer nor the agents are employed. This is equivalent to training using only the cross-entropy loss for this period, while the agents are kept frozen. This gives the agents a better search space when they start searching for a solution. We call the method described above Dynamic modularity and sparsity (_Dynamos_). ## 4 Experiment Details Datasets. We show results on sequential variants of MNIST (LeCun et al., 1998) and SVHN (Netzer et al., 2011): Seq-MNIST and Seq-SVHN, respectively. Seq-MNIST and Seq-SVHN divide their respective datasets into 5 tasks, with 2 classes per task. Furthermore, to test the applicability of _Dynamos_ under general continual learning, we also use the MNIST-360 dataset (Buzzega et al., 2020). Architecture. We use a network based on the ResNet-18 (He et al., 2016) structure by removing the latter two of its four blocks and reducing the number of filters per convolutional layer from 64 to 32.
The initial convolution is reduced to \(3\times 3\) to work with smaller image sizes. For the baseline experiments, we did not use any agents. For our method, while agents can be used for all convolutional layers, we only use agents in the second block. We make this choice based on recent studies that observe that earlier layers undergo minimal forgetting (Davari et al., 2022), are highly transferable (Yosinski et al., 2014), and are used for most examples even when learned with dynamic modularity (Abati et al., 2020). We use a sigmoid with a temperature layer as the probability layer in the agents and a probability of 0.5 as a threshold for picking actions, i.e., channels, during inference. The temperature serves the purpose of tuning the range of outputs of the self-attention layers, ensuring that the probabilities being sampled to choose the actions are not too small and that enough activations are chosen to enable learning. The exact network structure used for each experiment, including the self-attention networks of the agents, can be found in the Appendix, in Table 3 and Table 4. Settings. All methods are implemented in the Mammoth repository1 in PyTorch 1.6 and were trained on Nvidia V100 GPUs. The hyperparameters corresponding to each experiment can be found in the Appendix, Table 5. We always maintain a keep ratio higher than \(1/Num\_tasks\) to allow the learning of overlapping, reusable, and general-purpose modules. The temperature of the Sigmoid activation of the probability layers is kept at 0.15 unless mentioned otherwise. Footnote 1: [https://github.com/aimagelab/mammoth/](https://github.com/aimagelab/mammoth/) ## 5 Results We will evaluate _Dynamos_ under two standard evaluation protocols that adhere to the core desiderata of GCL. ### Class-Incremental Learning (CIL) Class-incremental learning (CIL) refers to the evaluation protocol in which mutually exclusive sets of classes are presented sequentially to the network, and the identity of the task is not provided at test time, which meets the core desiderata of GCL (Farquhar and Gal, 2018). We compare against Conditional Convolutional Gated Network (CCGN) (Abati et al., 2020), which also dynamically composes convolutional filters for continual learning. We observe in Figure 2 that _Dynamos_ shows higher accuracies on both the Seq-MNIST and Seq-SVHN datasets under all buffer sizes. Moreover, CCGN requires a separate task vector for every task per convolutional layer, resulting in unrestricted growth during training, whereas we maintain a bounded memory throughout training. Furthermore, unlike CCGN, we do not leverage the task boundaries or the validation set during training. Therefore, _Dynamos_ outperforms the previous state-of-the-art for dynamic compositional continual learning in class-incremental learning, while showing bounded memory consumption during training.
Figure 2: Quantitative results under the Class-Incremental Learning protocol. Results are averaged across three seeds. CCGN values are taken from the original paper. The precise accuracies can be found in Table 2.
### General Continual Learning (GCL) So far, we have observed _Dynamos_ under the CIL protocol. Unlike CIL, real-world data streams lack clear task boundaries, and the same data may reappear under different distributions (e.g., different poses). Following (Buzzega et al., 2020), we approximate this setting using MNIST-360, where tasks overlap in digits (i.e., classes), reappear under different rotations (i.e., distributions), and each example is seen exactly once during training. This serves as a verification of the adherence to the GCL desiderata (Farquhar and Gal, 2018; Delange et al., 2021).
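As a brief aside on the agents' probability layer mentioned in the experiment details above, a minimal sketch of the sigmoid-with-temperature layer and the 0.5-threshold inference rule might look as follows; the class name and interface are our own assumptions.

```python
import torch
import torch.nn as nn

class TemperatureSigmoid(nn.Module):
    """Probability layer: a sigmoid with temperature (0.15 in the settings above).
    During training, actions are Bernoulli-sampled; at inference they are
    thresholded at p > 0.5, as described in the text."""
    def __init__(self, temperature: float = 0.15):
        super().__init__()
        self.temperature = temperature

    def forward(self, v: torch.Tensor, training: bool = True) -> torch.Tensor:
        p = torch.sigmoid(v / self.temperature)   # widen the range of the attention outputs
        return torch.bernoulli(p) if training else (p > 0.5).float()
```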
We study the impact of both dynamic modularity and multi-scale associations by removing them incrementally from _Dynamos_. When neither is used, the learning is done using vanilla gradient-based training, with no strategy to counter forgetting. When dynamic modularity is removed, the learning strategy forms our baseline, where no agents are used, simplifying the total training loss from Equation 7 to: \[\begin{split} L_{\text{base}}=L_{CE}(X_{B},Y_{B})+\beta L_{CE}(X_{ M},Y_{M})+\\ \alpha L_{C}(Z_{M},Z_{M}^{{}^{\prime}}).\end{split} \tag{8}\] Table 1 shows that _Dynamos_ outperforms the baseline across all buffer sizes, proving that dynamic modularity is advantageous in GCL. Furthermore, when multi-scale associations are also removed, no buffer is used, and the DNN undergoes catastrophic forgetting. Thus, _Dynamos_ is applicable to general continual learning, with dynamic modularity improving over the baseline. We hypothesize that dynamic modularity makes dealing with the blurred task boundaries of GCL easier by adaptively reusing relevant previously learned information, which in this case corresponds to learned filters. ## 6 Model Characteristics We now analyze some of the characteristics and advantages of _Dynamos_. For all experiments in this section, we use our model trained on Sequential-MNIST with buffer size 500. ### Dynamic Modularity and Compositionality Humans show modular and specialized responses to stimuli (Meunier et al., 2010) with dynamic and sparse responses to inputs (Graham and Field, 2006) - a capability that we instilled in our DNN while learning a sequence of tasks by dynamically removing channel activations of convolutional layers. Therefore, we examine the task- and class-wise tendencies of the firing rates of each neuron (filter) in Figure 3. It can be seen that _Dynamos_ learns a soft separation of both tasks and classes, as evidenced by the per-task and per-class firing rates, respectively, of each filter. This is in contrast to static methods, where all filters react to all examples. Figure 3(a) further shows that this allows learning of similar activation patterns for similar examples. For example, the MNIST digit pairs 1 and 7, and 6 and 8, which share some shape similarities, also share similarities in their activation patterns/rates. This could be attributed to being able to reuse and drop learned filters dynamically, which causes the DNN to react similarly to similar inputs, partitioning its responses based on example similarities. Additionally, the ability to dynamically reuse filters allows DNNs to learn overlapping activation patterns for dissimilar examples and classes, instead of using completely disparate activation patterns. This also facilitates the learning of sequences of tasks without having to grow the DNN capacity or having a larger capacity at initialization, as opposed to the static parameter isolation methods for continual learning. Following Abbasi et al. (2022), we quantify the overlap between the activation rates for each class pair in the final layer using the Jensen-Shannon divergence (JSD) between them in Figure 4. Lower JSDs signify higher overlap.
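As a reference for how such overlaps can be computed, the sketch below measures the JSD between the per-class activation-rate profiles of two classes. It assumes the rates are simply normalised into distributions over filters; the exact normalisation used for Figure 4 (and by Abbasi et al., 2022) may differ.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def class_pair_jsd(rates_a, rates_b):
    """JSD between the filter activation rates of two classes.
    rates_a, rates_b: arrays of length n_filters with activation rates in [0, 1]."""
    p = np.asarray(rates_a, dtype=float) / np.sum(rates_a)   # normalise to a distribution
    q = np.asarray(rates_b, dtype=float) / np.sum(rates_b)
    return jensenshannon(p, q) ** 2   # scipy returns the JS distance, i.e. sqrt of the divergence
```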
The JSD is lowest for the class pair \((1,7)\) (both digits look like vertical lines), and is \(\sim\frac{1}{15}\) of the average JSD across class pairs and \(\sim\frac{1}{42}\) of that of the least overlapping class pair \((1,8)\) (1 is a line, 8 is formed of loops). Now, as per Equation 1, filters in the layer are activated based on the channel-wise attention vectors \(v_{L}\) (see Equation 2), which are pushed together for examples of the same classes and pushed away from each other for examples of different classes using the prototype loss (Equation 5). We visualize the t-SNEs of these \(v_{L}\)s on the test set in Figure 5 and observe that the samples belonging to the same classes are clustered, confirming the effectiveness of our prototype loss. Moreover, the clusters of visually similar classes are close together, which is concomitant with the JSDs and class-wise activation rates seen earlier. Class similarities are also reflected through multiple clusters for the digit 9, indicating its similarity with the digits 6 (loop) and 1 (line) in one cluster, but also with 7 (line) and 4 (line \(+\) loop) in another cluster. Finally, we observe that there are examples that are scattered away from their class clusters and overlap with other clusters, probably indicating that these particular examples are visually closer to other digits. Note, however, that these similar examples and classes are distributed across tasks, which explains the lower similarities in activation patterns between task pairs in Figure 3(b) compared to the class pairs in Figure 3(a). Therefore, _Dynamos_ is capable of learning modular and specialized units that result in input-adaptive dynamic separation and overlap of activations, based on the extent of similarities with previously learned examples. We also contend that the overlapping activations for digits of similar shape suggest the learning of general-purpose features.
\begin{table} \begin{tabular}{c c|c c c} \hline \hline Multi-Scale & Dynamic & \multicolumn{3}{c}{Buffer Size} \\ \cline{3-5} Associations & Modularity & 100 & 200 & 500 \\ \hline ✓ & ✓ & \(\mathbf{64.418\pm 4.095}\) & \(\mathbf{79.638\pm 2.853}\) & \(\mathbf{90.519\pm 0.737}\) \\ ✓ & ✗ & \(61.192\pm 3.072\) & \(75.364\pm 1.259\) & \(88.150\pm 0.888\) \\ \hline ✗ & ✗ & \multicolumn{3}{c}{\(18.712\pm 0.690\)} \\ \hline \hline \end{tabular} \end{table} Table 1: General continual learning results for multiple buffer sizes. All results are averaged across five seeds.
Figure 4: Jensen-Shannon Divergences (\(\times 100\)) of the activation rates of class pairs on the test set.
Figure 3: Filter activation rates on the test set for each filter with respect to tasks and classes. For ease of visualization, we only look at the last 40 filters. Full visualizations can be found in the Appendix (Figure 7).
### Trial-to-trial variability The brain is known to show variability in response across trials (Faisal et al., 2008; Werner and Mountcastle, 1963). For the same stimulus, the precise neuronal response could differ between trials, a behavior absent in most conventional DNNs. In our method, this aspect of the brain can be mimicked by using Bernoulli sampling instead of thresholding to pick keep/drop decisions at each convolutional layer. In Figure 6, we plot the response variability in the last convolutional layer of our DNN with the same example in four trials. We only pick responses for which the predictions were correct. It can be seen that each trial evoked a different response from the DNN.
Furthermore, despite the differences, there are also some similarities in the response. There are some filters that are repeatedly left unused, as well as some filters that are used in every trial. This demonstrates that _Dynamos_ can additionally simulate the trial-to-trial variability observed in brains.
Figure 5: t-SNEs on the test set of class prototypes learned from channel-wise self-attention vectors for all classes.
Figure 6: Trial-to-trial variability of responses to the same input in _Dynamos_.
## 7 Conclusion and Future Work We propose _Dynamos_, a method for general continual learning that simulates the dynamically modular and sparse response to stimuli observed in the brain. Dynamos rewards the input-adaptive removal of channel activations of convolutional layers using policy gradients for dynamic and sparse responses. To further induce modularity, channel-wise self-attention vectors corresponding to each convolutional layer are pulled together for examples from the same classes and pushed apart for examples from different classes; these vectors are then used to sample the keep/drop decision for the corresponding channel. Using a memory buffer, we enforce multi-scale consistency between previous and current responses to prevent forgetting. Dynamos outperforms previous baselines on multiple datasets when evaluated using the class-incremental learning (CIL) and general continual learning (GCL) protocols. Dynamos exhibits similar and overlapping responses for similar inputs, yet distinct responses to dissimilar inputs, by utilizing subsets of learned filters in an adaptive manner. We quantified the extent of class-wise overlaps and showed that the semantic similarity of classes (digits in MNIST, e.g., 1 and 7) is reflected in higher representation overlaps. We additionally visualized the channel-wise attention vectors and observed that they are clustered by class and that the clusters of semantically similar classes lie together or overlap. Finally, we also demonstrated the ability of our method to mimic the trial-to-trial variability seen in the brain, where the same inputs achieve the same outputs through different "responses", i.e., activations. Thus, we consider our work a step toward achieving dynamically modular and general-purpose continual learning.
2310.09623
A Digital Language Coherence Marker for Monitoring Dementia
The use of spontaneous language to derive appropriate digital markers has become an emergent, promising and non-intrusive method to diagnose and monitor dementia. Here we propose methods to capture language coherence as a cost-effective, human-interpretable digital marker for monitoring cognitive changes in people with dementia. We introduce a novel task to learn the temporal logical consistency of utterances in short transcribed narratives and investigate a range of neural approaches. We compare such language coherence patterns between people with dementia and healthy controls and conduct a longitudinal evaluation against three clinical bio-markers to investigate the reliability of our proposed digital coherence marker. The coherence marker shows a significant difference between people with mild cognitive impairment, those with Alzheimer's Disease and healthy controls. Moreover our analysis shows high association between the coherence marker and the clinical bio-markers as well as generalisability potential to other related conditions.
Dimitris Gkoumas, Adam Tsakalidis, Maria Liakata
2023-10-14T17:10:19Z
http://arxiv.org/abs/2310.09623v1
# A Digital Language Coherence Marker for Monitoring Dementia ###### Abstract The use of spontaneous language to derive appropriate digital markers has become an emergent, promising and non-intrusive method to diagnose and monitor dementia. Here we propose methods to capture language coherence as a cost-effective, human-interpretable digital marker for monitoring cognitive changes in people with dementia. We introduce a novel task to learn the temporal logical consistency of utterances in short transcribed narratives and investigate a range of neural approaches. We compare such language coherence patterns between people with dementia and healthy controls and conduct a longitudinal evaluation against three clinical bio-markers to investigate the reliability of our proposed digital coherence marker. The coherence marker shows a significant difference between people with mild cognitive impairment, those with Alzheimer's Disease and healthy controls. Moreover our analysis shows high association between the coherence marker and the clinical bio-markers as well as generalisability potential to other related conditions. ## 1 Introduction Dementia includes a family of neurodegenerative conditions that affect cognitive functions of adults. Early detection of cognitive decline could help manage underlying conditions and allow better quality of life. Many aspects of cognitive disorders manifest in the way speech is produced and in what is said Forbes-McKay and Venneri (2005); Voleti et al. (2019). Previous studies showed that dementia is often associated with thought disorders relating to an inability to produce and sustain coherent communication McKhann (1987); Hoffman et al. (2020). Language coherence is a complex multi-faceted concept which has been defined in different ways and to which several factors contribute Redeker (2000). High-quality communication is logically consistent, topically coherent, and pragmatically reasonable Wang et al. (2020). Fig. 1 illustrates two snapshots from people with dementia and healthy controls in the Pitt Corpus Becker et al. (1994), containing subjects' descriptions of the Cookie Theft Picture (CTP, Appx. A) from the Boston Diagnostic Aphasia Examination Goodglass et al. (2001). As shown in Fig. 1, dementia subjects present more disruptions in the logical consistency of their CTP narratives than healthy controls. For example, the pair of semantically unrelated utterances \(\{S_{1},S_{2}\}\) is logically consistent and descriptive. By contrast, even though \(\{S_{3},S_{4}\}\) are semantically related, the pair is logically inconsistent since the latter utterance disrupts the description of the CTP. Here we focus on learning coherence as the logical-thematic consistency of utterances in narratives, rather than the semantic relatedness of entities across sentences, to capture _disruptive_ utterances, such as _flight of ideas_ and _discourse elaborations_.
Figure 1: Snapshots from healthy controls and people with dementia describing the Cookie Theft Picture. Green frames indicate logically consistent utterances and red frames disruptive ones (e.g., elaborations or ‘flight of ideas’).
In particular, disorganized speech is a symptom of dementia and can be caused by damage to the brain that occurs with the disease (Botha and Josephs, 2019). The use of computational linguistics and natural language processing (NLP) to screen and monitor dementia progression has become an emergent and promising field (Fraser et al., 2016; Konig et al., 2018). However, recent work used language to distinguish people with Alzheimer's Disease (AD) from healthy controls, neglecting the longitudinal and fine-grained aspects of subjects' language impairments (Luz et al., 2020, 2021; Nasreen et al., 2021). Here, we address this limitation by first learning the logical-thematic coherence of adjacent utterances in narratives, and then investigating the connection between longitudinal changes in language coherence and cognitive status. Recent work for coherence in text has exploited deep (Cui et al., 2017; Feng and Mostow, 2021), discriminative (Xu et al., 2019), and generative (Laban et al., 2021) neural models for three evaluation tasks namely: a) the shuffle task (i.e., to discriminate genuine from randomly shuffled text), b) sentence ordering (i.e., to produce the correct order of sentences in a text), and c) insertion (i.e., to predict the position of a missing sentence in a text). However these tasks are prone to learning the shuffle-ness of a text rather than its actual coherence (Laban et al., 2021). By contrast, our motivation is to learn the logical consistency of adjacent utterances in narratives to capture fine-grained coherence impairments (Fig. 1) rather than semantic relatedness or the global aspects of utterances' order. In this paper we make the following contributions: * We define the new task of learning logical thematic coherence scores on the basis of the logical-thematic consistency of adjacent utterances (Sec. 3.1). We train on narratives from healthy controls in the DementiaBank Pitt Corpus (Becker et al., 1994), hypothesising that controls produce a logically consistent order of utterances. We investigate a range of state-of-the-art (SOTA) neural approaches and obtain models in three different settings: a) fine-tuning transformer-based models, b) fully training discriminative models, and c) zero-shot learning with transformer-based generative models (Sec. 3.3). Our experiments show that a fine-tuned transformer model (RoBERTa) achieves the highest discrimination between adjacent and non-adjacent utterances within a healthy cohort (Sec. 4.1.1). * We introduce a human-interpretable digital coherence marker for dementia screening and monitoring from longitudinal language data. We first obtain logical thematic coherence scores of adjacent utterances and then aggregate these across the entire narrative (Sec. 3.1). * We conduct a comprehensive longitudinal analysis to investigate how the digital coherence marker differs across healthy and dementia cohorts. The resulting digital coherence marker yields significant discrimination across healthy controls, people with mild cognitive impairment (MCI), and people with AD (Sec. 4.2.1). * We compare our digital coherence marker against one based on semantic similarity, showing superior performance of the former in both distinguishing across cohorts (Sec. 4.2.1) and in detecting human-annotated disruptive utterances (Sec. 4.2.2). * We evaluate our logical thematic coherence marker against three clinical bio-markers for cognitive impairment, showing high association and generalisability potential (Sec. 4.2.3). 
## 2 Related Work **NLP and dementia:** Early NLP work for dementia detection analysed aspects of language such as lexical, grammatical, and semantic features (Ahmed et al., 2013; Orimaye et al., 2017; Kave and Dassa, 2018), and studied para-linguistic features (Gayraud et al., 2011; Lopez-de Ipina et al., 2013; Pistono et al., 2019). Recent work in this area has made use of manually engineered features (Luz et al., 2020, 2021; Nasreen et al., 2021), disfluency features (Nasreen et al., 2021; Rohanian et al., 2021), or acoustic embeddings (Yuan et al., 2020; Shor et al., 2020; Pan et al., 2021; Zhu et al., 2021). Closer to the current study, Abdalla et al. (2018) investigated discourse structure in people with AD by analyzing discourse relations. All such previous work has focused on differentiating across cohorts at fixed points in time without considering language changes over time. **Coherence modeling:** The association between neuropsychological testing batteries and language has led researchers to exploit linguistic features and naive approaches for capturing coherence in spontaneous speech to predict the presence of a broad spectrum of cognitive and thought disorders [16, 17, 18]. Other work for coherence in text focused on feature engineering to implement some of the intuitions of Centering Theory [1, 19, 12, 13]. Despite their success, existing models either capture semantic relatedness or entity transition patterns across sentences rather than logical-thematic consistency. **Neural coherence:** Driven by the success of deep neural networks, researchers exploited distributed sentence [14], discriminative [15], and BERT-based [16] models by evaluating coherence mostly on the shuffle task (refer to Sec. 1 for more details). Recent work has shown that a zero-shot setting in generative transformers can be more effective than fine-tuning BERT or RoBERTa, achieving a new SOTA performance for document coherence [1]. Here, we investigate a variety of such successful architectures to learn the temporal logical-thematic consistency of utterances in transcribed narratives. ## 3 Methodology ### Logical Thematic Coherence Let us denote a collection \(C\) of \(N\) transcribed narratives from healthy controls, i.e., \(C=\{d_{k}\}_{k=1}^{N}\), where each narrative consists of a sequence of utterances \(\{u_{i}\}\). The logical thematic coherence task consists in learning scores for adjacent pairs of utterances \((u_{i},u_{i+1})\) in the healthy controls, so that these are higher than for corresponding non-adjacent pairs of utterances \((u_{i},u_{j})\) in a narrative, where \(u_{j}\) is any forward utterance following the adjacent pair [16]. To monitor changes in cognition over time, we define a digital language coherence marker by computing the logical thematic coherence scores of adjacent utterances in people with dementia and controls in a test set and aggregating these over the entire narrative. To obtain comparisons across cohorts, we calculate longitudinal changes in the coherence marker from the last to the first narrative and between adjacent narratives of each subject over the study. To assess the reliability of the coherence marker, we compute changes in the coherence marker and in widely used clinical markers from the end to the beginning of the study. ### Data We have conducted experiments and trained coherence models on the DementiaBank Pitt Corpus [1], where subjects are asked to describe the Cookie Theft picture [1] up to 5 times across a longitudinal study (see Appx.
B for more details about the Pitt Corpus). **Coherent pairs:** We have learnt the temporal logical-thematic coherence of adjacent utterances from the healthy cohort, consisting of 99 people with a total amount of 243 narratives. **Incoherent pairs:** We use logically inconsistent utterance ordering by choosing utterances following an adjacent pair, from the same narrative so as to avoid learning cues unrelated to coherence due to potential differences in language style [15, 16]. While the level of coherence of controls may vary, we hypothesise that adjacent sentences by healthy controls will be more coherent than the negative instances, i.e. non-adjacent pairs from the same narrative. Table 1 summarizes the overall amount of utterances after splitting the healthy population into 80%, 10%, and 10% for training, validation, and testing. To evaluate the ability of the digital language coherence marker to differentiate across cohorts and its reliability against the clinical bio-markers, we filtered people with dementia who have at least two narratives across the longitudinal study. This resulted in 62 people with AD and 14 people with MCI, with a total of 148 and 42 narratives respectively. We also included healthy controls, a total of 19 people with a total of 25 narratives. \begin{table} \begin{tabular}{l c c c} \hline \hline **Utterances** & **Training** & **Validation** & **Testing** \\ \hline \# Coherent & 2,178 & 223 & 233 \\ \hline \# Incoherent & 16,181 & 1,401 & 1,417 \\ \hline \hline \end{tabular} \end{table} Table 1: Amount of coherent and incoherent utterances for learning logical thematic coherence from the healthy cohort. ### Coherence Models **Baseline Digital Marker:** We use Incoherence Model (Iter et al., 2018), which scores adjacent pairs of utterances in a narrative based on the cosine similarities of their sentence embeddings (Reimers and Gurevych, 2019). We consider three main neural architectures, known to achieve SOTA performance on document coherence, to learn logical thematic coherence: A) fine-tuning transformer-based models, B) fully training discriminative models, and C) zero-shot learning with generative models. Transformer-based Models:We fine-tune pre-trained transformers by maximising the probability that the second utterance in a pair follows the first (see Fig. 3 (A) in Appx. C). The model's input is a sequence of tokens in the form of \([CLS]+Utterance_{1}+[SEP]+Utterance_{2}\), where \((Utterance_{1},Utterance_{2})\) is a pair of either coherent of incoherent utterances in a narrative (see Sec. 3.2), \([SEP]\) is an utterance separator token, and \([CLS]\) is a pair-level token, used for computing the coherence score. We append to the transformer module a feed-forward neural network (FFNN) followed by a sigmoid function where the coherence score \(f\) is the sigmoid function of FNNN that scales the output between 0 and 1. We fine-tune the models with a standard binary cross-entropy loss function (i.e., BCELoss), setting the output of the model to 1 for coherent and 0 for incoherent pairs of utterances. We have experimented with the following variants: a) BERT-base (Lee and Toutanova, 2018) since it has been pre-trained on the Next Sentence Prediction (NSP) task which is similar to the task of scoring the coherence of adjacent utterances. b) RoBERTa-base (Liu et al., 2019), which has been pre-trained without the NSP task. 
c) a Convolutional Neural Network baseline (Cui et al., 2017) which uses pre-trained word embeddings extracted by BERT-base (refer to Appx. C for a detailed description). Discriminative Models: We have trained discriminative models by maximizing the probability of an utterance pair being coherent. We have experimented with an architecture previously shown effective in coherence modelling for both speech (Patil et al., 2020) and text (Xu et al., 2019). The model receives a pair of utterances and a sentence encoder maps the utterances to real-valued vectors \(U_{1}\) and \(U_{2}\) (see Fig. 3 (B) in Appx. C). The model then computes the concatenation of the two encoded utterances, as follows: \[\textit{concat}[U_{1},U_{2},U_{1}-U_{2},U_{1}*U_{2},|U_{1}-U_{2}|] \tag{1}\] where \(U_{1}-U_{2}\) is the element-wise difference, \(U_{1}*U_{2}\) is the element-wise product, and \(|U_{1}-U_{2}|\) is the absolute value of the element-wise difference between the two encoded utterances. The choice to represent the difference between utterances in the form of Eq. 1 was introduced by Xu et al. (2019) as a high-level statistical function that could capture local-level interaction between utterances, and we make the same assumption. Finally, the concatenated feature representation is fed to a one-layer MLP to output the coherence score \(f\). We have trained the model in bi-directional mode with inputs \((U_{1},U_{2})\) and \((U_{2},U_{1})\) for the forward and backward operations and used a margin loss as follows: \[L(f^{+},f^{-})=max(0,n-f^{+}+f^{-}) \tag{2}\] where \(f^{+}\) is the coherence score of a coherent pair of utterances, \(f^{-}\) the score of an incoherent pair, and \(n\) the margin hyperparameter. The model can work with any pre-trained sentence encoder. Here, we experiment with two variants: a) pre-trained sentence embeddings from SentenceBERT (Reimers and Gurevych, 2019) (**DCM-sent**), and b) averaged pre-trained word embeddings extracted from BERT-base (Lee and Toutanova, 2018) (**DCM-word**). Generative Models: We experiment with a zero-shot setting for generative transformers, an approach that previously achieved the best out-of-the-box performance for document coherence (Laban et al., 2021). We provide a pair of utterances to a generative transformer and compute the perplexity of the sequence of words for each pair (refer to Appx. C for a detailed description). Perplexity is defined as the exponentiated average negative log-likelihood of the sequence of words within a pair \(P\) as follows: \[PPL(P)=exp\Big{\{}-\frac{1}{t}\sum_{i}^{t}\log p(w_{i}|w_{<i})\Big{\}}, \tag{3}\] where \(p(w_{i}|w_{<i})\) is the likelihood of the \(i^{th}\) word given the preceding words \(w_{<i}\) within a pair of utterances. Finally, we approximate the coherence score \(f\) as follows: \[f=1-PPL(P). \tag{4}\] We use \(1-PPL\) rather than \(PPL\) since low perplexity indicates that a pair is likely to occur, but we need high coherence scores for sequential pairs. We have experimented with two SOTA generative transformers, of different sizes and architecture: a) **GPT2**, a decoder transformer-based model (Radford et al., 2019), and b) **T5**, an encoder-decoder transformer-based model (Raffel et al., 2020). In the end we also pre-train T5-base, i.e., **T5-base\({}_{pre}\)**. In particular, we feed sequential pairs of utterances and consider the loss on the second sequential sentence within the pair, just like sequence-to-sequence models. For testing, we extract coherence scores according to Eq. 4 for coherent and incoherent pairs.
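To make the pair construction (Sec. 3.2) and the zero-shot scoring above concrete, here is a minimal sketch. The pairing function and the use of GPT-2 through the HuggingFace transformers library are illustrative choices on our part; the exact preprocessing used in the experiments may differ.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def make_pairs(narrative):
    """Coherent pairs are adjacent utterances (u_i, u_{i+1}); incoherent pairs take
    any forward utterance u_j with j > i+1 from the same narrative (Sec. 3.2)."""
    coherent, incoherent = [], []
    for i in range(len(narrative) - 1):
        coherent.append((narrative[i], narrative[i + 1]))
        for j in range(i + 2, len(narrative)):
            incoherent.append((narrative[i], narrative[j]))
    return coherent, incoherent

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def zero_shot_coherence(u1, u2):
    """Zero-shot coherence score f = 1 - PPL of the concatenated pair (Eqs. 3-4)."""
    enc = tokenizer(u1 + " " + u2, return_tensors="pt")
    out = lm(**enc, labels=enc["input_ids"])   # out.loss = mean negative log-likelihood
    return 1.0 - torch.exp(out.loss).item()    # perplexity = exp(mean NLL)
```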
For the training details of coherence models please refer to Appx. F. ### Evaluation Metrics For evaluating the temporal logical thematic coherence models, we report the average coherence score of adjacent and non-adjacent utterance pairs, denoted as \(f^{+}\) and \(f^{-}\), respectively. The higher the \(f\) score, the more coherent the pair. We also report the models' accuracy on adjacent utterances, denoted as _temporal_ accuracy, i.e., \(Acc_{temp}\), calculated as the proportion of adjacent pairs recognized as coherent out of the total number of adjacent pairs in the test corpus. In particular, a pair of adjacent utterances \(\{u_{i},u_{i+1}\}\) in the test set is perceived as coherent if its coherence score \(f_{(u_{i},u_{i+1})}\) is higher than the coherence score \(f_{(u_{i},u_{k>i+1})}\) of the corresponding non-adjacent pair of utterances as follows: \[f(u_{i},u_{i+1})=\begin{cases}1&\text{if }f_{(u_{i},u_{i+1})}>f_{(u_{i},u_{ k>i+1})}\\ 0&\text{otherwise}\end{cases} \tag{5}\] where \(1\) corresponds to a coherent and \(0\) to an incoherent pair, correspondingly. The coherence of an entire narrative is approximated by averaging the coherence scores of its adjacent utterances; the corresponding _entire_ accuracy, \(Acc_{entire}\), is calculated as the proportion of narratives recognized as coherent out of the total number of narratives in the test corpus. A narrative is perceived as coherent if the averaged scores of the adjacent utterances are higher than the averaged scores of the non-adjacent ones within the narrative. The higher the temporal and entire accuracy, the better the model. Finally, we report the absolute percentage difference in \(f\) scores between adjacent and non-adjacent utterances, denoted \(\%\Delta\) (refer to Appx. D for more details), and the averaged loss of the models. The higher and more significant the \(\%\Delta\), the better the model, while the reverse holds for the averaged loss. To investigate the reliability of the digital coherence marker, we evaluate against three different clinical bio-markers collected from people with dementia. These are the Mini-Mental State Examination (MMSE), the Clinical Dementia Rating (CDR) scale (Morris, 1997), and the Hamilton Depression Rating (HDR) scale (Williams, 1988). The lower the MMSE score, the more severe the cognitive impairment. The opposite is true of the other scores, where a higher CDR score denotes more severe cognitive impairment and higher HDR scores indicate more severe depression (for more details about the bio-markers please refer to Appx. E). ## 4 Experimental Results ### Logical Thematic Coherence Models #### 4.1.1 Quantitative Analysis Table 2 summarizes the performance of logical thematic coherence models trained on the healthy cohort. Overall, fine-tuned transformers significantly outperform discriminative and generative transformer models. All models score higher on consecutive utterance pairs than non-consecutive ones. While the absolute percentage difference of coherence scores between sequential and non-sequential pairs of utterances is higher for the discriminative models, \(\%\Delta\) has a higher significance for the transformer-based models. BERT and RoBERTa are the best performing models, achieving a significantly high entire accuracy (100%), meaning that the model is able to predict all the narratives in the healthy population as being coherent, in line with our hypothesis.
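For reference, the pairwise decision of Equation 5 and the narrative-level aggregation used for the digital coherence marker reduce to a few lines; the function names below are our own.

```python
import numpy as np

def temporal_accuracy(adj_scores, nonadj_scores):
    """Eq. 5: an adjacent pair counts as coherent when its score beats that of the
    corresponding non-adjacent pair sharing the same first utterance."""
    return float(np.mean([fa > fn for fa, fn in zip(adj_scores, nonadj_scores)]))

def narrative_coherence_marker(adj_scores):
    """Digital language coherence marker of one narrative: the average coherence
    score over its adjacent utterance pairs."""
    return float(np.mean(adj_scores))
```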
RoBERTa yielded an increased logical thematic coherence accuracy of 81.4% compared to 75.4% for BERT. Despite the original BERT being trained with two objectives, one of which is Next Sentence Prediction (NSP), an indirect signal for the coherence of adjacent utterances, RoBERTa, trained without the NSP objective, outperformed BERT. Presumably, RoBERTa outperforms BERT since the former was trained on a much larger dataset and with a more effective training procedure. Moreover, the simple CNN baseline, while performing worse than BERT and RoBERTa, still outperforms the discriminative and generative models, which shows the effectiveness of fine-tuning. The discriminative models perform better when using pre-trained embeddings from BERT rather than pre-trained sentence embeddings. Our experiments show that discriminative models are outperformed by transformers when modelling thematic logical coherence in transcribed narratives. This is contrary to earlier work (Xu et al., 2019; Patil et al., 2020) where discriminative models outperformed early RNN-based models, but we note that this work did not compare against transformers. Despite Laban et al. (2021) showing that a zero-shot setting in generative transformers can be more effective than fine-tuning BERT or RoBERTa, our experiments show that this setting has the worst performance. The results did not improve even when we pre-trained the T5 model on the Pitt corpus (see T5-base\({}_{pre}\) in Table 2). We presume that large pre-trained language models may suffer from domain adaptation issues here and operate on too short a window to capture logical consistency in narratives. Future work could investigate fine-tuning or prompt-training generative transformers for this task. ### The Digital Language Coherence Marker Here, we exploited the best-performing logical thematic coherence model, i.e., RoBERTa, to obtain a digital language coherence marker for subjects across different cohorts over the longitudinal study (refer to Sec. 3.1 for more details). We first present results regarding the longitudinal discrimination ability of this marker and then show its reliability by evaluating against three clinical bio-markers. #### 4.2.1 Longitudinal Discrimination Ability We analyzed changes in the digital marker over time and across cohorts. First, we calculated the average of digital markers across the three cohorts. The column \(Marker\) in Table 3 summarizes the results. The averaged digital marker was higher in the healthy cohort than in the MCI and AD cohorts. Similarly, the averaged marker in the MCI group was higher than that in the AD group. However, the difference was significant only between the healthy and AD cohorts (\(p<0.05\)) 1. Footnote 1: We use a nonparametric test, namely the Mann-Whitney test, to measure if the distribution of a variable is different in two groups. We subsequently calculated changes in the digital marker from the end to the start of the study and across the cohorts (i.e., \(\Delta_{(end-start)}\) in Table 3). There was a significant decrease for the MCI and AD groups and a significant increase for the healthy controls (\(p<0.05\)) 2. The increase in healthy controls is presumably because subjects are able to remember and do better at the CTP description when seeing it again (Goldberg et al., 2015). Moreover, we noticed that people with MCI exhibited more substantial change than those with AD, despite the average digital coherence marker of the former being 0.597 compared to 0.567 for the latter.
Footnote 2: We use a nonparametric test, namely the Mann-Whitney test, to measure if the distribution of a variable is different in two groups. We also calculated changes in the digital marker between adjacent narratives over time and then aggregated the changes within subjects in the study.
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **Model** & **Setting** & **Avg. \(f^{+}\)** & **Avg. \(f^{-}\)** & **\(\%\Delta\)** & **Avg. \(Acc_{temp}\)** & **Avg. \(Acc_{entire}\)** & **Avg. Loss** \\ \hline CNN & Training & 0.560 & 0.475 & 18.2\({}^{\dagger}\) & 73.4\% & 92.0\% & 0.636 \\ \hline BERT-base & Fine-tuning & 0.630 & 0.422 & 49.1\({}^{\dagger}\) & 75.4\% & **100.0\%** & 0.575 \\ \hline RoBERTa-base & Fine-tuning & 0.604 & 0.353 & **71.0\({}^{\dagger}\)** & **81.4\%** & **100.0\%** & **0.554** \\ \hline \hline DCM-sent & Training & -0.034 & -1.975 & **98.2\({}^{\dagger}\)** & 63.9\% & 76.0\% & 3.64 \\ \hline DCM-word & Training & 0.282 & -1.068 & **126.4\({}^{\dagger}\)** & 69.6\% & 80.0\% & 3.84 \\ \hline \hline GPT2-base & Zero Shot & -383.8 & -384.8 & 0.3 & 50.4\% & 48.0\% & - \\ \hline GPT2-medium & Zero Shot & -313.0 & -318.5 & 1.7 & 48.9\% & 48.0\% & - \\ \hline GPT2-large & Zero Shot & -290.1 & -298.8 & -2.9 & 50.0\% & 60.0\% & - \\ \hline T5-base & Zero Shot & -0.668 & -0.751 & 11.0 & 64.8\% & 64.0\% & - \\ \hline T5-large & Zero Shot & -3.674 & -3.996 & 8.1 & 58.2\% & 60.0\% & - \\ \hline T5-base\({}_{pre}\) & Pre-train & -0.224 & -0.208 & 7.3 & 46.1\% & 40.0\% & 0.376 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of logical thematic coherence models trained on healthy controls in three different settings; A) training, B) fine-tuning, and C) zero-shot. \(f^{+}\) is the coherence score of adjacent utterances, \(f^{-}\) the coherence score of non-adjacent ones, and \(\%\Delta\) the absolute percentage difference between \(f^{+}\) and \(f^{-}\). \(\dagger\) denotes significant difference between the two coherence scores. \(Acc_{temp}\) and \(Acc_{entire}\) measure accuracy on adjacent utterances and entire narratives, respectively. Best performance is highlighted in bold.
In Table 3, we report the average change across cohorts, i.e., \(\Delta_{(long)}\). We obtain similar results as the ones taken from end to start. We finally compared the longitudinal discrimination ability of our proposed digital marker with a baseline digital marker based on the semantic relatedness of adjacent utterances (refer to Sec. 3.3). The averaged baseline marker was higher in the MCI cohort than in healthy and AD cohorts (see Table 3). Moreover, there was no significant difference across the cohorts. On the other hand, we observed similar changes (i.e., \(\Delta_{(end-start)}\) and \(\Delta_{(long)}\) in Table 3) in the baseline marker over time compared to the one proposed in this paper. However, such changes were not significant across cohorts for the baseline marker (\(p>0.05\)) 1. Footnote 1: For the definition refer to 3.4. #### 4.2.2 Evaluation on Human-Annotated Disruptive Utterances We investigated the effectiveness of the digital coherence marker in capturing disruptive utterances in narratives, and compared it with the baseline digital marker. Such disruptive utterances are annotated with the code \([+\ exec]\) in the transcripts of the Pitt corpus and constitute a significant indicator of AD speech (Abdalla et al., 2018; Voleti et al., 2019). Out of 1,621 pairs of adjacent utterances in the AD cohort, 543 (33%) are disruptive.
For the baseline marker, the average score of disruptive utterances decreased to 0.19 (STD=0.17) compared to 0.26 (STD=0.17) for non-disruptive ones, i.e., an absolute percentage difference 2 of 31%. For our proposed marker, the average score of disruptive utterances decreased to 0.41 (STD=0.09) from 0.64 (STD=0.15) for non-disruptive ones, i.e., an absolute percentage difference of 44%. The results showed that both digital markers significantly captured disruptive utterances (\(p_{t-test}<0.05\)). However, our proposed digital marker is more robust in capturing such utterances. Footnote 2: For the definition refer to 3.4. #### 4.2.3 Association with Clinical Bio-markers We investigated the reliability of the digital marker by associating its changes with different degrees of changes in cognitive status from the end to the beginning of the longitudinal study, as expressed by widely accepted cognition scales. We analyzed association patterns in the largest cohort, i.e., the AD group consisting of 62 participants. We first investigated the association between changes in the coherence marker and the Mini-Mental State Examination (MMSE) (Morris, 1997). MMSE ranges from 0 to 30. The higher the MMSE score, the higher the cognitive function (refer to Appx. E for more details about MMSE). Here, we have split the AD population into four bins on the basis of the magnitude of MMSE change. Table 4 provides details regarding bin intervals and the association between changes in the MMSE and the digital coherence marker. Overall, we observed that the digital marker decreases across the population for the different degrees of cognitive decline.
\begin{table} \begin{tabular}{l c c c} \hline \hline **Bin** & \# **Subjects** & \(\Delta\) **MMSE** & \(\Delta\) **Coherence** \\ \hline Low & 25 & [-6,2] & -0.003 (0.089) \\ \hline Minor & 17 & [-12,-7] & -0.030 (0.094) \\ \hline Moderate & 11 & [-18,-13] & -0.076 (0.095) \\ \hline Severe & 9 & [-27,-19] & -0.200 (0.104) \\ \hline \hline \end{tabular} \end{table} Table 4: Association between changes in Mini-Mental State Examination (MMSE) and the digital coherence marker in AD patients at different degrees of cognitive decline. Numbers in \([,]\) define the lower and upper values of each bin interval. Numbers in \(()\) refer to the standard deviation. \(\#\) Subjects = Population within bins. \(\Delta\) = Change from the end to the onset of the study.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{**Our digital marker**} & \multicolumn{3}{c}{**Baseline digital marker**} \\ \cline{2-7} **Cohort** & **Marker** & \(\Delta_{(end-start)}\) & \(\Delta_{(long)}\) & **Marker** & \(\Delta_{(end-start)}\) & \(\Delta_{(long)}\) \\ \hline Healthy & **0.604 (0.08)** & **0.09 (0.07)** & **0.07 (0.05)** & 0.249 (0.05) & 0.02 (0.06) & 0.01(0.06) \\ \hline MCI & 0.597 (0.09) & **-0.05 (0.09)** & **-0.05 (0.07)** & 0.262 (0.06) & -0.03 (0.07) & -0.03 (0.06) \\ \hline AD & **0.567 (0.10)** & **-0.02 (0.16)** & **-0.02 (0.11)** & 0.241 (0.07) & -0.01 (0.08) & -0.01 (0.06) \\ \hline \hline \end{tabular} \end{table} Table 3: Longitudinal discrimination ability between the proposed digital marker and a baseline based on semantic similarity. Marker: Average of coherence marker within a population. \(\Delta_{(end-start)}\): Average change of the marker from the end to the beginning of the study. \(\Delta_{(long)}\): Average change of the digital marker between adjacent narratives within subjects. Numbers in \(()\) refer to corresponding standard deviations. Numbers in bold denote significant difference between the healthy controls and dementia cohorts (see Sec. 4.2.1).
In particular, the higher the difference in MMSE, the more substantial the decrease in the digital coherence marker over the longitudinal study. For people with moderate or severe cognitive decline, the coherence decreased significantly compared to that of people with low cognitive decline (\(p<0.05\)) 1,3. Footnote 3: Here, we investigated how coherence change distributions differ across the AD population at different degrees of cognitive decline progression. Next, we investigated the association between changes in the coherence marker and the Clinical Dementia Rating (CDR) (Morris, 1997). CDR is based on a scale of 0 to 3 in assessing people with dementia. The higher the CDR, the lower the cognitive function (refer to Appx. E for more details about CDR). Here, we split the AD population into low, minor, moderate and severe bins according to the magnitude of CDR change, i.e., \(\Delta\) CDR in Table 5. The higher the CDR change, the more severe the cognitive decline over time. The digital coherence marker decreased across the population at different degrees of CDR change. In particular, the higher the increase in CDR, the greater the decrease in the digital coherence marker over the longitudinal study. Changes in the digital coherence marker are similar for people with low and minor cognitive decline. However, there is a significant decrease in coherence for the moderate and severe bins compared to the low and minor ones (\(p<0.05\)) 1,3. Finally, we investigated the generalisability potential of our proposed coherence marker in association with the Hamilton Depression Rating (HDR) (Williams, 1988). HDR can be a useful scale for assessing cognitively impaired patients who have difficulty with self-report instruments and is one of the most widely used and accepted instruments for assessing depression. It is based on a 17-item scale. The higher the HDR, the more severe the level of depression (refer to Appx. E for more details about HDR). We investigated associations between the last HDR record 4 and changes in the digital coherence marker from the end to the start of the study. Table 6 summarizes the association between HDR and changes in the digital coherence marker. Changes in coherence were similar for people with no or mild depression. However, there was a significant decrease for people with moderate depression (\(p<0.05\)) 1,3. This is in line with current studies showing that individuals experiencing difficulty constructing coherent narratives generally report low well-being and more depressive symptoms (Vanderveren et al., 2020). Footnote 4: We considered the last HDR record instead of changes in HDR over time since there were missing HDR measurements in the study. ## 5 Conclusion We have introduced a new task for modelling the logical-thematic temporal coherence of utterances in short transcribed narratives to capture disruptive turns indicative of cognitive disorders. To this end, we have investigated transformer-based, discriminative, and generative neural approaches. Our experiments show that a fine-tuned transformer model (RoBERTa) achieves the best performance in capturing the coherence of adjacent utterances in narratives from the healthy cohort.
We aggregate temporal language coherence to create a human-interpretable digital language coherence marker for longitudinal monitoring of cognitive decline. Longitudinal analysis showed that the digital marker is able to distinguish people with mild cognitive impairment, those with Alzheimer's Disease (AD) and healthy controls. A comparison with a baseline digital marker based on semantic similarity showed the superiority of our digital marker. Moreover, evaluation against three clinical bio-markers showed that language coherence can capture changes at different degrees of cognitive decline and achieves significant discrimination between people with moderate or severe cognitive decline within an AD population. It can also capture levels of depression, showing generalisability potential. In the future, we aim to integrate disfluency language patterns and develop strategies for improving the performance of generative models.
\begin{table} \begin{tabular}{l c c c} \hline \hline **Bin** & \# **Subjects** & **HDR** & \(\Delta\) **Coherence** \\ \hline No Depression & 17 & [0,7] & -0.02 (0.11) \\ \hline Mild & 18 & [8,16] & -0.01 (0.10) \\ \hline Moderate & 14 & [17,23] & -0.21 (0.10) \\ \hline \hline \end{tabular} \end{table} Table 6: Association between the last Hamilton Depression Rating (HDR) record and changes in the digital coherence marker for AD patients. Numbers in \([,]\) define the lower and upper values of each bin interval. Numbers in \(()\) refer to the standard deviation. \(\#\) Subjects = Population within bins. \(\Delta\) = Change from the end to the onset of the study.
\begin{table} \begin{tabular}{l c c c} \hline \hline **Bin** & \# **Subjects** & \(\Delta\) **CDR** & \(\Delta\) **Coherence** \\ \hline Low & 20 & [0, 0.5] & -0.009 (0.091) \\ \hline Minor & 16 & (0.5,1.5] & -0.011 (0.060) \\ \hline Moderate & 15 & (1.5,2.5] & -0.060 (0.110) \\ \hline Severe & 11 & (2.5,3] & -0.125 (0.078) \\ \hline \hline \end{tabular} \end{table} Table 5: Association between changes in Clinical Dementia Rating (CDR) and the digital coherence marker in AD patients at different degrees of cognitive decline. Numbers in \((,]\) define the lower and upper values of each bin interval. Numbers in \(()\) refer to the standard deviation. \(\#\) Subjects = Population within bins. \(\Delta\) = Change from the end to the onset of the study.
### Limitations Monitoring dementia using computational linguistics approaches is an important topic. Previous work has mostly focused on distinguishing people with AD from healthy controls rather than monitoring changes in cognitive status per individual over time. In this study, we have used the Pitt corpus, currently the largest available longitudinal dementia dataset, to investigate longitudinal changes in logical coherence and their association with participants' cognitive decline over time. An important limitation of the Pitt corpus is that the longitudinal aspect is limited, spanning up to 5 sessions/narratives per individual, with most participants contributing up to two narratives. Moreover, the number of participants is relatively small, especially for the MCI cohort. In the future, we aim to address these limitations by investigating the generalisability of the proposed digital language coherence marker on a recently introduced rich longitudinal dataset for dementia (currently under review) and on transcribed psychotherapy sessions (data is collected in Hebrew) to monitor mood disorders. In this study, we used manually transcribed data from Pitt.
In a real-world scenario, participants mostly provide speech via a speech elicitation task. This implies that the introduced method requires an automatic speech recognition (ASR) system robust to various sources of noise to be operationalized. ASR for mental health is currently underexplored, with most transcription work still being done by human transcribers. The proposed digital coherence marker may also become a less accurate means for monitoring dementia when people experience other comorbidities, such as neurodegenerative and mental illnesses, that significantly affect speech and language. Indeed, cognitive-linguistic function is a strong biomarker for neuropsychological health (Voleti et al., 2019). Finally, there is a great deal of variability to be expected in speech and language data affecting the sensitivity of the proposed digital marker. Both speech and language are impacted by speaker identity, context, background noise, spoken language, etc. Moreover, people may vary in their use of language due to various social contexts and conditions, a.k.a. style-shifting (Coupland, 2007). Both inter- and intra-speaker variability in language could affect the sensitivity of the proposed digital marker. While it is possible to tackle intra-speaker language variability, e.g., by integrating speaker-dependent information into the language representation, inter-speaker variability remains an open and challenging research question. ### Ethics Statement Our work does not involve ethical considerations around the analysis of the DementiaBank Pitt corpus as it is widely used. Ethics approval was obtained by the original research team led by James Becker, and participating individuals consented to share their data in accordance with a larger protocol administered by the Alzheimer and Related Dementias Study at the University of Pittsburgh School of Medicine (Becker et al., 1994). Access to the data is password protected and restricted to those signing an agreement. This work uses transcribed dementia data to identify changes in cognitive status considering individuals' language. Potential risks from the application of our work in being able to identify cognitive decline in individuals are akin to those arising when personal information is misused for profit without considering the impact and the social consequences in the broader community. Potential mitigation strategies include running the software on authorised servers, with encrypted data during transfer, and anonymization of data prior to analysis. Another possibility would be to perform on-device processing (e.g. on individuals' computers or other devices) for identifying changes in cognition, with the results of the analysis shared only with authorised individuals. Individuals' consent would be obtained before any of our software was run on their data. ## Acknowledgements This work was supported by a UKRI/EPSRC Turing AI Fellowship to Maria Liakata (grant EP/V030302/1), the Alan Turing Institute (grant EP/N510129/1), and Wellcome Trust MEDEA (grant 213939). Matthew Purver acknowledges financial support from the UK EPSRC via the projects Sodestream (EP/S033564/1) and ARCID-UCA (EP/W001632/1), and from the Slovenian Research Agency grant for research core funding P2-0103.
2302.09432
BBT-Fin: Comprehensive Construction of Chinese Financial Domain Pre-trained Language Model, Corpus and Benchmark
To advance Chinese financial natural language processing (NLP), we introduce BBT-FinT5, a new Chinese financial pre-training language model based on the T5 model. To support this effort, we have built BBT-FinCorpus, a large-scale financial corpus with approximately 300GB of raw text from four different sources. In general domain NLP, comprehensive benchmarks like GLUE and SuperGLUE have driven significant advancements in language model pre-training by enabling head-to-head comparisons among models. Drawing inspiration from these benchmarks, we propose BBT-CFLEB, a Chinese Financial Language understanding and generation Evaluation Benchmark, which includes six datasets covering both understanding and generation tasks. Our aim is to facilitate research in the development of NLP within the Chinese financial domain. Our model, corpus and benchmark are released at https://github.com/ssymmetry/BBT-FinCUGE-Applications. Our work belongs to the Big Bang Transformer (BBT), a large-scale pre-trained language model project.
Dakuan Lu, Hengkui Wu, Jiaqing Liang, Yipei Xu, Qianyu He, Yipeng Geng, Mengkun Han, Yingsi Xin, Yanghua Xiao
2023-02-18T22:20:37Z
http://arxiv.org/abs/2302.09432v2
BBT-Fin: Comprehensive Construction of Chinese Financial Domain Pre-trained Language Model, Corpus and Benchmark ###### Abstract To advance Chinese financial natural language processing (NLP), we introduce BBT-FinT5, a new Chinese financial pre-training language model based on the T5 model. To support this effort, we have built BBT-FinCorpus, a large-scale financial corpus with approximately 300GB of raw text from four different sources. In general domain NLP, comprehensive benchmarks like GLUE and SuperGLUE have driven significant advancements in language model pre-training by enabling head-to-head comparisons among models. Drawing inspiration from these benchmarks, we propose BBT-CFLEB, a Chinese Financial Language understanding and generation Evaluation Benchmark, which includes six datasets covering both understanding and generation tasks. Our aim is to facilitate research in the development of NLP within the Chinese financial domain. Our model, corpus and benchmark are released at [https://github.com/ssymmetry/BBT-FinCUGE-Applications](https://github.com/ssymmetry/BBT-FinCUGE-Applications). Our work belongs to the Big Bang Transformer (BBT), a large-scale pre-trained language model project. ## 1 Introduction Pre-trained language models(PLMs), such as BERT (Devlin et al., 2018) and T5 (Raffel et al., 2019), have led to great performance boosts across many NLP tasks. Despite the excellent performance of pre-trained language models (PLMs) on a large number of NLP tasks, their performance is often affected when applied to domain-specific texts that exhibit significant differences from general text in terms of word usage, syntax, and writing style (Gururangan et al., 2020; Gu et al., 2021). To address this issue, Gururangan et al. (2020) proposed that continuing to pre-train a general PLM on target domain corpora and task-relevant texts can effectively improve its performance on domain-specific tasks, while Gu et al. (2021) further suggested that pre-training domain-specific PLMs from scratch with a sufficiently large corpus can achieve even better domain-specific performance. Inspired by these studies, domain-specific pre-trained language models have emerged in some domains, such as BioBERT (Peng et al., 2019) and PubMedBERT (Gu et al., 2021) in the biomedicine field, which have been utilized for practical tasks like entity and relation extraction. We collect all existing NLP competition tasks and academic datasets related to finance on the Chinese internet and summarized them in Table 2, revealing a growing demand for NLP capabilities in finance, particularly in information extraction and sentiment analysis. To meet these demands and improve the overall level of Chinese financial NLP, several companies have already developed and released Chinese financial pre-trained language models, such as FinBERT (Hou et al., 2020) and Mengzi-BERT-base-fin (Zhang et al., 2021). However, these models are based on the BERT-base model, have a single architecture type, and a parameter count (around 110 million) that is outdated and unable to meet the increasing demand for NLP capabilities in this field. Therefore, we propose FinT5, the largest Chinese financial pre-trained language model to date, based on the advanced T5 architecture, with 220 million parameters for the base version and 1 billion for the large version. Furthermore, NLP tasks in the financial industry focus primarily on information extraction, requiring models with high entity knowledge understanding and memorization capabilities. 
Although studies have shown that pre-trained PLMs on large-scale corpora already have some entity knowledge understanding and memorization capabilities, there are still some shortcomings. To address this issue, many studies have used knowledge-enhanced pre-training methods to improve PLMs' understanding and memorization of entity knowledge. However, these methods mostly target BERT-like models and lack strategies designed for T5 models. To improve T5's performance on financial NLP tasks, we propose a concise knowledge-enhanced pre-training method based on the T5 model's text-to-text paradigm. In addition, another challenge faced by Chinese financial NLP is the lack of corpus. The scale and diversity of corpora play an essential role in language model pre-training (Xu et al., 2020; Raffel et al., 2019; Gao et al., 2020). However, existing Chinese financial corpora are small in scale, poor in diversity and not open, as can be shown in Table 1. To solve this problem, we first need to determine the text types that a qualified Chinese financial corpus needs to cover. To this end, we first collected almost all existing Chinese financial NLP tasks and summarized their text sources, as shown in the Table 2. According to the source distribution of these tasks, we have determined the range of text types we need to collect. As a result, we collect and release a large-scale Chinese financial corpus named BBT-FinCorpus with about 300 GB raw text, which consists of five different sources to enhance its diversity covering most text sources of Chinese financial NLP tasks. The widespread use of benchmark evaluations is a key driving force that has greatly improved and rapidly iterated PLMs. These evaluations use a single score to assess model performance across multiple tasks, enabling direct and comprehensive comparisons between pre-trained language models. Existing English PLMs use the general benchmark evaluations GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019), while the general benchmark evaluation for Chinese PLMs is CLUE (Xu et al., 2020). Almost all PLMs participate in these evaluations to compare their performance with other models. However, there is no publicly available benchmark for Chinese financial NLP, which makes it difficult to compare existing pre-trained language models on different task sets and hinders the rapid improvement of PLM performance in the Chinese financial domain. To address this issue and promote research in the financial domain, we propose CFLEB, the **C**hinese **F**inancial **L**anguage Understanding and Generation **E**valuation **B**enchmark, consisting of six datasets covering language understanding and generation tasks. These datasets encompass a diverse range of text genres, dataset sizes, and levels of difficulty, and more importantly, emphasize challenges that arise in real-world scenarios. Our contributions are summarized as follows: * We introduce BBT-FinT5, a state-of-the-art financial Chinese PLM with large-scale parameters and knowledge-enhanced pre-training. * We provide BBT-FinCorpus, a comprehensive and diverse financial Chinese corpus. * We propose BBT-CFLEB, a benchmark for evaluating Chinese language understanding and generation in the financial domain. ## 2 Related Work ### Domain-specific PLMs and Corpora PLMs have achieved state-of-the-art performance in many NLP tasks (Devlin et al., 2018; Raffel et al., 2019; Liu et al., 2019). 
However, when applied to domain-specific tasks, models pre-trained on general corpora often produce unsatisfactory results due to the difference in word distribution from general to specific domains (Gururangan et al., 2020; Gu et al., 2021). To better adapt a language model to a target domain, pre-training on the corpus of the target domain is proposed (Gururangan et al., 2020). For domains with abundant unlabeled text, such as biomedicine, pre-training from scratch results in substantial gains over continual pre-training of general-domain language models (Gu et al., 2021). Consequently, many domain-specific PLMs have been proposed and pre-trained on their respective corpora. In the field of financial NLP, domain-specific pre-trained language models (PLMs) have demonstrated their superiority over general-domain PLMs. For instance, Araci (2019) and Yang et al. (2020) pre-trained BERT on English finance news and communications, respectively, and outperformed competitive baselines on financial sentiment analysis tasks. In the context of Chinese financial NLP, Hou et al. (2020) pre-trained BERT on Chinese financial news, analysis reports, company announcements, and encyclopedias, and evaluated it on news classification, sentiment analysis, and named entity recognition tasks. Furthermore, Zhang et al. (2021) pre-trained the Chinese PLM Mengzi on a 20GB financial corpus and demonstrated its effectiveness on multiple downstream tasks. Table 1 summarizes the characteristics of typical PLMs and their corpora in the financial domain. It can be observed that both the scale of our model and corpus exceed existing works. ### Knowledge Enhanced Pre-training Although PLMs can acquire rich linguistic knowledge from pretraining on large-scale corpora, many studies have shown that PLMs still have shortcomings in entity knowledge understanding and memory, as the distribution of entity knowledge in unfiltered corpora is sparse and long-tailed (Yang et al., 2021). Therefore, PLMs can benefit from knowledge-enhanced pretraining methods that strengthen entity knowledge understanding and memory. For example, Ernie (Sun et al., 2019) is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. The disadvantage of this approach is that it can only help the model better learn existing entity knowledge from the corpus, without addressing the issues of sparse and long-tailed distribution of entity knowledge in the corpus. Ernie 3.0, introduced by Sun et al. (2021), incorporates the universal knowledge-text prediction (UKTP) task. This task involves a pair of triples from a knowledge graph and their corresponding sentences from an encyclopedia, where either the relation in the triple or the words in the sentence are randomly masked. In order to predict the relation in the triple, the model must identify the head and tail entities mentioned in the sentence, and determine the semantic relationship between them. The limitation of this approach is that it only masks the relation in the triple and not the entities, which can hinder the learning of entity representations. Moreover, distant supervision has a certain amount of noise, which means that the relation in the triple may not necessarily appear in the sentence (Smirnova and Cudre-Mauroux, 2018). Therefore, only masking the relation and predicting it can have a strong negative impact on the model. 
Although the above methods have made some progress, they are all designed for the BERT-like model. To our knowledge, there is currently a gap in knowledge enhancement pre-training methods available for T5-like models. ### Domain-specific NLP Benchmarks Various domain-specific NLP benchmarks have been proposed to compare the ability of different methods in modeling text from specific domains in a fair manner. The BLUE benchmark (Peng et al., 2019) evaluates the ability of models in biomedical text mining through five tasks. The BLURB benchmark (Gu et al., 2021) further focuses on clinical domains by removing two unrelated tasks and includes a wider range of biomedical applications. Despite these efforts, a comprehensive set of benchmark tasks for training, evaluating, and analyzing financial PLMs is still largely unexplored. Currently, the FLUE (Shah et al., 2022) is the only benchmark for the financial domain, consisting of five tasks specifically designed for English financial text. However, we are the first to construct a comprehensive set of benchmarks for Chinese financial text, covering a range of language understanding and generation tasks that differ from previous works. ## 3 The Corpus: BBT-FinCorpus We build FinCorpus, the biggest corpus of Chinese financial domain to get a superior pre-trained language model. Section 3.1 covers how we decided on the corpus contents. We collected, refined and sorted the corpus to finally obtain the FinCorpus, as elaborated in Section 3.3. ### Coverage Confirmation of the Corpus We believe that, since the purpose of domain pre-training is to help models better understand domain texts and perform domain tasks more effectively, it is essential to observe the text distribution of domain tasks to determine the coverage of the corpus. The domain corpus should cover the text sources of domain tasks as much as possible to enhance the model's understanding of the tasks. To this end, we first collected almost all Chinese financial NLP task datasets available on the Chinese internet in recent years, including several datasets used in this study, and their text sources, as shown in Table 2. It can be seen that the text sources of these financial NLP datasets are mainly concentrated in financial news, company announcements, research reports, and social media. For financial news, we chose the largest financial news websites on the Chinese Internet for crawling, namely Sina Finance 1, Tencent Finance 2, Phoenix Finance 3, 36Kr 4and Huxiu 5. For company announcements and research reports, we chose Eastmoney 6 for crawling. For social media, we chose the two largest financial social media platforms on the Chinese Internet, Guba 7 and Xueqiu 8, for crawling. Footnote 4: [https://36kr.com/](https://36kr.com/) Footnote 5: [https://www.huxiu.com/](https://www.huxiu.com/) Footnote 6: [https://www.eastmoney.com/](https://www.eastmoney.com/) Footnote 7: [https://guba.eastmoney.com/](https://guba.eastmoney.com/) Footnote 8: [https://xueqiu.com/](https://xueqiu.com/) ### Crawling and Filtering of the Corpus We used a proxy-based distributed crawler to crawl public web pages. We filtered the web pages using a series of rules (Raffel et al., 2019; Yuan et al., 2021). 
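The filtering rules themselves are not listed in the text; as a rough illustration of the kind of C4-style heuristics cited above (Raffel et al., 2019), the sketch below drops very short pages, obvious boilerplate, symbol-heavy pages, and exact duplicates. All thresholds, marker strings and variable names are illustrative assumptions, not the paper's actual rules.

```python
import re

def keep_page(text, min_chars=200, max_symbol_ratio=0.3,
              boilerplate_markers=("404 not found", "javascript is required", "copyright ©")):
    """Heuristic page filter in the spirit of C4-style cleaning; all thresholds are illustrative."""
    if len(text) < min_chars:                       # drop very short pages
        return False
    lowered = text.lower()
    if any(marker in lowered for marker in boilerplate_markers):
        return False                                # drop obvious boilerplate / error pages
    # drop pages dominated by non-text symbols (markup debris, tables of numbers, ...)
    symbols = len(re.findall(r"[^\w\s]", text))
    return symbols / max(len(text), 1) <= max_symbol_ratio

def deduplicate(pages):
    """Exact deduplication of page texts; near-duplicate filtering would go further."""
    seen, kept = set(), []
    for page in pages:
        key = page.strip()
        if key not in seen:
            seen.add(key)
            kept.append(page)
    return kept

raw_pages = ["某上市公司发布2022年年度报告，营业收入同比增长12%，净利润保持稳定……" * 6,
             "404 Not Found", "短文本"]
clean_pages = deduplicate(p for p in raw_pages if keep_page(p))
print(len(clean_pages))   # 1
```

In practice such rules are applied in a distributed pipeline over the crawled pages before the corpus is assembled.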
### Description of the Corpus After crawling, cleaning, and processing, we obtained the FinCorpus, a large-scale Chinese financial domain corpus that contains four types of language materials: * **Corporate announcements.** These are the announcements released by all listed companies in China over the past twenty years. The original data is in PDF format, with a total size of about 2TB. Using a PDF parser, we converted the PDF files into text files, resulting in a total size of 105GB. * **Research reports.** These are research reports issued by investment institutions such as securities firms and investment banks on macroeconomic issues, sectors, industries, \begin{table} \begin{tabular}{l l l l} \hline **PLM** & **Size** & **Corpus Size** & **Corpus Sources** \\ \hline FinBERT (Araci, 2019) & 110M & 29M words & News filtered by financial keywords \\ FinBERT (Yang et al., 2020) & 110M & 4.9B tokens & Corporate Reports, Earnings Call Transcripts, Analyst Reports \\ FinBERT (Hou et al., 2020) & 110M & 3B tokens & News, Analyse reports, Company announcements and Encyclopedias \\ Mengzi-BERT-base-fin (Zhang et al., 2021) & 110M & 20GB file & News, Analyse reports, Company announcements \\ BBT-FinTS (ours) & 220M, 1B & 80B tokens & Corporate Reports, Analyst Reports, Social media and Financial News \\ \hline \end{tabular} \end{table} Table 1: Typical financial PLMs and their corpora. \begin{table} \begin{tabular}{l l l l} \hline **Dataset** & **Text Source** & **Open State** & **Practicality** \\ \hline DuEE-fin (Han et al., 2022) & Financial news, Company & Yes & High \\ & announcement & & \\ FinRE (Li et al., 2019) & Financial news & Yes & High \\ Announcement information extraction (Tianchi, 2018) & Company announcement & Yes & High \\ Discovery of new entities in Internet finance (Datafountain, 2019) & Social media & Unspecified & Low \\ Announcement information extraction (Bienda, 2019) & Company announcement & Unspecified & High \\ Construction of financial knowledge graph (Bienda, 2020b) & Analyse report & Unspecified & Medium \\ Event causality extraction (Bienda, 2021) & Financial news & Unspecified & Low \\ Financial NL2SQL (Bienda, 2022a) & Data query sentence & Unspecified & Medium \\ Few-shot event extraction (Bienda, 2022b) & Financial news & Unspecified & Medium \\ Few-shot event extraction (Bienda, 2020a) & Financial news & Unspecified & Medium \\ FinNL (ours) & Financial news & Yes & High \\ FinNA (ours) & Financial news & Yes & High \\ FinFE (ours) & Social media & Yes & High \\ FinNSP (ours) & Social media & Yes & High \\ \hline \end{tabular} \end{table} Table 2: Chinese financial datasets we collected, with their open source status and practicality scores and individual stocks, analyzing the current status and future development trends of the research object. The original data is in PDF format, with a total size of about 1TB. After conversion, the total size of the resulting text files is about 11GB. * **Financial news.** These are the financial news articles from the past five years crawled from websites including Sina Finance, Tencent Finance, Phoenix Finance, 36Kr, and Huxiu. After cleaning, the total size of the resulting text files is about 20GB. * **Social media.** These are the posts from all stockholders and bloggers published on stock bar and Xueqiu website over the past twenty years. After cleaning, the total size of the resulting text is about 120GB. The corpus from the above five sources basically covers all types of texts in the common Chinese financial NLP. 
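As an aside on the PDF-to-text step mentioned in the corpus description above: the paper does not name the parser it used, so the snippet below is only one possible realization, using pdfminer.six and hypothetical directory paths.

```python
from pathlib import Path
from pdfminer.high_level import extract_text   # pdfminer.six

def pdf_dir_to_txt(src_dir, dst_dir):
    """Convert every announcement/report PDF in src_dir into a UTF-8 text file in dst_dir."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for pdf in Path(src_dir).glob("*.pdf"):
        text = extract_text(str(pdf)).strip()
        if text:                                  # skip scanned PDFs with no text layer
            (out / f"{pdf.stem}.txt").write_text(text, encoding="utf-8")

# pdf_dir_to_txt("announcements_pdf/", "announcements_txt/")   # hypothetical paths
```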
## 4 The Large PLM: BBT-FinT5 To enhance the performance of the Chinese financial NLP baseline and foster the growth of the open-source community in this domain, we introduce the FinT5 model. This model's architecture and pre-training tasks are consistent with the T5 (Raffel et al., 2019) model and are pre-trained on BBT-FinCorpus (refer to Section 3). We chose this model for its robust performance on many general benchmarks and compatibility with understanding and generating tasks based on the text-to-text paradigm, which facilitates transfer learning. Our experiments demonstrate that the FinT5 model significantly outperforms T5 trained on the general corpus. In this section, we first describe the architecture and pre-training task of the T5 model. Then we outline the pre-training acceleration method based on DeepSpeed, and finally introduce the knowledge enhancement pre-training method that we propose for the T5 model, which is based on triple masking. ### Pre-training Model Architecture and Task Raffel et al. (2019) model all NLP tasks in a text-to-text format which enable the use of a unified network architecture, training approach, and loss function to handle all NLP tasks, promoting transfer learning in the NLP field. Building upon this, they conducted a series of comparative experiments and chose to develop a large-scale PLM, T5, based on an encoder-decoder architecture and pre-trained using MLM. Specifically, T5 utilizes the span mask method proposed by SpanBERT (Joshi et al., 2020), randomly masking 15% contiguous spans within a sentence rather than independent tokens. ### Pre-training Acceleration We use the optimizer state parallelism and gradient parallelism implemented by DeepSpeed (Rasley et al., 2020) to accelerate the pre-training process. In particular, we found that using the BFLOAT16 (Kalamkar et al., 2019) half-precision floating-point format for optimization can effectively solve the problem of gradient overflow that occurs in the training process with FP16 half-precision floating-point format, without the need to repeatedly adjust gradient scaling coefficients and other hyperparameters. Kalamkar et al. (2019) pointed out that in the training of deep neural networks, the value range (i.e., exponent range) of the floating-point numbers used to represent each parameter in the network is more important for training stability and performance than their mantissa precision. Therefore, the BFLOAT16 format uses the same eight-bit exponent as the FP32 format to represent the same exponent range as the FP32 format, at the cost of having three fewer mantissa bits than the FP16 format. Extensive experiments have shown that this trade-off makes the BFLOAT16 format as fast and memory-efficient as the FP16 format while having training stability and performance close to that of the FP32 format. ### Knowledge Enhancement Pre-training Method Based on Triple Masking We propose a knowledge enhancement pre-training method based on triple masking (KETM). First, for each triple in the knowledge graph, we use the distant supervision algorithm to obtain sentences corresponding to it. Specifically, for a knowledge triple (head entity, relation, tail entity), if there is a sentence in the encyclopedia that contains both the head and tail entities, we consider this sentence to contain the knowledge described by this triple. Next, for a sentence and its contained triple, we concatenate the triple at the beginning of the sentence. 
For the triple part, we randomly mask one element, and for the sentence part, we randomly mask 15% of a random-length span. Finally, we input the masked triple and sentence into the model and require the model to predict the masked element, as shown in Figure 1. The model is trained to fill the masked element in the triple based on the two unmasked elements in the triple and the partially masked sentence, which helps the model better understand and memorize entity-related knowledge. Figure 1: Knowledge enhancement pre-training method based on triple masking (KETM). ## 5 The Benchmark: BBT-CFLEB In this section, we first describe the method used for selecting tasks for the benchmark. We then introduce the selected tasks and the three leaderboards, each of which is composed of different tasks. ### Task Selection We propose that for domain-specific NLP evaluation benchmarks, special attention should be paid to their practicality, especially in a commercially important field such as finance, to better reflect the model's ability in practice. Therefore, we use a practicality score to measure the practicality of the tasks we collect. Specifically, we invited financial experts to evaluate the practicality of each task and gave a low, medium, or high practicality rating, only selecting tasks with a high practicality rating as candidate tasks. In addition, we only kept tasks with a clear open-source statement as candidate tasks. Finally, we selected the six tasks for BBT-CFLEB shown in Table 2. ### Task Introduction CFLEB includes six tasks in total, consisting of two language generation tasks and four language understanding tasks. These tasks are as follows: * FinNL, a financial news classification dataset. Given financial news articles, the model needs to classify them into up to 15 possible categories, with evaluation measured by F1-Score. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles. * FinNA, a financial news summarization dataset. Given financial news articles, the model needs to generate a summary, with evaluation measured by Rouge (Lin, 2004). The training set contains 24,000 articles, the validation set contains 3,000 articles, and the test set contains 3,000 articles. * FinRE, a financial news relation extraction dataset. Given financial news articles and head-tail entity pairs, the model needs to classify the relation between entity pairs into up to 44 categories, including the null relation, with evaluation measured by F1-Score. The training set contains 7,454 articles, the validation set contains 1,489 articles, and the test set contains 3,727 articles. * FinFE, a financial social media text sentiment classification dataset. Given financial social media text, the model needs to classify the sentiment of the text into negative-neutral-positive categories, with evaluation measured by accuracy. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles. * FinQA, a financial news and announcement event question-answering dataset, derived from the DuEE-fin dataset (Han et al., 2022). Given financial news or announcement text and a question related to an event mentioned in the text, the model needs to generate an answer to the question based on the text, with evaluation measured by F1-Score. The training set contains 16,000 articles, the validation set contains 2,000 articles, and the test set contains 2,000 articles. 
* FinNSP, a financial negative news and its subject determination dataset. Given financial news or social media text and entities mentioned in the text, the model needs to determine if the text contains negative news related to any entity and identify which entity is the subject of the negative news, with evaluation measured by F1-Score. The training set contains 4,800 articles, the validation set contains 600 articles, and the test set contains 600 articles. ### Leaderboard Introduction We have organized the tasks into multiple leaderboards according to different ability requirements (Xu et al., 2020), so that researchers can observe the model's ability rankings from different perspectives. The leaderboards of FinCUGE are as follows: * Overall leaderboard: includes all six tasks. * Understanding ability leaderboard: includes four language comprehension tasks, FinNL, FinRE, FinFE, and FinNSP. * Generation ability leaderboard: includes two language generation tasks, FinNA and FinQA. ## 6 Experiments In this section, we first introduces the basic settings of the experiment, including the basic information of the PLMs involved in the comparison and the processing format of the tasks in the evaluation benchmark. Then we conduct sufficient experimental and comparative analysis to validate the effectiveness of the proposed model and method. ### Experiments Setup #### 6.1.1 Pre-trained Language Models The models participating in the comparative experiment of this section include: * **GPT2-base**(Zhao et al., 2019). A Chinese GPT2 released by Zhao et al. (2019). Pre-trained using the general corpus CLUECorpusSmall (Xu et al., 2020). * **T5-base**(Zhao et al., 2019). A Chinese T5 released by Zhao et al. (2019). Pre-trained using the general corpus CLUECorpusSmall (Xu et al., 2020). * **FinBERT**(Hou et al., 2020). A Chinese BERT for the financial domain released by Hou et al. (2020). * **Mengzi-BERT-base-fin**(Zhang et al., 2021). A Chinese BERT for the financial domain released by Zhang et al. (2021). * **FinT5-base**. Our Chinese pre-trained language model for the financial domain, pre-trained on our financial corpus, FinCorpus. Its model architecture, parameter size, and \begin{table} \begin{tabular}{l l c c} \hline **Task Name** & **Introduction** & **Data** & **Evaluation** \\ \hline FinNL & Multi-label classification of financial news & 8000/1000/1000 & F1-score \\ FinNA & Generation of summaries for financial news & 24000/3000/3000 & Rouge \\ FinRE & Entity relation classification for financial news & 7454/1489/3727 & F1-score \\ FinFE & Sentiment classification of financial social media text & 8000/1000/1000 & Accuracy \\ FinQA & Question-answering for financial news/events & 16000/2000/2000 & F1-score \\ FinNSP & Detection of negative messages and entities in financial news & 4800/600/600 & F1-score \\ \hline \end{tabular} \end{table} Table 3: Summary of CFLEB tasks. pre-training hyperparameters are the same as T5-v1.1-base. * **FinT5-base-KE**. Knowledge-enhanced version of FinT5-base, enhanced by KETM method using CN-DBPedia Xu et al. (2017) knowledge graph. * **FinT5-large**. Our proposed Chinese pre-trained language model for the financial domain, with a total of about 1 billion model parameters, and the pre-training hyperparameters are the same as T5-base. #### 6.1.2 Fine-tuning For generative models (GPT, T5), we evaluated all six datasets by modeling all tasks as text-to-text. 
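The exact prompt templates are not given in the paper; the snippet below is only a hypothetical illustration of how a FinNL classification example and a FinNA summarization example could be serialized into text-to-text pairs (the prefixes and field names are assumptions).

```python
# Illustrative only: the paper does not publish its exact prompt templates,
# so the prefixes and field names below are assumptions, not BBT-FinT5's real format.
def finnl_to_text2text(news, labels):
    """Cast a FinNL multi-label news classification example as a text-to-text pair."""
    return {"source": "新闻分类：" + news, "target": "，".join(labels)}

def finna_to_text2text(news, summary):
    """Cast a FinNA news summarization example as a text-to-text pair."""
    return {"source": "新闻摘要：" + news, "target": summary}

example = finnl_to_text2text("央行宣布下调金融机构存款准备金率0.25个百分点。", ["货币政策", "银行"])
print(example["source"], "->", example["target"])
```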
For BERT-based models, we evaluated them on four language understanding tasks: FinNL, FinRE, FinFE, and FinNSP, using BERT with an additional classification layer for all tasks. ### Experiment 1: Comparison of Pre-trained Model Architectures For the two models in the general domain, GPT2-base and T5-base, their pre-training corpora, hyperparameters, and training volume are all the same, but their average scores differ significantly, with T5-base significantly outperforming GPT2-base, as shown in Table 4. This difference is mainly due to the differences in the architectures, parameter sizes, and pre-training methods of the T5 and GPT models. This performance confirms the correctness of our choice of the T5 model. ### Experiment 2: Effectiveness of Domain Pre-training As shown in Table 4, the comparison between the FinT5-base model and the T5-base model indicates that the FinT5-base model pre-trained on FinCorpus significantly outperforms the T5-base model with the same parameter size, demonstrating the effectiveness of domain pre-training and the effectiveness of FinCorpus. ### Experiment 3: Superiority Compared to Existing Models in the domain As shown in Table 4, in the four language understanding tasks evaluated with FinBERT and Mengzi-BERT-base-fin, FinT5-base significantly outperformed both models, demonstrating the superiority of FinT5 over existing models in the domain. ### Experiment 4: Effectiveness of KETM As shown in Table 4, by comparing FinT5-base-ke with FinT5-base, it can be seen that the knowledge-enhanced text modeling method significantly improves the model's performance on tasks such as relation extraction and news summarization, without significantly compromising the performance on other tasks, thus proving the effectiveness of the KETM method. ### Experiment 5: Effectiveness of parameter scaling up As shown in Table 4, the performance comparison between FinT5-base and FinT5-large models indicates that the FinT5-large model with one billion parameters performs significantly better than the FinT5-base model, demonstrating the effectiveness of parameter scaling up. ## 7 Conclusion In this article, we introduced three new contributions to the domain of NLP in the context of Chinese finance. We created the largest open-source corpus for this domain, called FinCorpus, which contains a diverse collection of around 300GB of text from four sources. Our FinT5 model is the largest pre-trained language model for the Chinese financial domain, with one billion parameters. To enhance our pre-training method, we developed a unique knowledge-based approach called KETM, \begin{table} \begin{tabular}{l c c c c|c c|c|c|c} \hline **PLMs** & **FinFE** & **FinNL** & **FinNSP** & **FinRE** & **Un.Avg.** & **FinNA** & **FinQA** & **Ge.Avg.** & **Avg.** \\ \hline GPT2-base & 79.05 & 84.09 & 91.30 & 36.37 & 72.70 & 44.19 & 75.22 & 59.71 & 68.37 \\ T5-base & 79.40 & 87.48 & **95.43** & 54.93 & 79.56 & 48.54 & 83.58 & 66.06 & 74.89 \\ FinBERT-base & 79.45 & 84.69 & 69.01 & 55.33 & 72.37 & - & - & - & - \\ Mengzi-BERT-base-fin & 79.50 & 85.88 & 71.72 & 58.25 & 73.59 & - & - & - & - \\ BBT-FinT5-base & 80.19 & 87.55 & 94.50 & 60.62 & 80.21 & 50.06 & 84.82 & 67.44 & 76.29 \\ BBT-FinT5-base-KE & 79.43 & 87.77 & 95.05 & 61.79 & 80.26 & 51.36 & 85.66 & 68.51 & 76.84 \\ BBT-FinT5-large & **80.24** & **88.44** & 94.54 & **61.88** & **81.78** & **51.42** & **85.95** & **68.69** & **77.07** \\ \hline \end{tabular} \end{table} Table 4: Results of BBT-CFLEB from different PLMs. which was effective. 
We also created a benchmark to evaluate the understanding and generation capabilities of language models, called CFLEB. We believe domain benchmarks should prioritize practicality to better reflect how improvements in language models in academia can benefit the real world. Our future work includes expanding FinCorpus and FinT5 and exploring multilingual and multimodal applications.
2302.10163
Learning temporal relationships between symbols with Laplace Neural Manifolds
Firing across populations of neurons in many regions of the mammalian brain maintains a temporal memory, a neural timeline of the recent past. Behavioral results demonstrate that people can both remember the past and anticipate the future over an analogous internal timeline. This paper presents a mathematical framework for building this timeline of the future. We assume that the input to the system is a time series of symbols--sparse tokenized representations of the present--in continuous time. The goal is to record pairwise temporal relationships between symbols over a wide range of time scales. We assume that the brain has access to a temporal memory in the form of the real Laplace transform. Hebbian associations with a diversity of synaptic time scales are formed between the past timeline and the present symbol. The associative memory stores the convolution between the past and the present. Knowing the temporal relationship between the past and the present allows one to infer relationships between the present and the future. With appropriate normalization, this Hebbian associative matrix can store a Laplace successor representation and a Laplace predecessor representation from which measures of temporal contingency can be evaluated. The diversity of synaptic time constants allows for learning of non-stationary statistics as well as joint statistics between triplets of symbols. This framework synthesizes a number of recent neuroscientific findings including results from dopamine neurons in the mesolimbic forebrain.
Marc W. Howard, Zahra G. Esfahani, Bao Le, Per B. Sederberg
2023-02-20T18:49:34Z
http://arxiv.org/abs/2302.10163v4
# Foundations of a temporal RL ###### Abstract Recent advances in neuroscience and psychology show that the brain has access to timelines of both the past and the future. Spiking across populations of neurons in many regions of the mammalian brain maintains a robust temporal memory, a neural timeline of the recent past. Behavioral results demonstrate that people can estimate an extended temporal model of the future, suggesting that the neural timeline of the past could extend through the present into the future. This paper presents a mathematical framework for learning and expressing relationships between events in continuous time. We assume that the brain has access to a temporal memory in the form of the real Laplace transform of the recent past. Hebbian associations with a diversity of synaptic time scales are formed between the past and the present that record the temporal relationships between events. Knowing the temporal relationships between the past and the present allows one to predict relationships between the present and the future, thus constructing an extended temporal prediction for the future. Both memory for the past and the predicted future are represented as the real Laplace transform, expressed as the firing rate over populations of neurons indexed by different rate constants \(s\). The diversity of synaptic timescales allows for a temporal record over the much larger time scale of trial history. In this framework, temporal credit assignment can be assessed _via_ a Laplace temporal difference. The Laplace temporal difference compares the future that actually follows a stimulus to the future predicted just before the stimulus was observed. This computational framework makes a number of specific neurophysiological predictions and, taken together, could provide the basis for a future iteration of RL that incorporates temporal memory as a fundamental building block. Consider the experience of listening to a familiar melody. As the song unfolds, notes feel as if they recede away from the present, an almost spatial experience. According to Husserl (1966) "points of temporal duration recede, as points of a stationary object in space recede when I 'go away from the object." For a familiar melody, Husserl (1966) argues that events predicted in the future also have an analogous spatial extent, a phenomenon he referred to as _protention_. This experience is consistent with the hypothesis that the brain maintains an inner timeline extending from the distant past towards the present and from the present forwards into the future. In addition to introspection and phenomenological analysis, one can reach similar conclusions from examination of data in carefully controlled cognitive psychology experiments (Tiganj, Singh, Esfahani, & Howard, 2022). The evolutionary utility of an extended timeline for future events is obvious. Knowing what will happen when in the future allows for selection of an appropriate action in the present. Indeed, much of computational neuroscience presumes that the fundamental goal of the cortex is to predict the future (Clark, 2013; Friston & Kiebel, 2009; Friston, 2010; Rao & Ballard, 1999; Palmer, Marre, Berry, & Bialek, 2015). In AI, a great deal of research focuses on reinforcement learning (RL) algorithms that attempt to optimize future outcomes within a particular planning horizon (Ke et al., 2018; Dabney et al., 2020) without a temporal memory. 
From the perspective of psychology, RL is a natural extension of the Rescorla-Wagner model (Rescorla & Wagner, 1972), an associative model for classical conditioning (Sutton & Barto, 1981; Schultz, Dayan, & Montague, 1997; Waelti, Dickinson, & Schultz, 2001). Associative models describe connections between a pair of stimuli (or a stimulus and an outcome, etc.) as a simple scalar value. Variables that affect the strength of an association, such as the number of pairings between stimuli, or attention, etc, all combine to affect a single scalar value. Thus, although the strength of an association can fall off with the time between stimuli, the association itself does not actually convey information about time _per se_ (Gallistel, 2021a). In RL, the goal of the Bellman equation is to estimate discounted future reward: \[V(t)\simeq\sum_{\tau}\gamma^{\tau}r(t+\tau) \tag{1}\] but without ever explicitly estimating the future \(r(t+\tau)\). Temporal difference (TD) learning requires only measurement of the reward in the present and the value of states local in time: \[\delta(t)=r(t)+\gamma\hat{V}(t+1)-\hat{V}(t) \tag{2}\] In recent years, several authors have pursued temporal alternatives to TD learning (Ludvig, Sutton, & Kehoe, 2008; Kurth-Nelson & Redish, 2009; Momennejad & Howard, 2018; Tiganj, Gershman, Sederberg, & Howard, 2019; Tano, Dayan, & Pouget, 2020). Those models have attempted to incorporate temporal information into states (Ludvig et al., 2008) or to choose a spectrum of discount rates to give appropriate behavior at a range of scales (Kurth-Nelson & Redish, 2009; Momennejad & Howard, 2018; Tano et al., 2020). To the extent that the brain can directly estimate the future, the problem solved by the Bellman equation--compute expected future reward without explicitly computing the future--is not a problem that is faced by the brain. Within psychology, many theorists argue that classical conditioning reflects explicit storage and retrieval of temporal contingencies between stimuli (Cohen & Eichenbaum, 1993; Arcediano, Escobar, & Miller, 2005; Balsam & Gallistel, 2009; Gallistel, Craig, & Shahan, 2019). Recent neurophysiological work (Jeong et al., 2022) has shown that temporal contingency provides a better account of the firing of dopamine neurons than TDRL, highlighting the need for a neural theory for learning temporal contingency. Such a theory requires a temporal memory. ### Temporal memory in the brain There is overwhelming evidence that temporal memory is widespread throughout the mammalian brain. From examining ongoing neural activity, it is possible to decode what happened when in the past from many brain regions (King & Dehaene, 2014; Murray et al., 2017; Terada, Sakurai, Nakahara, & Fujisawa, 2017; Rossi-Pool et al., 2019; Enel, Wallis, & Rich, 2020; Cueva et al., 2020). There are at least two forms of coding for time that support this ability. So-called time cells (Pastalkova, Itskov, Amarasingham, & Buzsaki, 2008; MacDonald, Lepage, Eden, & Eichenbaum, 2011, for reviews see Eichenbaum, 2014, 2017; Tsao, Yousefzadeh, Meck, Moser, & Moser, 2022) fire in sequence following a salient stimulus. Different stimuli trigger different sequences of time cells, so that from observing which time cells are firing at any moment it is possible to decode what happened when in the past. In addition, so-called "temporal context cells" are triggered shortly after presentation of an event and then relax exponentially back to their baseline firing rate. 
Critically, temporal context cells have a heterogeneity of time constants (Tsao et al., 2018; Bright et al., 2020). Because different stimuli trigger different temporal context cells and because temporal context cells have a wide range of time constants, one can decode information about what happened when in the past over a wide range of time scales from a population of temporal context cells. The properties of temporal context cells--exponential receptive fields with a wide range of time constants--are as one would expect if firing rate across the population of temporal context cells records the real Laplace transform of the past leading up to the present (Shankar & Howard, 2010; Howard et al., 2014). Let's assume that \(\mathbf{f}(t)\) is a vector describing which of several discrete events happen at time \(t\) and that \(\mathbf{f}(t)\) is zero for most values of \(t\). We can specify the past leading up to each moment \(t\) as \(\mathbf{f}_{t}(\tau)\) (see Figure 1) with \(\tau\) ranging from zero to \(-\infty\) describing how far in the past an event was experienced. The goal of the temporal memory is to estimate the past \(\mathbf{f}_{t}(\tau)\) at each time \(t\), \[\mathbf{F}_{t}(s)=\int_{0}^{\infty}e^{-s\tau}\mathbf{f}_{t}(-\tau)d\tau=\mathcal{L}\left\{\mathbf{f}_{t}(-\tau)\right\}(s) \tag{3}\] where we understand \(s\) to be restricted to the positive real line. The observation that \(\mathbf{F}_{t}(s)\) is the Laplace transform of \(\mathbf{f}_{t}(-\tau)\) establishes that it serves as a temporal memory. Time cells, with circumscribed receptive fields, resemble a "direct" estimate of the past: \[\hat{f}_{t}(\overset{*}{\tau}<0)=-\frac{1}{\overset{*}{\tau}}\int_{0}^{\infty}\Phi\left(\frac{-\tau}{\overset{*}{\tau}}\right)f_{t}(-\tau)d\tau \tag{4}\] where \(\Phi(x)\) is a unimodal function with its maximum at 1 and \(\overset{*}{\tau}\) is defined to be negative. Time cells have properties that we would expect if the firing rate over a population of neurons records the approximate inverse Laplace transform. As \(\Phi()\) becomes more and more sharp, approaching a delta function, we see that \(\hat{f}_{t}(\overset{*}{\tau})\) goes to \(f_{t}(\overset{*}{\tau})\). The properties of time cells in the hippocampus confirm several predictions that follow from this hypothesis (Kraus, Robinson, White, Eichenbaum, & Hasselmo, 2013; Taxidis et al., 2020; Cao, Bladon, Charczynski, Hasselmo, & Howard, 2022). ### Eligibility traces and memory in RL Although RL models have historically assumed Markov statistics and used the theory of Markov decision processes, the idea of a temporal memory is not at all foreign to RL. The eligibility trace (Sutton, 1988) at time \(t\), \(\mathbf{e}_{t}\), updates from time step to time step as \[\mathbf{e}_{t}=\lambda\mathbf{e}_{t-1}+\mathbf{f}_{t}\] where \(\mathbf{f}_{t}\) is the state observed at time \(t\). It is clear that this expression results in exponential forgetting of inputs. Taking the continuum limit we find \[\mathbf{e}_{t}=\sum_{\tau=0}^{\infty}\lambda^{\tau}\mathbf{f}_{t-\tau}\simeq\int_{0}^{\infty}e^{-s\tau}\mathbf{f}_{t}(-\tau)d\tau \tag{5}\] where in the last expression \(s\) is chosen as \(s=-\log\lambda\). Thus the eligibility trace is an exponentially-weighted sum over past events. Comparing this last expression to Eq. 3, we find that the salient difference between the eligibility trace and Laplace transform of the past is that the eligibility trace is usually understood to have one forgetting rate \(\lambda\) whereas the Laplace transform requires a continuum of rate constants \(s\). 
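To make the contrast concrete, the following minimal sketch (not from the paper; the function names, grid of rate constants and time step are illustrative assumptions) implements a bank of eligibility-trace-like accumulators, one per rate constant \(s\), which together form a discrete-time approximation of the Laplace memory in Eq. 3.

```python
import numpy as np

def log_spaced_rates(s_min=0.01, s_max=10.0, n_cells=50):
    """Rate constants s sampled on a logarithmic scale, in the spirit of the ds/dn = -s choice."""
    return np.geomspace(s_min, s_max, n_cells)

def update_trace(F, stimulus, s, dt=0.1):
    """One discrete-time step: every trace decays by exp(-s*dt), then the current input is added.
    With a single s this is exactly Sutton's eligibility trace; with many s it approximates Eq. 3."""
    return np.exp(-s * dt) * F + stimulus

s = log_spaced_rates()
F = np.zeros_like(s)              # F_t(s) for one symbol
dt, T = 0.1, 5.0
for step in range(round(T / dt)):
    t = step * dt
    x_t = 1.0 if abs(t - 1.0) < dt / 2 else 0.0   # delta-like input at t = 1 s
    F = update_trace(F, x_t, s, dt)

# Because F(s) ~ exp(-s * elapsed time), the population implicitly encodes *when* the input happened.
elapsed = -np.log(np.clip(F, 1e-12, None)) / s
print(np.round(elapsed[:5], 2))   # ~3.9 s since the input, read out from the slowest traces
```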
By choosing a continuum of forgetting rates, one obtains a temporal memory extending roughly from the fastest time constant to the slowest time constant. The resolution of this temporal memory is controlled by the spacing between adjacent time constants and the degree to which the firing rates of spiking neurons can faithfully obey Eq. 3. Critically, the resolution of this Laplace temporal memory does not depend on the properties of \(\Phi()\) or whatever mechanism is used to extract information. This paper assumes the existence of a population of neurons whose firing rate \(\mathbf{F}_{t}^{-}(s)\) encodes the real Laplace transform of the past leading up to time \(t\). One of the primary goals of the paper is to develop a hypothesis that allows construction of a population \(\mathbf{F}_{t}^{+}(s)\) that provides an estimate of the real Laplace transform of the future that is expected to follow time \(t\). ### Constructing neural timelines of the past and future The goal of this paper is to write out a formalism we can take seriously to describe laboratory behavioral tasks used in psychology and neuroscience. We assume that the input to the model is a finite set of discrete symbols, x, y, etc., that are occasionally presented for an instant in continuous time. We refer to the symbol available at any particular moment as a stimulus. When a symbol is presented at time \(t\), the stimulus is the basis vector for that symbol times a delta function centered at \(t\). At most times, the stimulus is zero. We assume that there are temporal relationships between some of the symbols. For convenience we assume that the time between repetitions of any symbol is much longer than the temporal relationships that are to be discovered and much longer than the longest time constant \(1/s_{\text{min}}\). This assumption allows us to imagine that experience is segmented into a series of discrete trials; each symbol can be presented at most once per trial. This assumption is not fundamental to the model but allows easy interpretation of quantities that we will derive. ### The present Let us take as input to the model a stream of inputs, \(\mathbf{f}(t)\). The notation \(\mathbf{v}\) refers to a vector with each element a real number, \(\mathbf{v}^{\prime}\) is a transposed vector, so that \(\mathbf{u}^{\prime}\mathbf{v}\) is the inner product, a scalar, and \(\mathbf{u}\mathbf{v}^{\prime}\) is the outer product, a matrix. We write \(\mathbf{f}_{t}\) for the stimulus available at time \(t\). At instants \(t\) when no stimulus is presented, \(\mathbf{f}_{t}=\mathbf{0}\), the vector with all entries zero. We will occasionally refer to the moment on a particular trial when x is presented, \(\mathbf{f}_{t}=\mathbf{x}\), as \(t_{x}\). We ignore similarity between symbols, so that \(\mathbf{y}^{\prime}\mathbf{x}=\delta_{y,x}\). If symbol x was predicted to occur at time \(t\), but was not observed, we will occasionally write \(\mathbf{f}_{t}=\tilde{\mathbf{x}}\) to describe the observation of a failed prediction for symbol x. One may imagine the basis vectors \(\mathbf{x}\), \(\mathbf{y}\), \(\tilde{\mathbf{x}}\) etc as one-hot vectors without changing any of the results in this paper. We write \(\mathbf{f}_{t}(\tau)\) to describe the true past that led up to time \(t\), where \(\tau\) runs from \(0^{-}\), corresponding to the moment of the past closest to the present, backwards to \(-\infty\), corresponding to the distant past. 
Whereas \(\mathbf{f}_{t}\) is the stimulus available in the present at time \(t\), \(\mathbf{f}_{t}(\tau<0)\) is the timeline that led up to time \(t\). Under the assumption that every symbol is presented at most once per trial, each component of \(\mathbf{f}_{t}(\tau<0)\) is either a delta function at some particular \(\tau\) or zero everywhere. ### Neural manifolds for the past and the future We estimate both the past and the future as functions over neural manifolds. Each manifold is a population of processing elements--neurons--each of which is indexed by a position in a coordinate space. The coordinates describing the neurons are continuous and locally Euclidean. At each moment, each neuron is mapped onto a scalar value corresponding to its firing rate over a macroscopic period of time on the order of at least tens of ms. We propose that the past and the future are represented by separate manifolds that interact with one another. The representations for both the past and the future each utilize two connected manifolds. We refer to one kind of manifold, indexed by an effectively continuous variable \(s\), as a Laplace space. The other kind of manifold, indexed by an effectively continuous variable \(\overset{*}{\tau}\), is referred to as an inverse space. The representations of the past follow previous work in theoretical neuroscience (Shankar & Howard, 2013; Howard et al., 2014), psychology (Howard, Shankar, Aue, & Criss, 2015), and neuroscience (Bright et al., 2020; Cao et al., 2022). _Laplace spaces for remembered past and predicted future_. The Laplace space corresponding to the past, which we write as \(\mathbf{F}_{t}^{-}(s)\), encodes the Laplace transform of \(\mathbf{f}_{t}(\tau)\), the past leading up to time \(t\): \[\mathbf{F}_{t}^{-}(s)=\mathcal{L}\left\{\mathbf{f}_{t}(\tau<0)\right\}(s) \tag{6}\] We restrict \(s\) to real values on the positive line (but see Aghajan, Kreiman, & Fried, 2022). The Laplace space corresponding to the future, which we write as \(\mathbf{F}^{+}(s)\), is an attempt to estimate the Laplace transform of the future, \(\mathcal{L}\left\{\mathbf{f}_{t}(\tau>0)\right\}(s)\). Many neurons tile the \(s\) axis continuously for each symbol. One may imagine that each symbol, rather than being represented by a single neuron as in a one-hot vector, is represented by a line of neurons representing the history, or future, of that symbol. The index \(s\) assigned to a neuron corresponds to the inverse of its functional time constant. Thus, there is a natural mapping between \(1/s\) and \(\tau\) within both the past and the future. By convention, \(s\) is positive for both the past and the future so that \(\mathbf{F}_{t}^{-}(s)\) is the Laplace transform of \(\mathbf{f}_{t}(-\tau)\) for \(\tau<0\) whereas \(\mathbf{F}_{t}^{+}(s)\) is the Laplace transform of \(\mathbf{f}_{t}(\tau)\) for \(\tau>0\). Although \(s\) is effectively continuous, this does not require that neurons sample \(s\) evenly. Following previous work in psychology (e.g., Chater & Brown, 2008; Piantadosi, 2016; Howard & Shankar, 2018), neuroscience (Guo, Huson, Macosko, & Regehr, 2021; Cao et al., 2022), and theoretical neuroscience (Lindeberg & Fagerstrom, 1996; Shankar & Howard, 2013), we assume that \(s\) is sampled on a logarithmic scale. Let \(n\) be the neuron number, starting from the largest value of \(s_{\text{max}}\) nearest \(\tau=0\) and extending out from the present. We obtain a logarithmic scale by choosing \(ds/dn=-s\). _Updating Laplace spaces in real time_. 
Suppose that we have arranged for one particular component of \(\mathbf{F}_{t}^{-}(s)\) or \(\mathbf{F}_{t}^{+}(s)\) to hold the Laplace transform of one particular symbol, which we write as \(f_{t}(\tau)\). Suppose further that \(f_{t}(\tau)\) is zero in the neighborhood of \(\tau=0\). Consider how this component, which we write as \(F^{-}(s)\) or \(F^{+}(s)\), should update as time passes. Let us pick some minimal increment of time \(\delta t\) on the order of, say, 100 ms. At time \(t+\delta t\), information in \(f_{t}(\tau<0)\) recedes further away from the present, so that \(F_{t+\delta t}^{-}(s)=\mathcal{L}\left\{f_{t}(\tau+\delta t)\right\}\). In contrast, at time \(t+\delta t\), information in \(f_{t}(\tau>0)\) comes closer to the present, so that \(F_{t+\delta t}^{+}(s)=\mathcal{L}\left\{f_{t}(\tau-\delta t)\right\}\). More generally, suppose that \(F_{t}(s)\) is the Laplace transform of a function over some variable \(x\), \(F_{t}(s)=\mathcal{L}\left\{f_{t}(x)\right\}(s)\). Defining \(\alpha\equiv\delta x/\delta t\), we can update \(F_{t}(s)\) as \[F_{t+\delta t}(s)=\mathcal{L}\left\{\mathcal{T}_{\alpha(\delta t)}f_{t}(x)\right\}(s)=e^{-s\alpha(\delta t)}F_{t}(s) \tag{7}\] where \(\mathcal{T}\) is the translation operator, \(\mathcal{T}_{a}f(x)=f(x+a)\), and we have used the expression for the Laplace transform of translated functions. Equation 7 describes a recipe for updating both \(F_{t}^{\pm}(s)\) with \(\alpha_{\pm}\) in the absence of new input. Using the sign convention developed here, we fix \(\alpha_{-}=1\) for \(F^{-}(s)\) and fix \(\alpha_{+}=-1\) for \(F^{+}(s)\). It is possible to incorporate changes into the rate of flow of subjective time by letting \(\alpha_{\pm}\) change in register, such that \(\alpha_{+}(t)=-\alpha_{-}(t)\) for all \(t\). The expression in Eq. 7 holds more generally and can be used to update Laplace transforms over many continuous variables of interest for cognitive neuroscience (Howard et al., 2014; Howard et al., 2020; Howard and Hasselmo, 2020). We are in a position to explain how \(\mathbf{F}_{t}^{-}(s)\) comes to represent the Laplace transform of \(\mathbf{f}_{t}(\tau<0)\); a discussion of how \(\mathbf{F}_{t}^{+}(s)\) comes to estimate the future requires more development and will be postponed. When a symbol is presented at time \(t\), it enters the timeline of the past at \(\tau=0^{-}\). So, incorporating the input at time \(t\) into the past at time \(t+\delta t\) we have \[\mathbf{F}_{t+\delta t}^{-}(s) = e^{-s(\delta t)}\left[\mathbf{F}_{t}^{-}(s)+\mathcal{L}\left[\delta\left(0^{-}\right)\mathbf{f}_{t}\right]\right] \tag{8}\] \[= e^{-s(\delta t)}\mathbf{F}_{t}^{-}(s)+e^{-s(\delta t)}\mathbf{f}_{t}.\] At time \(t+\delta t\), the input from time \(t\) is encoded as the Laplace transform of that symbol a time \(\delta t\) in the past. At each subsequent time step, an additional factor of \(e^{-s\delta t}\) accumulates. As time passes, the input from time \(t\) is always stored as the Laplace transform of a delta function at the appropriate place on the timeline. Because this is true for all stimuli that enter \(\mathbf{F}^{-}(s)\), we conclude that \(\mathbf{F}_{t}^{-}(s)\) encodes the Laplace transform of the past \(\mathbf{f}_{t}(\tau<0)\). The middle panel of Figure 2 illustrates the profile of activity over \(\mathbf{F}_{t}^{-}\) and \(\mathbf{F}_{t}^{+}\), shown as a function of cell number \(n\), resulting from the Laplace transform of a delta function at various moments in time. 
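As a concrete illustration of Eqs. 7 and 8 (a minimal numerical sketch, not from the paper; the grid of \(s\) values, the choice of \(\delta t\) and the variable names are assumptions), the past manifold is multiplied by \(e^{-s\delta t}\) at each step while absorbing the present stimulus, and the future manifold is multiplied by \(e^{+s\delta t}\) so that a predicted event drifts toward the present.

```python
import numpy as np

s = np.geomspace(0.01, 10.0, 50)   # rate constants shared by both manifolds (illustrative values)
dt = 0.1

def step_past(F_minus, stimulus):
    """Eq. 8: decay the past by exp(-s*dt) after the present stimulus enters at tau = 0^-."""
    return np.exp(-s * dt) * (F_minus + stimulus)

def step_future(F_plus):
    """Eq. 7 with alpha = -1: the predicted future is multiplied by exp(+s*dt),
    so a predicted event moves closer to the present."""
    return np.exp(+s * dt) * F_plus

# A symbol observed right now enters the past timeline.
F_minus = step_past(np.zeros_like(s), stimulus=1.0)

# Suppose symbol y is predicted to occur tau_o = 2.0 s in the future.
tau_o = 2.0
F_plus = np.exp(-s * tau_o)        # Laplace transform of a delta at +tau_o

for _ in range(10):                # let 1.0 s of time pass with no new input
    F_minus = step_past(F_minus, stimulus=0.0)
    F_plus = step_future(F_plus)

# The past now encodes a delta 1.1 s ago; the future encodes a delta 1.0 s ahead.
print(np.allclose(F_plus, np.exp(-s * 1.0)))    # True
```

With these two rules, the profiles in the middle panel of Figure 2 are simply \(e^{-s\times(\text{time elapsed})}\) for the past and \(e^{-s\times(\text{time remaining})}\) for the future, read out across cell number \(n\).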
In the middle panel, the axis for the past is reversed to allow appreciation of the relationship between past time \(\tau<0\) and \(F^{-}\). Note that the Laplace transform of a delta function has a characteristic shape as a function of cell number that merely translates as time passes. Note that the magnitude of the translation of \(F^{+}[n]\) depends on the value of \(\tau_{o}\). It can be shown that for a delta function \(F_{t+\delta t}^{+}[n]=F_{t}^{+}[n+\delta n]\) with \(\delta n=\alpha_{\pm}\frac{\delta t}{\tau_{o}}\). This can be appreciated by noting that the distances between successive lines in the middle panel of Fig. 2 are not constant despite the fact that they correspond to the same time displacement. Whereas \(\delta n\) goes down as time passes for \(F^{-}[n]\), \(\delta n\) increases with the passage of time for \(F^{+}[n]\). There are implementational challenges to building a neural circuit that obeys Eq. 7; these challenges are especially serious when \(\alpha<0\), which requires activation to grow exponentially. These challenges would be mitigated by a neural circuit that obeys an equivalent PDE obtained by differentiating Eq. 7 with respect to cell number \(n\). The use of a PDE allows error to be distributed over many neurons and would allow neurons that have zero activation to grow if their neighbors are nonzero. Moreover, if one could literally implement the PDE this would preserve linearity. A disadvantage of a PDE is that it may require careful tuning of a neural circuit. If one were willing to restrict the representation of each symbol to the Laplace transform of a delta function at a single point in time, it would be straightforward to implement a continuous attractor network (Khona and Fiete, 2021) to allow the "edge" in the Laplace transform as a function of \(n\) to translate appropriately. _Inverse spaces for remembered past and predicted future_. The bottom panel of Figure 2 shows a graphical depiction of the inverse space for the past and the future during the interval between presentation of x and y. The inverse spaces approximate the past, \(\tilde{f}(\tilde{\tau}<0)\), and the future, \(\tilde{f}(\tilde{\tau}>0)\), on a log scale. Neurons in the inverse space have circumscribed receptive fields in time, like time cells in the hippocampus (Pastalkova et al., 2008; MacDonald et al., 2011). As the delta function corresponding to the time of x recedes into the past, the corresponding bump of activity in \(\mathbf{x}^{\prime}\tilde{\mathbf{f}}_{t}(\tilde{\tau}<0)\) also moves, keeping its shape but moving more and more slowly Figure 1: Guide to notation. **A.** Sign conventions. At the present moment \(t\), objective time \(\tau\) runs from \(-\infty\) to \(\infty\). \(\tau=0\) corresponds to time \(t\). The real Laplace domain variable \(s\) runs from \(0^{+}\) to \(+\infty\) for both past and future, approximated as \(s_{\text{min}}\) and \(s_{\text{max}}\). The units of \(s\) are \(\tilde{\tau}^{-1}\); the values corresponding to different points of the timeline are shown in the same vertical alignment. Cell number for Laplace and inverse spaces \(n\) are aligned with one another. The variable _tustaur_ describes position along the inverse spaces. It is in register with \(\tau\) and derived from \(s\). **B.** The stimulus available in the present, \(\mathbf{f}_{t}\) provides input to two sets of neural manifolds. One set of neural manifolds represents the past; the other estimates the future. **M(\(s\))** stores temporal relationships between events. 
as x recedes further and further into the past. In the future, the delta function corresponding to the predicted time of \(\tau\) should start a time \(\tau_{o}\) in the future and come closer to the present as time passes. As the prediction for \(\tau\) approaches the present, the corresponding bump of activity in \(\mathbf{y}^{\prime}\tilde{\mathbf{f}}_{i}(\tau>0)\) keeps its shape but the speed of the bump accelerates rather than slowing with the passage of time. Previous papers have made use of the Post approximation to implement the inverse transform. This is not neurally reasonable (Gosmann, 2018); the Post approximation is difficult to implement even in artificial neural networks (e.g., Tano et al., 2020; Jacques, Tiganj, Howard, & Sederberg, 2021). A more robust approach would be a continuous attractor network (for a review see Khona & Fiete, 2021) that takes input as the derivative of \(F\) with respect to \(n\). The width of the bump in \(\tilde{f}\) would depend on internal connections between neurons in \(\tilde{f}\) and global inhibition would stabilize the activity over \(\tilde{f}\). In this case, moving the bump in different directions, corresponding to \(\alpha>0\) and \(\alpha<0\) is analogous to moving the bump of activity in, say, a ring attractor for the head direction system, in different directions. ### Predicting the future from the past The previous subsection describes how to evolve the Laplace manifold for the past. We could use the same approach to evolve the Laplace manifold for the future during periods when no symbol is experienced if we could initialize the representation of the future appropriately. This will be accomplished _via_ learned temporal relationships between the past and the future. For present we only consider simple pairwise relationships between symbols. The moment a nonzero stimulus \(\mathbf{f}_{i}\) is experienced, we assume it is available to both \(\mathbf{F}^{-}\) and \(\mathbf{F}^{+}\), triggering a number of operations which presumably occur sequentially within a small window of time on the order of perhaps 100 ms. First, the present stimulus updates a prediction for the future _via_ a set of connections \(\mathbf{M}\) organized by \(s\). Then these connections are updated by associating the past to the present. Finally the present stimulus is added to the representation of the past. For ease of exposition we will first focus on describing the connections between the past and the future. We write \(\mathbf{M}(s)\) for a set of connections that associates the Laplace transform of the past to the Laplace transform of the future (Fig. 3). For any particular value \(s_{o}\), \(\mathbf{M}(s_{o})\) is a matrix describing connections from each symbol in \(\mathbf{F}^{-}(s_{o})\) to each symbol in \(\mathbf{F}^{+}(s_{o})\). For each pair of symbols, say x and \(\tau\), we write \(M_{y}^{\,\,\,\,x}(s_{o})\) for the strength of the connection _from_ the cell corresponding to \(\mathbf{x}\) with \(s=s_{o}\) in \(\mathbf{F}^{-}\)_to_ the cell corresponding to \(\mathbf{y}\) in \(\mathbf{F}^{+}\) with \(s=s_{o}\). \(\mathbf{M}(s)\) does not include connections between neurons with different values of \(s\). On occasion it will be useful to think of the set of connections between a pair of symbols over all values of \(s\), which we write as \(M_{y}^{\,\,\,\,x}(s)\). 
Similarly, we write \(\mathbf{M}^{y}(s)\) for the set of connections _from_ y in \(F^{-}\) to all stimuli in \(F^{+}\) over all values of \(s\).

Figure 3: Schematic figure illustrating \(M_{y}^{\,\,x}(s)\). \(F^{-}(s)\) and \(F^{+}(s)\) components are shown for all the possible symbols, here shown schematically as sheets. Two symbols x and y are shown in both \(F^{-}(s)\) and \(F^{+}(s)\). Each symbol is associated with a population of neurons spanning a continuous set of \(s\) values, shown as the heavy lines in this cartoon. \(\mathbf{M}(s)\) describes the connections from each symbol in \(\mathbf{F}^{-}(s)\) to each symbol in \(\mathbf{F}^{+}(s)\) for each value of \(s\). The curved lines \(M_{y}^{\,\,x}(s)\) illustrate the set of weights connecting units corresponding to x in \(F^{-}\) to units corresponding to y in \(F^{+}\). Connections exist only between units with the same values of \(s\). The strength of the connections in \(M_{y}^{\,\,x}(s)\) varies as a function of \(s\) in a way that reflects the pairwise history between x and y.

Figure 2: Neural manifolds to construct a log compressed timeline of the past and the future. Top: A temporal relationship exists between x and y such that y always follows x after a delay of \(\tau_{o}\) seconds. Consider how the internal timeline ought to behave after x is presented at \(t=0\). At time \(t\), the past should include x \(t\) seconds in the past and y \(\tau_{o}-t\) seconds in the future. Samples of the timeline are shown at evenly-spaced moments between zero and \(\tau_{o}\). Earlier moments closer to \(t=0\) are darker and later moments closer to \(t=\tau_{o}\) are lighter. Red lines are neurons coding for x, blue lines are neurons coding for y. Middle: Laplace spaces for the past (left) and future (right) shown as a function of cell number \(n\); Bottom: inverse spaces, constructed using the Post approximation, for the past (left) and future (right) shown as a function of log time. Exactly at time \(t=0\), x is available a time \(0^{+}\) in the future (dark horizontal red line, middle right). Similarly, exactly at \(t=\tau_{o}\), y is available a time \(0^{-}\) in the past (light horizontal blue line, middle left).

We write \(\mathbf{M}_{y}(s)\) for the set of connections _to_ y in \(F^{+}\) from all symbols and all values of \(s\). When a particular stimulus, let's say y, is presented, the connections to and from that stimulus in \(\mathbf{M}(s)\) are updated. The connections from y in the past are updated as \[\mathbf{M}^{y}(s)\rightarrow\rho\mathbf{M}^{y}(s) \tag{9}\] That is, the connections from \(y\) in \(F^{-}\) to every other stimulus, for each value of \(s\), are all scaled down by a value \(\rho\). Later we will consider the implications of a continuous spectrum of \(\rho\) values; for now let us just treat \(\rho\) as a fixed parameter restricted to be between zero and one. When y is presented, it momentarily becomes available at the "rearward part" of the future. In much the same way that the present enters the past (Eq. 8) at \(\tau=0^{-}\), we assume that the present is also available momentarily in the future at \(\tau=0^{+}\). The connections to y in the future are updated as \[\mathbf{M}_{y}(s)\rightarrow\mathbf{M}_{y}(s)+(1-\rho)\mathbf{F}_{t}^{-}(s) \tag{10}\] We can understand Eq. 
10 as a Hebbian association between the units in \(\mathbf{F}^{-}(s)\), whose current activation is given by \(\mathbf{F}_{t}^{-}(s)\), and the units in the future \(\mathbf{F}^{+}(s)\) corresponding to the present stimulus y (see Fig. 4). More generally, we can understand this learning rule as strengthening connections from the past \(\mathbf{F}_{t}^{-}(s)\) to the rearward part of the future, \(\mathcal{L}\left\{\delta(0^{+})\mathbf{f}_{t}\right\}(s)=e^{-s0}\mathbf{f}_{t}\). Because the second term is the product of two Laplace transforms, it can also be understood as the Laplace transform of a convolution, here, the convolution of the present with the past.1 Convolution has long been used as an associative operation in mathematical psychology (Murdock, 1982; Jones & Mewhort, 2007), neural networks (Plate, 1995; Eliasmith, 2013; Blouw, Solodkin, Thagard, & Eliasmith, 2016), and computational neuroscience (Steinberg & Sompolinsky, 2022). Footnote 1: Because of the sign conventions adopted here, \(F_{t}^{-}(s)\) is the Laplace transform of \(f_{t}(-\tau)\) whereas \(F_{t}^{+}(s)\) is the transform of \(f_{t}(\tau)\). Viewed in this light it is more precise to think of Eq. 10 as learning the Laplace transform of the cross-correlation between the present and the past. \(\mathbf{M}(s)\) _stores pairwise temporal relationships_. To understand the properties of \(\mathbf{M}(s)\), let us assume that the model as described thus far learns in a world containing two stimuli, x and y, for many trials. Let us assume that x is presented on each trial. If y is presented on a given trial it appears precisely \(\tau_{o}\) seconds after x. The probability that y is presented on each trial is \(P(y|x)\). In this simple situation we can restrict our attention to \(M_{y}^{\,x}(s)\), the weights connecting the neurons coding for x in the past to the neurons coding for y in the future. From examination of Eqs. 9 and 10, we see that after each trial \(M_{y}^{\,x}(s)\) is multiplied by \(\rho\) when x was presented. For trials on which y was also presented, \((1-\rho)e^{-s\tau_{o}}\) is added to \(M_{y}^{\,x}(s)\). Writing \(h[i]\) as an indicator variable for the history of presentations of y on the trial \(i\) steps in the past we find \[M_{y}^{\,x}(s)=(1-\rho)e^{-s\tau_{o}}\sum_{i}\rho^{i}h[i] \tag{11}\] Note that if \(P(y|x)=1\), then after an infinitely long series of trials \(\sum_{i}h[i]\rho^{i}=\frac{1}{1-\rho}\) and \(M_{y}^{\,x}(s)=e^{-s\tau_{o}}\) for all choices of \(\rho\). Following similar logic, if we relax the assumption that \(P(y|x)=1\) and take the limit as \(\rho\) goes to \(1\), we find that \(M_{y}^{\,x}(s)=P(y|x)e^{-s\tau_{o}}\). Now let us relax the assumption that the time lag between x and y always takes the same value. Let the lag be a random variable \(\tau_{xy}\) subject to the constraint that \(\tau_{xy}\) is always \(>0\). This is not a fundamental restriction; if \(\tau_{xy}\) changed sign, those observations would contribute to \(M_{x}^{\,y}(s)\) instead of \(M_{y}^{\,x}(s)\). 
Now, again taking the limit as \(\rho\to 1\), we find \[{M_{y}^{\,\mathrm{x}}}(s)=P\left(y|x\right)E\left[e^{-s\tau_{xy}}\right]=P\left( y|x\right)\mathcal{L}\left\{\tau_{xy}\right\}(s) \tag{12}\] where we have used the definition for the Laplace transform of a random variable, again with the understanding that we restrict \(s\) to be real and positive. Equation 12 illustrates several important properties of \(\mathbf{M}(s)\). First, we can see that \({M_{y}^{\,\mathrm{x}}}(s)\) provides complete information about the distribution of temporal lags averaged over history. This can be further appreciated by noting that the Laplace transform of the random variable on the right hand side is the moment generating function of \(-\tau_{xy}=\tau_{yx}\). Keeping the computation in the Laplace domain means that there is no blur introduced by going into the inverse space as in previous attempts to build a model for predicting the future (Momennejad Howard, 2018; Tiganj et al., 2019; Goh, Ursekar, & Howard, 2022). Second, because \(\mathcal{L}\left\{\tau_{xy}\right\}(s=0)=1\) as long as the expectation of \(\tau_{xy}\) is finite, we can write \({M_{y}^{\,\mathrm{x}}}(s)={M_{y}^{\,\mathrm{x}}}(s=0)\bar{M}_{y}^{\,\mathrm{x} }(s)\) where \({M_{y}^{\,\mathrm{x}}}(s=0)\) is just \(P(y|x)\). This allows us to cleanly decompose information about what will happen in the future, stored in \({M_{y}^{\,\mathrm{x}}}(s=0)\), from information about when those events will happen, stored in \(\bar{M}_{y}^{\,\mathrm{x}}(s)\) Figure 4: Learning and expressing pairwise associations with \(\mathbf{M}(s)\). The horizontal line is time; the diagonal lines indicate the internal timeline at the moments they intersect. Memory for the past is below the horizontal line; prediction of the future is above. When \(\mathrm{x}\) is presented for the first time, it predicts nothing. When \(\mathrm{\gamma}\) is presented, the past contains a memory for \(\mathrm{x}\) in the past. When \(\mathrm{\gamma}\) is presented, \({M_{y}^{\,\mathrm{x}}}(s)\) stores the temporal relationship between \(\mathrm{x}\) in the past and \(\mathrm{\gamma}\) in the present—the rearward part of the future. In addition to storing learned relationships, connections from each item decay each time it was presented (not shown). When \(\mathrm{x}\) is repeated much later in time, the stored connections in \({M_{y}^{\,\mathrm{x}}}(s)\) retrieve a prediction of \(\mathrm{\gamma}\) in the future. More generally we can express \(\mathbf{M}(s)\) as \(\mathbf{M}(s)=\mathbf{M}_{\mathrm{what}}\cdot\mathbf{\hat{M}}(s)\) where \(\cdot\) indicates pointwise rather than matrix multiplication. _Continuum of \(\rho\) and memory for trial history_. Before moving on we briefly note the implications of understanding \(\rho\) as a continuous variable. Treating \(\rho\) as continuous, Eq. 11, which describes the situation where \(\tau_{xy}\) is equal to \(\tau_{o}\) on each trial, can be rewritten as \[M_{y}^{x}(\rho,s)=(1-\rho)e^{-\pi\tau_{o}}\mathcal{Z}\left\{h[i]\right\}\left( \rho^{-1}\right)\] where \(\mathcal{Z}\left\{\right\}(z)\) is the Z-transform, the discrete analog of the Laplace transform (Ogata, 1970). Although the notation is a bit more unwieldy, allowing \(\tau_{xy}\) to vary across trials we see that the trial history of timing is also retained by \(\mathbf{M}(\rho,s)\). 
Writing the delay between x and y on the trial \(i\) steps in the past as \(\tau[i]\), and \(H[i](s)\equiv h[i]e^{-\pi[i]}\) \[M_{y}^{x}(\rho,s) = (1-\rho)\mathcal{Z}\left\{H[i](s)\right\}\left(\rho^{-1}\right). \tag{13}\] Because the Z-transform is in principle invertible, information about the entire trial history has been retained by virtue of having a continuum of forgetting rates \(\rho\). Figure 5 illustrates the ability to extract the trial history including timing information of events that follow x from \(\mathbf{M}(\rho,s)\mathbf{x}\). This illustrates a remarkable property of Laplace-based temporal memory. Although each synaptic matrix with a specific value of \(\rho_{o}\) forgets exponentially with rate \(-\log\rho_{o}\), the set of matrices with a continuum of \(\rho\) retains information about the entire trial history. Of course, in practice there must be some numerical imprecision in the biological instantiation of \(\mathbf{M}(\rho,s)\). In principle however, a continuum of forgetting rates \(\rho\) means that the past is not forgotten. Rather the past, here as a function of trial history, has been written to the continuous values of \(\rho\). ### Updating the future Let us return to the problem of generating a prediction of the immediate future. We again restrict our attention to the limit as \(\rho\) goes to 1 and assume the system has experienced a very long sequence of trials with the same underlying statistics. Moreover, we assume for the present that only pairwise relationships are important, so we can neglect the temporal credit assignment problem, and construct the Laplace transform of the future that predicted solely on the basis of the present stimulus. There are two problems that need to be resolved to write an analog of Eq. 8 for \(\mathbf{F}_{t+\delta t}^{+}(s)\). First, we can only use use Eq. 7 to update \(\mathbf{F}_{t}^{+}(s)\) if \(\mathbf{F}_{t}^{+}(s)\) is already the Laplace transform of a predicted future; we must create a circumstance that makes that true. Second, we need to address the situation where a prediction reaches the present. Because of the discontinuity at \(\tau=0\) special considerations are necessary to allow the time of a stimulus to pass from the future to the past. _Predicting the future with the present_. Equation 12 indicates that the weights in \(M_{y}^{x}(s)\) record the future occurrences of y given that x occurs in the present. \(M_{y}^{x}(s)\) captures both the probability that y will follow x as well as the distribution of temporal delays at which y is expected to occur. This information is encoded as a probability times the Laplace transform of a random variable. If we only need to consider x in predicting the future, then \(M_{y}^{x}(s)\) is precisely how we would like to initialize the future prediction for y in \(\mathbf{F}_{t}^{+}(s)\) after x is presented (Fig. 4). We probe \(\mathbf{M}(s)\) with the "immediate past." When x is presented it enters \(\mathbf{F}_{t}^{-}(s)\) as \(\mathcal{L}\left\{\delta\left(0^{-}\right)\mathbf{x}\right\}(s)\). Multiplying \(\mathbf{M}(s)\) from the right with the immediate past, yields a prediction for the future. \[\mathbf{M}(s)e^{-s0}\mathbf{x}=\mathbf{M}(s)\mathbf{x}=P\left(y|x\right) \mathcal{L}\left\{\tau_{xy}\right\}\mathbf{y} \tag{14}\] More generally, the input to the future at time \(t\) should be given by \(\mathbf{M}(s)\mathcal{L}\left\{\delta(0^{-})\mathbf{f}_{t}\right\}\). For concision we write this as \(\mathbf{M}(s)\mathbf{f}_{t}\). 
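To see Eqs. 9-12 and Eq. 14 at work, here is a minimal simulation of the two-symbol world above (a sketch under those simplifying assumptions; the specific parameter values and variable names are arbitrary choices, not values from the paper). A single set of weights \(M_{y}^{\,x}(s)\) is learned over many trials and then used to cue a prediction of y when x is presented.

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.geomspace(10.0, 0.1, 50)        # shared logarithmic grid of s values
rho, tau_o, p_y_given_x = 0.99, 2.0, 0.75

M_yx = np.zeros_like(s)                # M_y^x(s): one weight per value of s

for trial in range(20000):
    M_yx *= rho                        # Eq. 9: x is presented, outgoing weights decay
    if rng.random() < p_y_given_x:
        # Eq. 10: when y arrives tau_o seconds later, the past holds x as exp(-s*tau_o),
        # so the Hebbian term adds (1 - rho) * exp(-s * tau_o).
        M_yx += (1.0 - rho) * np.exp(-s * tau_o)

# Eq. 12 (rho -> 1 limit): the weights fluctuate around P(y|x) * exp(-s * tau_o),
# the probability of y times the Laplace transform of the lag distribution.
print(np.max(np.abs(M_yx - p_y_given_x * np.exp(-s * tau_o))))

# Eq. 14: probing M(s) with the "immediate past" (x just presented) initializes
# the future component for y with exactly this probability-weighted transform.
F_plus_y = M_yx * 1.0
```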
Because the past stored in \(\mathbf{M}(s)\) was a probability times the Laplace transform of a probability distribution, so too will be the future recovered in this way.

Figure 5: \(\mathbf{M}(\rho,s)\) contains information about both time within a trial and trial history. Left: Consider a single pairing of x and y on the most recent trial. The heatmap shows the degree to which y is cued by x by \(\mathbf{M}(\rho,s)\,\mathbf{x}/(1-\rho)\) projected onto log time. The profile as a function of log \(\tau\) is identical to the profile for future time in Figure 2. If the pairing between x and y had a longer delay, the edge would be further to the right. Right: The single pairing of x and y is followed by an additional series of trials on which x was presented by itself. Now there is an edge in both trial history and time within trial. Additional trials with only x would push this edge further towards the top of the graph. Additional trials with x and y paired would be added to this plot with a time delay that reflects the timing of the pairing.

_Continuity of the predicted future through \(\tau=0\)._ The neural representation described here approximates a continuous timeline by stitching together separate Laplace neural manifolds for the past and the future. With the passage of time, information in the future moves ever closer to the present. As time passes and a prediction reaches the present, this discontinuity must be addressed. We can detect predictions that have reached the present by examining \(\mathbf{F}_{t}^{+}(s=\infty)\), which only rises from zero when \(\tau\to 0\). In practice, we would use \(s_{\text{max}}\), which should be on the order of \((\delta t)^{-1}\). If the future that is being represented is the Laplace transform of a delta function, then we can simply set components for which \(\mathbf{F}_{t}^{+}(s_{\text{max}})>0\) to zero for all \(s\) at the next time step. More generally, if the future that is represented is not simply a delta function, the linearity of the Laplace transform allows us to subtract \(\mathbf{F}_{t}^{+}(s=\infty)\) from all \(s\) values without affecting the evolution at subsequent time points. If a prediction reaches the present and is observed, then no further action is needed. If a prediction reaches the present, but is not observed, we can trigger an observation of a "not stimulus", written e.g., \(\bar{\mathbf{x}}\), to describe the observation of a failed prediction for a stimulus \(\mathbf{x}\). Although we won't pursue it here, one could allow "not stimuli" to be predicted by stimuli and to predict other stimuli, allowing for the model to provide appropriate predictions for a relatively complex set of contingencies. _Evolving_ \(\mathbf{F}_{t+\delta t}^{+}(s)\). Integrating these two additional factors allows us to write a general expression for evolving \(\mathbf{F}_{t}^{+}(s)\) to \(\mathbf{F}_{t+\delta t}^{+}(s)\). \[\mathbf{F}_{t+\delta t}^{+}(s) = e^{-s\delta t}\mathbf{F}_{t}^{+}(s)-\mathbf{F}_{t}^{+}(s=\infty )+\mathbf{M}(s)\mathbf{f}_{t}. \tag{15}\] In simple situations where it is sufficient to know pairwise associations between stimuli separated in time, this provides a complete model for constructing a timeline of the future.

## Credit assignment and Laplace temporal difference learning

Consider an experiment. In the control condition of this hypothetical experiment, the participant is presented with \(\mathbf{y}\) followed by \(\mathbf{z}\) at a delay of \(5\) s for \(N\) trials. 
In the experimental condition, during an initial phase of training, \(\mathbf{x}\) is followed by \(\mathbf{z}\) at a delay of \(10\) s for some number of trials. After this initial training, the participant is presented with \(\mathbf{x}\) followed by \(\mathbf{y}\) at a delay of \(5\) s and then \(\mathbf{z}\) 5 s after \(\mathbf{y}\) for \(N\) trials. The number of pairings between \(\mathbf{y}\) and \(\mathbf{z}\), and the delays between them, are identical in the two conditions so that \(M_{z}^{\,y}(s)\) would be the same in the two conditions. However, whereas \(\mathbf{y}\) is "solely responsible" for \(\mathbf{z}\) in the control condition, \(\mathbf{x}\) is also capable of predicting \(\mathbf{z}\) in the experimental condition. We might expect the "credit" that \(\mathbf{y}\) receives for predicting \(\mathbf{z}\) to be less in the experimental condition and, indeed, analogous effects are observed in temporal blocking experiments (e.g., Amundson and Miller, 2008). Traditional RL makes sense of this phenomenon, and blocking more generally, by hypothesizing that plasticity at the time \(\mathbf{z}\) is presented depends on how well it was predicted. However, in the present framework, in the experimental condition, \(\mathbf{z}\) would be predicted an appropriate distance in the future at the moment \(\mathbf{x}\) is presented. The two conditions thus differ in the degree to which \(\mathbf{z}\) is predicted at the moment that \(\mathbf{y}\) is presented. The basic strategy pursued here is to use conditions at the time of \(\mathbf{y}\) to assess how much credit it should get for the occurrence of \(\mathbf{z}\), including the time of its presentation (Figure 6). We assume that there is a prediction \(\mathbf{F}_{t}^{+}(s)\) available at all times, although this prediction can be zero. The "Laplace temporal difference" measure compares the true future that follows each stimulus, stored in \(\mathbf{M}(s)\), to the future predicted _just before_ each stimulus is presented. A second set of connections, \(\mathbf{N}(s)\), records the average future available just before each item was observed. That is, for each stimulus \(\mathbf{y}\), \(\mathbf{N}^{y}(s)\) averages \(\mathbf{F}_{t}^{+}(s)\) observed at moments \(t\) such that \(\mathbf{f}_{t}=\mathbf{y}\). The same strategy was used in Goh et al. (2022). We will see that this Laplace temporal difference is sensitive to the amount of information about both the identity and timing of future events. As such it provides a neurally-reasonable mechanism to estimate temporal contingency (Balsam and Gallistel, 2009; Gallistel, 2021b; Jeong et al., 2022).

### Estimating prediction independent of the present via \(\mathbf{N}(s)\)

If \(\mathbf{y}\) is presented at time \(t\), then \(\mathbf{N}(s)\) is updated as \[\mathbf{N}^{y}(\rho,s)\rightarrow\rho\mathbf{N}^{y}(\rho,s)+(1-\rho)\mathbf{F}_{t}^{+}(s) \tag{16}\] Following Eq. 13 we see that if \(\rho\) is a continuous variable, then \(\mathbf{N}^{y}(\rho,s)\) is the Z-transform of the trial history of the predicted future available just prior to the moment \(\mathbf{y}\) was presented. Note that, like \(\mathbf{M}(\rho=1,s)\), \(\mathbf{N}(\rho=1,s)\) can be decomposed into components corresponding to information about what symbols are predicted to occur and when. Defining \(\mathbf{N}_{\text{what}}=\mathbf{N}(s=0)\), we can write \(\mathbf{N}(s)=\mathbf{N}_{\text{what}}\cdot\hat{\mathbf{N}}(s)\) much like we did for \(\mathbf{M}(s)\).
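The role of \(\mathbf{N}(s)\) is easiest to see in a small simulation of the blocking experiment above. The sketch below is an illustration under those stated assumptions, not a verbatim implementation of the paper's LTD(\(s\)) measure (whose exact form is not reproduced here): it tracks \(M_{z}^{\,y}(s)\) and \(N_{z}^{\,y}(s)\) in the experimental condition, where z is already fully predicted by x before y appears, so the two sets of weights converge to the same values and y deserves little credit for z. In the control condition, with no x, \(N_{z}^{\,y}(s)\) would stay near zero while \(M_{z}^{\,y}(s)\) grows, and the difference would credit y.

```python
import numpy as np

s = np.geomspace(10.0, 0.1, 50)
rho, tau_yz = 0.99, 5.0                 # y -> z lag of 5 s, as in the experiment

M_zy = np.zeros_like(s)                 # future that actually follows y (Eqs. 9-10)
N_zy = np.zeros_like(s)                 # future already predicted just before y (Eq. 16)

for trial in range(2000):
    # Experimental condition: x was presented 5 s before y and predicts z 10 s out,
    # so just before y arrives the prediction of z sits tau_yz = 5 s in the future.
    F_plus_just_before_y = np.exp(-s * tau_yz)

    N_zy = rho * N_zy + (1.0 - rho) * F_plus_just_before_y   # Eq. 16
    M_zy = rho * M_zy + (1.0 - rho) * np.exp(-s * tau_yz)    # Eqs. 9-10 for y -> z

# M and N agree for every s: everything that follows y was already predicted,
# so comparing them assigns y essentially no credit for z.
print(np.max(np.abs(M_zy - N_zy)))      # ~ 0
```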
To illustrate the properties of \(\mathbf{M}(\rho,s)\) and \(\mathbf{N}(\rho,s)\), let us restrict our attention to cases where at most three symbols \(\mathbf{x}\), \(\mathbf{y}\) and \(\mathbf{z}\) are presented in order on each trial. Let us refer to the time lags between symbols as random variables \(\tau_{xy}\), \(\tau_{yz}\); on trials where all three symbols are observed \(\tau_{xz}=\tau_{xy}+\tau_{yz}\) by assumption. We assume that the distributions are chosen such that the relative times of presentation do not overlap. We denote the probabilities of each symbol occurring on a trial such that \(P(z|y)\) gives the conditional probability that z is observed on a trial given that y is also observed on that trial.

Figure 6: Whereas \(\mathbf{M}(s)\) records the Laplace transform of the future that follows each item, \(\mathbf{N}(s)\) records the expectation of the Laplace transform of the future predicted prior to that item. Consider a scenario with three consecutive items \(\mathbf{x}\), \(\mathbf{y}\), and \(\mathbf{z}\) so that \(\mathbf{z}\) is predicted prior to the presentation of \(\mathbf{y}\). \(M_{z}^{\,y}(s)\) learns the connection between \(\mathbf{y}\) in the past and \(\mathbf{z}\) in the present. \(N_{z}^{\,y}(s)\) learns the connection between \(\mathbf{y}\) in the present and the prediction for \(\mathbf{z}\) available just before \(\mathbf{y}\) was presented. Comparing \(M_{z}^{\,y}(s)\) to \(N_{z}^{\,y}(s)\) allows one to estimate how much \(\mathbf{y}\) is responsible for presentation of \(\mathbf{z}\).

### Estimating three-point correlation functions from \(\mathbf{M}(\rho)\) and \(\mathbf{N}(\rho)\)

A great deal of information can be extracted from the trial history encoded in \(\mathbf{M}(\rho,s)\) and \(\mathbf{N}(\rho,s)\). On a particular trial after y has been presented, we would like to compute the probability that z will be presented and at what time. \(\mathbf{M}\) contains the two-point probability distribution of y and z. It would be preferable to predict the occurrence of z using the three-point probability distribution, taking into account the occurrence and timing of both x and y. Because \(\mathbf{M}(\rho,s)\) and \(\mathbf{N}(\rho,s)\) contain information about the paired trial history, in principle we can extract information about the three-point correlation function. 
For instance, if z only occurs on trials on which both x and y are presented, then we should observe a positive correlation between the trial history encoded in \(M_{z}^{\,x}(\rho)\) and the trial history encoded in \(N_{z}^{\,y}(\rho)\).

Neurons in the striatum (e.g., van der Meer & Redish, 2011), PFC (e.g., Rainer, Rao, & Miller, 1999; Ning, Bladon, & Hasselmo, 2022), OFC (e.g., Namboodiri et al., 2019; Schoenbaum, Chiba, & Gallagher, 1998; Young & Shapiro, 2011), hippocampus (Ferbinteanu & Shapiro, 2003; Duvelle, Grieves, & van der Meer, 2022) and thalamus (Komura et al., 2001) contain active representations that code for the future. One can find evidence of predictive signals extending over long periods of time that modulate firing in primary visual cortex (Gavornik & Bear, 2014; Kim, Homann, Tank, & Berry, 2019; Homann, Koay, Chen, Tank, & Berry, 2022; Yu et al., 2022). Prediction apparently involves a substantial proportion of the brain. Coordinating activity and plasticity over such a wide region would require careful synchronization (Hasselmo, Bodelon, & Wyble, 2002; Hamid, Frank, & Moore, 2021). The timescale of this synchronization, presumably on the order of 100 ms, fixes \(\delta t\), places a bound on the fastest timescales \(1/s\) that can be observed, and operationalizes the duration of the "present." Given the widespread nature of predictive signals, we will not attempt to map specific equations onto specific brain circuits. Rather we will illustrate the observable properties implied by these equations with an eye towards facilitating future empirical work. The predictions fall into two categories. One set of predictions describes properties of ongoing firing of neurons participating in Laplace and inverse spaces. Another set of predictions is a direct consequence of the properties of learned weights. We also briefly discuss the model in this paper in the context of recent empirical work on the computational basis of the dopamine signal (Jeong et al., 2022).

### Active firing neurons

This paper proposes the existence of neural manifolds to code for the identity and time of _future_ events. The prediction is that there should be two related manifolds, one implementing the Laplace space and one implementing the inverse space. Previous neuroscientific work has shown evidence for Laplace and inverse spaces for a timeline for the past. The properties of the proposed neural manifolds for future time can be understood by analogy to the neural manifolds for the past. 
#### Single-cell properties of neurons coding for the past Socalled temporal context cells observed in the entorhinal cortex (Tsao et al., 2018; Bright et al., 2020) are triggered by a particular event and then relax exponentially back to baseline firing with a variety of time constants. The firing of temporal context cells is as one would expect for a population coding \(\mathbf{F}^{-}(s)\). So-called time cells observed in the hippocampus (Pastalkova et al., 2008; MacDonald et al., 2011; Taxisis et al., 2020; Shahbaba et al., 2022; Shikano, Ikegaya, & Sasaki, 2021; Schonhaut, Aghajan, Kahana, & Fried, 2022) and many other brain regions (e.g., Tiganj, Cromer, Roy, Miller, & Howard, 2018; Tiganj, Kim, Jung, & Howard, 2017; Mello, Soares, & Paton, 2015; Bakhurin et al., 2017; Akhlaghpour et al., 2016; Jin, Fujii, & Graybiel, 2009) fire sequentially as events recede into the past, as one would expect from neurons participating in \(\tilde{f}(\overset{*}{\tau}<0)\). Time cells are consistent with qualitative and quantitative predictions of \(\tilde{f}(\overset{*}{\tau}<0)\), including the conjecture that time constants are distributed along a logarithmic scale (Cao et al., 2022). #### Single-cell and population-level properties of neurons coding for the past and the future In situations where the future can be predicted, \(\mathbf{F}^{+}(s)\) and \(\tilde{f}(\overset{*}{\tau}>0)\) should behave as mirror images of the corresponding representations of the past. Figure 7A illustrates the firing of cells coding for a stimulus remembered in the past (left) and predicted in the future (right). Neurons participating in the Laplace space, sorted on their values of \(s\), are shown in the top; neurons participating in the inverse space, sorted on their values of \(\overset{*}{\tau}\) are shown on the bottom. The firing of neurons constituting the Laplace space shows a characteristic shape when plotted as a function of time in this simple experiment. Neurons coding for the past are triggered shortly after presentation of the stimulus and then relax exponentially with a variety of rates. Neurons coding for the future ramp up, peaking as the predicted time of occurrence grows closer. The ramps have different characteristic time constants. Different populations are triggered by the presentation of different symbols (not shown) so that the identity of the remembered and predicted symbols as well as their timing can be decoded from populations coding \(\mathbf{F}^{-}(s)\) and \(\mathbf{F}^{+}(s)\). The largest value of \(1/s\) in the figure is chosen to be a bit longer than the delay in the experiment, resulting in a subset of neurons that appear to fire more or less constantly throughout the delay (Enel et al., 2020). The firing of neurons constituting the inverse space also shows a characteristic shape when plotted as a function of time in this simple experiment. Neurons tile the delay, with more cells firing early in the interval with more narrow receptive fields. The logarithmic compression of \(n\) results in a characteristic "backwards J" shape for the past and a mirror image "J" shape for the future. Again, different populations would code for different stimuli in the past and in the future (not shown) so that the identity of the remembered and predicted stimuli and their time from the present could be decoded from a population coding \(\tilde{f}(\overset{*}{\tau})\). 
Figure 7B shows firing that would be expected for a population that includes cells coding for the same stimulus, say \(\mathrm{r}\), both in the past and the future around the time of a predicted occurrence of that symbol. #### Plausible anatomical locations for an internal future timeline This computational hypothesis should evaluated with carefully planned analyses. However, the published literature shows evidence that is at least roughly in line with the hypothesis of neural manifolds for future time. Firing that ramps systematically upward in anticipation of important outcomes including planned movements has been observed in (at least) mPFC (Henke et al., 2021), ALM (Inagaki, Inagaki, Romani, & Svoboda, 2018), and thalamus (Komura et al., 2001). Komura et al. (2001) showed evidence for ramping firing in the thalamus that codes for outcomes in a Pavlovian conditioning experiment. Preliminary evidence from secondary analyses suggest that there is a heterogeneity of time constants in ALM (Inagaki et al., 2018) and mPFC (Henke et al., 2021). There is also circumstantial neurophysiological evidence for sequential firing leading to predicted future events as predicted by \(\tilde{f}(\tilde{\tau}>0)\). Granule cells in cerebellum appear to fire in sequence in the time leading up to an anticipated reward (Wagner et al., 2017; Wagner and Luo, 2020). OFC may be another good candidate region to look for "future time cells." OFC has long been argued by many authors to code for the identity of predicted outcomes (Hikosaka and Watanabe, 2000; Schoenbaum and Roesch, 2005; Mainen and Kepecs, 2009). More recently Enel et al. (2020) showed sequential activation in OFC during a task in which it was possible to predict the value of a reward that was delayed for several seconds. Finally, it should be noted that the properties of \(\tilde{f}(\tilde{\tau}>0)\) are a temporal analog of spatial "distance-to-goal" cells observed in spatial navigation studies (Sarel et al., 2017; Gauthier and Tank, 2018). ### Predictions from weight matrix \(\mathbf{M}(\rho,s)\) _Properties of weights due to \(s\)._ Consider an experiment in which different symbols, denoted cs1, cs2, etc, precede an outcome \(\mathtt{r}\) by a delay \(\tau_{o}\). The value of \(\tau_{o}\) changes across the different symbols (Figure 8A). Ignoring \(\rho\) for the moment, the strength of the connections from each cs to \(\mathtt{r}\) depend on the value of \(\tau_{o}\) for that stimulus and the value of \(s\) for each synapse: \(e^{-s\tau_{o}}\). When a particular cs is presented at time \(t\), the amount of information that flows along each synapse is \(e^{-s\tau_{o}}\) and the pulse of input to \(\mathbf{F}_{t+\delta t}^{+}(s)-\mathbf{F}_{t}^{+}(s)\) corresponding to the outcome is \(e^{-s\tau_{o}}\). Thus, considering each connection as a function of \(\tau_{o}\), firing should go down exponentially as a function of \(\tau_{o}\) with a rate constant that depends on the value of \(s\). This pattern of results aligns well with results presented at SfN in 2022 (Masset et al., 2022). The experiment was constructed much as described above. Masset et al. (2022) measured the firing of dopamine neurons to different stimuli predicted reward delivery at different delays. It has long been known that firing of dopamine neurons, averaged over neurons, around the time of the conditioned stimulus goes down with delay (Fiorillo et al., 2008). 
This study showed that there was a heterogeneity of exponential decay rates in the firing of dopamine neurons in this paradigm, much as illustrated in Fig. 8A. In the context of TDRL, this finding is also consistent with a continuous spectrum of exponential discount rates (Momennejad and Howard, 2018; Tano et al., 2020). _Properties of weights due to \(\rho\)._ A continuum of forgetting rates \(\rho\) predicts a range of trial history effects. Figure 8B shows the weights in \(\mathbf{M}(\rho)\) over past trials that result from different values of \(\rho\). This is simply \(\rho^{i}\) where \(i\) is the trial recency with values normalized such that the weight at the most recent trial is 1. The weights \(\mathbf{M}(\rho)\) record the trial history of reinforcement. The weights \(\mathbf{N}(\rho)\) record the trial history of predicted outcomes. The difference between \(\mathbf{M}(\rho)\) and \(\mathbf{N}(\rho)\) as a function records the history of prediction violations. Many papers show dependence on previous trial outcomes in response to a cue stimulus in learning and decision-making experiments (Bernacchia Figure 8: Neural predictions derived from properties of \(\mathbf{M}(s)\). **Left.** Plot of the magnitude of the entry in \(\mathbf{M}_{*}(\rho=1,s)\) connecting each of the conditioned stimuli cs to the outcome \(\mathtt{k}\) as a function of the \(\tau_{o}\) corresponding to that cs. Different lines correspond to entries with different values of \(s\). Weights corresponding to different values of \(s\) show exponential discounting as \(\tau_{o}\) is manipulated, with a variety of discount rates. **Right.** Plot of the magnitude of \(\mathbf{M}(\rho,s=0)\) associated with a single pairing of cs and \(\mathtt{k}\) a certain number of trials in the past. Different lines show the results for different values of \(\rho\). For clarity, these curves have been normalized such that they have the same value at trial lag zero. Figure 7: Predicted firing for Laplace and inverse spaces plotted as heatmaps. \(\mathbf{A}\). Consider an experiment in which x precedes x separated by 10 s. The top row shows firing as a function of time for cells in the Laplace space for the past (left) and the future (right). Note that the cells in \(F_{t}^{-}(s)\) peak at time zero and then decay exponentially. In contrast cells in \(F_{t}^{+}(s)\) peak at 10 s and ramp up exponentially. The bottom row shows firing as a function of time for cells in the Inverse space. \(\mathbf{B}\). Consider an experiment in which \(\mathtt{r}\) is predicted to occur at time zero and then recedes into the past. Cells coding for both past and future are recorded together and sorted on the average time at which they fire. Left: For Laplace spaces, neurons in \(F_{t}^{+}(s)\) are sorted to the top of the figure and neurons \(F_{t}^{-}(s)\) are sorted to the bottom of the figure. Right: Inverse spaces show similar properties but give rise to a characteristic “pinwheel” shape. Morcos and Harvey, 2016; Scott et al., 2017; Akrami, Kopec, Diamond, & Brody, 2018; Hattori, Danskin, Babic, Mlynnayk, & Komiyama, 2019; Hattori & Komiyama, 2022). These studies show history-dependent effects in a wide range of brain regions and often show a continuous spectrum of decay rates within a brain region (see especially Bernacchia et al., 2011). Notably, distributions of time constants for trial history effects cooccur with distributions of ongoing activity in multiple brain regions (Spitmaan, Seo, Lee, & Soltani, 2020). 
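The qualitative shapes in Figure 8 follow directly from the form of the weights; the few lines below (parameter values are arbitrary illustrations, not fits to any data) generate both families of curves.

```python
import numpy as np

# Left panel: an M(rho=1, s) entry from a cs that precedes the outcome by tau_o
# equals exp(-s * tau_o), so each value of s traces its own exponential discount curve.
tau_o = np.linspace(0.5, 10.0, 20)
s_vals = np.array([0.2, 0.5, 1.0, 2.0])
left_curves = np.exp(-np.outer(s_vals, tau_o))        # shape (n_s, n_tau_o)

# Right panel: a single cs-outcome pairing i trials in the past is weighted by rho**i,
# normalized here so every curve equals 1 at trial lag zero.
lags = np.arange(20)
rho_vals = np.array([0.5, 0.8, 0.95, 0.99])
right_curves = rho_vals[:, None] ** lags[None, :]
```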
### Dopamine and learning The connection between TDRL and neuroscience related to dopamine has been one of the great triumphs of computational neuroscience (Schultz et al., 1997). The standard account is that the firing of dopamine neurons signals reward prediction error (RPE) which drives plasticity. Despite its remarkable success at predicting the findings of many behavioral and neurophysiological experiments, the RPE account has been under increasing strain over recent years. The standard account did not predict the existence of a number of striking effects, including increasing dopamine firing during delay under uncertainty (Fiorillo, Tobler, & Schultz, 2003), dopamine ramps in spatial experiments (Howe, Tierney, Sandberg, Phillips, & Graybei, 2013), dopamine waves (Hamid et al., 2021), and heterogeneity of dopamine responses across neurons and brain regions (Dabney et al., 2020; Masset et al., 2022; W. Wei, Mohebi, & Berke, 2021). Recently Jeong et al. (2022) reported the results of several experiments that flatly contradict the standard model. These experiments were proposed to evaluate an alternative hypothesis for dopamine firing in the brain. Jeong et al. (2022) propose that dopamine signals whether the current stimulus is a cause of reward. The model developed there, referred to as ANCCR, assesses the contingency between a stimulus and outcomes. LTD(\(s\)) measures the contingency--temporal and otherwise--between a symbol and possible outcomes. Both ANCCR and the framework developed in this paper are inspired by a similar critique of Rescorla-Wagner theory and TDRL (Gallistel, 2021). In order to make a complete account of the experiments in the Jeong et al. (2022) paper, the current framework would have to be elaborated in several ways. In order to keep the calculations simple here we have assumed that each symbol can only occur once per trial. Consider the \(\mathbf{M}(s)\) and \(\mathbf{N}(s)\) that would result if this assumption were relaxed. If we define \(p_{z}^{\,\mathrm{\chi}}(\tau)\) as the probability that we will observe \(\mathrm{\chi}\) in a small interval around \(t+\tau\) given that we observed \(\mathrm{\chi}\) at time \(t\), we get \[M_{z}^{\,\mathrm{\chi}}(s)=\int e^{-s\tau}p_{z}^{\,\mathrm{\chi}}(\tau)d\tau= \mathcal{L}\left\{p_{z}^{\,\mathrm{\chi}}\right\}(s).\] Assuming we can pretend that it is acceptable to ignore the overlap between the distributions we would find \[N_{z}^{\,\mathrm{\chi}}(s) = \sum_{x}\int e^{-s\tau}\left(p_{z}^{\,\mathrm{\chi}}\!\!\!\#p_{ z}^{\,\mathrm{\chi}}\right)(\tau)d\tau\] \[= \mathcal{L}\left\{p_{z}^{\,\mathrm{\chi}}\!\!\#p_{z}^{\,\mathrm{ \chi}}\right\}(s).\] Where \(\#\) signifies cross-correlation. This is closely analogous to Eqs. 19 and 20. The expression for \(\mathbf{M}(s)\) ends up giving expected number of observations of \(\mathrm{\chi}\) that would follow \(\mathrm{\chi}\) out to a timescale on the order of \(1/s\). \(N_{z}^{\,\mathrm{\chi}}(s)\) gives the number of observations of \(\mathrm{\chi}\) that were expected prior to observation of \(\mathrm{\chi}\). It is thus possible to construct a measure of prospective contingency like that used in the Jeong et al. (2022) model. The current framework does not require one to specify an intrinsic timescale of association _a priori_. ## 5 Discussion This paper takes a phenomenological approach to computational neuroscience. 
The strategy is to write down equations that, if the brain could somehow obey them, would be consistent with a wide range of observed cognitive and neural phenomena. The phenomenological equations make concrete predictions that can be evaluated with cognitive and neurophysiological experiments. To the extent the predictions hold, the question of how the brain manages to obey these phenomenological equations could then become a separate subject of inquiry. The phenomenological equations require a number of capabilities of neural circuits, both at the level of synapses and in terms of ongoing neural activity. We make those explicit here. ### Circuit assumptions for synaptic weights The connections \(\mathbf{M}(\rho,s)\) and \(\mathbf{N}(\rho,s)\) require that the brain uses continuous variables, \(s\) and \(\rho\), to organize connections between many neurons, most likely spanning multiple brain regions. For the phenomenological equations to be viable, these continuous variables should be deeply embedded in the functional architecture of the brain. For instance, in order to invert the integral transforms, it is necessary to compute a derivative over these continuous variables. This suggests a gradient in these continuous variables should be anatomically identifiable. Conceivably anatomical gradients in gene expression and functional architecture (e.g., Phillips et al., 2019; Guo et al., 2021; Roy, Zhang, Halassa, & Feng, 2022) could generate anatomical gradients in \(s\) and/or \(\rho\). Perhaps part of the function of traveling waves of activity such as theta oscillations (Lubenov & Siapas, 2009; Patel, Fujisawa, Berenyi, Royer, & Buzsaki, 2012; Zhang & Jacobs, 2015) or dopamine waves (Hamid, Frank, & Moore, 2019) is to make anatomical gradients salient. LTD(\(s\)) is not intended as a literal description of a neural computation the brain could use to assess contingency between stimuli. However, the idea of assessing contingency by comparing the future that immediately precedes a stimulus to the future that follows it _should_ be taken seriously. If brain oscillations are important in synchronizing the flow of information proposed in this framework, then perhaps information from \(\mathbf{M}(s)\) and \(\mathbf{N}(s)\) are available at different phases of an oscillation. Perhaps this consistent relationship temporal combined with spike timing dependent plasticity (Bi & Poo, 1998; Dan & Poo, 2004) somehow facilitates computation of the contingency between the present and the future. ### Circuit assumptions for ongoing activity At the neural level, this framework assumes the existence of circuits that can maintain activity of a Laplace neural manifold over time. There is evidence that the brain has found some solution to this problem, at least for past time and at least in the entorhinal cortex (Tsao et al., 2018; Bright et al., 2020). Exponential growth of firing, as proposed by Eq. 15 seems on its face to be a computationally risky proposition. However, this proposal does create testable predictions. Moreover, firing rates that increase monotonically in time are widely observed. For instance border cells as an animal approaches a barrier (Solstad, Boccara, Kropff, Moser, & Moser, 2008) and evidence accumulation cells (Roitman & Shadlen, 2002) both increase monotonically. If this monotonic increase in firing reflects movement of an edge along a Laplace neural manifold, the characteristic time scale of the increase should be heterogeneous across neurons. 
If the brain has access to a circuit with paired \(\alpha\)'s, it could reuse this circuit to construct cognitive models for spatial navigation (Howard et al., 2014), evidence accumulation (Howard et al., 2018), and perhaps cognitive computation more broadly (Howard & Hasselmo, 2020). Consistent with this hypothesis, monotonic cells in spatial navigation and evidence accumulation--border cells and evidence accumulation cells--have sequential analogues (Wilson & McNaughton, 1993; Morcos & Harvey, 2016; Koay, Charles, Thiberge, Brody, & Tank, 2022) as one would expect if they reflect a Laplace space that is coupled with an inverse space. Perhaps part of the solution to implementing these equations in the brain is to restrict the kinds of functions that can be represented over the Laplace manifold. Perhaps a continuous attractor network that can maintain and evolve the Laplace transform of a single delta function per basis vector would be straightforward to construct. In this case, each component of \(\mathbf{F}_{t}^{-}(s)\) and \(\mathbf{F}_{t}^{+}(s)\) would be at any moment the Laplace transform of a delta function; \(\mathbf{M}(s)\) and \(\mathbf{N}(s)\) would still be able to store distributions over multiple presentations. In this case perhaps \(\mathbf{F}_{t+\delta t}^{+}(s)\) could update by sampling from the distribution expressed by \(\mathbf{M}(s)\mathbf{f}_{t}\). Perhaps predictions are updated in the more general case by sampling from a superposition of the previous prediction \(\mathbf{F}_{t}^{+}(s)\) and the future that follows the current item \(\mathbf{M}s\mathbf{f}_{t}\) masked by \(\mathbf{N}(s)\mathbf{f}_{t}\). ### What about control? RL algorithms have been successful in AI application because of their ability to learn policies to control actions in the absence of explicit supervision (Watkins & Dayan, 1992; Sutton & Barto, 2018). The current framework does not include a deep connection to control theory. It is conceivable that the current framework could be integrated into existing deep network approaches to RL. Perhaps it is possible to learn an embedding that maps continuous features onto symbols then control actions using existing methods from deep RL. Another possibility is to develop a control theory in the Laplace domain. Indeed, this is how control theory problems are typically solved analytically (Ogata, 1970). Indeed, there is some evidence that control systems in neurobiology, for instance gaze stabilization in the oculomotor system, make use of multiple time constants over several orders of magnitude (Miri, Bhasin, Aksay, Tank, & Goldman, 2022). Perhaps a continuous set of time constants, as required for the Laplace neural manifolds used here, may enable brains to make use of diversity enabled sweet spots (Nakahira, Liu, Sejnowski, & Doyle, 2021). It should be possible to extend the current framework to multiple dimensions beyond time, including real space and abstract spaces (Howard et al., 2014, 2018). Properties of the Laplace domain enable data-independent operators that enable efficient computation (Howard & Hasselmo, 2020). For instance, given that a state of a Laplace neural manifold is Laplace transform of a function, we can construct the Laplace transform of the translated function (Eq. 7, see also (Shankar, Singh, & Howard, 2016)). Critically, the translation operator is independent of the function to be translated. 
Restricting our attention to Laplace transforms of delta functions, we can construct the sum or difference using convolution and cross correlation respectively (Howard et al., 2015; Howard & Hasselmo, 2020). The binary operators for addition and subtraction also do not need to be learned. Perhaps the control theory that governs behavior is analogous to generic spatial navigation in a continuous space. ### Scale-covariance as a design goal Because the \(s\) values are sampled along a logarithmic scale, all of the quantities in this paper are scale-covariant. Rescaling time, taking \(\tau_{xz}\to a\tau_{xz}\), \(\tau_{xy}\to a\tau_{xy}\), etc, simply takes \(s\to s/a\). Because the \(s\) values are chosen in a geometric series, rescaling time simply translates along the \(n\) axis. All the components of the model, \(\mathbf{F}^{-}\), \(\mathbf{F}^{+}\), \(\mathbf{M}\), \(\mathbf{N}\), and LTD(\(s\)), all use the same kind of logarithmic scale for time. This means that rescaling time simply translates all the components of the entire model, up to a scaling factor. All of the components of the model are thus time scale-covariant, responding to rescaling time with a translation over cell number. Thus any measure that integrates over \(n\) (and is not subject to edge effects) is scale-invariant. Empirically, there is not a characteristic time scale to associative learning (Balsam & Gallistel, 2009; Gershman, 2022); any model that requires choice of a time scale for learning to proceed is thus incorrect. Logarithmic time scales are observed neurally (Cao et al., 2022; Guo et al., 2021). Logarithmic time scales can be understood as a commitment to a world with power law statistics (X.-X. Wei & Stocker, 2012; Piantdosi, 2016) or as an attempt to function in many different environments without a strong prior on the time scales it will encounter (Howard & Shankar, 2018). Recent work has shown that the use of logarithmic time scales enables scale-invariant CNNs for vision (Jansson & Lindeberg, 2021) and audition (Jacques, Tiganj, Sarkar, Howard, & Sederberg, 2022). For instance, Jacques et al. (2022) trained deep CNNs to categorize spoken digits. When tested on digits presented at very different speeds than the training examples (imagine someone saying the word "seven" stretched over four seconds), the deep CNN with a logarithmic time axis generalized perfectly. Rescaling time translates the neural representation at each layer; convolution is translation equivariant; including a maxpool operation over the convolutional layer renders the entire CNN translation-invariant. Time is not only important in speech perception (e.g., Lerner, Honey, Katkov, & Hasson, 2014) but vision as well (Russ, Koyama, Day-Cooney, Perwez, & Leopold, 2022) suggesting that these ideas can be incorporated into a wide range of sensory systems.
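The scale-covariance property is also easy to verify numerically. In the sketch below (the grid size and the rescaling factor are arbitrary choices), rescaling the delay of a remembered delta function by a power of the geometric grid ratio shifts the pattern of activity across cells by an integer number of positions while leaving its shape unchanged, which is exactly the translation over cell number described above.

```python
import numpy as np

# With s sampled geometrically, rescaling all times by a factor "a" translates
# the pattern of activity across cell number n (up to edge effects).
n_cells = 80
s = np.geomspace(100.0, 0.01, n_cells)
c = s[1] / s[0]                        # constant ratio between neighboring s values

tau_0 = 2.0
a = c ** -3                            # rescale time by exactly three grid steps
F_original = np.exp(-s * tau_0)        # transform of a delta function at tau_0
F_rescaled = np.exp(-s * a * tau_0)    # transform of a delta function at a * tau_0

# The rescaled pattern is the original pattern shifted by three cells.
assert np.allclose(F_rescaled[3:], F_original[:-3])
```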
2306.02090
Deep Classifier Mimicry without Data Access
Access to pre-trained models has recently emerged as a standard across numerous machine learning domains. Unfortunately, access to the original data the models were trained on may not equally be granted. This makes it tremendously challenging to fine-tune, compress models, adapt continually, or to do any other type of data-driven update. We posit that original data access may however not be required. Specifically, we propose Contrastive Abductive Knowledge Extraction (CAKE), a model-agnostic knowledge distillation procedure that mimics deep classifiers without access to the original data. To this end, CAKE generates pairs of noisy synthetic samples and diffuses them contrastively toward a model's decision boundary. We empirically corroborate CAKE's effectiveness using several benchmark datasets and various architectural choices, paving the way for broad application.
Steven Braun, Martin Mundt, Kristian Kersting
2023-06-03T11:45:16Z
http://arxiv.org/abs/2306.02090v5
# Deep Classifier Mimicry without Data Access ###### Abstract Access to pre-trained models has recently emerged as a standard across numerous machine learning domains. Unfortunately, access to the original data the models were trained on may not equally be granted. This makes it tremendously challenging to fine-tune, compress models, adapt continually, or to do any other type of data-driven update. We posit that original data access may however not be required. Specifically, we propose Contrastive Abductive Knowledge Extraction (CAKE), a model-agnostic knowledge distillation procedure that mimics deep classifiers without access to the original data. To this end, CAKE generates pairs of noisy synthetic samples and diffuses them contrastively toward a model's decision boundary. We empirically corroborate CAKE's effectiveness using several benchmark datasets and various architectural choices, paving the way for broad application. ## 1 Introduction In the contemporary machine learning landscape, the rise in availability of pre-trained models has significantly facilitated development of downstream applications. In conjunction with prominent underlying techniques, ranging from parameter pruning and sharing [11; 35], low-rank factorization [5; 40], to knowledge distillation [14], these pre-trained models can now be efficiently fine-tuned, compressed, or even adapted continually. Enabling all the latter through a single mechanism, knowledge distillation seems to be a particularly promising contender from the plethora of available options. At its core, it aims to transfer the knowledge from a (typically larger, more complex) teacher model to a (typically smaller, simpler) student model by training the student to mimic the teacher's predictions, feature responses, or other inferrable quantities from the learned function. Such mimicry then enables the student to reach similar performance levels, at reduced computational cost and memory usage or allow a model to continue learning, if the student is the same model that retains knowledge from a prior time step [21]. However, the knowledge distillation optimization procedure traditionally requires access to original data. Unfortunately, a provided model state may not be accompanied with all its training data or access to the latter may deliberately not be granted. Despite an impressive amount of ensuing applications in natural language processing [18; 25; 32], computer vision [22; 37; 44], and speech recognition [14; 23; 28], the majority of approaches is thus still limited by a host of assumptions. In most works, students train on the original training data [3; 10; 14] or additional generative auxiliary models are used to approximate the data distribution [3; 10]. Alternatively, the necessity of data can be alleviated by imposing heavy constraints on architectures [3; 10; 26; 38]. These dependencies limit the applicability of knowledge distillation when the original training data is not available, the teacher and student model architectures mismatch, or training additional generative models is infeasible. However, a crucial realization is that a majority of these tasks may not even require strong assumptions if one accounts for the task's nature. Specifically, _we posit that supervised scenarios and classification, in particular, do not require all knowledge to be distilled_. On the contrary, it is _the decision boundary that needs to be closely mimicked_ by a student. We refer to respective distillation as **abductive knowledge extraction**. 
Based on this realization, we lift prior works' assumptions and propose the first knowledge distillation method that is truly free of model assumptions while not requiring access to any original training data. To this end, our introduced Contrastive Abductive Knowledge Extraction (CAKE) generates synthetic data pairs via a contrastive diffusion process, which are directed toward opposing sides of a teacher model's decision boundary. In symbiosis, a contrastive pull ensures that a prospective student trains on samples that closely support the teacher's decision, whereas induced noise scatters samples to sweep relevant regions along the boundary. Figure 1 shows an intuitive "two-moons" example, where CAKE is compared to naive synthetic samples based on gradient descent alone and a generative model. As detailed later, CAKE succeeds where competitors fail at data-free model-agnostic knowledge distillation due to collapse to trivial solutions or failure to cover a broad spectrum close enough to the relevant decision boundary. In summary, our contributions are three-fold: * We introduce Contrastive Abductive Knowledge Extraction (CAKE), a model-agnostic knowledge distillation procedure without access to original data. Instead, a contrastive diffusion process generates synthetic samples that border a teacher's decision boundary. * We empirically highlight the contribution of CAKE's components, showcase how teacher and student neural networks can differ in depth and capacity, and analyze CAKE's effectiveness when teacher and student models differ (MLP, CNN, ResNet, and ViT). * We corroborate that CAKE's classification accuracy is competitive with a variety of "state-of-the-art" methods that require data access or heavy model assumptions. ## 2 Knowledge Distillation and the Challenge of Data Availability In this section, we will discuss the key concepts behind knowledge distillation, briefly explore the different types of distilled knowledge and distillation schemes, and summarize limitations with respect to data availability commonly found in the literature and respective surveys [9]. ### Knowledge Distillation in Supervised Classification The original variant of knowledge distillation introduced by Hinton et al. [14] uses a softened version of the teacher's (logit) output to train a student model to mimic the teacher. At the example of supervised classification, given a training dataset with \(N\) input-target pairs \((\mathbf{x}_{i},y_{i})\), a student \(f^{\text{S}}\), and a teacher \(f^{\text{T}}\), we denote \(\mathbf{z}_{i}^{\text{S}}=f^{\text{S}}(\mathbf{x}_{i})\) and \(\mathbf{z}_{i}^{\text{T}}=f^{\text{T}}(\mathbf{x}_{i})\) as the student and teacher logits respectively. The student is trained by minimizing a loss function \(\mathcal{L}\) that Figure 1: Comparison of naive, generative, and CAKE methods for knowledge distillation on the two-moons dataset. The background visualizes teacher (green/purple) and student (blue/red) decision functions, juxtaposed with original data (\(\circ\)) and synthesized samples (\(\triangle\)). Naive and generative methods often converge to similar local minima, inducing an ineffective student decision function. In contrast, CAKE generates samples across the entire decision-relevant region, resulting in a student model that accurately learns the data decision function if trained exclusively on its synthetic samples. 
balances the prediction of ground truth labels and matches the softened output of the teacher \(p(\mathbf{z}_{i},\tau)=\left(\exp\left(z_{i}^{1}/\tau\right)/Z_{i},\ldots,\exp\left(z _{i}^{C}/\tau\right)/Z_{i}\right)\), where \(Z_{i}=\sum_{j}\exp(z_{i}^{j}/\tau)\) is the normalization constant and \(\tau\) is a temperature parameter that controls the softness of the output distribution. The full student training objective thus becomes a conjunction of true labels and the teacher's "soft labels": \[\mathcal{L}\left(\mathbf{x}_{i},y_{i}\right)=\lambda_{1}\underbrace{\mathbf{CE} \left(\mathbf{y}_{i},\mathbf{p}\left(\mathbf{z}_{i}^{\mathsf{S}},\mathbf{1}\right)\right)}_{ \left[\text{\tiny{match true labels}}\right]}+\lambda_{2}\underbrace{ \mathbf{CE}\left(\mathbf{p}\left(\mathbf{z}_{i}^{\mathsf{T}},\mathbf{\tau}\right),\mathbf{p} \left(\mathbf{z}_{i}^{\mathsf{S}},\mathbf{\tau}\right)\right)}_{\left[\text{\tiny{ match teacher soft labels}}\right]}, \tag{1}\] where \(\lambda_{1}\) and \(\lambda_{2}\) are hyperparameters that control the trade-off between the two terms and CE is the cross-entropy loss. The first term (\(\mathcal{L}_{\mathrm{NLL}}\)) in the loss function encourages the student to predict the ground truth labels, while the second term (\(\mathcal{L}_{\mathrm{KD}}\)) tries to match the softened output of the teacher. More generally, knowledge distillation techniques can be categorized based on the _distilled knowledge_ and _distillation schemes_. Distilled knowledge may include response-based methods focusing on model outputs [14], feature-based methods targeting intermediate representations [2; 42], and relation-based methods capturing pairwise relations between instances [37; 39]. Distillation schemes may encompass offline distillation, which involves pre-trained teacher models, online distillation where teacher and student models are trained simultaneously, and self-distillation, where the teacher and student are the same model [15; 36; 43]. The choice of the distillation scheme depends on an application's requirements, including computational resources, training data, and desired accuracy. ### Knowledge Distillation without Data Access While the seminal work of Hinton et al. [14] introduced knowledge distillation with the student having access to the original training data and using the smoothed teacher outputs as _additional information_, knowledge distillation can be further lifted to a _"data-free"_ setting. Here, data-free refers to providing no access to the original data distribution that the teacher was trained on. The focus is then to construct synthetic samples from the teacher that serve as exclusive training data for the student. There are generally two approaches to achieve such generation of synthetic data. One angle makes use of generative models to synthesize samples that are relevant to the teacher's objective, therefore extracting knowledge into an auxiliary generative model that learns to sample the data distribution. Recent examples are adversarial distillation, such as DAFL [3] and its successor RDSKD [10] which employ a generative adversarial network (GAN) [8]. However, while students are trained with synthetic GAN samples, the training procedure of the GAN itself again requires access to original data to construct the adversarial objective. Further, an additional model now needs to be carefully crafted, which may be prone to issues such as e.g. mode collapse in GANs. 
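Regardless of how the synthetic samples are obtained, the student is typically trained with a response-based objective in the spirit of Eq. (1). A minimal PyTorch-style sketch follows; the temperature and weighting values are placeholders rather than recommended settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      tau=4.0, lambda_1=0.5, lambda_2=0.5):
    """Sketch of Eq. (1): hard-label cross-entropy plus cross-entropy against
    the teacher's temperature-softened output distribution."""
    # Match the (true or sampled) labels at temperature 1.
    hard = F.cross_entropy(student_logits, labels)
    # Match the teacher's soft labels at temperature tau.
    p_teacher = F.softmax(teacher_logits / tau, dim=1)
    log_p_student = F.log_softmax(student_logits / tau, dim=1)
    soft = -(p_teacher * log_p_student).sum(dim=1).mean()
    return lambda_1 * hard + lambda_2 * soft
```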
The alternative angle is to leverage the teacher's parameters directly to construct a synthetic dataset. The initial work, DeepDream [24], uses an image prior and performs gradient descent on an input image w.r.t. maximizing a specific class probability. The later DeepInversion [38] uses DeepDream's total variation and the \(l_{2}\)-norm as an image prior and extends the optimization objective by a feature distribution regularization term. This imperative term measures the \(l_{2}\)-distance between convolution activations and respective BatchNorm [17] statistics, as the latter provides a simple Gaussian proxy to the encoded distribution. However, DeepInversion then entails a restriction on the teacher requiring specific layers. As such, the approach cannot be applied if the teacher is either treated as a black box with no access to intermediate outputs or does not contain the necessary functions. CAKE follows in these works' footsteps, but lifts the constraints on model architecture and intermediate value access. ### The Pitfalls when Removing Data Access without Imposed Model Constraints To contextualize prior works and highlight the challenge of removing both access to data and intermediate values of specific model functions, we circle back to the earlier shown figure Fig. 1. On the basis of the simple 2-D two-moons example, the top row depicts original (circles) and synthetic (triangles) data for the naive DeepDream approach. As the latter optimizes initially random inputs solely to maximize the cross-entropy loss, the first common pitfall ensues. Namely, _samples easily satisfy maximum confidence if they lie far away from the decision boundary_. Unfortunately, when a student is trained on these samples, the decision boundary is overly simplistic, here leading to a linear decision that is incorrect for the original task. The second row shows a respective generative model trained to synthesize data that minimize the teacher's confidence. Whereas the first pitfall may also occur, we can condition samples and contrast pairs (as in the later CAKE for direct comparison). However, a second caveat now arises. Namely, _parameterized generators may easily collapse towards trivial solutions or sample select regions_. As they collapse to specific modes that do not cover the distribution necessarily, the student's solution may once more be inadequate for the original task. ## 3 CAKE: Contrastive Abductive Knowledge Extraction In the previous section, we expounded on the inherent limitations and assumptions associated with existing knowledge distillation techniques when original training data is unavailable and strict model assumptions cannot be made. To overcome these challenges, we now introduce Contrastive Abductive Knowledge Extraction, CAKE for short. In contrast to prior works, CAKE extracts the abductive knowledge of a teacher in a fully model-agnostic way that does not require any original data. ### Contrasting the Decision Boundary: Abductive Knowledge Extraction We propose a conceptual shift in the objective of the distillation procedure. Contrary to the emphasis placed by a significant portion of the knowledge distillation literature on the visual fidelity and closeness to original data, we argue that the ultimate goal is not to accurately emulate the data-generating distribution. Instead, it should be to sample effectively along the decision boundary region, such that a student can later mimic it. 
With this in mind, we propose to create pairs of noisy synthetic samples and employ a contrastive approach to diffuse them towards the decision boundary. Intuitively, think of drawing two samples for two different classes (or sets in multi-class scenarios) and pulling both towards each other until their predicted label gets swapped. To this end, we employ the squared Euclidean distance between logit pairs for synthetic samples of different classes: \[\mathcal{L}_{\text{contr}}(\mathbf{x}_{i},\mathbf{x}_{j})=\mathbbm{1}\,\left[y_{i}\neq y_{j}\right]\left\|f^{\text{T}}(\mathbf{x}_{i})-f^{\text{T}}(\mathbf{x}_{j})\right\|_{2}^{2}\quad. \tag{2}\] Note that despite the availability of elaborate contrastive formulations [7; 31; 33], we focus on initial simplicity. To avert the risk of synthetic samples collapsing into a single region that minimizes the objective (recall the second row of Fig. 1), it becomes necessary to further promote the dispersion of these samples along the decision boundary, as we elaborate in the following subsection. ### Sweeping the Decision Boundary: Implicit and Explicit Noise in CAKE and LAKE Having developed an objective aimed at generating samples close to the decision boundary, we must acknowledge that this objective does not yet ensure extensive coverage _along_ the decision boundary. On the one hand, abductive knowledge extraction need not perfectly reflect the data distribution. On the other hand, it is imperative to mimic a wide range of the decision boundary. We therefore require an additional mechanism to explore along the decision boundary. We posit that such exploration can be achieved through the introduction of noise into the sample update. As the contrastive term already acts as a perpendicular force, ensuring closeness between sets of samples from different classes, the injection of noise effectively diffuses them in parallel to the decision boundary. In CAKE, we thus inject noise by means of the well-understood stochasticity of SGD-based optimizations and common step size schedules. Again for initial simplicity, we choose a simple linear schedule, but we note that a plethora of variants for noisy estimates exist. This effectively causes the optimization to disperse the synthetic samples along the decision boundary. While CAKE presents an intuitive, highly empirically effective, but perhaps somewhat ad-hoc, solution to the induction of noise, we now also propose a more principled formulation. Recent advances in generative modeling have rediscovered the importance of noise through the integration of diffusion processes. Following this spirit, we introduce a CAKE variant termed Langevin Abductive Knowledge Extraction (LAKE). In the latter, we incorporate noise into the synthesis procedure with Langevin dynamics based diffusion, generating samples from noisy gradients of the input: \[\mathbf{x}_{i}^{t+1}=\mathbf{x}_{i}^{t}+\underbrace{\eta(t)\,\nabla_{\mathbf{x}}\mathcal{L}\big(\mathbf{x}_{i}^{t}\big)}_{\text{gradient update}}+\sqrt{2\,\eta(t)}\,\boldsymbol{\varepsilon}_{i}^{t}\quad\text{with}\quad\boldsymbol{\varepsilon}_{i}^{t}\sim\mathcal{N}(0,\mathbf{I})\quad, \tag{3}\] for \(t=1,\dots,T\). The process will converge samples according to the true distribution defined by the loss landscape, as both \(T\to\infty\) and \(\eta(t)\to 0\). The diffusion property of the Langevin update step aids in dispersing samples along the decision boundary, thus preventing collapse.
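A minimal PyTorch-style sketch of the pair loss in Eq. (2) and of a single noisy update in the spirit of Eq. (3) reads as follows. It is illustrative only; the sign of the gradient term simply follows Eq. (3) as written, whereas a loss that is to be minimized would enter with a negative sign, as in Algorithm 1 below.

```python
import torch

def contrastive_pair_loss(teacher, x_i, x_j, y_i, y_j):
    """Eq. (2): squared Euclidean distance between the teacher's logits for a
    synthetic pair, active only when the two samples carry different labels."""
    if y_i == y_j:
        return torch.zeros((), device=x_i.device)
    return (teacher(x_i) - teacher(x_j)).pow(2).sum()

def langevin_update(x, synthesis_loss, step_size):
    """One LAKE-style step following Eq. (3): a scaled gradient of the (scalar)
    synthesis loss plus explicit Gaussian diffusion noise."""
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(synthesis_loss(x), x)[0]
    noise = torch.randn_like(x)
    return (x + step_size * grad + (2.0 * step_size) ** 0.5 * noise).detach()
```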
However, the theoretical guarantees only hold for the limit \(T\to\infty\) and \(\eta(t)\to 0\) and further empirical findings seem to indicate that the explicit Gaussian noise term in the diffusion process may not be fully necessary [1, 4]. Ultimately, we emphasize that the presence of noise seems to be crucial, as also highlighted by the quantitative results for CAKE and LAKE in subsequent Section 4, but the choice w.r.t a potential trade-off between empirical results and rigor is left to the prospective user. ### Injecting Auxiliary Domain-specific Knowledge: The Role of Data Priors In addition to our rigorous premise of no access to original training data, we acknowledge that information about the data domain typically exists. That is, even when a pre-trained model contains no reference to real data, its purpose and domain of application is typically made obvious. There is no conflict in integrating such auxiliary knowledge into the sample synthesis process through data priors. For instance, when the application is image-based, we can employ a total-variation prior, as initially used also by DeepDream for the purpose of "generating beautiful art" from random noise: \[\mathcal{L}_{\text{TV}}(\mathbf{x})=\sum_{i=1}^{H}\sum_{j=1}^{W}\|\mathbf{x}_{i,j}-\bm {x}_{i-1,j}\|+\|\mathbf{x}_{i,j}-\mathbf{x}_{i,j-1}\|\quad, \tag{4}\] Here, \(\mathbf{x}\) represents an image of dimensions \(H\times W\), and \(\mathbf{x}_{i,j}\) corresponds to the pixel at the location \((i,j)\). Intuitively, this prior mirrors our expectation that inputs are images, and we thus expect depicted concepts to be locally consistent. More generally, such priors enable injection of potential meta-knowledge we may possess in the form of constraints that facilitate the synthetic sample optimization. Whereas our work later showcases popular image classification, imagine e.g. a prior on the range of expected numerical values when confronted with tabular data as a second example. ### The Overall CAKE Algorithm ``` 0: teacher \(f^{\mathsf{T}}\), iterations \(T\), #mini-batches \(M\) of \(N\) samples, schedule \(\eta\), priors \(p(\mathbf{x})\,,p(y)\) 1:procedureCAKE(\(f^{\mathsf{T}},T,M,N,\eta,p(\mathbf{x})\,,p(y)\)) 2:for\(m=1\) to \(M\)do\(\triangleright\) Number of mini-batches 3: Initialize \(\widetilde{\mathcal{D}}_{m}^{t=0}\leftarrow\left\{\left(\widetilde{\mathbf{x}}_{1} ^{t=0},\widetilde{y}_{1}\right),\ldots,\left(\widetilde{\mathbf{x}}_{N}^{t=0}, \widetilde{y}_{N}\right)\right\}\), where \(\widetilde{\mathbf{x}}_{i}^{t=0}\sim p(\mathbf{x})\) and \(\widetilde{y}_{i}\sim p(y)\) 4:for\(i=1\) to \(N\)do\(\triangleright\) Number of synthetic samples per mini-batch 5:for\(t=1\) to \(T\)do\(\triangleright\) Number of iterations 6:\(\mathbf{z}^{\mathsf{T}}\leftarrow f^{\mathsf{T}}\Big{(}\widetilde{\mathbf{x}}_{i}^{t} \Big{)}\)\(\triangleright\) Forward pass through teacher 7:\(l\leftarrow\mathcal{L}\Big{(}\widetilde{\mathbf{x}}_{i}^{t},\mathbf{z}^{\mathsf{T}}, \widetilde{y}_{i},\widetilde{\mathcal{D}}_{m}^{t}\Big{)}\)\(\triangleright\) Compute extraction loss 8:\(\widetilde{\mathbf{x}}_{i}^{t+1}\leftarrow\widetilde{\mathbf{x}}_{i}^{t}-\eta(m)\, \nabla_{\mathbf{x}}l\)\(\triangleright\) Update synthetic samples 9:return\(\widetilde{\mathcal{D}}=\bigcup_{m=1}^{M}\widetilde{\mathcal{D}}_{m}^{T}\) ``` **Algorithm 1** Contrastive Abductive Knowledge Extraction For completeness, we lay out the full CAKE procedure in Algorithm 1. Conceptually, all synthetic samples could be generated in parallel. 
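Read top to bottom, Algorithm 1 amounts to a short optimization loop. A minimal Python sketch is given below; the priors, the combined loss, and the step-size schedule are placeholders for the choices described above, and the inner per-sample loop is vectorized over a mini-batch for brevity.

```python
import torch

def cake_synthesize(teacher, M, N, T, eta, sample_prior, label_prior, extraction_loss):
    """Sketch of Algorithm 1: per mini-batch, draw noisy samples and labels from
    priors, then push them toward the teacher's decision boundary for T steps."""
    synthetic = []
    for m in range(M):                                  # number of mini-batches
        x = sample_prior(N)                             # x_i^{t=0} ~ p(x)
        y = label_prior(N)                              # y_i ~ p(y)
        for _ in range(T):                              # diffusion iterations
            x = x.detach().requires_grad_(True)
            logits = teacher(x)                         # forward pass through teacher
            loss = extraction_loss(x, logits, y)        # weighted mix of L_KD, L_contr, L_TV
            grad = torch.autograd.grad(loss, x)[0]
            x = (x - eta(m) * grad).detach()            # scheduled update step (line 8)
        synthetic.append((x, y))
    return synthetic
```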
However, due to both practical compute and memory constraints, and to make the injection of noise more intuitive, the algorithm outlines the generation of \(M\) sets of synthetic sample "mini-batches". For each mini-batch \(\widetilde{\mathcal{D}}_{m}\), \(N\) random synthetic samples and labels \((\widetilde{\mathbf{x}}_{i}^{t=0},\widetilde{y}_{i})\) are initialized, where \(\widetilde{\mathbf{x}}_{i}^{t=0}\) and \(\widetilde{y}_{i}\) are drawn from priors of our choice \(p(\mathbf{x})\,,p(y)\). Subsequently, the algorithm iterates over the \(N\) synthetic samples per mini-batch, and for each sample, it performs \(T\) iterations. Within each iteration, the samples \(\widetilde{\mathbf{x}}_{i}^{t}\) are fed through the teacher \(f^{\mathsf{T}}\) to obtain logits \(\mathbf{z}^{\mathsf{T}}\). Then, we compute the extraction loss \(l\) as a weighted mixture of \(\mathcal{L}_{\text{KD}},\mathcal{L}_{\text{contr}}\), and \(\mathcal{L}_{\text{TV}}\). An update step with scheduled step size \(\eta(m)\) is performed on the synthetic sample as specified in Section 3.2. The algorithm ultimately returns the union of all synthetic mini-batches, \(\widetilde{\mathcal{D}}=\bigcup_{m=1}^{M}\widetilde{\mathcal{D}}_{m}^{T}\). We can then proceed and train a student model on the newly synthesized dataset. In CAKE, we argue that the necessary noise can be induced intuitively as a function of the current mini-batch. Accordingly, the main change in LAKE is to replace line 8 with the earlier Eq. (3). ## 4 Ablation Studies on CIFAR To highlight the contributions of the design elements introduced in Sections 3.1 to 3.3 and to corroborate their utility beyond the two-dimensional Fig. 1, we start with ablation studies on CIFAR-10. Table 1 shows respectively obtained student accuracies for CAKE and the LAKE variant for a ResNet-34 [12] teacher and smaller ResNet-18 student, both trained for 30 epochs on batches of size 256 with SGD and a learning rate of 0.5 scheduled with OneCycleLR [30]. We extract 500 mini-batches of 256 samples with loss weights \(\lambda_{\text{contr}}=1e1,\lambda_{\text{cls}}=1e3\), and \(\lambda_{\text{TV}}=1e5\) for 256 iterations and an initial step size of 0.1. Further details follow standard practice and are provided in the appendix. As described in earlier sections, we introduce noise by linearly decaying the step size across four magnitudes for the sample synthesis in CAKE and through Langevin diffusion in LAKE. We can observe that the baseline, where synthetic samples are generated solely by maximizing cross-entropy, delivers a student accuracy of only 28.0% for LAKE and 15.6% for CAKE, demonstrating the necessity of additional synthetization terms. The addition of the knowledge distillation loss (\(\mathcal{L}_{\text{KD}}\)) improves the performance for both LAKE and CAKE, increasing student accuracy to 34.8% and 19.2% respectively, indicating that the use of teacher-based soft labels is not as effective when applied to synthetic data compared to original training data. However, the contrastive loss \(\mathcal{L}_{\text{contr}}\) presents a more nuanced scenario. Whereas it significantly enhances the performance of CAKE to 39.7%, it seems to slightly degrade LAKE's performance to 24.7% when taken on its own, suggesting that \(\mathcal{L}_{\text{contr}}\) is more beneficial in the absence of noise.
However, when \(\mathcal{L}_{\text{KD}}\) and \(\mathcal{L}_{\text{contr}}\) are combined, both LAKE and CAKE exhibit improved performance, with student accuracy reaching 36.8% and 42.5% respectively, illustrating the complementary nature of these two loss components. The final addition of \(\mathcal{L}_{\text{TV}}\) as a means to induce prior knowledge about the possible structure of image-based data leads to large improvements in performance of both LAKE and CAKE, resulting in student accuracies of 58.6% and 71.0%. \begin{table} \begin{tabular}{l c c} \hline \hline & \multicolumn{2}{c}{Student Accuracy} \\ \cline{2-3} Setting & LAKE & CAKE \\ \hline baseline & 28.0 \(\pm\) 3.16 & 15.6 \(\pm\) 4.15 \\ \(+\mathcal{L}_{\text{KD}}\) & 34.8 \(\pm\) 5.56 & 19.2 \(\pm\) 4.91 \\ \(+\mathcal{L}_{\text{contr}}\) & 24.7 \(\pm\) 2.32 & 39.7 \(\pm\) 7.27 \\ \(+\mathcal{L}_{\text{KD}}+\mathcal{L}_{\text{contr}}\) & 36.8 \(\pm\) 3.02 & 42.5 \(\pm\) 8.13 \\ \(+\mathcal{L}_{\text{KD}}+\mathcal{L}_{\text{contr}}+\mathcal{L}_{\text{TV}}\) & 58.6 \(\pm\) 4.02 & 71.0 \(\pm\) 3.75 \\ \hline \hline \end{tabular} \end{table} Table 1: Ablation analysis of CAKE & LAKE in distilling a ResNet-34 to a ResNet-18 on CIFAR-10, highlighting that inclusion of each individual component is meaningful to the overall performance of our method. Figure 2: Student model accuracy on CIFAR-10 (y-axis) when trained on synthetic data distilled from ResNet teacher models of different depths. Each group of bars corresponds to a ResNet teacher model of a particular depth (x-axis), and each bar within a group shows the accuracy of the student model distilled from that teacher model, along with its standard deviation as error bars. As desired, CAKE can compress models at a stable accuracy until capacity is too heavily constrained. ## 5 CAKE Enables Students to Mimic Teachers Across Various Scales Following the spirit of the original knowledge distillation paper's experiments and goals [14], we investigate CAKE's ability for model compression on CIFAR-10. To this end, we employ the well-known family of ResNet models with varying depths as both teachers and students. Specifically, we consider the following depths, with the number of parameters denoted in millions in brackets: 152 (58.2M), 101 (43.5M), 50 (23.5M), 34 (21.3M), 18 (11.2M), and 4 (1.2M). Fig. 2 shows respective groups of bars for achieved test accuracy for each teacher depth on the x-axis, where the hatched bars (with further explicit "teacher" bar label) show the respectively sized teacher's performance. All other bars quantify students of varying depths. For convenience, student depth is provided explicitly as the bar label and an applied shading further highlights smaller models through lighter shading. From the obtained accuracies, it becomes evident that CAKE displays stable performance across various teacher-student model capacities. Most importantly, even smaller student models (ResNet-18, ResNet-34) exhibit competitive performance when distilled from deeper teachers, indicating the effectiveness of CAKE in compressing knowledge of previously overparametrized models. As such, if the teacher model's complexity decreases too much, here in the case of ResNet-4, the student models also feature a significant drop in accuracy, irrespective of their depth. Naturally, this suggests that the lower capacity of the teacher model limits the quality of knowledge it can provide, which the student cannot recover at any capacity. 
Not surprisingly, a student model with very limited capacity suffers from bottlenecked information and amplifies performance degradation when knowledge is distilled, resulting in expected inferior performance across all examined teacher model capacities. ## 6 CAKE Transfers Knowledge Across Model Types A key advantage of CAKE does not only lie in its effectiveness without access to original data but also in the fact that there are no imposed constraints on model architectures or required intermediate model values for distillation. Much in contrast to the earlier mentioned prior works that require models to be of the same type or functioning on the premise of batch normalization layers, we are thus free to distill knowledge between a teacher and student model of different types. In fact, our sole requirement is that a model API implements a black box differentiable "forward" and a "backward" pass, where it is sufficient to simply obtain the final input gradient without any in-between states. In the following, we thus investigate the performance of CAKE between four popular neural network types: 1) _Multi-layer Perceptrons (MLP)_, 2) _Convolutional Neural Networks (CNN)_, 3) _ResNet-4_, and 4) _Vision Transformer (ViT)_[6]. For fair comparison, we have matched the models' parameter amounts, see appendix for details. Fig. 3 shows the result for _across-model type_ distillation on MNIST, combining every model type with every other. Each group on the x-axis describes a set of Figure 3: Performance of different student models distilled from teacher models of various model types trained on MNIST: CNNs, MLPs, ResNets, and ViTs (parameter amounts are set to be similar). Each group of bars corresponds to a particular teacher type and each bar within a group shows the accuracy of a particular type of distilled student model, along with its standard deviation as error bars (5 trials). Overall, matching model types consistently provides good results, whereas distillation across types seems to work if the teacher has less inductive bias than the student. experiments with a specific teacher type, where the teacher's accuracy is hatched. The intra-group bars represent student results when trained on the synthetic samples of the particular teacher. Overall, we find that distillation from an MLP to any other model is effective across the board, while the distillation performance from a ResNet to other models is generally poor. Importantly, the distillation performance is notably robust when both the teacher and student models share the same model type. Following these results, we first emphasize that our chosen models have all been roughly matched in terms of overall parameter amount and achieve negligibly similar teacher test accuracies. Thus, we hypothesize that when teacher and student models _have similar inductive biases, the distillation process tends to be most effective_. In addition, as observable in the case of MLPs that have less inductive biases than the other contenders, it seems that the majority of students can also excel as they are unrestricted in forming their own auxiliary assumptions. Here, the only exception is the ViT, for which distillation results are mixed and consistently underperform, unless the teacher is also a ViT. We further conjecture that this outcome could be attributed to the fundamentally distinct manner in which inputs are fed into the model, specifically, the tokenization into sequences in ViTs. However, most importantly, as we track Fig. 
3 to the right, our analysis thus suggests a rather simple rule of thumb. _When in doubt of the teacher's type, choosing a ResNet model appears to be a safe choice_, as it provides stable performance independently of the source model. \begin{table} \begin{tabular}{l c c c c c c c} Method & DF & MA & Dataset & Teacher & Acc. & Student & Acc. \\ \hline \multirow{3}{*}{KD [14]} & \multirow{3}{*}{\(\mathcal{X}\)} & \multirow{3}{*}{\(\mathcal{X}\)} & MNIST & LeNet-5 & 99.3 & LeNet-5-Half & 98.8 \\ & & & FMNIST & LeNet-5 & 90.8 & LeNet-5-Half & 89.7 \\ & & & CIFAR-10 & ResNet-34 & 95.6 & ResNet-18 & 94.3 \\ \hline \multirow{3}{*}{DAFL [3]} & \multirow{3}{*}{\(\mathcal{X}\)} & MNIST & LeNet-5 & 97.9 & LeNet-5-Half & 97.6 \\ & & & SVHN & WResNet-40-2 & 95.9 & WResNet-16-1 & 94.3 \\ & & & CIFAR-10 & ResNet-34 & 93.7 & ResNet18 & 90.4 \\ \hline \multirow{3}{*}{RDSKD [10]} & \multirow{3}{*}{\(\mathcal{X}\)} & MNIST & LeNet-5 & 97.9 & LeNet-5-Half & 97.6 \\ & & & SVHN & WResNet-40-2 & 95.9 & WResNet-16-1 & 94.6 \\ & & & CIFAR-10 & ResNet-34 & 93.7 & ResNet18 & 90.8 \\ \hline \multirow{3}{*}{DI [38]} & \multirow{3}{*}{\(\mathcal{Y}\)} & CIFAR-10 & VGG-11 & 92.3 & VGG-11 & 84.2 \\ & & CIFAR-10 & VGG-11 & 92.3 & ResNet-18 & 83.8 \\ & & & CIFAR-10 & ResNet-34 & 95.4 & ResNet-18 & 91.4 \\ \hline \multirow{3}{*}{ADI [38]} & \multirow{3}{*}{\(\mathcal{Y}\)} & CIFAR-10 & VGG-11 & 92.3 & VGG-11 & 90.8 \\ & & CIFAR-10 & VGG-11 & 92.3 & ResNet-18 & 90.7 \\ & & CIFAR-10 & ResNet-34 & 95.4 & ResNet-18 & 93.3 \\ \hline \multirow{3}{*}{GD [27]} & \multirow{3}{*}{\(\mathcal{Y}\)} & SVHN & ResNet-18 & 94.5 & MobileNetV2 & 92.9 \\ & & CIFAR-10 & ResNet-34 & 93.3 & ResNet-18 & 86.0 \(\pm\) 0.12 \\ & & & CIFAR-10 & ResNet-34 & 93.3 & ResNet-34 & 87.1 \(\pm\) 0.23 \\ \hline \hline \multirow{3}{*}{DD [24]} & \multirow{3}{*}{\(\mathcal{Y}\)} & CIFAR-10 & VGG-11 & 92.3 & VGG-11 & 36.6 \\ & & CIFAR-10 & VGG-11 & 92.3 & ResNet-18 & 39.7 \\ & & CIFAR-10 & ResNet-34 & 95.4 & ResNet-18 & 30.0 \\ \hline \multirow{3}{*}{CAKE} & \multirow{3}{*}{\(\mathcal{Y}\)} & MNIST & LeNet-5 & 99.3 \(\pm\) 0.12 & LeNet-5-Half & 96.3 \(\pm\) 3.55 \\ & & FMNIST & LeNet-5 & 91.0 \(\pm\) 0.12 & LeNet-5-Half & 57.8 \(\pm\) 4.67 \\ & & SVHN & LeNet-5 & 89.8 \(\pm\) 0.38 & LeNet-5-Half & 62.9 \(\pm\) 4.17 \\ & & ✓ & SVHN & ViT-8 & 94.4 \(\pm\) 0.13 & ViT-4 & 83.7 \(\pm\) 4.77 \\ & & SVHN & ResNet-34 & 96.1 \(\pm\) 0.08 & ResNet-18 & 94.2 \(\pm\) 0.54 \\ & & CIFAR-10 & ViT-8 & 73.2 \(\pm\) 0.76 & ViT-4 & 53.8 \(\pm\) 5.63 \\ & & CIFAR-10 & ResNet-34 & 91.8 \(\pm\) 0.11 & ResNet-18 & 78.9 \(\pm\) 2.59 \\ \hline \end{tabular} \end{table} Table 2: Comparison of knowledge distillation techniques, presenting teacher and student model accuracies, to highlight that CAKE is effective despite lifting typical model constraints (MA: model-agnostic) and requiring no data access (DF: data-free). Note that standard deviations are typically not reported in the literature, obfuscating potential volatility in reproduction. ## 7 CAKE Parallels Performance of Tailored Distillation Methods Having evaluated the key factors contributing to CAKE's performance, its efficacy in model compression, and its ability to distill across diverse model types, we now position CAKE within the larger context of existing techniques. As discussed in Section 2, these methods often require access to original data, are tailored to specific models, or impose both conditions. We include a wide set of techniques, their assumptions, and performances on several datasets in Table 2.
Despite all other techniques imposing strict requirements on model type and data availability, our results show compelling evidence that CAKE can effectively lift assumptions with little to no performance detriment. For both MNIST (LeNet-5 to LeNet-5-Half) and SVHN (ResNet-34 to ResNet-18) settings, CAKE achieves comparable student accuracy to other techniques, despite the latter requiring data access and/or being model-specific. In the CIFAR-10 scenario (ResNet-34 to ResNet-18), we attain a student accuracy of 78.9%, almost matching techniques with data access and additional model assumptions with a mere 10%-15% gap. Remarkably, on CIFAR-10, CAKE outperforms DeepDream (30.0% for ResNet-34 to ResNet-18 and 39.7% for VGG-11 to ResNet-18), the only other truly data-free and model-agnostic technique, by a factor of two. ## 8 Discussion We have shown that CAKE can effectively transfer abductive knowledge between models of various capacities as well as models of entirely different types, despite the fact that CAKE lifts previous standard assumptions on models and requirements on original data access. In light of these results, we challenge the current de facto standard of requiring original training data or making model assumptions. Already now, this entails a host of highly interesting future applications of societal significance, as well as an even greater set of prospects once remaining limitations are lifted. **Future Work** As CAKE's design lifts the requirement of original data access and simultaneously removes unnecessary model constraints, it now opens up a plethora of future applications and research directions. On the one hand, these lie in performance improvements to our initial intuitive approach. For instance, we can now further make use of the wide array of improved contrastive formulations [7; 31; 33], improvements for diffusion processes, or leverage adaptive signals to dynamically steer the distillation process based on the student's performance in the spirit of curriculum learning [34]. On the other, even more exciting, hand, CAKE's main premise of extracting abductive knowledge also entails that the data distribution is not closely mimicked. This implies that generated synthetic samples do not resemble original data. In fact, as depicted in Fig. 4 for three datasets, samples look rather noisy. Intuitively, they seem to look more like commonly found adversarial attacks [16], but note that our synthetic data does not serve the same purpose of tricking a classifier into misclassification by perturbing original data. This characteristic opens up intriguing possibilities in the context of privacy-sensitive applications, such as scenarios involving medical data, continual learning, and federated learning. In general, CAKE's ability to distill knowledge without data resemblance could be invaluable for applications where data privacy and confidentiality are paramount. **Limitations** Although CAKE already yields promising results without common assumptions, we see two remaining limitations to be lifted in the future. First, as highlighted in the previous paragraph, our current distillation process operates independently of the student model. This results in a lack of direct measures to estimate the quality of the synthetic dataset during the distillation phase, potentially limiting the effectiveness of the distillation and the resulting student model's performance.
Second, although we assume no access to original data, impose no model constraints, and require no access to intermediate model values, we do nevertheless still require a callable backward function. This does not yet allow CAKE to be used in scenarios where a model is hosted with an API that only allows forward evaluation calls, a limitation we foresee to be overcome through a transition to e.g. the very recent and concurrently proposed forward-forward algorithm [13]. Figure 4: Synthetic samples generated from a ResNet teacher by CAKE on various datasets, demonstrating no visual resemblance with original training data. **Societal Impact** We raise awareness that model-agnostic abductive knowledge extraction without training data access may be misused to inappropriately extract knowledge from proprietary or confidential models, thereby leading to potential violations of privacy or intellectual property theft. While amicable use cases for e.g. continual or federated learning exist, this also simultaneously highlights the importance of conducting further research into securing our public models. In particular, we foresee that the above-mentioned final limitation of requiring a backward API call may be lifted in the foreseeable future, exposing a crucial issue with current models. Finally, distillation methods will inadvertently mimic existing biases in the teacher model, perpetuating or even exacerbating unfairness or discrimination, potentially making efforts towards data transparency even more challenging. **Acknowledgements** This work was supported by the Federal Ministry of Education and Research (BMBF) Competence Center for AI and Labour ("kompAKI", FKZ 02L19C150) and the project "safeFBDC - Financial Big Data Cluster" (FKZ: 01MK21002K), funded by the German Federal Ministry for Economics Affairs and Energy as part of the GAIA-x initiative. It benefited from the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK; projects "The Third Wave of AI" and "The Adaptive Mind"), and the Hessian research priority programme LOEWE within the project "WhiteBox".
2301.09162
Deep Reinforcement Learning for Concentric Tube Robot Path Following
As surgical interventions trend towards minimally invasive approaches, Concentric Tube Robots (CTRs) have been explored for various interventions such as brain, eye, fetoscopic, lung, cardiac and prostate surgeries. Arranged concentrically, each tube is rotated and translated independently to move the robot end-effector position, making kinematics and control challenging. Classical model-based approaches have been previously investigated with developments in deep learning based approaches outperforming more classical approaches in both forward kinematics and shape estimation. We propose a deep reinforcement learning approach to control where we generalise across two to four systems, an element not yet achieved in any other deep learning approach for CTRs. In this way we explore the likely robustness of the control approach. Also investigated is the impact of rotational constraints applied on tube actuation and the effects on error metrics. We evaluate inverse kinematics errors and tracking error for path following tasks and compare the results to those achieved using state of the art methods. Additionally, as current results are performed in simulation, we also investigate a domain transfer approach known as domain randomization and evaluate error metrics as an initial step towards hardware implementation. Finally, we compare our method to a Jacobian approach found in literature.
Keshav Iyengar, Sarah Spurgeon, Danail Stoyanov
2023-01-22T17:11:54Z
http://arxiv.org/abs/2301.09162v4
# Deep Reinforcement Learning for Concentric Tube Robot Path Planning ###### Abstract As surgical interventions trend towards minimally invasive approaches, Concentric Tube Robots (CTRs) have been explored for various interventions such as brain, eye, fetoscopic, lung, cardiac and prostate surgeries. Arranged concentrically, each tube is rotated and translated independently to move the robot end-effector position, making kinematics and control challenging. Classical model-based approaches have been previously investigated with developments in deep learning based approaches outperforming more classical approaches in both forward kinematics and shape estimation. We propose a deep reinforcement learning approach to control where we generalise across two to four systems, an element not yet achieved in any other deep learning approach for CTRs. In this way we explore the likely robustness of the control approach. Also investigated is the impact of rotational constraints applied on tube actuation and the effects on error metrics. We evaluate inverse kinematics errors and tracking error for path following tasks and compare the results to those achieved using state of the art methods. Additionally, as current results are performed in simulation, we also investigate a domain transfer approach known as domain randomization and evaluate error metrics as an initial step towards hardware implementation. Finally, we compare our method to a Jacobian approach found in literature. Kinematics, Path Planning, Reinforcement Learning, Concentric Tube Robots ## I Introduction Concentric tube robots (CTRs) are a class of continuum robot that depend on the interactions between neighbouring, concentrically aligned tubes to produce the curvilinear shapes of the robot backbone [1]. The main application of these unique robots is that of minimally invasive surgery (MIS), where most of the developments for CTRs have been focused. MIS has trended towards semi-autonomous and autonomous robotic surgery to improve surgical outcomes [2, 3]. Due to the confined workspaces and resulting extended learning times for surgeons in MIS, dexterous, compliant continuum robots such as CTRs have been under development in preference to the mechanically rigid and limited degrees-of-freedom (DOF) robots used in interventional medicine today. The precurved tubes of CTRs, which are sometimes referred to as active cannulas or catheters, are manufactured from super-elastic materials such as Nickel-Titanium alloys with each tube nested concentrically [4]. From the base, the individual tubes can be actuated through extension, as seen in Fig. 1, which results in the bending and twisting of the backbone as well as access to the surgical site through the channel and robot end-effector. CTRs are motivated clinically for use in brain, lung, cardiac, gastric and other surgical procedures where tool miniaturization and non-linear tool path trajectories are beneficial [6]. Particularly, they have been investigated as steerable needles and surgical manipulators. As steerable needles, they substitute for traditional steerable needles with higher precurvature and dexterity. They are also actuated with a follow the leader approach, where every point along the robot's backbone traces the same path as the tip. As surgical manipulators, the benefit of using CTRs is the large number of design parameters that allow for patient and procedure specific CTR designs through optimization for surgical requirements and design heuristics [7, 8]. 
In general however, CTRs, with their increased DOFs and miniaturization potential, may be beneficial with their flexibility to reach a larger section of a surgical site with a working channel through the tubes for irrigation, ablation or other tools. Fig. 1: (a) Real CTR system with tubes adapted from [5] and (b) illustration with tubes, actuation and robot workspace. Thus, it is key to control the robot tip position to desired Cartesian points in the robot workspace accurately. However, due to tube interactions, modelling and control are challenging. Position control for CTRs has relied on model development, and although a balance between computation and accuracy has been reached in the literature [9], there remain issues such as performance in the presence of tube parameter discrepancies and the impact of unmodelled physical phenomena such as friction and permanent plastic deformation. This motivates the development of an end-to-end model-free control framework for CTRs. One such model-free framework for robotic control that is gaining popularity is reinforcement learning (RL), a paradigm of machine learning that deploys an agent to output an action that interacts with an environment [10]. The environment then processes this action, and returns a new state and, depending on the task, a reward signal. One parallel of RL is that of a control system. The agent is equivalent to the controller, the actions are equivalent to the control actions, the state is equivalent to any measurable signal from the plant, the reward is equivalent to a performance metric (e.g. minimizing steady-state error) and lastly, the learning algorithm is equivalent to the adaptive mechanism of the controller. Deep reinforcement learning (DeepRL) combines deep learning and RL and allows for high-dimensional states and actions traditionally not available to RL algorithms or classical control. In this work, we expand on the previously published literature on utilizing DeepRL for control of CTRs [11]. Specifically, the aim is to control the end-effector Cartesian robot tip position with a DeepRL agent by means of actions that represent changes in joint values. The state includes the desired goal at the current step, allowing for more complex control tasks such as path following as well as inverse kinematics. In Section II, we review the Jacobian approach [12] as well as the state of the art inverse kinematics error metrics. The model reviewed is also used in simulation for training and evaluation for our DeepRL method. In Section III, the main components of our DeepRL approach for CTRs are described. Improvements such as exploring constrained and constraint-free rotation as well as generalizing the policy to accommodate multiple CTR systems are introduced. For constraint-free rotation, we investigate how constraining the rotational degree of freedom for CTRs affects overall error metrics. Also introduced is a novel end-to-end CTR generalized DeepRL policy, which to our knowledge, is the first work in generalizing over tube parameters with a model-free framework for CTRs. In Section IV, results validating these improvements as well as error comparisons to previous tip-tracking and inverse kinematics methods are presented. Finally, we discuss strategies for translation to hardware including domain transfer and initial experiments for a domain transfer strategy known as domain randomization. The contributions evolved from our previous work [11] may be summarised as: 1. Investigating constrained and constraint-free tube rotation. 2.
Development of an initial proof-of-concept system-generic policy for CTRs. 3. Details of a pathway and initial simulation results for strategies towards hardware translation. ## II Related Work Over the last few years, deep learning approaches have become popular for kinematic and dynamic estimation of CTRs. The first deep learning approach was by Bergeles et al. [13] for forward and inverse kinematics, performed in simulation where a simple extension and rotation representation was used. However, tube rotation representation in the network resulted in ambiguity in inverse kinematics solutions. In later work by Grassman et al. [14], by improving the joint representation, errors of \(2.8\%\) of robot length were achieved, albeit in a limited region of the workspace and in simulation. More recently, work on shape estimation and shape to joint input estimation has been investigated [15, 16] using deep neural networks. Finally, Donat et al. [17] introduced tip contact force estimation based on backbone deflection using a data driven approach via deep direct cascade learning (DDCL). With tip error represented as a percentage of robot backbone length, more traditional optimization-based methods and inverse Jacobian methods have been found to have a tip tracking error of \(3.2\%\) and \(2.5\%\). State-of-the-art closed-loop control methods can achieve errors of \(0.9\%\) and \(0.5\%\) [18]. With active constraints and use of a model-predictive closed loop controller, tip errors of \(0.3-0.5\%\) [19] have also been reported. The Jacobian-based controllers which are common in literature [18, 20] can be used in a closed-loop fashion to perform path following with a CTR. A similar comparison was done in [19]. Given a control input or desired change in joint values \(\dot{q}_{d}\), the desired change in Cartesian space \(\dot{x}_{d}\) and desired position in Cartesian space \(x_{d}\), a positive semi-definite matrix \(K_{p}\) and the pseudo-inverse Jacobian \(J^{\dagger}\), we can define a control law such that \[\dot{q}_{d}=J^{\dagger}\left[\dot{x}_{d}+K_{p}\left(x_{d}-x\right)\right]. \tag{1}\] Moreover, the pseudo-inverse can become very sensitive to singularities so a damping factor \(\Lambda\) is added such that \(J^{\dagger}=(J^{T}J+\Lambda^{2}I)^{-1}J^{T}\). As shown in [19], this method does not account for joint limits, resulting in failed trajectories. Although learning-based approaches have been well developed and have had success for forward kinematics, force estimation and shape estimation, inverse kinematics and control using deep learning has remained an open problem for CTRs. Given that the deep learning based forward kinematics [14] and shape estimation [15] report better error metrics than their physics-based model comparisons, investigating a deep learning based approach for inverse kinematics and control could be beneficial. To this end, we have investigated the use of DeepRL for CTRs. Our initial work [21] investigated the exploration problem for CTRs with simpler constant-curvature dominant-stiffness kinematics for simulation. The exploration problem stems from previous analysis for workspace characterization [22] that has shown the bias associated with uniform joint sampling for CTR workspaces. Due to the constraints on extension from the actuation perspective, full extension and retraction are less likely to be sampled.
Since DeepRL methods rely on the experiences collected during training episodes as determined by the agent's actions, if full extension or retraction joint values are not sampled, kinematics and control in the those areas of the workspace will not be accurate. To mitigate this, noise is usually added to the selected actions or policy network to explore the state space. Our initial work determined that applying separate noise to the rotation and extension joints was crucial in acquiring a policy with accurate control. More recently, we improved the DeepRL approach by using a more accurate geometrically exact kinematics model [9] for simulation, investigating joint representations and applying a reward curriculum to improved sample efficiency (faster policy convergence with less data) for training. Two joint representations and three curricula functions were evaluated. The curricula were used to determine the goal tolerance during training steps, a novel approach for DeepRL to our best knowledge. The egocentric representation with the decay curriculum performed best overall in terms of sample efficiency and error metrics. To demonstrate policy robustness, a second noise-induced simulation was created where Gaussian noise was added to the join values as encoder noise and end-effector position as tracking noise. Training was then performed on the noise-induced and noise-free simulation. Performing evaluations on a noise-induced simulation, a slight improvement was seen in policy trained with the noise-induced simulation. The main takeaway from these experimental results was that the policy learned can incorporate some amount of noise in the state, and still perform adequately. To begin, we first introduce the state, action and reward definitions for the CTR control problem as required for any RL-based method, the joint representations, specifically proprioceptive and egocentric and how these representations affect training, the novel goal tolerance based curriculum for training a policy as well as main results and conclusions. We then expand to constraint-free rotation to significantly improve error results with the aim of generating results across multiple CTR systems. Finally, to motivate transfer to hardware, we provide an initial experiment for domain transfer to hardware using domain randomization in simulation. ## III Methods In RL, Markov Decision Processes (MDPs) define mathematically the agent's task. Importantly, it defines the key elements for changing the state, the associated rewards and the actions that affect the state to achieve the task. ### _Markov Decision Process Formulation_ In the following, the state, action, reward, and goals are defined. State (\(s_{t}\)) : The state at timetstep \(t\), is defined as the concatenation of the trigonometric joint representation [14], Euclidean norm between the current desired position and desired position and current goal tolerance. As shown in Fig. 2, rotation and extension of tube \(i\) (ordered innermost to outermost) are \(\alpha_{i}\) and \(\beta_{i}\) with \(L_{i}\) representing the full length. First, the trigonometric representation, \(\gamma_{i}\), of tube \(i\) is defined as: \[\gamma_{i}=\{\gamma_{1,i},\gamma_{2,i},\gamma_{3,i}\}=\{\cos(\alpha_{i}),\sin( \alpha_{i}),\beta_{i}\}. \tag{2}\] The rotation can be retrieved by taking the arc-tangent \[\alpha_{i}=\mathrm{arctan2}(\gamma_{2,i},\gamma_{1,i}). 
\tag{3}\] The extension joint \(\beta_{i}\) can be retrieved directly and has constraints \[0\geq\beta_{3}\geq\beta_{2}\geq\beta_{1} \tag{4}\] \[0\leq L_{3}+\beta_{3}\leq L_{2}+\beta_{2}\leq L_{1}+\beta_{1} \tag{5}\] from the actuation side. In our previous work, the rotation was constrained from \([-180^{\circ},180^{\circ}]\), which was not required in the trigonometric representation, as will be shown with constraint-free rotation. The Cartesian goal error is the current error of the achieved end-effector position \(G_{a}\), and desired end-effector position \(G_{d}\). Lastly, the current goal tolerance, \(\delta(t)\) is included in the state where \(t\) is the current timestep \(t\) of training. The full state, \(s_{t}\), can then be defined as: \[s_{t}=\{\gamma_{1},\gamma_{2},\gamma_{3},G_{a}-G_{d},\delta(t)\}. \tag{6}\] Action (\(a_{t}\)) : Actions are defined as a change in rotation and extension joint positions: \[a_{t}=\{\Delta\beta_{1},\Delta\beta_{2},\Delta\beta_{3},\Delta\alpha_{1}, \Delta\alpha_{2},\Delta\alpha_{3}\}. \tag{7}\] The maximum action in extension and rotation are set to \(1.0\) mm and \(5^{\circ}\). Goals (\(G_{a}\), \(G_{d}\)) : Goals are defined as Cartesian points within the workspace of the robot. There is the achieved goal, \(G_{a}\) and desired goal, \(G_{d}\) where the achieved goal is determined with the forward kinematics of the kinematics model used and is recomputed at each timestep as the joint configuration changes from the selected actions from the policy. The desired goal updates at the start of every episode where a desired goal is found by sampling valid joint configurations in the workspace and applying forward kinematics of the model. However, this is not uniform in Cartesian space and requires action exploration. Rewards (\(r_{t}\)) : The reward is a scalar value returned by the environment as feedback for the chosen action by the agent at the current timestep. In this work, sparse rewards are used as they have been shown to be more effective than dense Fig. 2: Joint variables \(\beta\) and \(\alpha\) of a 3 tube CTR where \(L\) is the overall length. \(s\) is the arc-length or axis along the backbone. rewards when using hindsight experience replay (HER) [23]. The reward function used in this work is defined as: \[r_{t}=\begin{cases}0&\text{if }e_{t}\leq\delta(t)\\ -1&\text{otherwise}\end{cases} \tag{8}\] where \(e_{t}\) is the Euclidean distance \(||G_{a}-G_{d}||\) at timestep \(t\) and \(\delta(t)\) is the goal-based curriculum function that determines the goal tolerance at training timestep \(t\). The workspace and various state and reward elements are illustrated in Fig. 3. ### _Goal-Based Curriculum_ In our previous work, we introduced a novel goal based curriculum that reduces the goal tolerance through training steps to improve error convergence and overall error convergence. Linear and exponential decay goal-based curriculum along with a baseline constant curriculum function were compared with combinations of proprioceptive and egocentric joint representations. Each curriculum reduces the goal tolerance as a function of timestep \(t\), number of timesteps to apply the function, \(N_{ts}\), the initial tolerance, \(\delta_{initial}\) and final tolerance, \(\delta_{final}\). 
The linear curriculum is defined as \[\delta_{linear}(t) =at+b \tag{9}\] \[a =\frac{\delta_{final}-\delta_{initial}}{N_{ts}}\] \[b =\delta_{initial},\] and the exponential decay curriculum is defined as \[\delta_{decay}(t) =a(1-r)^{t} \tag{10}\] \[a =\delta_{initial}\] \[r =1-\left(\frac{\delta_{final}}{\delta_{initial}}\right)^{\frac{1 }{N_{ts}}}.\] The values used for these various parameters can be found in [11]. ### _Joint Representation_ To improve learning sample efficiency, joint representations were investigated. Specifically, proprioceptive (absolute) and egocentric (relative) joint representations where the reference of measure for each joint position is changed respectively. Although proprioceptive representations are used often in classical control for robotics, egocentric joint representations are utilized heavily in reinforcement learning control simulation environments like the DeepMind control suite [24]. In proprioceptive or absolute joint representation all the joints are referenced from a common base reference. This is illustrated for rotations in Fig. 4a. However, in egocentric or relative joint representations, only the inner tube is referenced from the base shown in Fig. 4b. The next outer tube is referenced from the previous inner tube and so forth. This can be used for both rotation and extension joints. \[\alpha_{ego} =\{\alpha_{1},\alpha_{2}-\alpha_{1},\alpha_{3}-\alpha_{2}\} \tag{11}\] \[=\{\alpha_{1},\Delta\alpha_{2-1},\Delta\alpha_{3-2}\}\] and extensions \[\beta_{ego}=\{\beta_{1},\beta_{2}-\beta_{1},\beta_{3}-\beta_{2}\} \tag{12}\] To retrieve the absolute joint representation, the cumulative sum is taken as shown below: \[\alpha_{prop}=\{\alpha_{1},\Delta\alpha_{2-1}+\alpha_{1},\Delta\alpha_{3-2}+ \Delta\alpha_{2-1}+\alpha_{1}\} \tag{13}\] ### _CTR Simulation Environment_ To collect data and experiences for the deepRL algorithm to learn a policy, a simulation environment for kinematics was developed following the openAI gym framework [25]. The environment takes a set of tube parameters describing a CTR system, joint configuration and selected actions by the agent to determine the overall shape of the CTR. For DeepRL, a large number of experiences are needed to train a policy, so a "sweet spot" or relatively fast computationally and relatively accurate kinematics model was used which was first presented in [26] and later presented for externally loaded CTRs with point and distributed forces and moments in [1, 12]. This model ignores friction, permanent strain and forces along the backbone of the robot. Fig. 4: Illustrated example of (a) proprioceptive and (b) egocentric joint representation for rotational joints. Inner (blue), middle (red) and outer (green) tubes rotation representation with respect to base (black). Fig. 3: State with starting position (white), achieved goal (black), desired goal (magenta), goal tolerance, \(\delta(t)\). Outer tube (green), middle tube (red) and inner tube (blue). Reviewing the kinematics model, the tubes are modelled as deformable curves in 3D space with frames located along a curve with the \(z\)-axis always tangent to the curve. The configuration space of the robot is limited to a set of 3D points and orientations or twisting along this curve. The 3D points in the configuration space along the arc-length \(s\), are \(\mathbf{r}(s):[0,l]\rightarrow\mathrm{I\!R}^{3}\) and the orientations are a family of orthogonal transformations \(\mathbf{R}(s):[0,l]\to SO(3)\). 
In the first step, the robot is segmented into transition points at which continuity constraints of shape and moment are maintained. Assuming that at a given time \(t\), the final deformed curve of all tubes must be equal to the inner most tube \(\mathbf{r}^{i}(s)=\mathbf{r}^{1}(s,t)\) and using \(\theta^{i}(s,t)\) to parameterize the twisting around the \(z\)-axis with no external torques, the curvatures of the tubes can be calculated with the following differential equations. (') indicates a partial derivative with respect to arc length \(s\). \[\mathbf{r}^{\mathbf{1}^{\prime}}(s,t) =\mathbf{R}^{1}(s,t)e3, \tag{14a}\] \[\mathbf{R}^{\mathbf{1}^{\prime}}(s,t) =\mathbf{R}^{1}(s,t)\hat{\mathbf{u}}^{1}(s,t). \tag{14b}\] \[u_{n}^{i}=\left(\sum_{j}^{N}\mathbf{K}^{j}\right)^{-1}\mathbf{R }_{z}^{T}(\theta^{i}(s,t))\times \tag{14c}\] \[\left.\left(\sum_{j=1}^{N}\mathbf{R}_{z}(\alpha^{j}(s,t))\mathbf{ K}^{j}\mathbf{U}^{j}\right)\right|_{n=1,2}\] \[u_{3}^{i^{\prime}}(s,t) =\frac{E^{i}I^{i}}{G^{i}J^{i}}(u_{1}^{i}U_{2}^{i}-u_{2}^{i}U_{1}^ {i}), \tag{14d}\] \[\theta^{i^{\prime}}(s,t) =u_{3}^{i}(s,t) \tag{14e}\] The superscripts \(i=1,\ldots,N\) denote the \(i\)-th tube, with \(i=1\) corresponding to the inner most tube. The subscripts \(n=1,2,3\) denote the \(n\)-th element of the vector. The vector aligned with the \(z\)-axis is denoted as \(e3=[0,0,1]^{T}\). \(\mathbf{u}\) is the curvature vector of the deformed backbone with \(U^{i}\) denoting the precurvature of each tube before deformation. \(\theta^{i}\) denotes the angle of twist about the local \(z\)-axis with respect to the global frame. \(K^{i}\) is the stiffness matrix for tube \(i\) with \(E\), \(I\), \(G\) and \(J\) specifying the Young's modulus, second moment of inertia, shear modulus and polar moment of inertia. The system of differential equations can be solved for given boundary conditions in terms of tube curvatures and joint values. \[\mathbf{r}^{1}(0,t) =[0,0,0]^{T} \tag{15a}\] \[\mathbf{R}^{1}(0,t) =\mathbf{R}_{z}(\alpha^{1}(t)-\beta^{1}(t)u_{3}^{1}(0,t))\] (15b) \[\theta^{i}(0,t) =\alpha^{i}(t)-\beta^{i}u_{3}^{i}\] (15c) \[u_{3}^{i}(l^{i}+\beta^{i},t) =U_{3}^{i} \tag{15d}\] where \(l^{i}\) indicates the length of the \(i\)-th tube. The solution gives the overall robot curvature. Although we solve for the full curvature as part of the kinematics, the simulation environment only returns the end-effector Cartesian position. This position is the achieved goal, \(G_{achieved}\) found in the state representation from (6). Evaluating the constant goal tolerance and two curriculum functions (linear and decay) each with the proprioceptive and egocentric representations resulted in \(6\) experiments that were performed. To train the agent, the deep deterministic policy gradient (DDPG) [27] algorithm with hindsight experience replay (HER) [23] was chosen. DDPG has been shown to be more stable than other algorithms in stable environments [28] and HER provides a future goal sampling strategy allowing for relabelling of failed trajectories into successful ones. In a sparse reward environment, this is important for training convergence. The experiments were run using tube parameters and algorithm hyperparameters found in our previous work and are visualized in Fig. 6 as System \(3\). For each experiment, the final policy was evaluated for \(1000\) evaluation episodes with results shown in Table I. 
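Before turning to the evaluation metrics, the goal-tolerance curricula of Eqs. (9)-(10) and the sparse reward of Eq. (8) are simple enough to sketch in code. The snippet below is a minimal illustration rather than our released implementation; the function names and the clamping at \(\delta_{final}\) after \(N_{ts}\) steps are our own assumptions, and the numerical values are placeholders.

```python
import numpy as np

def linear_tolerance(t, delta_initial, delta_final, n_ts):
    """Linear goal-tolerance curriculum of Eq. (9)."""
    a = (delta_final - delta_initial) / n_ts
    # Assumption: the tolerance is held at delta_final once t exceeds n_ts.
    return max(delta_final, a * t + delta_initial)

def decay_tolerance(t, delta_initial, delta_final, n_ts):
    """Exponential-decay goal-tolerance curriculum of Eq. (10)."""
    r = 1.0 - (delta_final / delta_initial) ** (1.0 / n_ts)
    return max(delta_final, delta_initial * (1.0 - r) ** t)

def sparse_reward(achieved_goal, desired_goal, tolerance):
    """Sparse reward of Eq. (8): 0 inside the tolerance, -1 otherwise."""
    e_t = np.linalg.norm(np.asarray(achieved_goal) - np.asarray(desired_goal))
    return 0.0 if e_t <= tolerance else -1.0

# Example: a tolerance decaying from 5 mm to 1 mm over 1.5 million curriculum
# steps (placeholder values used purely for illustration).
for step in (0, 750_000, 1_500_000):
    print(step, decay_tolerance(step, delta_initial=5.0, delta_final=1.0, n_ts=1_500_000))
```

The decay form reaches \(\delta_{final}\) exactly at \(t=N_{ts}\), which is why it can be used as a drop-in replacement for the constant-tolerance baseline.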
Success rate is defined as the number of successful trajectories over the total number of trajectories, where success is achieving a trajectory with below \(1\) mm error. The success rate is reasonably high for all experiments, suggesting the selected algorithm is stable for the task. However, the error mean and standard deviation can still be improved. To improve the error metrics, constraint-free rotation, as described in Sec. IV, was investigated. Path following tasks along straight and circular trajectories were also performed with system \(3\) and the egocentric decay method. Since the desired goal is part of the state, as shown in equation (6), a policy controller was created where the desired goal was updated via a trajectory generator. This type of controller is normally not available for other deep neural network approaches as they are not inherently timestep based, whereas DeepRL aims to optimize actions at each timestep. The agent was given \(20\) timesteps before the next desired goal was set in the path. For the straight line path, the agent had a mean tracking error of \(0.58\) mm. To demonstrate robustness of the policy, a second policy was trained on a noise-induced simulation environment. Zero-mean Gaussian noise was added to the joint configuration (\(\alpha\) and \(\beta\)) as encoder noise with a \(1^{\circ}\) variance. For the achieved goal or tracking noise in the noise-induced simulation, a variance of \(0.8\) mm for the Gaussian noise was used, based on EM tracker (Aurora, NDI Inc., CA) precision data found in its documentation. When evaluating the robust policy trained in the noise-induced simulation, a mean tracking error of \(1.37\) mm was obtained while following a circular path. Both paths are displayed in Fig. 5. Although the noise-induced mean tracking error of \(1.37\) mm is promising, the large error metrics in evaluation suggest improvements could be made. Moreover, as visualized in Fig. 6, the selected CTR system has a relatively small workspace and the selected path following tasks are quite simple. An important consideration is how the DeepRL method handles larger workspaces, as these systems will require more actions to achieve the desired goal. In this work, we propose two main improvements over the previous DeepRL curriculum method. By performing a workspace and joint error analysis, we found that removing constraints on tube rotations provided a significant performance improvement, particularly in larger CTR systems. The second improvement proposed was to develop a DeepRL method that generalizes over multiple CTR systems with various tube parameters. This generic policy would be useful as only a single policy would be needed for multiple systems, and it would be a first step towards full generalization for deep learning based CTR kinematics. Furthermore, to initiate the domain transfer to hardware, we provide initial experiments for domain randomization, a method to facilitate transfer of policies from simulation to hardware. ### _Improvements with constraint-free tube rotation_ With the best policy training method, egocentric decay from the previous work, state information such as the achieved goal, desired goal, Cartesian error and joint error at the end of each episode was tabulated from \(1000\) evaluation episodes after training the policy on the larger CTR system \(0\). Plotting the Cartesian points of the achieved goals, with RGB values corresponding to the Cartesian error to the desired goal, results in Fig. 7a.
Furthermore, by thresholding achieved goal points with Cartesian errors to the desired goal greater than \(2\) mm, regions of larger errors can be isolated. As seen, there is a large standard deviation in errors greater than \(2\) mm with constrained rotation. Constraint-free rotation training results in no errors greater than \(2\) mm, as seen in Figure 7b, thus the standard deviation of errors is greatly reduced. In order to investigate the joint values associated with these large errors, Figure 7 shows the Cartesian achieved goals for the innermost tube rotation \(\alpha_{1}\) at the end of each episode and the associated errors in the robot workspace. As seen, there is a large number of errors greater than \(2\) mm, with some points up to \(30\) mm in error. In Figs. 8a, 8b and 8c, the constraints causing the large errors at the boundaries of \(-180^{\circ}\) and \(+180^{\circ}\), where the largest error outliers occur, are confirmed with a polar plot for each tube rotation. In previous work, this rotation constraint was used to limit the joint-space sampling during the generation of new desired goals, starting joint values and data collection, as has been implemented in previous CTR deep learning work [14]. However, this constraint through timesteps is non-essential in the trigonometric representation. Training a policy using the egocentric decay curriculum without constraining the rotations of the tubes considerably improved the error metrics over the previous constrained egocentric curriculum, from a mean error of \(4.05\) mm to \(0.68\) mm for the largest system \(0\). To further analyze the behaviour of the agent with respect to error, a goal-distance to Cartesian-error analysis was performed on the \(1000\) evaluation episodes and the state information was tabulated. This analysis reveals the relationship between the distance to the desired goal and the associated final error. Generally, a closer goal, i.e. a low initial goal distance, would be expected to have smaller errors overall, with farther goals having larger errors. Fig. 5: Path following tasks (a) noise-free straight path and (b) noise-induced circular path from previous work [11]. Starting point (red), final point (green), desired path (black) and achieved path (green) at \(z=100\) mm. Fig. 6: CTR systems visualizing differences in tube parameters. System 0 (blue), system 1 (orange), system 2 (green) and system 3 (purple). Ordered from longest to shortest. Each system is fully extended with the middle and inner tube rotated at \(0^{\circ}\) and \(180^{\circ}\) for a total of \(4\) configurations per system. For this analysis, all \(1000\) data points were used to determine the linear relationship between initial goal distance and final Cartesian error. For constrained rotation with system \(0\), the slope was found to be \(3.27\) mm with a \(y\)-intercept of \(0.8\) mm. This suggests a poor inverse kinematics solver, as errors become very large with higher goal distances. With constraint-free rotation, the slope was found to be \(0.6\) mm with a \(y\)-intercept of \(0.66\) mm, a reasonable slope for such a relationship. As an inverse kinematics solver, our DeepRL method performs adequately when rotations are unconstrained. However, an important note is that only one solution is provided, as it is an iterative solver and is dependent on the initial joint configuration at the start of each episode. This can be remedied with multiple episodes with different initial joint configurations. Visualized in Fig.
9 is an example of the same desired end-effector position with two different initial joint configurations, resulting in two different final inverse kinematics solutions. Using the constraint-free egocentric decay method for system \(0\), the desired goal position was \((0,50,150)\) mm. The final joint configuration in Fig. 9a was \(\beta=[-15.35,-12.59,-4.01]\) mm, \(\alpha=[-3.73,-183.07,-25.54]^{\circ}\) with a tip error of \(0.99\) mm, and for Fig. 9b \(\beta=[-14.77,-9.12,-5.12]\) mm and \(\alpha=[-179.19,-180.32,1.93]^{\circ}\) with a tip error of \(0.36\) mm. Fig. 8: Constrained rotation errors with respect to rotation angle for (a) \(\alpha_{1}\), (b) \(\alpha_{2}\) and (c) \(\alpha_{3}\). Constraint-free rotation errors with respect to rotation angle for (d) \(\alpha_{1}\), (e) \(\alpha_{2}\) and (f) \(\alpha_{3}\). Fig. 7: (a) Constrained and (b) constraint-free trained policy evaluation achieved goals and associated Cartesian errors in RGB. To verify the constraint-free results and to demonstrate training and evaluation of our DeepRL method, we applied it to three other CTR systems from various sources in the literature, trained each with constrained and constraint-free rotation, and performed \(1000\) evaluation episodes. The main changes to the hyperparameters were \(3\) million training timesteps with \(1.5\) million steps for the curriculum, a buffer size of \(500,000\), and a neural network with \(3\) hidden layers of \(256\) neurons each. The results for all four CTR systems are presented in Table II. Of note is the increased standard deviation in constrained rotation. However, with smaller systems such as system \(3\), this is not as pronounced due to the smaller robot workspace. Compared with our previous work, there has been a large reduction in the mean and standard deviation of errors, as shown for the largest CTR system. ### _Generic Policy_ To further motivate the utility of DeepRL for CTRs, we introduce an initial proof of concept for a CTR system generic policy. A current hurdle with using deep learning approaches for CTRs is the limited generalization across CTR systems. Because deep learning relies solely on the data collected, if only one CTR system is used for training, the learned policy will accurately control that system alone. Moreover, we aim to demonstrate that using our egocentric decay goal-based curriculum with constraint-free rotation improves the error metrics of such a generic policy compared with one that does not employ these extensions. This CTR generic policy will seek to generalize over four CTR systems which have different tube parameters. The objective is to obtain good performance across the CTR systems with a single control policy. For generalization, a system specifier, \(\psi=\{0,1,2,3\}\), was appended to the state, \(s_{t}\), for the agent to differentiate the CTR systems. The state is now defined as \[s_{t}=\{\gamma_{1},\gamma_{2},\gamma_{3},G_{achieved}-G_{desired},\delta(t),\psi\} \tag{16}\] At the start of each episode, a discrete uniform distribution is sampled to determine the value of \(\psi\), i.e. the CTR system parameters. These parameters are then set in the simulation environment and, for that episode, the selected system is the one simulated for the agent's task. Once the episode is reset, a new \(\psi\) is sampled. Thus, over timesteps, all systems should be sampled, with the agent collecting experiences from all systems uniformly.
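A minimal sketch of this per-episode system sampling and of the \(\psi\)-augmented state of Eq. (16) is given below. It is only an illustration: the attribute names and placeholder parameter sets are our own and do not reproduce the exact interface of our gym environment.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative placeholders for systems 0-3; in practice each entry holds the
# tube lengths, curvatures, diameters, stiffnesses, etc. of that CTR system.
SYSTEM_PARAMETERS = {psi: {"name": f"system_{psi}"} for psi in range(4)}

def sample_system(probabilities=None):
    """Sample the system specifier psi at the start of an episode.

    With probabilities=None a discrete uniform distribution is used, as in the
    text; a length-proportional categorical distribution (the length-based
    sampling strategy discussed later) can be passed instead.
    """
    return int(rng.choice(len(SYSTEM_PARAMETERS), p=probabilities))

def build_state(gamma, achieved_goal, desired_goal, tolerance, psi):
    """Concatenate the state of Eq. (16): joints, goal error, tolerance and psi."""
    goal_error = np.asarray(achieved_goal, dtype=float) - np.asarray(desired_goal, dtype=float)
    return np.concatenate([np.ravel(gamma), goal_error, [tolerance, float(psi)]])

# Episode reset (sketch): pick a system, load its parameters, build the state.
psi = sample_system()
tube_parameters = SYSTEM_PARAMETERS[psi]
state = build_state(gamma=np.zeros((3, 3)), achieved_goal=[0.0, 0.0, 100.0],
                    desired_goal=[5.0, 5.0, 110.0], tolerance=1.0, psi=psi)
```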
We acknowledge this is not true generalization, and the policy only learns the systems given; however, this initial proof-of-concept demonstrates that some form of generalization is possible for CTRs using DeepRL, an attribute not shown in any previous work. Given the right network parameters, all attributes defining a CTR system could be included in the state to generalize fully. To demonstrate initial generalization, we train a single policy to generalize over two, three and four systems, including different combinations. The systems are ordered \(0\) to \(3\), from the longest overall system to the shortest. First, to validate our egocentric decay constraint-free method, we compared generalization with our constraint-free egocentric decay method to a constraint-free proprioceptive constant method and a constrained proprioceptive constant policy. We aim to demonstrate that our constraint-free egocentric curriculum is key to policy convergence for generalization. A full set of results for all combinations of generalization is provided for the different systems. Then, to demonstrate the learned policy, we present path following task results generalized over four systems, with and without sensor and encoder noise, to display robustness. We make our gym environment and the code to reproduce these results available online. 1 Footnote 1: [Online]. Available: [https://github.com/keshaviyenaf/gym-ctr-reach](https://github.com/keshaviyenaf/gym-ctr-reach) ## IV Experiments and Results Apart from inverse kinematics evaluation error metrics, we validate the DeepRL method by performing path following experiments in simulation. In a surgical scenario, the surgeon would be controlling the end-effector tip position with a haptic device similar to a Phantom Omni from Sensable Technologies. Thus, the inputs are Cartesian coordinates of the end-effector tip position and fit well with the state description of our DeepRL method. The experimental framework for path following tasks is as follows. First, \(x,y,z\) desired goal positions are generated by a path generator component to substitute control inputs from a user. This component takes as input path shape parameters and the discretization parameter. Shapes include polygons, circular and helix paths. The generator outputs a series of \(x,y,z\) desired goal positions of the path with the path parameters given. The next component is the policy controller, which takes two inputs, the desired goal positions and the initial joint configuration. The controller is described in detail in the next section. The controller outputs the changes in joint positions to achieve the path by reaching each of the desired goals given by the path generator. The controller is open-loop, since information about whether the goal is reached is not relayed back. There is a set number of steps given to reach the goal, and even if the goal is not reached, the next goal is given. Finally, the last component is the CTR simulation. Fig. 9: Two inverse kinematics solutions with different starting joint configurations given the same desired goal. Black dot indicates starting position and the magenta dot indicates the desired position with the black dashed line indicating the path taken. The simulation takes as input the changes in joint positions, performs the kinematics for each step and returns the achieved goal positions of the end-effector as well as the full CTR backbone shape, with which the resulting path followed can be visualized.
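As an illustration of the path generator component, the snippet below produces discretized circular and helical sequences of desired goal positions. The parameterization (radius, pitch, number of points, heights) is our own and only meant as a sketch of how such goal sequences can be produced.

```python
import numpy as np

def circular_path(center, radius, n_points=50, z=100.0):
    """Desired goal positions (mm) on a circle at constant height z."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_points)
    return np.stack([center[0] + radius * np.cos(angles),
                     center[1] + radius * np.sin(angles),
                     np.full_like(angles, z)], axis=1)

def helix_path(center, radius, pitch, n_points=50, z0=80.0, turns=2):
    """Desired goal positions (mm) on a helix rising by `pitch` per turn."""
    angles = np.linspace(0.0, 2.0 * np.pi * turns, n_points)
    return np.stack([center[0] + radius * np.cos(angles),
                     center[1] + radius * np.sin(angles),
                     z0 + pitch * angles / (2.0 * np.pi)], axis=1)

# Each row is one desired goal position fed to the policy controller in turn.
goals = helix_path(center=(0.0, 0.0), radius=5.0, pitch=4.0, n_points=100)
```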
The entire path-following validation framework is illustrated in Fig. 10. ### _Policy Controller_ The policy controller component acts as an open-loop controller that takes as input a series of desired goal positions. The main control occurs while iterating over the given desired goal positions. For each desired goal, the environment is first initialized with a reset function, which returns the first state; the system specifier and the desired goal are passed as inputs to this reset. Next, in the main loop of the controller, the policy is given \(20\) timesteps to achieve the current desired goal with actions from the policy function. If the agent has achieved the current desired goal within the \(1\) mm tolerance before \(20\) timesteps, a break is initiated and the next desired goal is set. This is outlined in Algorithm 1. Fig. 11: System 0 following a helix path. (a) starting point of path, (b) midway through path, (c) final point of path and (d) full path following results. Fig. 10: Illustration of process by which paths are generated, control actions are determined and finally, paths followed. ### _Single System Validation_ To validate our constraint-free, egocentric decay curriculum DeepRL method, we train different CTR systems and present error metrics for inverse kinematics and path following for each system. This was done to verify that the same method can be applied to various systems, resulting in an accurate learned policy. Additionally, we add state noise in the form of encoder and end-effector position noise. This is to demonstrate that the learned policy is somewhat resilient to noise in the state. For simplicity, a \(1^{\circ}\) standard deviation was selected for the rotation joints. To determine the extension joint noise, a gear ratio of \(0.001\) was used. For the achieved goal or tracking noise, a standard deviation of \(0.8\) mm was used, based on EM tracker (Aurora, NDI Inc., CA) precision data found in its documentation. Secondly, to provide a pathway to hardware translation, we include initial results for domain randomization. In domain randomization, to transfer the policy from the source domain (simulation) to the target domain (hardware), a series of environmental parameters in the source domain are sampled from a randomized space. During training, the episodes are collected in the source domain with this randomization applied. This exposes the policy to a variety of environmental parameters so that it generalizes. In this way, the policy is trained to maximize the expected reward over a distribution of CTR configurations around the desired CTR configuration. Specifically, uniform domain randomization [29] is implemented for the tube parameters specified in the simulation. These tube parameters include length, curved length, inner diameter, outer diameter, stiffness, torsional stiffness, and pre-curvature. In uniform domain randomization, an interval range from which the tube parameters are uniformly sampled should be defined. These intervals are shown in Table IV. The intervals have been chosen to be close to the parameters of the desired tube configurations. The chosen CTR configuration is that of system \(2\) with an interval of \(\pm 5\%\) for each parameter. To evaluate this translated domain policy, we performed path following tasks and inverse kinematics on the system \(2\) tube parameters.
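The uniform randomization of tube parameters can be sketched as follows. The parameter names and the \(\pm 5\%\) interval mirror the description above, while the nominal values shown are placeholders rather than the actual system \(2\) tube parameters.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Nominal (placeholder) tube parameters; the real values are those of system 2.
NOMINAL_TUBE = {
    "length": 150.0,                 # mm
    "curved_length": 100.0,          # mm
    "inner_diameter": 0.7,           # mm
    "outer_diameter": 1.1,           # mm
    "stiffness": 50.0e9,             # Pa (Young's modulus)
    "torsional_stiffness": 20.0e9,   # Pa (shear modulus)
    "pre_curvature": 12.0,           # 1/m
}

def randomize_tube(nominal, fraction=0.05):
    """Uniformly sample each tube parameter within +/- `fraction` of nominal."""
    return {key: rng.uniform(value * (1.0 - fraction), value * (1.0 + fraction))
            for key, value in nominal.items()}

# At the start of each training episode a perturbed tube set is sampled and
# passed to the simulation before experiences are collected.
episode_tube = randomize_tube(NOMINAL_TUBE)
```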
Although there exist more sophisticated methods of domain translation, as an initial work, we believe this demonstrates the feasibility of translating the trained policies to hardware. In \(1000\) evaluation episodes, a mean error of \(0.86\) mm was found with a standard deviation of \(0.64\) mm. Using a straight-line path for path-following testing, the mean tracking error was \(1.10\) mm with a standard deviation of \(0.15\) mm. We compare these results to the state of the art in the next section. Without domain randomization, the training results for inverse kinematics are summarized in Table II under the constraint-free experiments. For path following in a noise-free simulation, system \(0\) had error metrics of \(0.66\pm 0.28\) mm for a helix path, and \(1.74\pm 0.72\) mm in a noise-induced simulation. System \(0\) is the longest system, with the highest errors, and was chosen for results, evaluation and comparisons. ### _Generic Policy Validation_ To validate our generic policy method, we trained a generic policy for different combinations of two, three and four CTR systems. For example, to generalize over two CTR systems, because we have four different CTR systems available to train on, we trained on all combinations of two systems, resulting in a total of six experiments. Similarly, generalizing over three systems results in four experiments, and a single experiment generalizes over all four CTR systems. Performing \(1000\) evaluation episodes and summarizing the error metrics, the proposed method is able to generalize over multiple systems. The full error metrics are shown in Table III. Looking at the error metrics with respect to systems, there is a correlation between system length and higher errors, similar to the previous constrained and constraint-free experiments. Fig. 12: System \(1\) following a straight path. (a) starting point of path, (b) midway through path, (c) final point of path and (d) full path following results. System \(0\) is the longest and consistently has the largest error metrics. We believe this is because of the workspace size: as the overall length increases, the agent requires more training steps and experiences. Another factor is that, in general, longer CTR systems have larger errors, and thus comparisons are done as a percentage of robot length. To mitigate this effect, a sampling strategy is used where the sampling of the system used in the environment is based on the lengths of the systems. In this length-based sampling strategy, the categorical distribution is proportional to the length of each system. Each system probability is the system length divided by the sum of the system lengths being generalized. In this manner, systems that are longer and have a larger workspace are sampled more during training, providing more experiences for the policy to train on. Evaluating this sampling strategy for generalizing over four CTR systems, the error metrics were improved over the previous uniform sampling. To validate the generalization, we performed a helix path following task with system \(0\), as seen in Fig. 11. The error metrics were a mean tracking error of \(1.01\) mm with a standard deviation of \(0.41\) mm with \(50\) desired goal points in the path. Performing a noise-induced experiment, the error metrics were a mean tracking error of \(1.86\) mm with a standard deviation of \(0.8\) mm. When the number of points was increased to \(100\), the error metrics were a mean tracking error of \(0.91\) mm with a standard deviation of \(0.41\) mm.
In a noise-induced simulation, the mean tracking error was \(1.91\) mm with a standard deviation of \(0.78\) mm. To compare with our generic egocentric constraint-free method, we also trained a four-system generic policy using the proprioceptive representation with constrained rotation, and one with constraint-free rotation. We summarize the results in Table V. Of note is the importance of removing rotation constraints, as seen in the mean and standard deviation of errors from constrained proprioceptive to constraint-free proprioceptive. Error metrics are greatly reduced with this improvement. Finally, using an egocentric representation does improve metrics, but has a less significant impact compared to removing the rotational constraints. ### _Comparisons to the State of the Art_ To compare our inverse kinematics and path following results to previous state-of-the-art classical methods and deep learning methods, we convert errors to a percentage of robot length. First, we present inverse kinematics results for our constraint-free egocentric decay method for each system when trained separately, i.e. not the generic policy. The mean and standard deviation as a percentage for each system were \(0.16\%\pm 0.065\%\), \(0.17\%\pm 0.18\%\), \(0.20\%\pm 0.08\%\) and \(0.3\%\pm 0.13\%\). When taken as a percentage of the robot length, the similarity of the error metrics is noteworthy. In our generalization method, the four-system generalization inverse kinematics errors for each system were \(0.18\%\pm 0.02\%\), \(0.19\%\pm 0.02\%\), \(0.24\%\pm 0.02\%\) and \(0.42\%\pm 0.04\%\). For tip tracking error, for the more complex helix path, errors were \(0.23\%\pm 0.095\%\) for system 0 with \(50\) goal points. Increasing the number of desired goal points to \(100\), the mean tracking error was \(0.20\%\pm 0.08\%\), as seen in Fig. 11. As reported in Sec. II, Jacobian-based methods can achieve errors of \(0.5\%\) to \(0.9\%\). As shown in Fig. 13, our DeepRL method is able to avoid joint limits, especially in extension, whereas the Jacobian approach does not include joint limits in the linearization. This comparison was done on system 0 with \(K_{p}=2I\) for a linear and a circular path. The error metrics found were \(1.15\pm 0.32\) mm for the Jacobian method and \(0.62\pm 0.07\) mm for our DeepRL method in Fig. 13a. More importantly, the Jacobian-based method was unable to follow some circular, linear and helical trajectories that our DeepRL method successfully completed, due to the Jacobian not including joint limits in the formulation, even when the damped least-squares method was used with \(\Lambda=0.45\), as seen in Fig. 13b. The advanced MPC method can achieve tip errors of \(0.3\%\) to \(0.5\%\); however, we were unable to perform comparisons in simulation as the open-source simulation code was for a two-tube system. Our method performs comparably to the reported state of the art; however, this work is only in simulation and the model used does not include constraints for snapping conditions. The approach will need to be validated in hardware. One possible way to consider snapping and singularity is to design a dense reward function that includes minimisation of elastic energy, hence avoiding snapping conditions. For our domain randomization results, as a percentage of robot length, the inverse kinematics errors were \(0.86\%\pm 0.21\%\). Following a straight-line path, the errors were \(0.36\%\pm 0.05\%\).
The domain randomization metrics are higher; however, the aim is to demonstrate the feasibility of the transfer method. To validate it, we will need to compare transfer to hardware with and without domain randomization or other transfer methods. ## V Conclusion In this work, we investigate an end-to-end DeepRL method for kinematic control of CTRs. Specifically, we explore constraints on tube rotation and their impact on error metrics. Furthermore, the first proof-of-concept in system generalization is developed, not yet done for any deep learning method for CTRs. Finally, as initial work towards hardware, we provided a pathway for domain or policy transfer from simulation to hardware using domain randomization. Our method demonstrates error metrics that perform well; however, validation in hardware is needed. Moreover, other domain transfer methods should be explored. We believe that, with this work, we have demonstrated that DeepRL methods may be able to outperform model-based methods for inverse kinematics and control of CTRs, similar to deep learning methods for forward kinematics and shape estimation.
2306.12283
Quantum droplets with particle imbalance in one-dimensional optical lattices
We study the formation of particle-imbalanced quantum droplets in a one-dimensional optical lattice containing a binary bosonic mixture at zero temperature. To understand the effects of the imbalance from both the few- and many-body perspectives, we employ density matrix renormalization group (DMRG) simulations and perform the extrapolation to the thermodynamic limit. In contrast to the particle-balanced case, not all bosons are paired, resulting in an interplay between bound states and individual atoms that leads to intriguing phenomena. Quantum droplets manage to sustain a small particle imbalance, resulting in an effective magnetization. However, as the imbalance is further increased, a critical point is eventually crossed, and the droplets start to expel the excess particles while the magnetization in the bulk remains constant. Remarkably, the unpaired particles on top of the quantum droplet effectively form a super Tonks-Girardeau (hard-rod) gas. The expulsion point coincides with the critical density at which the size of the super Tonks-Girardeau gas matches the size of the droplet.
Jofre Vallès-Muns, Ivan Morera, Grigori E. Astrakharchik, Bruno Juliá-Díaz
2023-06-21T14:11:15Z
http://arxiv.org/abs/2306.12283v2
**Quantum droplets with particle imbalance in one-dimensional optical lattices** ## Abstract **We study the formation of particle-imbalanced quantum droplets in a one-dimensional optical lattice containing a binary bosonic mixture at zero temperature. To understand the effects of the imbalance from both the few- and many-body perspectives, we employ density matrix renormalization group (DMRG) simulations and perform the extrapolation to the thermodynamic limit. In contrast to the particle-balanced case, not all bosons are paired, resulting in an interplay between bound states and individual atoms that leads to intriguing phenomena. Quantum droplets manage to sustain a small particle imbalance, resulting in an effective magnetization. However, as the imbalance is further increased, a critical point is eventually crossed, and the droplets start to expel the excess particles while the magnetization in the bulk remains constant. Remarkably, the unpaired particles on top of the quantum droplet effectively form a super Tonks-Girardeau (hard-rod) gas. The expulsion point coincides with the critical density at which the size of the super Tonks-Girardeau gas matches the size of the droplet.** ###### Contents * 1 Introduction * 2 Physical model * 2.1 Numerical method * 2.2 Particle-balanced situation * 2.2.1 Density profile * 3 Few-body systems with particle imbalance * 3.1 Four-particle case * 3.2 Bound states for particle imbalance * 4 Ground state properties in the particle-imbalanced situation * 4.1 Particle-imbalanced quantum droplets * 4.2 Coherence in quantum droplets * 5 4.3 Thermodynamic properties * 4.4 Super Tonks-Girardeau gas of the exceeded particles * 4.5 Bound state insulator * 4.6 Magnetic structure within the droplet * 5 Conclusion * A DMRG Convergence * A.1 Bond dimension * A.2 Maximum number of bosons per site ## 1 Introduction Recently a whole new class of ultra-dilute quantum droplets has been produced in ultracold atomic laboratories with dipolar bosonic atoms [1, 2, 3] and bosonic mixtures [4, 5, 6, 7]. These quantum droplets originate from a compensation between mean-field and quantum fluctuations [8] and consist of a new type of liquid, which densities can be up to eight orders of magnitude more dilute than liquid helium droplets [9], the only atomic species which naturally remain liquid at zero temperature [10]. Ultracold atomic systems can be subjected to optical lattices created by counter-propagating laser beams [11]. Atoms interact with each other at each site, known as on-site interactions, and can also tunnel through the potential barriers between sites, known as tunneling. Interacting spinless bosons in a high optical lattice are described by the Bose-Hubbard model [12], a model which gained a lot of attention in recent years [13, 14, 15, 16, 17, 18]. The use of optical lattices and Feshbach resonances to fine-tune the interactions provides exquisite control over the system and allows the implementation of ideal quantum simulators of Hubbard models, which are ubiquitous in condensed matter. In particular, ultracold atoms can be trapped to a potential that restricts the movement to only one dimension [19]. One-dimensional geometry allows changing the interaction strength in a much wider range compared to three-dimensional case, due to the suppression of three-body losses [20]. In particular, the coupling constant can be made infinitely repulsive (unitary regime), and a single-component Bose gas acquires a number of fermionic properties [21]. 
One-dimensional quantum liquids are formed in the regime where the mean-field contribution is on average repulsive [22], in contrast to the 3D case. Quantum droplets made of bosonic binary mixtures in a one-dimensional lattice have been studied in the particle-balanced situation [23, 24], where the number of atoms of both species is equal. In this work, we consider the fate of quantum droplets in the particle-imbalanced case. First, in Sec. 2 we introduce the model, comment on the numerical method used and briefly review the properties of the balanced case. In Sec. 3, we characterize the effects of particle imbalance in the few-body regime. We find the presence of few-body bound states and characterize them by computing the binding energies and correlation functions. The effects of particle imbalance in the many-body limit are studied in Sec. 4. ## 2 Physical model We study a binary mixture of bosonic atoms interacting via a short-range potential loaded in a deep one-dimensional optical lattice at zero temperature. For a sufficiently high optical lattice, the system properties are well described by the two-component Bose-Hubbard Hamiltonian [13], \[\hat{H}=-t\sum_{i}\sum_{\alpha=A,B}\left(\hat{b}_{i,\alpha}^{\dagger}\hat{b}_{i+1,\alpha}+\text{h.c.}\right)+\frac{U}{2}\sum_{i}\sum_{\alpha=A,B}\hat{n}_{i,\alpha}(\hat{n}_{i,\alpha}-1)+U_{AB}\sum_{i}\hat{n}_{i,A}\hat{n}_{i,B}\,, \tag{1}\] where \(\hat{b}_{i,\alpha}\) (\(\hat{b}_{i,\alpha}^{\dagger}\)) are the annihilation (creation) bosonic operators at site \(i=1,\dots,L\) for species \(\alpha=A,B\), and \(\hat{n}_{i,\alpha}\) are their corresponding number operators. For simplicity, we assume that both species possess the same tunneling strength, \(t>0\), and have equal repulsive intra-species interaction strength, \(U>0\). Throughout the entire work, \(t\) is used as the energy scale. We study the case of attractive inter-species interaction, \(U_{AB}<0\), and introduce the dimensionless ratio \(r=1+U_{AB}/U>0\). ### 2.1 Numerical method We use the density matrix renormalization group (DMRG) algorithm to study the ground-state properties numerically. In the DMRG computations used in this work, unless explicitly stated otherwise, we set a cutoff on the maximum number of bosons of each species per site of \(M=4\) for simulations with sufficiently large interaction strength \(U/t\). This gives a local Hilbert space dimension of \(d=(M+1)^{2}=25\). We have explicitly checked that our results are robust with respect to this cutoff; see Appendix A.2 for details on the convergence with the bosonic cutoff \(M\). For systems with open boundary conditions, the maximum bond dimension of our DMRG is set to \(\chi=256\) for quantitative results of the density and energy of the system and \(\chi=2048\) when we study correlation functions. For systems with periodic boundary conditions, we use \(\chi=512\) to extract the energy of the system. A study of the convergence of the physical quantities with the bond dimension \(\chi\) is presented in Appendix A.1. ### 2.2 Particle-balanced situation A bosonic binary mixture loaded in a high one-dimensional lattice at zero temperature presents quantum droplets in the particle-balanced situation when the repulsive intra-species interactions are compensated by a comparable attractive inter-species interaction [23, 24]. Here, we provide a brief review of the key aspects of these droplets in the particle-balanced situation, i.e. \(N_{A}=N_{B}\).
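Although all production results in this work are obtained with DMRG, the Hamiltonian of Eq. (1) can be written down explicitly for very small systems, which is useful as a cross-check of the model definition. The sketch below builds Eq. (1) by exact diagonalization for a toy chain (\(L=3\) sites with a cutoff of \(M=2\) bosons per site and species); it is only an illustration, and the coupling values are placeholders chosen so that \(r=0.15\).

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, kron
from scipy.sparse.linalg import eigsh

L, M = 3, 2                              # sites and boson cutoff per site and species
t, U = 1.0, 8.0
U_AB = -0.85 * U                         # placeholder couplings, r = 1 + U_AB/U = 0.15

# Single-species on-site operators with occupation cutoff M.
b = csr_matrix(np.diag(np.sqrt(np.arange(1, M + 1)), k=1))   # annihilation
n = csr_matrix(np.diag(np.arange(M + 1)))                    # number operator
one = identity(M + 1, format="csr")

# Two-species local operators on the (M+1)^2-dimensional single-site space.
bA, bB = kron(b, one), kron(one, b)
nA, nB = kron(n, one), kron(one, n)
d = (M + 1) ** 2

def embed(op, site):
    """Embed a single-site operator at position `site` of the L-site chain."""
    out = identity(1, format="csr")
    for j in range(L):
        out = kron(out, op if j == site else identity(d, format="csr"), format="csr")
    return out

H = csr_matrix((d ** L, d ** L))
for i in range(L):
    for op in (nA, nB):
        ni = embed(op, i)
        H += 0.5 * U * (ni @ ni - ni)             # intra-species repulsion U/2 n(n-1)
    H += U_AB * (embed(nA, i) @ embed(nB, i))     # inter-species attraction
for i in range(L - 1):
    for op in (bA, bB):
        hop = embed(op.getH(), i) @ embed(op, i + 1)
        H += -t * (hop + hop.getH())              # nearest-neighbour tunnelling

E0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
print("ground-state energy of the toy chain:", E0)
```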
#### 2.2.1 Density profile Using the DMRG method we are able to obtain the density profile of the ground state, which provides important insights into the phase diagram of the system. Specifically, the density profile of a quantum droplet can be well approximated [23] by a symmetrized Fermi function [25], \[n_{i,\alpha}=\frac{n_{M}\sinh(R/(2a))}{\cosh(R/(2a))+\cosh(i/a)}\,, \tag{2}\] where \(R\) is the size of the droplet, \(a\) the typical length scale of the meniscus and \(n_{M}\) is a parameter fixed by the normalization \(\sum_{i}n_{i,\alpha}=N_{\alpha}\). In Fig. 1(b), we present a representative density profile for a droplet alongside the fit provided by Eq. (2) for comparison. The value of the central density is an important quantity in the superfluid phase. Its value is determined by calculating the average density within the bulk of the droplet, \[\langle\hat{n}_{\alpha}\rangle=\sum_{i=i_{M}-R/2-4a}^{i_{M}+R/2+4a}\frac{n_{i,\alpha}}{R+8a}\,, \tag{3}\] where \(R\) and \(a\) are obtained by fitting the densities of the droplet with Eq. (2) and \(i_{M}\) is the position of the center of mass, \[i_{M}=\frac{\sum_{i=0}^{L}i\,n_{i,A}}{\sum_{i=0}^{L}n_{i,A}}\,. \tag{4}\] Figure 1(a) shows the evolution of the averaged central density of a droplet as a function of the interaction strength \(U/t\) at fixed \(r\). For low values of the interaction strength \(U/t\) the density tends to the beyond-mean-field (BMF) prediction of the equilibrium density in the continuum, \(n_{i}=8U/9\pi^{2}r^{2}\) [26]. This is expected since, in the limit of low density, the distance between the atoms is large compared to the lattice spacing, and the discrete description approaches the continuum one. Thus for \(U/t\to 0\) we expect to recover BMF results in the continuum. For large \(U/t\) the one-dimensional lattice is able to stabilize the density of the droplet and stop the rapidly growing value of the BMF prediction. This feature shows one of the advantages of the lattice, as lower values of the equilibrium density imply lower three-body losses [27]. It is harder to simulate droplets with low equilibrium density (such as for vanishing or large values of \(U/t\)) as this requires the use of large lattices in order to obtain converged results. For \(U/t=2/r\) a liquid-gas transition is expected [24], which also corresponds to the threshold for dimer-dimer bound state formation. This transition can be identified as the point where the saturated density of the droplet vanishes at finite \(U/t\). Figure 1: (a) Averaged density of each species in the bulk of the droplet as a function of the interaction strength \(U/t\) for \(r=0.15\), the maximum number of bosons per site \(M=6\) and different \(L\), ensuring that the droplets fit inside the lattice. (b) Typical density profile of a droplet compared with the corresponding fit using Eq. (2). The droplet in panel (b) is obtained for \(U/t=4\), \(L=144\) and \(N_{A}=N_{B}=24\). ## 3 Few-body systems with particle imbalance Before studying the effects of particle imbalance in quantum droplets we first address the few-body problem. As in the particle-balanced case [24], the few-body problem offers great insights into the formation of quantum droplets and liquids in the many-body situation. In order to understand the internal structure of the ground state, we perform calculations with varying numbers of particles. This allows us to analyze the different configurations these particles can form, which we refer to as decomposition channels.
Each channel represents a potential configuration of interacting subsystems within the overall system, characterized by distinct binding energies and other relevant properties. By examining these channels, we can gain deeper insights into the complex interactions within the system. ### Four-particle case We start by considering a system of four bosons, \(N_{A}+N_{B}=4\). In the particle-balanced situation (\(N_{A}=N_{B}=2\)) the system dimerizes for large values of \(U/t\) at fixed and small \(r\)[24]. An effective interaction between dimers emerges in this regime. In order to rule out the presence of trimers in the particle-balanced situation we compute their respective binding energies. In Fig. 2(a) we report the energy of two dimers AB and a trimer AAB with a free particle B. The sum of the energy of two dimers is always lower than the sum of the energy of a trimer and the energy of a free particle. Therefore we rule out the formation of trimers in the balanced four-particle case \(N_{A}=N_{B}=2\). ### Bound states for particle imbalance After ruling out the presence of trimers in the balanced case we study the formation of bound states with particle imbalance. In Fig. 2(b) we compare the binding energies of the balanced Figure 2: (a) Energy of different particle configurations as a function of the interaction strength \(U/t\) for \(r=0.15\) and a lattice of size \(L=200\). Circles show the energy of an AAB trimer and a free B particle while squares represent the energy of two AB dimers. (b) Binding energies as a function of the interaction strength \(U/t\) for \(r=0.15\) and \(L=200\). Up (down) triangles show the binding energy of the tetramer (hexamer) obtained by subtracting the energy of two AB dimers (two AAB trimers), respectively. tetramer AABB formed by two dimers and the imbalanced hexamer AAAABB formed by two trimers. We observe that both binding energies exhibit a similar trend for any value of \(U/t\) suggesting that quantum droplets may be created in the many-body limit when particle imbalance is present in the system, as we will further explore in Sec. 4. This has to be contrasted with the two-dimensional continuum system where it has been seen that dimers and trimers do not bind at the same coupling strength [28]. Moreover, we have observed that depending on the amount of particle imbalance different large particle composites can be created. We show that the specific structure of bound states depends on the interaction strength \(U/t\) and thus, transitions can be observed for a fixed \(r\) and a fixed \(N_{A}\) and \(N_{B}\). This can be inferred from Fig. 3, where we study different decomposition channels of the AAAABB hexamer and their respective binding energies. A multiparticle composite is more likely to break when it exhibits a decomposition channel with a small binding energy. In particular, when the binding energy of a decomposition channel is zero the system fully decomposes into the respective particle composites. We find that the hexamer is bound for \(U/t\lesssim 12\) at \(r=0.15\) with the most likely decomposition channel being an AAABB pentamer and a single A atom for \(U/t\lesssim 6.5\) and two trimers for \(U/t\gtrsim 6.5\). The binding energy associated with the decomposition into two trimers becomes zero for hexamer \(U/t\gtrsim 12\) indicating that the hexamer fully decomposes into two trimers. 
The decomposition of a hexamer into two trimers is also reflected in the formation of two bumps in the density profiles denoting the physical separation of the two trimers. With this large variety of bound states in the few-body case, one expects to find intriguing many-body phases associated with the self-organization of these multiparticle composites. We now study the effect of particle imbalance by changing the number of B particles while keeping fixed the number of A particles. To do so, we start with the balanced configuration Figure 3: Main plot, the binding energies of the hexamer state (\(N_{A}=4,N_{B}=2\)) as a function of the interaction strength \(U/t\) for \(r=0.15\) and \(L=200\). The green zone in the background is the region where two trimers are not bound together, this is identified when the binding energy \(E_{AAABB}-2E_{AAB}\) vanishes. Right panel, characteristic density profiles of the hexamer state calculated for two values of the interaction strengths \(U/t\) which are marked with grey dashed lines in the main plot. The blue and orange lines correspond to the density of the A and B species, respectively. \(N_{A}=N_{B}=4\) and remove B particles. We compute the respective binding energies of all decomposition channels for each imbalance case, from \(N_{A}=N_{B}=4\) to \(N_{A}=4\), \(N_{B}=0\). Then, we determine the decomposition channel with the smallest binding energy in each situation. In Fig. 4(a) we plot the binding energy of the most favorable decomposition channel for each imbalance situation as a function of system size. For small imbalances \(N_{A}-N_{B}\leq 2\) the system is able to bind all particles while for \(N_{A}-N_{B}>2\) the system fully decomposes into smaller composites. Thus we conclude that different bound states appear for different imbalances. Furthermore, to elucidate the internal structure of the system we compute the correlation function \(\langle\hat{n}_{i}^{A}\hat{n}_{j}^{A}\rangle\) shown in Fig. 4(b) and (c). In a bound state, there is an exponential decay of the correlator \(\langle\hat{n}_{i}^{A}\hat{n}_{j}^{A}\rangle\propto\exp(-x/l)\) while the binding energy \(E_{B}\propto-\hbar^{2}/(ml^{2})\)[29]. We numerically confirm good agreement with both results. However, when A particles get expelled, the correlation function shows instead a two-regime behavior: it decays exponentially at short distances and then shows saturation at large distances, see Fig. 4(c). Therefore, we find that for \(N_{B}\geq 2\), the B particles bind with the other four A particles forming a large composite object. In contrast, for \(N_{B}=1\) the B particle alone is not able to bind all the A particles, and instead, with two A particles it creates an AAB trimer while the other two A particles are expelled. Thus, the exponential decay at short distances found in Fig. 4(c) can be identified with the presence of a trimer while the saturated value at large distances is given by the concentration of the expelled particles in a finite-size box and vanishes in the thermodynamic limit, which was numerically confirmed by increasing the lattice size. ## 4 Ground state properties in the particle-imbalanced situation In the previous section, we have shown that particle imbalance leads to the formation of multiple bound states in the few-body limit. We now show how the presence of these bound states affects the many-body properties of the system. By employing the DMRG method we Figure 4: (a) Main decomposition binding energies as a function of the lattice size \(L\). 
Panels (b) and (c), the correlator \(\langle\hat{n}_{i}^{A}\hat{n}_{j}^{A}\rangle\) with \(i\) fixed in the middle of the lattice and \(j\) scans from \(j=i\) to the end of the lattice. In panel (b) the correlator is computed for \(N_{A}=4,N_{B}=2\) and in (c) for \(N_{A}=4,N_{B}=1\). All three panels are obtained for \(U/t=8\) and \(r=0.15\) and both panels (b) and (c) are obtained for \(L=300\). are able to study systems with fairly large particle numbers and system sizes, sufficiently large for studying the transition from the few-body to the many-body regime. In the following, we quantify the particle imbalance by means of the dimensionless polarization, \[z=\frac{N_{A}-N_{B}}{N_{A}+N_{B}}\,, \tag{5}\] where \(N_{A}\) and \(N_{B}\) are the total number of particles for the species A and B, respectively. In a balanced unpolarized system \(z=0\) while in a fully polarized system \(|z|=1\). The imbalance is introduced by removing B particles in the system while fixing the number of A particles. ### Particle-imbalanced quantum droplets Previous beyond-mean-field studies of quantum droplets have shown that spin excitations creating particle imbalance are highly energetic and above the particle expulsion threshold, leading to evaporation of the excess of imbalanced particles [8]. In contrast to the continuum case, here we show that strongly correlated droplets in one-dimensional optical lattices are robust against a certain amount of imbalance. To characterize the stability we introduce the magnetization, \[m_{ab}=\frac{\langle\hat{n}_{A}\rangle-\langle\hat{n}_{B}\rangle}{2}\,, \tag{6}\] where \(\langle\hat{n}_{A}\rangle\) (\(\langle\hat{n}_{B}\rangle\)) is the averaged density in the bulk of the droplet for the species A (B), defined in Eq. (3). In Fig. 5 we show the evolution of the magnetization as the particle imbalance \(z\) is increased. As the imbalance is augmented, we identify two distinct regimes: the droplet linearly gains magnetization in the bulk for small imbalances while the magnetization in the Figure 5: In the main plot, we show the magnetization \(m_{ab}\) (defined in Eq. 6) as a function of the imbalance quantity \(z\). The dashed line is an analytical approximation for low particle imbalance, and the vertical dashed-dotted line is the value of the critical imbalance \(z^{*}\). In the background, the green region shows where the system expels A particles outside the droplet. In the insets, we show the density profile of components A and B in blue and orange, respectively, in the corresponding \(z\) region as a function of the lattice site. This figure was obtained for \(U/t=8\), \(r=0.15\) and \(L=200\). The droplets in the insets are obtained for \(N_{A}=40\). bulk of the droplet is locked and the excess of particles A are expelled outside the droplet for large imbalances. The expulsion of particles results in a plateau of magnetization as a function of imbalance. The transition between these two regimes occurs at a critical imbalance \(z^{*}\), the specific value of which depends on the interactions in the system. Moreover, these two regimes can be clearly identified by looking at the density profiles, see insets in Fig. 5. Density profiles for \(z>z^{*}\) are characterized by a central droplet with finite magnetization and an outer gas. In order to take care of possible finite-size corrections we perform simulations for different total number of particles and check if there is a strong dependence on \(N\). 
Instead, we find that the magnetization of the droplets as a function of imbalance \(z\) shows a universal behavior for different number of particles, see Fig. 5. Moreover, other physical properties such as the size of the droplet \(R\) and its mean density \(n\) also exhibit this universal behavior denoting that in our case finite size effects can be safely neglected. Particle expulsion from the droplet resembles the phenomenon found in the few-body regime discussed in Sec. 3. When particle imbalance is increased the system decomposes into a region of large bound states and a region of non-bound A particles. The critical value of the imbalance can be understood as the point at which the B particles are not able to bind all the other A particles and thus they are expelled. ### Coherence in quantum droplets We now analyze the coherence properties of an imbalanced quantum droplet that has expelled two particles (one to the left and one to the right) by computing the one-body density matrix (OBDM), see Fig. 6. As we are interested in the coherence between the exterior gas and the droplet, we focus on the OBDM of A species. Remarkably, coherence exists not only inside the droplet but also between the droplet and exterior gas. Figure 6(b) shows the algebraic decay of the OBDM inside the droplet, typical to coherent systems in one dimension. Outside of the Figure 6: One-body density matrix (OBDM) \(\langle\hat{b}^{\dagger}_{i,A}\hat{b}_{j,A}\rangle\) where \(i\) and \(j\) are lattice sites, applied over a quantum droplet that has two expelled particles outside (\(N_{\text{A}}=40\), \(N_{B}=24\), \(U/t=8\), \(r=0.15\) and \(L=200\)). Both panels (b) and (c) are two cuts in which \(i\) is fixed and \(j\) goes from \(j=i\) to \(j=L\). In panel (a) we draw these cuts considered for panels (b) and (c) with a dashed and dashed-dotted line, respectively. In panel (b) the dashed line is a fit inside the droplet region with \(\langle\hat{b}^{\dagger}_{i,A}\hat{b}_{j,A}\rangle\propto 1/|i-j|^{a}\), where we extract \(\alpha\approx 0.29\). In panel (c) the dotted line is a fit inside the left gas region with \(\langle\hat{b}^{\dagger}_{i,A}\hat{b}_{j,A}\rangle\propto 1/\sqrt{|i-j|}\). droplet, coherence decays even faster. The dashed line in this same panel shows a power-law fit \(\rho_{ij}\propto 1/|i-j|^{\alpha}\) with the power exponent equal to \(\alpha\approx 0.29\). Figure 6(c) shows the OBDM between the particles in the gas located on the left side and the rest of the system. The gas-gas coherence in the same gas region shows a similar decay than the one expected for a Tonks-Girardeau gas \(\rho_{ij}\propto 1/\sqrt{|i-j|}\)[30], see dotted line in Fig. 6(c). However, the gas-gas coherence between the left and right gas is significantly suppressed. ### Thermodynamic properties In this subsection we discuss the procedure used for obtaining the thermodynamic properties of droplets as a function of the particle imbalance. Periodic boundary conditions (PBCs) are implemented by adding a long-range coupling between the first and last site of the system. The ground state obtained with PBCs corresponds to a homogeneous solution which for large enough particle number \(N=N_{A}+N_{B}\) and system size \(L\) becomes a good approximation of the thermodynamic limit solution. By exploring the energy of the system as a function of density and imbalance we are able to obtain the full equation of state. 
Specifically, we fix the total density of the system to match the saturation density obtained in the droplets with open boundary conditions (OBCs) presented in Sec. 4.1. We have explicitly checked that this corresponds to the equilibrium density. Then, we study how the equation of state (EoS) evolves with the imbalance. To extract relevant information from the EoS we compute the chemical potential of the A species, \[\mu_{A}=E\big{(}N_{A},N_{B}\big{)}-E\big{(}N_{A}-1,N_{B}\big{)}\,, \tag{7}\] where \(E\big{(}N_{A},N_{B}\big{)}\) is the energy of the homogeneous solution with \(N_{A}\) and \(N_{B}\) particles. The chemical potential \(\mu_{A}\) increases with the particle imbalance \(z\), see Fig. 7. At a critical imbalance \(z^{*}\) the chemical potential equals the energy of a single free particle in the Bose-Hubbard model, \(\mu_{A}^{*}=-2t\). At this point, the droplet is not able to sustain an excess of imbalanced particles since their energy becomes lower outside the droplet. Thus, the chemical potential indicates the critical point at which expulsion is expected in finite droplets. The thermodynamic calculation of the critical imbalance \(z^{*}\) agrees very well with the one obtained in finite droplets, where expulsion is observed for imbalances \(z>z^{*}\). Figure 7: Chemical potential of the A species as a function of the imbalance quantity \(z\). The error bars in \(z\) come from the discretization of the density obtained from the simulations in open boundary conditions into a finite simulation in periodic boundary conditions. The dashed line is a fit with a quartic function. The green background shows the region where the fit is greater than \(-2\). These values are obtained for \(U/t=8\), \(r=0.15\) and \(L=80\). ### Super Tonks-Girardeau gas of the exceeded particles For small particle imbalance and large interaction strengths \(U/t\) we find that the density of exceeded particles, \(n_{i,A}-n_{i,B}\), inside the quantum droplet exhibits \(N_{A}-N_{B}\) pronounced bumps, see Fig. 8(b). This has to be contrasted with a weakly interacting Bose gas, where the density profile is almost flat. This indicates that the exceeded particles form a highly correlated state in which density-density correlations are enhanced. To quantify the properties of the highly correlated gas formed on top of the quantum droplet, we calculate the energy difference between the two components, \(E_{A}-E_{B}\), see Fig. 8. We observe that its value vanishes at a critical particle imbalance and that it resembles the behavior of a lattice Tonks-Girardeau gas with an effective density, \[\tilde{n}=\frac{\langle\hat{n}_{A}\rangle}{1-\langle\hat{n}_{A}\rangle(a-1)}\,, \tag{8}\] where \(a\) represents the size of the particles measured with respect to the lattice spacing and \(\langle\hat{n}_{A}\rangle\) the mean density of the gas. Given this observation, we compare the energy difference with the energy of a gas of hard rods in a 1D lattice by performing an excluded-volume substitution \(L\to L(1-\langle\hat{n}_{A}\rangle(a-1))\). This substitution was previously used in the continuum to obtain the energy of the super Tonks-Girardeau (sTG) gas [31]. The same procedure leads to the energy of the sTG gas in a 1D lattice, \[E=-2J\frac{\sin(\pi\langle\hat{n}_{A}\rangle/(1-\langle\hat{n}_{A}\rangle(a-1)))}{\sin(\pi/(L(1-\langle\hat{n}_{A}\rangle(a-1))))}\,.
\tag{9}\] Figure 8: (a) In circles and in squares, \((E_{A}-E_{B})/t\) as a function of \(n=(N_{A}-N_{B})/R\) for \(r=0.15\) and \(r=0.2\), respectively. The dashed lines correspond to a fitting with Eq. (9) and considering \(J\) and \(a\) as free parameters. For \(r=0.15\), \(J=0.888\pm 0.012\) and \(a=3.596\pm 0.015\) and for \(r=0.2\), \(J=0.3858\pm 0.0010\) and \(a=4.72\pm 0.02\). (b) Density difference in the droplet, \(n_{A}(x)-n_{B}(x)\), as a function of the lattice site \(i\); for a droplet obtained for \(N_{A}=40\), \(N_{B}=36\), \(U/t=10\) and \(r=0.15\). The parameter \(J=1/(2m^{*})\) takes into account the effective mass of the exceeded particles propagating on top of the quantum droplet and \(a\) gives their effective size. By fitting the free parameters \(J\) and \(a\) we observe that the energy difference \(E_{A}-E_{B}\) is well described by the energy of the lattice sTG gas, see Fig. 8. Thus, we conclude that the exceeded particles form an sTG gas on top of the quantum droplet. Since the exceeded particles form a highly-correlated gas on top of the quantum droplet we can estimate the dependence of the magnetization at small values of particle imbalance. The density of the exceeded gas is given by \(\langle\hat{n}_{A}\rangle=(N_{A}-N_{B})/R\), being \(R\) the size of the droplet obtained from Eq. (2). Then, we can write, \[z=\frac{N_{A}-N_{B}}{N_{A}+N_{B}}=\frac{R\langle\hat{n}_{\text{diff}}\rangle}{N _{A}+N_{B}}, \tag{10}\] which allows us to express the magnetization as, \[m_{ab}=\frac{\langle\hat{n}_{\text{diff}}\rangle}{2}=\frac{z(N_{A}+N_{B})}{2R} \simeq zn_{A}, \tag{11}\] where in the last step we do a first approximation to the balanced case \((N_{A}+N_{B})/2\simeq N_{A}\) and we write \(n_{A}=N_{A}/R\). Within this approximation, the magnetization is linear in \(z\) with a proportionality given by the equilibrium density of the species A. Since the equilibrium density is universal for any number of particles, this approximation of the magnetization is also universal on \(z\). In Fig. 5 we show that the magnetization follows this linear dependence for small imbalances \(z\). ### Bound state insulator The large bound states observed in the few-body problem suggest that these may have an important role in the many-body scenario. In the low particle imbalance and large interaction strength regime, we argue that for each B particle removed, a bound state with particle imbalance is formed on top of a dimerized balanced quantum droplet. The large bound states have a size \(a\) larger than the lattice spacing. Remarkably, we find that these bound states behave as sTG gas on top of the quantum droplet. By removing more B (increasing imbalance), there is a critical point where the density of bound states becomes commensurate with the droplet size and an insulator is formed \(\langle\hat{n}_{a}\rangle a=1\). After that, it becomes impossible to fit more bound states inside the droplet for larger imbalances, and thus the excess of A particles is expelled outside the droplet. This provides an estimation of the critical imbalance \(z^{*}\) based on the sTG picture, \[z^{*}=\frac{1}{na}. \tag{12}\] The value of the magnetization after expulsion can also be determined, \[m_{ab}(z^{*})=\frac{\langle\hat{n}_{A}\rangle-\langle\hat{n}_{B}\rangle}{2}= \frac{1}{2a}\,. \tag{13}\] With Eq. 
(13) and (12) we finally obtain, \[\frac{m_{ab}(z^{*})}{z^{*}\cdot n(z^{*})}=\frac{1}{2}\,, \tag{14}\] which establishes a relation between the value of the magnetization plateau and the critical imbalance at which expulsion starts. This relation is presented in Fig. 9 for a different number of particles. If we extrapolate the values to the thermodynamic limit \(N\to\infty\) with a function \(f(1/N_{A})=c+d/N_{A}\), where \(c\) and \(d\) are free parameters, the prediction in Eq. (14) is compatible with numerical results. Furthermore, we also find the size of the bound states obtained from the energy fitting in Eq. (9), the particle imbalance and the magnetization at the critical point \(m(z^{*})\), given by Eq. (12) and Eq. (13), respectively; have very similar values. This result corroborates our interpretation in terms of sTG gas formed by large bound states. ### Magnetic structure within the droplet We further explore the magnetic correlations within quantum droplets, given that they manifest a finite magnetization in the bulk for imbalances \(z<z^{*}\). For this purpose, we consider two important correlation functions. The spin correlator \(\langle b_{A}^{\dagger}(x)b_{B}(x)b_{A}(0)b_{B}^{\dagger}(0)\rangle\) measures the extent of magnetic order within the system, indicating how the effective spin states of A and B particles correlate across different locations. Additionally, the pair correlator \(\langle b_{A}^{\dagger}(x)b_{B}^{\dagger}(x)b_{A}(0)b_{B}(0)\rangle\) provides us with insights into the tendency of the system to exhibit phase coherence between pairs formed by particles A and B. Figure 10 presents our findings for two homogeneous solutions obtained with PBCs. Figure 10(a) depicts the results in the particle balance, while Fig. 10(b) illustrates the particle-imbalanced case. In the scenario in which the particle imbalance is present, the spin correlator exhibits an exponential decay while the pair correlator decays with a power law. This observation is consistent with the characteristics of the pair superfluid (PSF) phase, which is typified by a gapless region in density and a spin sector with a gap. Such behavior, however, changes with the introduction of particle imbalance. In this situation both correlators exhibit an algebraic decay, indicating the closure of the spin gap and a transition into the two-superfluid (2SF) phase. This outcome emphasizes the significance of particle imbalance as a crucial variable in the phase space, effectively transitioning the system from the PSF to the 2SF phase. Moreover, we confirm that the imbalance plays a significant role in modulating the magnetic structure of Figure 9: Magnetization normalized by the total density \(n=n_{A}+n_{B}\) times the imbalance quantity \(z\) just before the particle expansion occurs, \(z^{*}\). We fit the obtained values using a function \(f(1/N_{A})=a+b/N_{A}\), represented with a grey dashed line in the plot. From this fit we obtain \(a=0.499\pm 0.03\) and \(b=-1.11\pm 0.17\). The standard deviation of the parameters is displayed with a blue background. These results are obtained for \(U/t=8\), \(r=0.15\) and we choose \(L\) ensuring that the droplets fit inside the lattice. the droplet. The analytical solution derived in [32], which we use as a benchmark for comparison, is derived within the 2SF region. As such, we can extract the effective Luttinger parameters \(K_{a}\), \(K_{s}\) through a fitting procedure. 
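The fitting procedure behind Fig. 8 and the critical-imbalance estimates of Eqs. (12)–(13) can be sketched as follows. Equation (9) is used as the fit model with \(J\) and \(a\) as free parameters; the arrays of effective densities and energy differences below are placeholders standing in for the DMRG data, and the bulk density is an illustrative value.

```python
import numpy as np
from scipy.optimize import curve_fit

L = 200  # lattice size entering the excluded-volume substitution

def e_stg_lattice(n, J, a):
    """Lattice super Tonks-Girardeau energy, Eq. (9).

    n : mean density <n_A> of the exceeded particles on top of the droplet.
    J : effective hopping, J = 1/(2 m*).
    a : effective size of the bound states (in units of the lattice spacing).
    """
    shrink = 1.0 - n * (a - 1.0)          # excluded-volume factor
    return -2.0 * J * np.sin(np.pi * n / shrink) / np.sin(np.pi / (L * shrink))

# Placeholder data: densities of exceeded particles and measured E_A - E_B
# (generated here from the model itself, with values close to the Fig. 8 fit).
n_exc = np.linspace(0.02, 0.12, 8)
e_diff = e_stg_lattice(n_exc, J=0.888, a=3.6)

(J_fit, a_fit), _ = curve_fit(e_stg_lattice, n_exc, e_diff, p0=(1.0, 3.0))

# Critical imbalance and magnetization plateau from the sTG picture,
# Eqs. (12) and (13), with n the total density in the droplet (placeholder value).
n_total = 0.9
z_star = 1.0 / (n_total * a_fit)
m_plateau = 1.0 / (2.0 * a_fit)
print(f"J = {J_fit:.3f}, a = {a_fit:.2f}, z* = {z_star:.3f}, m(z*) = {m_plateau:.3f}")
```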
## 5 Conclusion In this work, we have studied the effects of particle imbalance on binary bosonic mixtures at zero temperature in a one-dimensional lattice. We calculated the binding energies in few-body systems of different bound states, which have shown the formation of composite bound states when there is particle imbalance in the system. These composites are not found in the particle-balanced situation. We extracted information about these states by calculating the correlation functions, and we discovered the existence of a critical point at which B particles cannot bind all the other A particles. In the many-body limit, we studied the magnetization \(m_{ab}\). As the imbalance is increased, we identified two distinct regions. In the first one, droplets gain magnetization, resulting in a difference in the density of both species. This produces a system where the densities of both species are proportional to the other. In the second region, the droplet cannot sustain further imbalance. Therefore, it locks the magnetization in the bulk and, for greater particle imbalances, expels particles beyond its boundary. We examined the coherence between the droplet and the exterior gas. The analytical expression of the magnetization presented for low particle imbalance and the relation between the expulsion point and magnetization show very good agreement with numeric results. Using simulations that approximate the thermodynamic Figure 10: The correlation functions \(C_{ij}\) as a function of the separation between sites \(|i-j|\), where \(i\) and \(j\) are lattice sites and \(j=40\) is fixed in the middle of the lattice, for two different homogeneous systems. In panels (a) we display the results in the particle-balanced situation \(N_{A}=N_{B}=59\), whereas in (b) we show the results with particle imbalance, \(N_{A}=65,N_{B}=47\). In both situations, these results are obtained for a system with periodic boundary conditions, \(L=80\), \(U/t=8\), \(r=0.15\) and \(\chi=512\). The coefficients obtained from the fit are \(K_{s}=0.73\pm 0.01\) and \(K_{a}=3.9\pm 0.2\). limit, we determined the expulsion point by deriving the chemical potential of the majority component in the mixture. We found that the unpaired particles within the droplet effectively form a super Tonks-Girardeau (\(\mathrm{\SIUnitSymbolFace{s}TG}\)) gas. Moreover, we discovered that the expulsion point coincides with the critical density at which the size of the \(\mathrm{\SIUnitSymbolFace{s}TG}\) gas becomes comparable to the droplet size. In Appendix A we present the studies of the convergence of the relevant quantities within DMRG simulations. We discuss the importance of these parameters and we explain the scaling of the computational time with these. The notes presented in this Appendix should be interesting for anybody that wants to simulate such systems using DMRG, since often these numeric details are not commonly explained in the literature. To the best of our knowledge, this is the first time that the effect of particle imbalance has been studied on one-dimensional quantum droplets in an optical lattice. We have shown that droplets are robust against a small particle imbalance and that they are able to gain magnetization. This feature confirms the viability of an experimental implementation, in which more than often a perfectly balanced situation is difficult to achieve. It would be interesting to have a more in-depth study of the correlations between the gas of expelled particles and the entanglement in the system. 
In addition, it would also be interesting to study the effects of the imbalance in the intra-species interaction strength, \(U_{AA}\neq U_{BB}\), since that is often the case in experimental setups. ## Acknowledgements The authors thank Andrzej Syrwid and Marcin Plodzien for comments concerning the convergence of the method for small values of \(r\). Funding informationThis work has been funded by Grants No. PID2020-114626GB-I00 and PID2020-113565GB-C21 from the MICIN/AEI/10.13039/501100011033 and by the Ministerio de Economia, Industria y Competitividad (MINECO, Spain) under grants No. FIS2017-84114-C2-1-P and No. FIS2017-87534-P We acknowledge financial support from the Generalitat de Catalunya (Grant 2021 SGR 01411). DMRG Convergence In order to obtain the ground state of the system we employ the Density Matrix Renormalization Group (DMRG) algorithm. This method allows us to obtain the ground state of the system given the number of particles and the system size. At the same time, DMRG sets a number of variational parameters set by the bond dimension \(\chi\) and the dimension of the local Hilbert space \(d\). To properly obtain reliable physical results we study how the ground state properties depend on the number of these variational parameters. At the same time, our limited classical computational resources force us to reduce the size of the simulations in order to be able to compute them in a feasible time. This balance between these two constraints is what we study in this Appendix. In each sweep of the DMRG algorithm, we apply an effective Hamiltonian over the Matrix Product State (MPS) updating consecutively one (single-site DMRG) or two sites (two-site DMRG) in order to minimize the energy [33]. Our DMRG computations have been performed using TeNPy [34]. This one uses the two-site DMRG and thus we focus only on this algorithm in the following. The effective Hamiltonian can be written as a matrix of dimensions \(\chi_{\text{max}}^{2}d^{2}\times\chi_{\text{max}}^{2}d^{2}\)[34], where \(\chi_{\text{max}}\) is the maximum bond dimension in the two-sites updated and \(d\) is the dimension of the Hamiltonian in a single-site. The most computationally expensive part of DMRG is to minimize the energy when the effective Hamiltonian is applied. To do this, we use the Lanczos algorithm [35], which typically converges after a few tensor products that scale \(\mathcal{O}\left(\chi_{\text{max}}^{3}Dd^{2}+\chi_{\text{max}}^{2}D^{2}d^{3}\right)\), where \(D\) is the bond dimension of the Hamiltonian written as a Matrix Product Operator (MPO). The convergence criteria followed to stop DMRG sweeps is when the relative change in the energy at each tensor update in a sweep is \(\Delta E/|E|<-10^{-8}\) and the entropy \(\Delta S/S<10^{-5}\). The computations used in this work have been produced by three different computers. Two of these are desktop computers and the third is a computer cluster. We want to thank Dr. Arnau Rios for allowing us to access this cluster. In Table 1 we detail the main hardware specifications of the three computers. TeNPy allows parallelizing the code to run on multiple cores. This feature would enable us to take advantage of the vast number of cores that the cluster has. Nevertheless, we have seen that the optimal number of cores in our TeNPy simulations is \(2-3\). Since in this work, we focus on the effect of particle imbalance, we need to compute a large number of simulations for a different number of particles between both species. 
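To make the scaling argument above concrete, the following sketch evaluates the dominant per-update cost \(\chi_{\text{max}}^{3}Dd^{2}+\chi_{\text{max}}^{2}D^{2}d^{3}\) for a few bond dimensions, using the local dimension of a two-species bosonic site with a cutoff of \(M\) bosons per species (discussed in Sec. A.2). The MPO bond dimension \(D\) is an illustrative value, not the exact one of our Hamiltonian.

```python
# Rough per-update cost of the Lanczos step in two-site DMRG,
# following the scaling O(chi^3 * D * d^2 + chi^2 * D^2 * d^3).
M = 4                  # cutoff on the bosons of each species per site
d = (M + 1) ** 2       # local dimension for the two-species site
D = 8                  # illustrative MPO bond dimension of the Hamiltonian

for chi in (256, 512, 1024, 2048, 4096):
    cost = chi**3 * D * d**2 + chi**2 * D**2 * d**3
    print(f"chi = {chi:5d}  ->  ~{cost:.2e} floating-point operations per update")

# The chi^3 term dominates: doubling chi makes each update roughly 8x more
# expensive, which is why the maximum bond dimension has to be limited in practice.
```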
Therefore, we use our computers to simulate a great number of these simulations at once which use \(2-3\) cores each. Another added benefit of working in a CPU cluster is the total amount of RAM size. As an example, a simulation of a droplet with maximum bond dimension \(\chi=4096\) occupies approximately 50 gigabytes of RAM memory. In both desktop computers, the RAM memory is inferior to this number. Thus, we would only be able to compute this simulation using the cluster. \begin{table} \begin{tabular}{|l||c|c|c|} \hline Computer & CPU model & Number of cores & RAM memory \\ \hline Desktop computer \#1 & \begin{tabular}{c} Intel® CoreTM \\ iS-8400 CPU - 2.80 GHz \\ \end{tabular} & 6 & 49.321 GB \\ \hline Desktop computer \#2 & \begin{tabular}{c} Intel® CoreTM \\ iS-4430 CPU - 3.00GHz \\ \end{tabular} & 4 & 16.456 GB \\ \hline CPU Cluster & \begin{tabular}{c} Intel® Xeon® Gold \\ 6240R CPU - 2.40 GHz \\ \end{tabular} & 96 & 202.35 GB \\ \hline \end{tabular} \end{table} Table 1: Hardware specifications of the different computers used in this work. Since we have important but finite computing resources available, a crucial duty is to minimize the size and total time of computations by reducing key parameters in DMRG. This has to be done carefully to obtain meaningful results. In the following subsections, we explain our criteria for choosing two of these parameters: the bond dimension and the maximum number of bosons per site. ### Bond dimension The bond dimension in an MPS is the dimension of the bond index that connects two following tensors. This quantity can give a measure of the amount of entanglement in the wave function [36]. As we explained, the most computationally expensive part of DMRG scales as \(\mathcal{O}\left(\chi_{\text{max}}^{3}Dd^{2}+\chi_{\text{max}}^{2}D^{2}d^{3}\right)\). Therefore in DMRG we have to limit the bond dimension up to a predefined value \(\chi\) to perform the simulations in a realistic time. The convergence of the quantities used in this work has to be carefully studied since the value of \(\chi\) can have an important role in the results of the simulations. In Fig. 11 we report the convergence of different quantities for a particle-imbalanced droplet with different maximum bond dimension \(\chi\). We are able to obtain a prediction to the limit of \(\chi\to\infty\) with a fit of the results to a function of \(1/\chi\). A crucial quantity studied in this work is the magnetization \(m_{ab}\). For \(\chi=256\) the error in the magnetization is on the fourth decimal. We consider that this error is small enough and we choose this value for simulations in which we want to obtain \(m_{ab}\). Although \(\chi=256\) is enough for the mentioned quantities we also show that it is not sufficient for more complex quantities. In particular, correlation functions measure the correlation between different parts of the system and thus are quantities highly influenced by the entangle Figure 11: Convergence of different quantities as a function of the maximum bond dimension, \(\chi\). We fit each quantity with a function \(f(\chi)=a+b/\chi^{c}\), where \(a,\ b\) and \(c\) are free parameters. In the second and third panel we exclude the first value to do the fit. Values obtained with a particle-imbalanced droplet for \(N_{A}=40,N_{B}=24\), \(U/t=8\), \(r=0.15\), \(L=144\) and \(M=4\). ment [37]. This means that these functions have harder convergence on the bond dimension \(\chi\). 
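The extrapolation to \(\chi\to\infty\) used in Fig. 11 amounts to fitting each observable with \(f(\chi)=a+b/\chi^{c}\). A minimal sketch of that fit is given below; the list of bond dimensions and the corresponding magnetization values are placeholders generated from the ansatz itself, standing in for the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def conv_model(chi, a, b, c):
    """Convergence ansatz f(chi) = a + b / chi**c; a is the chi -> infinity value."""
    return a + b / chi**c

# Placeholder data: magnetization measured at increasing bond dimensions.
chis = np.array([64.0, 128.0, 256.0, 512.0, 1024.0])
m_ab = np.array([0.1232, 0.1226, 0.1223, 0.1222, 0.1221])   # illustrative values

(a, b, c), _ = curve_fit(conv_model, chis, m_ab, p0=(m_ab[-1], 0.1, 1.0))
print(f"extrapolated value for chi -> infinity: {a:.4f} (exponent c = {c:.2f})")
```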
In addition, a limit in the bond dimension is translated into a limit in the entanglement of the system. Thus we expect the correlation between long distances to need a relatively large \(\chi\) to converge. In Fig. 12 we present the evolution of the one-body density matrix (OBDM) \((\hat{b}_{i,A}^{\dagger}\hat{b}_{j,A})\) as a function of \(|i-j|\) for a particle-imbalanced droplet that has expelled two particles, one in each side of the droplet. In Fig. 12(b) we show the OBDM for large values of \(|i-j|\) and it can clearly be seen that higher values of the bond dimension are needed in order to obtain the quantity in this region accurately. Since we have a somewhat limited amount of computing power, we are only able to work up to \(\chi=2048\) and we use this value for the OBDM results in our work. ### Maximum number of bosons per site Our system consists of a one-dimensional lattice with \(N_{A}\) and \(N_{B}\) atoms of A and B particles, respectively. Thus, each lattice site can contain from \(0\) to \(N_{A}+N_{B}\) atoms. This means that the dimension \(d\) of the effective Hamiltonian in one site is \(d=(N_{A}+1)(N_{B}+1)\). Although this would be the exact way to proceed, it is not computationally feasible since the dimension of the local Hamiltonian would be too large. Therefore, we put a cutoff on the maximum number of bosons of each species per site \(M\). This sets a maximum dimension of the local Hamiltonian. In this subsection, we study how this value affects the density profiles and the energy of the system. Before introducing particle imbalance, we study the effect of the cutoff \(M\) in the balanced situation \(N_{A}=N_{B}\). In Fig. 13 the mean density of the bulk of a droplet for the balanced situation as a function of the interaction strength \(U/t\) is reported. We choose \(M=4\) as the value used for computations in our work since it already shows a good convergence in the averaged density. Moreover, in our work we focus on the strongly interacting regime, that is the region of sufficiently large \(U/t\), which is also the region where the convergence in \(M\) is much faster. Now we consider a system with particle imbalance and we study the effect of the value \(M\). Figure 12: One-body density matrix \((\hat{b}_{i,A}^{\dagger}\hat{b}_{j,A})\) with \(i\) fixed on the middle of the lattice and \(j\) scans from \(j=i\) to the end of the lattice. In panel (b) we enlarge the gray area marked in panel (a). Values obtained with a particle-imbalanced droplet for \(N_{A}=40\), \(N_{B}=24\), \(U/t=8\), \(r=0.15\), \(L=144\) and \(M=4\). In Fig. 14 the convergence of some quantities for different values of \(M\) is shown. For \(M\leq 4\) the magnetization and density oscillate between the fourth and third decimal, respectively. The energy difference for \(M>4\) is on the fifth decimal, a value equivalent to the convergence criteria of DMRG. Therefore we conclude that for the range of \(U/t\in(8,10)\), \(M=4\) is a sufficient value to obtain valid results. Figure 14: Convergence of different quantities as a function of the cutoff of the maximum number of bosons of each species per site, \(M\). Values obtained with a particle-imbalanced droplet for \(N_{A}=40\), \(N_{B}=22\), \(U/t=8\), \(r=0.15\), \(L=144\) and \(\chi=256\). Figure 13: Averaged density of each species in the bulk of the droplet as a function of the interaction strength \(U/t\) for \(r=0.15\) and different \(L\), ensuring that the droplets fit inside the lattice.
2302.00370
How to select predictive models for causal inference?
As predictive models -- e.g., from machine learning -- give likely outcomes, they may be used to reason on the effect of an intervention, a causal-inference task. The increasing complexity of health data has opened the door to a plethora of models, but also the Pandora box of model selection: which of these models yield the most valid causal estimates? Here we highlight that classic machine-learning model selection does not select the best outcome models for causal inference. Indeed, causal model selection should control both outcome errors for each individual, treated or not treated, whereas only one outcome is observed. Theoretically, simple risks used in machine learning do not control causal effects when treated and non-treated population differ too much. More elaborate risks build proxies of the causal error using ``nuisance'' re-weighting to compute it on the observed data. But does computing these nuisance adds noise to model selection? Drawing from an extensive empirical study, we outline a good causal model-selection procedure: using the so-called $R\text{-risk}$; using flexible estimators to compute the nuisance models on the train set; and splitting out 10\% of the data to compute risks.
Matthieu Doutreligne, Gaël Varoquaux
2023-02-01T10:58:55Z
http://arxiv.org/abs/2302.00370v2
# How to select predictive models for causal inference? ###### Abstract Predictive models -as with machine learning- can underpin causal inference, to estimate the effects of an intervention at the population or individual level. This opens the door to a plethora of models, useful to match the increasing complexity of health data, but also the Pandora box of model selection: which of these models yield the most valid causal estimates? Classic machine-learning cross-validation procedures are not directly applicable. Indeed, an appropriate selection procedure for causal inference should equally weight both outcome errors for each individual, treated or not treated, whereas one outcome may be seldom observed for a sub-population. We study how more elaborate risks benefit causal model selection. We show theoretically that simple risks are brittle to weak overlap between treated and non-treated individuals as well as to heterogeneous errors between populations. Rather a more elaborate metric, the \(R-\)risk appears as a proxy of the oracle error on causal estimates, observable at the cost of an overlap re-weighting. As the \(R-\)risk is defined not only from model predictions but also by using the conditional mean outcome and the treatment probability, using it for model selection requires adapting cross validation. Extensive experiments show that the resulting procedure gives the best causal model selection. Model Selection; Heterogeneous Treatment Effect; G-formula; Observational Study; Machine Learning Introduction ### Valid causal inference from complex data requires causal model selection There is growing interest in answering causal questions from observational data. While Randomized Control Trials (RCTs) remain the gold standard in medicine to estimate treatment effect[2], observational studies bring value to assess real-world effectiveness and safety[9], as they use the data from routine practice, or for drug repositioning[24, 31] garnering first evidence without ethical concerns of systematic interventions[66]. The increasing amount of data collected routinely enables the use of increasingly flexible models that capture best heterogeneity and bridge to machine learning practices[45]. In particular the complexity of modern real-life health data, -Electronic Health Records, claims, or medical devices- calls for complex models. For causal inference from observational data, epidemiology has historically focused on methods that model treatment assignment[33, 56], based on the propensity score[4]. However, propensity-score methods are fragile to variance in probability estimates or lack of overlap between treated and non treated[21, 65]. Recent empirical results[50, 53] show a benefit of other types of methods, based on outcome modeling -also referred as G-computation or G-formula[6], Q-model in epidemiology[29] or conditional mean regression[50]. These outcome-modeling methods can easily go beyond Average Treatment Estimation (ATE), \(eg\) with Conditional Average Treatment Estimation (CATE), enabling to capture effect heterogeneity crucial for personalized medicine, to interpret the causal estimation on sub-populations, and policy optimization[62]. 
These methods capture the outcome as a function of the baseline covariates and the treatment with various models: Bayesian Additive Regression Trees[25], Targeted Maximum Likelihood Estimation[27, 41], causal boosting[46], causal multivariate adaptive regression splines (MARS)[46], random forests[49, 52], Meta-learners[54], R-learners[39], Doubly robust estimation[43]... The wide variety of methods leaves the applied researcher with the difficult choice of selecting between different estimators based on the data at hand. Usual practices to select models in predictive settings rely on cross-validation on the error on the outcome[60, 71]. In the case of causal inference, care must be taken that this error is not driven by inhomogeneities in treatment allocation. Indeed, while causal inference require modeling the links between an outcome and a treatment, the causal quantities are defined on a distribution distinct from the observed one: it includes _counterfactual_ observations. Given complex, potentially noisy, data, which model is to be most trusted to yield valid causal estimates? Because there is no single learner that performs best on all data sets, there is a pressing need for clear guidelines to select between causal models in health, economics and social science. Here we show that the best approach for model selection is to adapt cross-validation to estimate the so-called \(R-\)risk which modulates observed prediction error to compensate for systematic differences between treated and non-treated individuals. The \(R-\)risk relies on the two _nuisance_ models, themselves estimated from data and thus imperfect; yet these imperfections do not undermine the benefit of the \(R-\)risk. ### Prior work: model selection for outcome modeling (g-computation) The natural risk for CATE model selection is a error measure between the true -unobserved- CATE (oracle) and the CATE estimate obtained with a candidate model of the outcome. But this risk is not "feasible": it cannot be computed solely from observed data and requires oracle knowledge. #### Simulation studies of causal model selection In simulations, the oracle CATE is known. Schuler et al. 2018[47] thus use eight simulation setups[46] to compare four causal risks, concluding that for CATE estimation the best model-selection risk is the \(R\)-risk[39] -def. 7, below. Their empirical results are clear for randomized treatment allocation but less convincing for observational settings where both simple Mean Squared Error - MSE, \(\mu\)-risk(\(f\)) def. 5- and reweighted MSE -\(\mu\)-risk\({}_{IPW}\) def. 6- appear to perform better than \(R\)-risk on half of the simulations. Another work[51] studied empirically both MSE and reweighted MSE risks on the semi-synthetic ACIC 2016 datasets[53], but did not include the \(R\)-risk and looked only at the agreement of the best selected model with the true CATE risk -\(\tau\)-risk(\(f\)) def. 4-, not on the full ranking of methods compared to the true CATE. Here we study experimentally a wider variety of data generative process for the observational setup. We also study the influence of overlap, an important parameter of the data generation process which makes a given causal metric appropriate[65]. #### Theoretical studies of causal model selection Rolling and Yang 2014[32] propose a model selection procedure that asymptotically selects the best estimators among smooth models of the outcomes. 
However, practical cases often escape these theoretical requirement: it is delicate to assert whether there are enough samples for asymptotic settings to hold -especially with high dimensionality- and candidate prediction models may not be smooth, as with popular tree-based methods. Other work shows that unbiased estimates of the oracle CATE function \(\tau(x)\) can be plugged into the oracle \(\tau\)-risk for model selection. These CATE plugin estimators can be built with a simple IPW estimate [37], with a doubly robust estimator [61] or by debiasing a CATE estimator with influence functions [51] -in the like of Targeted Machine Learning [27, 41]. However, theory holds for _well-specified_ plugin CATE estimators and asymptotic regimes. **Statistical guarantees on causal estimation procedures** Much work in causal inference has focused on building procedures that guarantee asymptotically consistent estimators. Targeted Machine Learning Estimation (TMLE) [27, 41] and Double Machine Learning [43] both provide estimators for Average Treatment Effect combining flexible treatment and outcome models. Here also, theories requires asymptotic regimes and at least assumes models to be _well-specified_. By contrast, Johansson et al. 2021 [67] studies causal estimation without assuming that estimators are well specified. They derive an upper bound on the oracle error to the CATE (\(\tau\)-risk) that involves the error on the outcome and the similarity of the distributions between the features of treated and control patients. However, they focus on using this upper bound for estimation, and do not give insights on model selection. In addition, for hyperparameter selection, they rely on a plugin estimate of the \(\tau\)-risk built with counterfactual nearest neighbors, which has been shown ineffective [47]. ### Objectives and structure of the paper In this paper, we study _model selection procedures_ (causal risks) in _finite samples_ settings and without _well-specification_ assumption. In these -practical- settings an important question is whether more complex risks, asymptotically consistent but with more quantities to estimate, suffer from more variance than their simpler though non-consistent counterparts, leading to worse model selection. In this respect, we compare semi-oracle settings, that use oracle knowledge of nuisance, to plugin estimates. We first introduce the potential outcome framework and its notations, illustrating causal estimation with a toy example in Section 2. Then, we pose the causal model selection problem in Section 3, defining the studied causal risks. Section 4 gives our theoretical result. In section 5 we run a thorough empirical study, with many different settings covered. Finally, we comment our findings in Section 6. ## 2 A Causal-Inference Framework ### The Neyman-Rubin Potential Outcomes framework Settings Following the Neyman-Rubin Potential Outcomes framework [34], we observe an outcome \(Y\in\mathbb{R}\) (eg. mortality risk or hospitalization length), function of a binary treatment \(A\in\mathcal{A}=\{0,1\}\) (eg. a medical act, a drug administration), and baseline covariates \(X\in\mathcal{X}\subset\mathbb{R}^{d}\). We observe the factal distribution, \(O=(Y(A),X,A)\sim D=\mathbb{P}(y,x,a)\). However, we want to model the existence of potential observations (unobserved ie. counterfactual) that correspond to a different treatment. Thus we want quantities on the counterfactual distribution \(O^{*}=(Y(1),Y(0),X,A)\sim D^{*}=\mathbb{P}(y(1),y(0),x,a)\). 
At the population level, a popular quantity of interest -estimand- is the Average Treatment Effect (ATE), \(\tau=\mathbb{E}_{Y(1),Y(0)\sim D^{*}}[Y(1)-Y(0)]\). To model heterogeneity, the Conditional Average Treatment Effect (CATE), \(\tau(x)=\mathbb{E}_{Y(1),Y(0)\sim D^{*}}[Y(1)-Y(0)|X=x]\), is also of interest. #### Nuisance definitions We define three important conditional expectations, required to estimate the ATE and CATE but generally unknown. They are called nuisances in the causal inference literature, mostly in applied econometrics [43]. **Definition 1** (Response surfaces).: The conditional expectation of the outcome given the covariates and the treatment, \(\mu_{a}(x)=\mathbb{E}_{Y\sim D}[Y|X=x,A=a]\). It models the relation between the outcome and the patient characteristics in the observed distribution. **Definition 2** (Conditional mean outcome).: The conditional expectation of the outcome given \(X\), \(m(x)=\mathbb{E}_{Y\sim D}[Y|X=x]\). It marginalizes over the intervention, focusing on the link between the outcome and the covariates. **Definition 3** (Propensity score).: The conditional probability to be treated: \(e(x)=\mathbb{P}[A=1|X=x]\). It models the intervention allocation. ### Causal assumptions Some assumptions are necessary to ensure identifiability of the causal estimands in observational settings[16]. We assume the usual strong ignorability assumptions, composed of _1) unconfoundedness_: \(\{Y(0),Y(1)\}\perp\!\!\!\perp A|X\), _2) strong overlap_: every patient has a strictly positive probability to receive each treatment, _3) consistency_, and _4) generalization_ (detailed in Appendix B). In this work, we insist on the fundamental overlap assumption[65], which is testable with data. ### Estimation with outcome models Should we know the two expected outcomes for a given \(X\), we could compute the difference between them, which gives the causal effect of the treatment. These two expected outcomes can be computed from the observed data: the consistency (assumption 3) and ignorability (assumption 1) assumptions imply the equality of two different expectations: \[\mathbb{E}_{Y(a)\sim D^{*}}[Y(a)|X=x]=\mathbb{E}_{Y\sim D}[Y|X=x,A=a]=\mu_{a}(x) \tag{1}\] On the left, the expectation is taken over the counterfactual unobserved distribution. On the right, the expectation is taken over the factual observed distribution, conditionally on the treatment. This equality is referred to as the g-formula identification. For the rest of the paper, the expectations are always taken over the factual observed distribution \(D\), and we omit the explicit specification of the distribution. This identification leads to outcome-based estimators (i.e. g-computation estimators[29]), targeting the ATE \(\tau\) with outcome modeling:
\[\tau=\mathbb{E}_{Y\sim D^{*}}[Y(1)-Y(0)|X=x]=\mathbb{E}_{Y\sim D}[Y|A=1]- \mathbb{E}_{Y\sim D}[Y|A=0] \tag{2}\] Given a sample of data and the oracle response functions \(\mu_{0}\), \(\mu_{1}\), the finite sum estimator of the ATE is written: \[\hat{\tau}=\frac{1}{n}\bigg{(}\sum_{i=1}^{n}\mu_{1}(x_{i})-\mu_{0}(x_{i}) \bigg{)} \tag{3}\] This estimator is an oracle **finite sum estimator** by opposition to the population expression of \(\tau\), \(\mathbb{E}[\mu_{1}(x_{i})-\mu_{0}(x_{i})]\), which involves an expectation taken on the full distribution \(D\), which is observable but requires infinite data. For each estimator \(\ell\) taking an expectation over \(D\), we use the symbol \(\hat{\ell}\) to note its finite sum version. Similarly to the ATE, for the CATE, at the individual level: \[\tau(x)=\mu_{1}(x)-\mu_{0}(x) \tag{4}\] ### Illustration: Toy example of causal model selection Given various estimators of \(\mu_{0}(x)\) and \(\mu_{1}(x)\), we are interested in selecting those that minimize the estimation error on treatment effect. We illustrate that machine-learning model evaluation procedures such as Out-Of-Sample Mean Squared Error are not suited for this purpose. Figure 1 gives a toy example, with \(Y\in[0,1]\), the probability of death, a binary treatment \(A\in\{0,1\}\) and a single covariate \(X\in\mathbb{R}\) which summarizes the patient health status (eg. the Charlson co-morbidity index8). We simulate a credible situation for which the treatment is beneficial (decreases the mortality probability) for patient with high Charlson scores (bad health states). On the contrary, the treatment has little effect for patients in good condition (small Charlson scores). Footnote 8: The conditional probability to be treated6: \(e(x)=\mathbb{P}[A=1|X=x]\). It models the intervention allocation. Some models of the response surfaces have high predictive performances of the outcome (measured as regression R2 score) but perform poorly for causal inference tasks such as Average Treatment Effect (error on the true effect \(\tau\)) or Heterogeneous Treatment Effect inference (error on \(\tau(x)\)). Figure 0(a) shows a random forest with these counter-intuitive properties. On the contrary, Figure 0(b) shows a linear model with smaller R2 score but better causal inference. Intuitively, the linear model misspecified -the outcome functions are not linear-, leading to poor R2; but it interpolates better to regions with poor overlap -high Charlson score- and thus gives better CATE estimates. Conversely, the random forest puts weaker assumptions on the data, thus has higher R2 score but is biased by the treated population in the poor overlap region, leading to bad CATE scores. **FIGURE 1 Toy example**: (a) a random-forest estimator with high performance for standard prediction (high \(\widehat{R2}\)) but that yields poor ATE estimation (large error between true effect \(\tau\) and estimated \(\hat{\tau}_{f}\)), (b) a linear estimator with smaller prediction performance leading to better ATE and CATE estimation. Selecting the estimator with the smallest \(\tau\)-risk would lead to the smallest error on \(\tau\); however the \(\tau\)-risk is not feasible: computing it requires access to unknown quantities. While the random forest fits the data better than the linear model, it gives worse causal inference because its error is very inhomogeneous between the treated and untreated. The \(\widehat{R2}\) score does not capture this inhomogeneity. 
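A compact numerical illustration in the spirit of this toy example is given below: a T-learner-style outcome model is fitted with scikit-learn and plugged into the g-computation estimator of Eqs. (3)–(5). The data-generating process and the choice of ridge regressors are purely illustrative and are not the simulations used later in the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d = 2000, 3
X = rng.normal(size=(n, d))
e = 1 / (1 + np.exp(-X[:, 0]))             # propensity score
A = rng.binomial(1, e)                      # treatment assignment
tau = 1.0 + X[:, 1]                         # true CATE
Y = X.sum(axis=1) + A * tau + rng.normal(scale=0.5, size=n)

# T-learner: one outcome model per treatment arm (estimates of mu_0 and mu_1).
mu_0 = Ridge(alpha=1.0).fit(X[A == 0], Y[A == 0])
mu_1 = Ridge(alpha=1.0).fit(X[A == 1], Y[A == 1])

# g-computation: plug both predictions in for every unit, Eqs. (3)-(5).
tau_hat_x = mu_1.predict(X) - mu_0.predict(X)   # CATE estimates tau_f(x)
tau_hat = tau_hat_x.mean()                      # ATE estimate

print(f"estimated ATE = {tau_hat:.3f}, true ATE = {tau.mean():.3f}")
```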
This toy example illustrates that the classic minimum Mean Square Error criterion is not suited to choosing a model among a family of candidate estimators for causal inference. Yet, model selection is a crucial aspect of causal inference. Indeed, estimates may vary markedly when using different models. For instance, figure 2 shows the large variations obtained across six different outcome estimators on the ACIC 2016 semi-synthetic datasets[53]. Flexible models such as boosting trees with a big learning rate (0.1) are doing well in most settings -in line with previous work[53]- except for setups with poor overlap, on the right of the plot. The same models with a small learning rate (0.01) yield the poorest performances. These two failure cases suggest that a simple rule of thumb such as preferring more flexible models does not work in general; an actual model-selection procedure is needed. **FIGURE 2** Average Treatment Effect estimations of six different outcome models used in g-estimators on the simulated data from the 76 simulations from ACIC 2016[53]. The models are boosted trees, ridge regression without interaction and ridge regression without interaction with the treatment. For each model, two choices of learning rate used during training are shown. The different configurations are plotted along with the overlap violation -measured with normalized Total Variation, def 15. Appendix A gives hyperparameter details. We get non-consistent results with non overlapping error bars: choosing the best model among a family of candidate estimators is important. ## 3 Causal Model Selection: Problem Setting ### Causal model selection We formalize the problem of model selection for causal estimation. Thanks to the g-formula identification (Equation 1), a given outcome model \(f\,:\,\mathcal{X}\times\mathcal{A}\rightarrow\mathcal{Y}\) -learned from data or built from domain knowledge- induces feasible estimates of CATE and ATE: \[\hat{\tau}_{f}(x)=f(x,1)-f(x,0)\quad\text{and}\quad\hat{\tau}_{f}(O)=\frac{1}{n} \sum_{i=1}^{n}\hat{\tau}_{f}(x_{i}) \tag{5}\] Let \(\mathcal{F}=\{f\,:\,\mathcal{X}\times\mathcal{A}\rightarrow\mathcal{Y}\}\) be a family of such estimators. Our goal is to select the best candidate in this family for the observed dataset \(O\) using a metric of interest \(\ell\): \[f_{\ell}^{*}=\operatorname*{argmin}_{f\in\mathcal{F}}\ell(f,O) \tag{6}\] We detail below possible metrics \(\ell\), risks useful for causal model selection, and how to compute them.
### Model-selection risks, oracle and feasible The \(\tau\)-risk: an oracle error risk Ideally, we would like to target the CATE, which naturally leads to the following evaluation risk: **Definition 4** (\(\tau\)-risk(\(f\))).: also called PEHE [25, 40]: \[\tau\text{-risk}(f)=\mathbb{E}_{X\sim\rho(X)}[(\tau(X)-\hat{\tau}_{f}(X))^{2}]\] its finite-sum version over the observed data: \[\widehat{\tau\text{-risk}}(f)=\sum_{x\in O}\big{(}\tau(x)-\hat{\tau}_{f}(x) \big{)}^{2}\] However these risks are not feasible because the oracles \(\tau(x)\) are not accessible, with the observed data \((Y,X,A)\sim\mathcal{D}\). #### Feasible error risks We explore **feasible risks**, based on the prediction error of the outcome model and _observable_ quantities. Two of the following risks use the nuisances \(e\) -propensity score, def 3- and \(m\) -conditional mean outcome, def 2. We give the definitions as _semi-oracles_, function of the true unknown nuisances, but later instantiate them with estimated nuisances, noted \(\big{(}\tilde{e},\tilde{m}\big{)}\). Semi-oracles risks are superscripted with the \(\star\) symbol. **Definition 5** (Factual \(\mu\)-risk(\(f\))).: [42] This is the usual Mean Squared Error on the target y. It is what is typically meant by "generalization error" in supervised learning and estimated with cross-validation: \[\mu\text{-risk}(f)=\mathbb{E}_{(Y,X,A)\sim D}\left[(Y-f(X;A))^{2}\right]\] **Definition 6** (\(\mu\)-risk\({}_{tPW}^{\star}(w,f)\)).: [13] Let the inverse propensity weighting function \(w(x,a)=\frac{a}{e(x)}+\frac{1-a}{1-e(x)}\), we define the semi-oracle Inverse Propensity Weighting risk, \[\mu\text{-risk}_{tPW}^{\star}(f)=\mathbb{E}_{(Y,X,A)\sim D}\left[\Big{(}\frac {A}{e(X)}+\frac{1-A}{1-e(X)}\Big{)}(Y-f(X;A))^{2}\right]\] **Definition 7** (\(R\)-risk\({}^{\star}(f)\)).: [47, 39] The \(R\)-risk uses the two nuisance \(m\) and \(e\): \[R\text{-risk}^{\star}(f)=\mathbb{E}_{(Y,X,A)\sim D}\big{[}\big{(}\left(Y-m\left( X\right)\right)-\left(A-e\left(X\right)\right)\tau_{f}\left(X\right)\big{)}^{2} \big{]}\] It has been introduced in causal-inference estimators motivated by its good approximation rate of \(\tau\), even with slow error rates on the nuisances \((\tilde{e},\tilde{m})\)[39]. These risks are summarized in Table 1. ### Estimation and model selection procedure Causal model selection (as in _eg_ Equation 6) may involve estimating a variety of quantities from the observed data: the outcome model \(f\), its induced risk as introduce in the previous section, and possibly nuisances required by the risk. Given a dataset with \(N\) samples, we split out a train and a test sets \((\mathcal{T},\mathcal{S})\) of sizes \(\left(\frac{N}{2},\frac{N}{2}\right)\). We fit each candidate estimator \(f\in\mathcal{F}\) on \(\mathcal{T}\). We also fit the nuisance models \((\tilde{\epsilon},\tilde{m})\) on the train set \(\mathcal{T}\), setting hyperparameters by a nested cross-validation before fitting the nuisance estimators with these parameters on the full train set. Causal quantities are then computed by applying the fitted candidates estimators \(f\in\mathcal{F}\) on the test set \(\mathcal{S}\). Finally, we compute the model-selection metrics for each candidate model on the test set. This procedure is described in Algorithm 1 and illustrated in Figure 3. As extreme inverse propensity weights induce high variance, clipping can be usefull to ensure numerical stability [18, 35]. 
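Given the definitions above, the feasible risks can be computed directly from held-out predictions once nuisance estimates \((\tilde{e},\tilde{m})\) are available. The sketch below shows this computation, with optional propensity clipping, on generic arrays; the variable names are ours and do not come from the paper's released code.

```python
import numpy as np

def mu_risk(y, a, f0, f1):
    """Factual mu-risk (Def. 5): mean squared error on the observed outcome."""
    pred = np.where(a == 1, f1, f0)
    return np.mean((y - pred) ** 2)

def mu_risk_ipw(y, a, f0, f1, e_hat, clip=1e-2):
    """Inverse-propensity-weighted mu-risk (Def. 6), with clipped weights."""
    e_hat = np.clip(e_hat, clip, 1 - clip)
    w = a / e_hat + (1 - a) / (1 - e_hat)
    pred = np.where(a == 1, f1, f0)
    return np.mean(w * (y - pred) ** 2)

def r_risk(y, a, f0, f1, e_hat, m_hat):
    """R-risk (Def. 7), using the nuisances m (conditional mean) and e (propensity)."""
    tau_f = f1 - f0
    return np.mean(((y - m_hat) - (a - e_hat) * tau_f) ** 2)

# Usage on a test set S: f0, f1 are the predictions f(x, 0), f(x, 1) of a candidate
# outcome model; e_hat, m_hat come from nuisance models fitted on the train set.
```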
Using the train set \(\mathcal{T}\) both to fit the candidate estimator and the nuisance estimates is a form of double dipping which leads to correlation in the final estimates [39]. However, comparing with a procedure where the nuisances are learned on a separated validation set, did not reveal important changes to the final results (see appendix E.2). We thus kept this simple two-sets procedure. Given a train and a test sets \((\mathcal{T},\mathcal{S})\sim\mathcal{D}\), a family of candidate estimators \(\{f\in\mathcal{F}\}\), a set of causal metrics \(\mathcal{L}\in\mathcal{L}\): 1. Prefit: Learn estimators for unknown nuisance quantities \((\tilde{\epsilon},\,\tilde{m})\) on the training set \(\mathcal{T}\) 2. Fit: \(\forall f\in\mathcal{F}\) learn \(\hat{f}(\cdot,a)\) on \(\mathcal{T}\) 3. Model selection: \(\forall x\in\mathcal{S}\) predict \(\left(\hat{f}(x,1),\hat{f}(x,0)\right)\) and evaluate each candidate estimator with each causal metric \(\mathcal{M}(\hat{f},\mathcal{S})\). For each causal metric \(\mathcal{L}\in\mathcal{L}\) and each candidate estimator \(f\in\mathcal{F}\), store the metric value: \(\mathcal{L}(f,\mathcal{S})\) - possibly function of \(\tilde{\epsilon}\) and \(\tilde{m}\) **Algorithm 1** Evaluation of selection procedures for one simulation \begin{table} \begin{tabular}{|l|l|l|} \hline Risk & Equation & Reference \\ \hline \(mse(\tau(X),\tau_{f}(X))=\tau\)-risk\((f)\) & \(\mathbb{E}_{X\sim\rho(X)}[(\tau(X)-\hat{\tau}_{f}(X))^{2}]\) & Eq. 4 [25] \\ \hline \(mse(Y,f(X))=\mu\)-risk\((f)\) & \(\mathbb{E}_{(Y,X,A)\sim D}\left[(Y-f(X;\,A))^{2}\right]\) & Def. 5 [47] \\ \hline \(\mu\)-risk\({}_{IPW}^{*}\) & \(\mathbb{E}_{(Y,X,A)\sim D}\left[\left(\frac{A}{\varepsilon(X)}+\frac{1-A}{1- \varepsilon(X)}\right)(Y-f(X;A))^{2}\right]\) & Def. 6 [13] \\ \hline \(R\)-risk\({}^{*}\)1 & \(\mathbb{E}_{(Y,X,A)\sim D}\left[\left(\left(Y-m\left(X\right)-\left(A-e\left(X \right)\right)\tau_{f}\left(X\right)\right)^{2}\right]\) & Def. 7 [39] \\ \hline \end{tabular} \({}^{1}\) Called \(\tau\)-risk\({}_{R}\) in Schuler et al. 2018 [47]. \end{table} Table 1: Review of causal risks Figure 3: Estimation procedure for causal model selection. Theory: links between feasible and oracle risks We recall that the \(\mu\)-risk\({}_{IPW}\) can upper bound the oracle \(\tau\)-risk. We show that the \(R\)-risk appears as a reweighted version of the oracle \(\tau\)-risk. Both results make explicit the role of overlap for the performances of causal risks. These bounds depend on a specific form of residual that we now define: for each potential outcome, \(a\in\{0;1\}\), the variance conditionally on \(x\) is[42]: \[\sigma_{y}^{2}(x;a)\ \overset{\text{def}}{=}\ \int\limits_{y}\ \big{(}y-\mu_{a}(x) \big{)}^{2}\,p(y\mid x=x;\,A=a)\,dy\] Integrating over the population, we get the Bayes squared error: \(\sigma_{B}^{2}(a)=\int_{X}\sigma_{y}^{2}(x;a)p(x)dx\) and its propensity weighted version: \(\hat{\sigma}_{B}^{2}(a)=\int_{X}\sigma_{y}^{2}(x;a)\,p(x;a)\,dx\). In case of a purely deterministic link between the covariates, the treatment, and the outcome, these residual terms are null. ### Upper bound of \(\tau\)-risk with \(\mu\)-risk\({}_{IPW}\) **Proposition 1** (Upper bound with \(\mu\)-risk\({}_{IPW}\) ).: [67] Given an outcome model \(f\), let a weighting function \(w(x;a)=\frac{a}{e(x)}+\frac{1-a}{1-e(x)}\) as the Inverse Propensity Weight. 
Then, under overlap (assumption 2), we have: \[\tau\text{-risk}(f)\leq\ 2\ \mu\text{-risk}_{IPW}(w,f)-2\ \big{(}\sigma_{B}^{2}(1)+ \sigma_{B}^{2}(0)\big{)}\] This result has already been derived in previous work[67]. It links \(\mu\)-risk\({}_{IPW}\) to the squared residuals of each population thanks to a reweighted mean-variance decomposition. For completeness, we provide the proof in Appendix C.1. The upper-bound comes from the triangular inequality applied to the residuals of both populations. Interestingly, the two quantities are equal when the absolute residuals on treated and untreated populations are equal on the whole covariate space, _ie_ for all \(x\in\mathcal{X}\), \(|\mu_{1}(x)-f(x,1)|=|\mu_{0}(x)-f(x,0)|\). The main source of difference between the oracle \(\tau\)-risk and the reweighted mean squared error, \(\mu\)-risk\({}_{IPW}\), comes from heterogeneous residuals between populations. These quantities are difficult to characterize as they are linked both to the estimator and to the data distribution. This bound indicates that minimizing the \(\mu\)-risk\({}_{IPW}\) helps to minimize the \(\tau\)-risk, which leads to interesting optimization procedures[67]. However, there is no guarantee that this bound is tight, which makes it less useful for model selection. Assuming strict overlap (probability of all individuals being treated or not bounded away from 0 and 1 by \(\eta\), appendix B), the above bound simplifies into a looser one involving the usual mean squared error: \(\tau\)-risk\((f)\leq\frac{2}{\eta}\ \mu\)-risk\((f)-2\ \big{(}\sigma_{B}^{2}(1)+\sigma_{B}^{2}(0)\big{)}\). For weak overlap (propensity scores not bounded far from 0 or 1), this bound is very loose (as shown in Figure 1) and is not appropriate to discriminate between models with close performances. ### Reformulation of the \(R\)-risk as reweighted \(\tau\)-risk We now derive a novel rewriting of the \(R\)-risk, making explicit its link with the oracle \(\tau\)-risk. **Proposition 2** (\(R\)-risk as reweighted \(\tau\)-risk).: Given an outcome model \(f\), its \(R\)-risk appears as weighted version of its \(\tau\)-risk (Proof in Appendix C.2): \[R\text{-risk}^{*}(f)=\int\limits_{x}e(x)\big{(}1-e(x)\big{)}\big{(}\tau(x)- \tau_{f}(x)\big{)}^{2}p(x)dx\ +\ \bar{\sigma}_{B}^{2}(1)\ +\ \bar{\sigma}_{B}^{2}(0) \tag{7}\] The \(R\)-risk targets the oracle at the cost of an overlap re-weighting and the addition of the reweighted Bayes residuals, which are independent of \(f\). In good overlap regions the weights \(e(x)\big{(}1-e(x)\big{)}\) are close to \(\frac{1}{4}\), hence the \(R\)-risk is close to the desired gold-standard \(\tau\)-risk. On the contrary, for units with extreme overlap violation, these weights goes down to zero with the propensity score. ### Interesting special cases #### Randomization special case If the treatment is randomized as in RCTs, \(p(A=1\mid X=x)=p(A=1)=p_{A}\), thus \(\mu\text{-risk}_{IPW}\) takes a simpler form: \[\mu\text{-risk}_{IPW}=\mathbb{E}_{(Y,X,A)\sim D}\left[\left(\frac{A}{p_{A}}+ \frac{1-A}{1-p_{A}}\right)(Y-f(X;A))^{2}\right]\] However, even if we have randomization, we still can have large differences between \(\tau\text{-risk}\) and \(\mu\text{-risk}_{IPW}\) coming from heterogeneous errors between populations as noted in Section 4.1 and shown experimentally in simulations[47]. 
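As a quick numerical illustration, the weight \(e(x)(1-e(x))\) appearing in Proposition 2 equals the constant \(p_{A}(1-p_{A})\) under randomization and shrinks quickly for extreme propensities; the short check below, with arbitrary propensity values, makes the orders of magnitude explicit.

```python
# Overlap weight e(x) * (1 - e(x)) entering Proposition 2.
for e in (0.5, 0.3, 0.1, 0.05, 0.01):
    print(f"e(x) = {e:4.2f}  ->  weight = {e * (1 - e):.4f}")
# 0.25 for perfect overlap (e = 1/2), ~0.01 for e = 0.01: poor-overlap units
# are strongly down-weighted in the R-risk compared to the oracle tau-risk.
```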
Concerning the \(R\)-risk, replacing \(e(x)\) by its randomized value \(p_{A}\) in Proposition 2 yields the oracle \(\tau\text{-risk}\) up to multiplicative and additive constants: \[R\text{-risk}=p_{A}\left(1-p_{A}\right)\tau\text{-risk}\ +\ (1-p_{A})\, \sigma_{B}^{2}(0)\ +\ p_{A}\sigma_{B}^{2}(1) \tag{8}\] Therefore, optimizing estimators for CATE with \(R\text{-risk}^{*}\) in the randomized setting is optimal if we target the \(\tau\text{-risk}\). This explains the strong performances of \(R\)-risk in randomized setups[47] and is a strong argument in favor of this risk for heterogeneity estimation in RCTs. #### Oracle Bayes predictor Consider the case where we have access to the oracle Bayes predictor for the outcome ie. \(f(x,a)=\mu(x,a)\), then all risks are equivalent up to the residual variance: \[\tau\text{-risk}(\mu)=\mathbb{E}_{X\sim\mu(X)}[(\tau(X)-\tau_{\mu}(X))^{2}]=0 \tag{9}\] \[\mu\text{-risk}(\mu)=\mathbb{E}_{(Y,X,A)\sim p(Y;X;A)}[\left(Y-\mu_{A}(X) \right)^{2}]=\int\limits_{X,A}\ \ \varepsilon(x,a)^{2}p(a\mid x)\,p(x)\,dx\,da\leq\sigma_{B}^{2}(0)+\sigma_{B}^{2 }(1) \tag{10}\] \[\mu\text{-risk}_{IPW}(\mu)=\sigma_{B}^{2}(0)+\sigma_{B}^{2}(1)\quad\text{follows from Lemma \ref{lem:prob}} \tag{11}\] \[R\text{-risk}(\mu)=\tilde{\sigma}_{B}^{2}(0)+\tilde{\sigma}_{B}^{2}(1)\leq \sigma_{B}^{2}(0)+\sigma_{B}^{2}(1)\quad\text{follows directly from Proposition \ref{lem:prob}} \tag{12}\] Thus, differences between causal risks only matter in finite sample regimes. Universally consistent learners converge to the Bayes risk in asymptotic regimes, making all model selection risks equivalent. However, in practice choices must be made in non-asymptotic regimes. ## 5 Empirical Study We evaluate the following causal metrics, oracle and feasible versions of finite-sample evaluation risks presented in Table 1: \[\mathcal{L}=\left\{\widehat{\mu\text{-risk}}_{IPW}^{*},\ \widehat{R\text{-risk}}^{*},\ \widehat{\mu\text{-risk}},\ \widehat{\mu\text{-risk}}_{IPW},\ \widehat{R\text{-risk}}\right\} \tag{13}\] We compare them on a large sample of different simulated data generation processes to select best performing estimator among a family of candidate estimators. We also evaluate them on three semi-simulated datasets: ACIC 2016[53], ACIC 2018[48] and Twins[38].1 Footnote 1: Scripts for the simulations and the selection procedure are available at [https://github.com/soda-inria/caussim](https://github.com/soda-inria/caussim). Results of the main experience described in this section are also provided to avoid re-running the full experience. ### Extensive simulation settings #### Data Generation Process We use simulated data, on which the ground-truth causal effect is known. Going further than prior empirical studies of causal model selection[47, 51], we use multiple generative processes, to reach conclusions wider than a given one (as discussed in Appendix E10). We generate random functions for the response functions using random bases. Basis extension methods are common in bio-statistics where spline are often used for functional regression [26, 55]. By allowing the function to vary at specific knots, they give flexible -non-linear- models of the studied mechanisms. Taking inspiration from splines, we use random approximation of Radial Basis Function (RBF) kernels [19] to generate the response surfaces. RBF use the same process as polynomial splines but replace polynomial by Gaussian kernels. Unlike polynomials, Gaussian kernels have exponentially decreasing influences in the input space. 
This exponential decay avoids unrealistic divergences of the population response surfaces at the ends of the feature space. The number of basis functions -_i.e._ knots- controls the complexity of the ground-truth response surfaces and treatment. We first use this process to draw the non-treated response surface \(\mu_{0}\) and the causal effect \(\tau\). We then draw the observations from a mixture of two Gaussians, one for the treated and one for the non-treated. We vary the separation between the two Gaussians to control the amount of overlap between treated and control populations, as it is an important parameter for causal inference (related to \(\eta\), which appears in Section 4.1). Finally, we generate the observed outcomes by adding Gaussian noise. We generated such datasets 1000 times, with uniformly random overlap parameters \(\theta\in[0,2.5]\). Appendix E.1 gives more details on the data generation.

### Family of candidate estimators

We build a candidate estimator in two steps. First, we use an RBF expansion similar to the one used for the data-generation process. Concretely, we choose two random knots and apply a transformation of the raw data features with the same Gaussian kernel used for the data-generation mechanism. This step is referred to as the featurization. Then, we fit a linear regression on these transformed features. We consider several ways of combining these steps into an outcome model; using common nomenclature [54], we refer to these regression structures as different meta-learners, which differ in how they model, jointly or not, the treated and the non-treated: * SLearner: A single learner for both populations, taking the treatment as a supplementary covariate. * SftLearner: A single set of basis functions is sampled at random for both populations, leading to a shared feature space used to model both the treated and the non-treated; two separate regressors are then fitted on this representation. * TLearner: Two completely different learners for each population, hence separate featurization and separate regressors. We are not including more elaborate meta-learners such as the R-learner [39] or the X-learner [54]. Our goal is not to have the best possible learner but to have a variety of sub-optimal learners in order to compare the different causal metrics. For the same reason, we did not include more powerful outcome models such as random forests or boosting trees.

Figure 4: Two examples of the simulation setup in the input space with two knots -_i.e._ basis functions: a low-overlap (4a) and a high-overlap (4b) setup. The top row gives views of the observations in feature space, while the lower row displays the two response surfaces on a 1D cut along the black lines drawn on the above panel.

For the regression step, we fit a Ridge regression on the transformed features with 6 different choices of the regularization parameter \(\lambda\in[10^{-3},10^{-2},10^{-1},1,10^{1},10^{2}]\), coupled with a TLearner or a SftLearner. We sample 10 different random bases for the learning procedure and the featurization, yielding a family \(\mathcal{F}\) of 120 candidate estimators.

### Semi-simulated datasets

#### Datasets

We also use semi-simulated datasets, where a known synthetic causal effect is added to real -non-synthetic- covariates. We use datasets used in previous work to evaluate causal inference: * ACIC 2016[53]: The dataset is based on the Collaborative Perinatal Project3, an RCT conducted on a cohort of pregnant women to identify causes of infants' developmental disorders.
The initial intervention was a child's birth weight (\(A=1\) if weight \(<2.5kg\)), and the outcome was the child's IQ after a given follow-up period. The study contained \(N=4\,802\) data points with \(D=55\) features (5 binary, 27 count data, and 23 continuous). They simulated 77 different setups with varying parameters for treatment and response generation models, treatment assignment probabilities, overlap, and interactions between treatment and covariates2. We used 10 different seeds for every setup, totaling 770 dataset instances. Footnote 2: Original R code available at [https://github.com/vdorie/acicomp/tree/master/2016](https://github.com/vdorie/acicomp/tree/master/2016) to generate the 77 simulation settings. Footnote 3: Using only the scaling part of the data, obtained from [https://github.com/IBM-HLRI-MLH.SI/IBM-Causal-Inference-Benchmarking-Framework](https://github.com/IBM-HLRI-MLH.SI/IBM-Causal-Inference-Benchmarking-Framework) * ACIC 2018[48]: The raw covariate data come from the Linked Births and Infant Deaths Database (LBIDD) [10] with \(D=177\) covariates. Treatment and outcome models have been simulated with complex models in order to reflect different inference scenarios. They do not provide the true propensity scores, so we evaluate only the feasible metrics, which do not require this nuisance parameter. We used all datasets of size \(N=5\,000\), totaling 432 dataset instances3. * Twins[38]: It is an augmentation of the real data on twin births and mortality rates in the USA from 1989-1991 [14]. There are \(N=11\,984\) samples (pairs of twins) and \(D=50\) covariates. The outcome is mortality and the treatment is the weight of the heavier twin at birth. This is a "true" counterfactual dataset -as remarked in [64]- in the sense that we have both potential outcomes for each twin. They simulate the treatment with a sigmoid model based on GESTAT10 (number of gestation weeks before birth) and \(\mathbf{x}\), the 45 other covariates: \[\mathbf{t}_{i}\mid\mathbf{x}_{i},\mathbf{z}_{i}\sim\text{Bern}\left(\sigma \left(w_{o}^{\top}\mathbf{x}+w_{h}(\mathbf{z}/10-0.1)\right)\right)\quad \text{with }w_{o}\sim\mathcal{N}(0,0.1\cdot I),\ w_{h}\sim\mathcal{N}(5,0.1)\] (14) We built upon this equation, adding a non-constant slope in the treatment sigmoid, allowing us to control the amount of overlap between treated and control populations.4 We sampled uniformly \(1\,000\) different overlap parameters between \(0\) and \(2.5\), totaling \(1\,000\) dataset instances. Unlike the previous datasets, only the overlap varies for these instances; the response surfaces are fixed by the original twin outcomes. Footnote 4: We obtained the dataset from [https://github.com/AMLab-Amsterdam/CEVA/tree/master/datasets/TWINS](https://github.com/AMLab-Amsterdam/CEVA/tree/master/datasets/TWINS)

#### Family of candidate estimators

For these three datasets, the family of candidate estimators consists of gradient boosting trees for both the response surfaces and the treatment5, with an S-learner, learning rate in \(\{0.01,0.1,1\}\), and maximum number of leaf nodes in \(\{25,27,30,32,35,40\}\), resulting in a family of size 18. Footnote 5: Scikit-learn regressor, HistGradientBoostingRegressor, and classifier, HistGradientBoostingClassifier.
#### Nuisance estimators

Drawing inspiration from the TMLE literature that uses combinations of flexible machine learning methods [41], we use as models for the nuisances \(\tilde{\epsilon}\) (respectively \(\tilde{m}\)) a form of meta-learner: a stacked estimator of ridge and boosting classifiers (respectively regressors). We select hyper-parameters with a randomized search on a validation set \(\mathcal{V}\) and keep them fixed for model selection (details of the hyper-parameters in Appendix E.2). As extreme inverse propensity weights induce high variance, we use clipping [18, 35] to bound \(\min(\tilde{\epsilon},1-\tilde{\epsilon})\) away from \(0\) with a fixed \(\eta=10^{-10}\), ensuring strict overlap for numerical stability.

### Measuring overlap between treated and non-treated

Overlap between the treated and control populations is crucial for causal inference: it appears in the positivity assumption 2 required for causal identification and when relating the different risks (subsection 4.1). Overlap, or "positivity", is typically assessed by qualitative methods using population histograms (as in Figure 1) or side-by-side box plots, or by quantitative approaches such as the Standardized Mean Difference [23, 33]. While these methods are useful to decide if positivity holds, they do not summarize a dataset's overlap in a single measure. Rather, the divergence between the distributions \(\mathbb{P}(X|A=0)\) and \(\mathbb{P}(X|A=1)\) gives a relevant quantity to characterize the behavior of causal risks [65, 67]. For simulated and some semi-simulated data, we have access to the probability of treatment for each data point, which amounts to sampling both densities at the same data points. Thus, we can directly use distribution discrepancy measures and rely on the Normalized Total Variation (NTV) distance to measure the overlap between the treated and control populations.6 This is the empirical measure of the total variation distance [22] between the distributions, \(TV(\mathbb{P}(X|A=1),\mathbb{P}(X|A=0))\). As we have both distributions sampled at the same points, we can rewrite it as a sole function of the propensity score, a low-dimensional quantity more tractable than the full distribution \(\mathbb{P}(X|A)\): Footnote 6: Computing overlap when working only on samples of the observed distribution, outside of simulation, requires a more sophisticated estimator of discrepancy between distributions, as two data points never have the same exact set of features. Maximum Mean Discrepancy [30] is typically used in the context of causal inference [24, 67]. However, it needs a kernel, typically Gaussian, to extrapolate across neighboring observations. We prefer avoiding the need to specify such a kernel, as it must be adapted to the data, which is tricky with categorical or non-Gaussian features, a common situation for medical data. \[\widehat{NTV}(e,1-e)=\frac{1}{2N}\sum_{i=1}^{N}\Big{|}\frac{e(x_{i})}{p_{A}}- \frac{1-e(x_{i})}{1-p_{A}}\Big{|} \tag{15}\] Appendix D gives a detailed theoretical motivation of the NTV distance and empirical arguments showing that it recovers the desired notion of overlap.
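Since Eq. (15) depends on the data only through the propensity values \(e(x_{i})\), the same estimator can be evaluated with either the oracle propensities or a plug-in estimate; a minimal numpy sketch (the function name is ours) follows.

```python
import numpy as np

def normalized_total_variation(e, p_a=None):
    """Empirical Normalized Total Variation of Eq. (15), from propensity scores.

    e: array of (oracle or estimated) propensity scores e(x_i);
    p_a: marginal treatment probability, estimated from e if not given.
    """
    e = np.asarray(e, dtype=float)
    if p_a is None:
        p_a = e.mean()
    return 0.5 * np.mean(np.abs(e / p_a - (1 - e) / (1 - p_a)))

# Constant propensities give NTV = 0 (perfect overlap); near-deterministic
# treatment assignment pushes NTV towards 1.
print(normalized_total_variation(np.full(1000, 0.3)))                            # ~0.0
print(normalized_total_variation(np.r_[np.full(500, 0.99), np.full(500, 0.01)]))  # ~0.98
```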
#### Measuring overlap without the oracle propensity scores

For ACIC 2018, or for non-simulated data, the true propensity scores are not known. To measure overlap, we rely on flexible estimations of the Normalized Total Variation, using gradient boosting trees to approximate the propensity score. Empirical arguments for this plug-in approach are given in Figure D1.

### Empirical results

We investigate how well the various causal metrics rank the different candidate models. Figure 5 shows the Kendall rank correlation coefficient between the ranking of methods given by the oracle \(\tau\)-risk and by every causal metric under evaluation. We plot this agreement as a function of decreasing overlap (increasing Normalized Total Variation).

#### \(R\)-risk dominates factual \(\mu\)-risk and its reweighted version

Among all causal metrics, the classical mean squared error (_i.e._ the factual \(\mu\)-risk) is suboptimal. Reweighting it with the propensity score (\(\mu\)-risk\({}_{IPW}\)) does not bring much improvement. Including a model of the outcome in the \(R\)-risk leads to better performance in every case. Further results provided in Appendix E.3 with alternative measures of performance confirm these findings.

#### Low population overlap hinders causal model selection

The performance of every metric drops with a growing lack of overlap. This is particularly visible for Caussim, ACIC 2018 and Twins. Model selection for causal inference becomes more and more difficult with increasingly different treated and control populations.

#### Estimating the nuisances does not hinder model selection

Oracle versions of every risk recover the best estimator more often. However, flexible nuisance estimation (gradient boosting trees) leads to feasible metrics with performance close to the oracle ones. This suggests that the chosen estimators are doing well in recovering the true nuisances.

## 6 Discussion and Conclusion

Predictive models are increasingly used to reason about causal effects. Our results highlight that they should be selected, validated, and tuned using different procedures and error measures than those classically used to assess prediction (estimating the so-called \(\mu\)-risk). Rather, selecting the best outcome model according to the \(R\)-risk (Eq. 7) leads to more valid causal estimates. Estimating this risk requires a markedly more complex procedure than the standard cross-validation used _e.g._ in machine learning: it involves fitting the nuisance models necessary for model evaluation, though our empirical results show that these can be learned on the same set of data as the outcome model being evaluated. A poor estimation of the nuisance models may compromise the benefits of the more complex \(R\)-risk (as shown in Appendix E.9). However, controlling and selecting these latter models is easier because they are associated with errors on observed distributions, and our empirical results show that, when these models are selected within a flexible family, the \(R\)-risk dominates simpler risks for model selection. Our results show that going from an oracle \(R\)-risk -where the nuisances are known- to a feasible \(R\)-risk -where the nuisances are estimated- only very slightly decreases the model-selection performance of the \(R\)-risk. This may be explained by theoretical results suggesting that estimation errors on both nuisances partly cancel out in the \(R\)-risk [39, 43, 44, 59, 69]. The \(R\)-risk can also be understood as a \(\tau\)-risk reweighted by the propensity score (Proposition 2).
For strong overlap, the \(\mu\)-risk appears theoretically motivated (subsection 4.1); however, empirical results show that even in this regime the \(R\)-risk brings a sizeable benefit, in agreement with Schuler et al. 2018[47].

Figure 5: Agreement with the \(\tau\)-risk ranking of methods as a function of overlap violation. The lines represent medians, estimated with a lowess. The transparent bands denote the 5% and 95% confidence intervals.

### Extension to binary outcome

While we focused on continuous outcomes, in medicine the target outcome is often a categorical variable such as mortality status or diagnosis. In this case, it may be interesting to focus on other estimands than the Average Treatment Effect \(\mathbb{E}[Y(1)]-\mathbb{E}[Y(0)]\): for instance, the relative risk \(\frac{\mathbb{P}(Y(1)=1)}{\mathbb{P}(Y(0)=1)}\) or the odds ratio \(\frac{\mathbb{P}(Y(1)=1)/[1-\mathbb{P}(Y(1)=1)]}{\mathbb{P}(Y(0)=1)/[1-\mathbb{ P}(Y(0)=1)]}\) are often used [36]; in particular, the odds ratio can carry across different disease sampling rates [20]. Using the log of these values as an estimand is well suited to additive models (for reasoning or for noise assumptions). In the log domain, the relative risk and the odds ratio are written as differences, like the ATE: \(\log\mathbb{P}(Y(1)=1)-\log\mathbb{P}(Y(0)=1)\) or \(\log\big(\mathbb{P}(Y(1)=1)/[1-\mathbb{P}(Y(1)=1)]\big)-\log\big(\mathbb{P}(Y(0)=1)/[1-\mathbb{P}(Y(0)=1)]\big)\). Hence, the framework studied here (subsection 2.1) can directly apply. It is particularly easy for the log odds ratio, as it is the output of a logistic regression or of any model with a cross-entropy loss.

### Going further

The \(R\)-risk needs good estimation of the nuisance models. The propensity score \(e\) calls for controlling the estimation of the individual posterior probability. We have used the Brier score to select these models, as it is minimized by the true individual probability. Regarding model selection for the propensity score, an easy mistake is to use the expected calibration errors popular in machine learning [11, 12, 15, 68], as these select not for the individual posterior probability but for an aggregate error rate [70]. An open question is whether a better metric than the Brier score can be designed that controls for \(e\left(1-e\right)\), the quantity used in the \(R\)-risk, rather than for \(e\).

The quality of model selection varies substantially from one data-generating mechanism to another. The overlap appears as an important parameter: when the treated and untreated populations overlap poorly, causal model selection is very hard. However, the remaining variance in the empirical results suggests that other parameters of the data-generation processes come into play. Intuitively, the complexity of the response surfaces and the treatment heterogeneity interact with overlap violations: when extrapolation to weak-overlap regions is hard, causal model selection is hard. Nevertheless, from a practical perspective, our study establishes that the \(R\)-risk is the best option to select predictive models for causal inference, without requiring assumptions on the data-generating mechanism, the amount of data at hand, or the specific estimators used to build predictive models.

## Acknowledgments

We acknowledge fruitful discussions with Benedicte Colnet.
2307.03919
Common terms of generalized Pell and Narayana's cows sequences
For an integer $k \geq 2$, let $\{ P_{n}^{(k)} \}_{n}$ be the $k$-generalized Pell sequence which starts with $0, \dots,0,1$($k$ terms) and each term afterwards is the sum of $k$ preceding terms. In this paper, we find all the solutions of the Diophantine equation $P_{n}^{(k)} = N_{m}$ in non-negative integers $(n, k, m)$ with $k \geq 2$, where $\{ N_{m} \}_m$ is the Narayana's cows sequence. Our approach utilizes the lower bounds for linear forms in logarithms of algebraic numbers established by Matveev, along with key insights from the theory of continued fractions.
Bibhu Prasad Tripathy, Bijan Kumar Patel
2023-07-08T07:08:40Z
http://arxiv.org/abs/2307.03919v1
# Common terms of generalized Pell and Narayana's cows sequences ###### Abstract For an integer \(k\geq 2\), let \(\{P_{n}^{(k)}\}_{n}\) be the \(k\)-generalized Pell sequence which starts with \(0,\ldots,0,1(k\) terms) and each term afterwards is the sum of \(k\) preceding terms. In this paper, we find all the solutions of the Diophantine equation \(P_{n}^{(k)}=N_{m}\) in non-negative integers \((n,k,m)\) with \(k\geq 2\), where \(\{N_{m}\}_{m}\) is the Narayana's cows sequence. Our approach utilizes the lower bounds for linear forms in logarithms of algebraic numbers established by Matveev, along with key insights from the theory of continued fractions. **Keywords**: \(k\)-generalized Pell numbers, Narayana numbers, linear forms in logarithms, reduction method. **2020 Mathematics Subject Classification:** 11B39; 11J86. ## 1 Introduction The Pell sequence \(\{P_{n}\}_{n\geq 0}\) is a binary recurrence sequence given by \[P_{n+2}=2P_{n+1}+P_{n}\ \ \mbox{for}\ n\geq 0,\] with initials \(P_{0}=0\) and \(P_{1}=1\). Let \(k\geq 2\) be an integer. We consider a generalization of the Pell sequence known as the \(k\)-generalized Pell sequence, \(\{P_{n}^{(k)}\}_{n\geq-(k-2)}\) is given by the recurrence \[P_{n}^{(k)}=2P_{n-1}^{(k)}+P_{n-2}^{(k)}+\cdots+P_{n-k}^{(k)}\ \mbox{for all}\ \ \ n\geq 2, \tag{1.1}\] with initials \(P_{-(k-2)}^{(k)}=P_{-(k-3)}^{(k)}=\cdots=P_{0}^{(k)}=0\) and \(P_{1}^{(k)}=1\). We shall refer to \(P_{n}^{(k)}\) as the \(n\)th \(k\)-Pell number. We see that this generalization is a family of sequences, with each new choice of \(k\) producing a unique sequence. For example, if \(k=2\), we get \(P_{n}^{(2)}=P_{n}\), the \(n\)th Pell number. The Narayana's cows sequence \(\{N_{m}\}_{m\geq 0}\) is a ternary recurrent sequence given by \[N_{m+3}=N_{m+2}+N_{m}\ \ \mbox{for}\ m\geq 0,\] with initials \(N_{0}=N_{1}=N_{2}=1\). It is the sequence A000930 in the OEIS. Its first few terms are \[1,1,1,2,3,4,6,9,13,19,28,41,\ldots\] There are many literature in number theory on finding the intersection of two recurrent sequences of positive integers. Recently, researchers have taken a keen interest in the challenge of determining the intersection between a \(k\)-generalized Pell sequence and various other number sequences. For example, one can go through [3, 5, 9, 10]. The objective of this paper is to find all the Narayana numbers in \(k\)-generalized Pell sequence. To accomplish this, we solve the Diophantine equation \[P_{n}^{(k)}=N_{m}. \tag{1.2}\] In particular, our main result is the following. **Theorem 1.1**.: _All the solutions of the Diophantine equation (1.2) in positive integers with \(k\geq 2\) are given by_ \[P_{1}^{(k)}=N_{0}=N_{1}=N_{2},\quad P_{2}^{(k)}=N_{3},\quad\text{and}\quad P_{6} ^{(4)}=N_{13}\] _except in the cases \(k\geq 3\) which we can additionally have \(P_{4}^{(k)}=N_{8}\)._ To establish the proof of Theorem 1.1, we first find an upper bound for \(n\) in terms of \(k\) by applying Matveev's result on linear forms in logarithms [8]. When \(k\) is small, the theory of continued fractions suffices to lower such bounds and complete the calculations. When \(k\) is large, we use the fact that the dominant root of the \(k\)-generalized Pell sequence is exponentially close to \(\phi^{2}\) (see [2], Lemma 2) where \(\phi\) denotes the golden section. So we use this estimation in our further calculation with linear forms in logarithms to obtain absolute upper bounds for \(n\) which can be reduced by using Dujella and Petho's result [7]. 
In this way, we complete the proof of our main result. Our proof relies on a few preliminary results, which are extensively discussed in the subsequent section.

## 2 Preliminary Results

### Properties of \(k\)-generalized Pell sequence

The characteristic polynomial of the \(k\)-generalized Pell sequence is \[\Phi_{k}(x)=x^{k}-2x^{k-1}-x^{k-2}-\cdots-x-1.\] The above polynomial is irreducible over \(\mathbb{Q}[x]\), and it has exactly one positive real root \(\gamma:=\gamma(k)\), which is located between \(\phi^{2}(1-\phi^{-k})\) and \(\phi^{2}\) and lies outside the unit circle (see [4]). The other roots are strictly contained within the unit circle. To simplify the notation, we will omit the dependence on \(k\) of \(\gamma\) whenever no confusion may arise. The Binet formula for \(P_{n}^{(k)}\) found in [4] is \[P_{n}^{(k)}=\sum_{i=1}^{k}g_{k}(\gamma_{i})\gamma_{i}^{n}, \tag{2.3}\] where \(\gamma_{i}\) represents the roots of the characteristic polynomial \(\Phi_{k}(x)\) and the function \(g_{k}\) is given by \[g_{k}(z):=\frac{z-1}{(k+1)z^{2}-3kz+k-1}, \tag{2.4}\] for an integer \(k\geq 2\). Additionally, it is also shown in [4, Theorem 3.1] that the roots located inside the unit circle have only a minimal influence on the formula (2.3), which leads to the approximation \[\left|P_{n}^{(k)}-g_{k}(\gamma)\gamma^{n}\right|<\frac{1}{2}\quad\text{for all}\quad n\geq 2-k. \tag{2.5}\] Therefore, for \(n\geq 1\) and \(k\geq 2\), we have \[P_{n}^{(k)}=g_{k}(\gamma)\gamma^{n}+e_{k}(n),\quad\text{where}\quad|e_{k}(n)| \leq\frac{1}{2}. \tag{2.6}\] Furthermore, it is shown in [4, Theorem 3.1] that the inequality \[\gamma^{n-2}\leq P_{n}^{(k)}\leq\gamma^{n-1}\quad\text{holds for all}\quad n\geq 1. \tag{2.7}\] The following result was proved by Bravo and Herrera [3].

**Lemma 2.1**.: ([3], Lemma 2.1). _Let \(k\geq 2\) be an integer. Then we have_ \[0.276<g_{k}(\gamma)<0.5\quad\text{and}\quad\left|g_{k}(\gamma_{i})\right|<1\quad\text{for}\quad 2 \leq i\leq k.\] Furthermore, they showed that the logarithmic height of \(g_{k}(\gamma)\) satisfies \[h(g_{k}(\gamma))<4k\log\phi+k\log(k+1)\quad\text{for all}\quad k\geq 2. \tag{2.8}\]

**Lemma 2.2**.: ([2], Lemma 2). _If \(k\geq 30\) and \(n\geq 1\) are integers that satisfy \(n<\phi^{k/2}\), then_ \[g_{k}(\gamma)\gamma^{n}=\frac{\phi^{2n}}{\phi+2}(1+\xi),\quad\text{where}\quad \left|\xi\right|<\frac{4}{\phi^{k/2}}. \tag{2.9}\]

### Properties of Narayana's cows sequence

The characteristic polynomial of Narayana's cows sequence is \[f(x)=x^{3}-x^{2}-1,\] which is irreducible over \(\mathbb{Q}[x]\) and has roots \(\alpha\), \(\beta\) and \(\delta\) given by \[\alpha=\frac{1}{3}+\left(\frac{29}{54}+\sqrt{\frac{31}{108}}\right)^{1/3}+ \left(\frac{29}{54}-\sqrt{\frac{31}{108}}\right)^{1/3},\] \[\beta=\frac{1}{3}+w\left(\frac{29}{54}+\sqrt{\frac{31}{108}}\right)^{1/3}+w^{ 2}\left(\frac{29}{54}-\sqrt{\frac{31}{108}}\right)^{1/3},\] \[\delta=\bar{\beta}=\frac{1}{3}+w^{2}\left(\frac{29}{54}+\sqrt{\frac{31}{108}} \right)^{1/3}+w\left(\frac{29}{54}-\sqrt{\frac{31}{108}}\right)^{1/3},\] where \(w=\frac{-1+i\sqrt{3}}{2}\).
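These algebraic facts are easy to check numerically; the short Python sketch below (ours, independent of the proofs) verifies the location of \(\gamma(k)\) and the bounds of Lemma 2.1 for a few values of \(k\), as well as the approximate values of \(\alpha\) and \(|\beta|=|\delta|\) quoted in the next subsection.

```python
import numpy as np

phi = (1 + 5 ** 0.5) / 2

def gamma_k(k):
    """Dominant real root of Phi_k(x) = x^k - 2x^{k-1} - x^{k-2} - ... - x - 1."""
    roots = np.roots([1, -2] + [-1] * (k - 1))
    return max(r.real for r in roots if abs(r.imag) < 1e-9)

def g_k(z, k):
    return (z - 1) / ((k + 1) * z ** 2 - 3 * k * z + k - 1)

for k in (2, 5, 10, 20):
    g = gamma_k(k)
    assert phi ** 2 * (1 - phi ** -k) < g < phi ** 2   # location of gamma(k)
    assert 0.276 < g_k(g, k) < 0.5                     # Lemma 2.1

# Roots of the Narayana characteristic polynomial x^3 - x^2 - 1.
nar_roots = np.roots([1, -1, 0, -1])
alpha = max(r.real for r in nar_roots if abs(r.imag) < 1e-9)
print(round(alpha, 5), round(min(abs(r) for r in nar_roots), 6))   # 1.46557 0.826031
```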
The Binet formula for the Narayana's cows sequence is given by \[N_{m}=p\alpha^{m}+q\beta^{m}+r\delta^{m}\quad\text{for all}\quad m\geq 0, \tag{2.10}\] where \[p=\frac{\alpha}{(\alpha-\beta)(\alpha-\delta)},\quad q=\frac{\beta}{(\beta- \alpha)(\beta-\delta)},\quad r=\frac{\delta}{(\delta-\alpha)(\delta-\beta)}.\] The formula (2.10) can also be written in the form \[N_{m}=C_{\alpha}\alpha^{m+2}+C_{\beta}\beta^{m+2}+C_{\delta}\delta^{m+2}\quad \text{for all}\quad m\geq 0, \tag{2.11}\] where \[C_{x}=\frac{1}{x^{3}+2},\quad x\in\{\alpha,\beta,\delta\}.\] The coefficient \(C_{\alpha}\) has the minimal polynomial \(31x^{3}-31x^{2}+10x-1\) over \(\mathbb{Z}\) and all the zeros of this polynomial lie strictly inside the unit circle. One can approximate the following: \[\alpha\approx 1.46557;\ |\beta|=|\delta|\approx 0.826031;\ |C_{\beta}\beta^{m+2}+C_ {\delta}\delta^{m+2}|<1/2\quad\text{for all}\quad m\geq 1.\] **Lemma 2.3**.: _For every positive integer \(m\geq 1\), we have_ \[\alpha^{m-2}\leq N_{m}\leq\alpha^{m-1}. \tag{2.12}\] Proof.: This can be easily proved by the method of induction on \(m\) ### Linear forms in logarithms Let \(\gamma\) be an algebraic number of degree \(d\) with a minimal primitive polynomial \[f(Y):=b_{0}Y^{d}+b_{1}Y^{d-1}+\cdots+b_{d}=b_{0}\prod_{j=1}^{d}(Y-\gamma^{(j)}) \in\mathbb{Z}[Y],\] where the \(b_{j}\)'s are relatively prime integers, \(b_{0}>0\), and the \(\gamma^{(j)}\)'s are conjugates of \(\gamma\). Then the _logarithmic height_ of \(\gamma\) is given by \[h(\gamma)=\frac{1}{d}\left(\log b_{0}+\sum_{j=1}^{d}\log\left(\max\{|\gamma^{ (j)}|,1\}\right)\right). \tag{2.13}\] With the above notation, Matveev (see [8] or [6, Theorem 9.4]) proved the following result. **Theorem 2.4**.: _Let \(\eta_{1},\ldots,\eta_{s}\) be positive real algebraic numbers in a real algebraic number field \(\mathbb{L}\) of degree \(d_{\mathbb{L}}\). Let \(a_{1},\ldots,a_{s}\) be non-zero integers such that_ \[\Lambda:=\eta_{1}^{a_{1}}\cdots\eta_{s}^{a_{s}}-1\neq 0.\] _Then_ \[-\log|\Lambda|\leq 1.4\cdot 30^{s+3}\cdot s^{4.5}\cdot d_{\mathbb{L}}^{2}(1+ \log d_{\mathbb{L}})(1+\log D)\cdot B_{1}\cdots B_{s},\] _where_ \[D\geq\max\{|a_{1}|,\ldots,|a_{s}|\},\] _and_ \[B_{j}\geq\max\{d_{\mathbb{L}}h(\eta_{j}),|\log\eta_{j}|,0.16\},\text{ for all }j=1,\ldots,s.\] ### Reduction method Here, we present the following result due to Dujella and Petho [7, Lemma 5 (a)] which is a generalization of a result of Baker and Davenport's result [1]. **Lemma 2.5**.: _Let \(\widehat{\tau}\) be an irrational number, and let \(A,C,\widehat{\mu}\) be some real numbers with \(A>0\) and \(C>1\). Assume that \(M\) is a positive integer, and let \(p/q\) be a convergent of the continued fraction of the irrational \(\widehat{\tau}\) such that \(q>6M\). Put_ \[\epsilon:=||\widehat{\mu}q||-M||\widehat{\tau}q||,\] _where \(||\cdot||\) denotes the distance from the nearest integer. If \(\epsilon>0\), then there is no solution to the inequality_ \[0<|r\widehat{\tau}-s+\widehat{\mu}|<AC^{-t},\] _in positive integers \(r\), \(s\) and \(t\) with_ \[r\leq M\quad\text{and}\quad t\geq\frac{\log(Aq/\epsilon)}{\log C}.\] ### Useful Lemmas We conclude this section by recalling two lemmas that we will need in this work. **Lemma 2.6**.: ([13], Lemma 2.2) _Let \(a,x\in\mathbb{R}\). 
If \(0<a<1\) and \(|x|<a\), then_ \[|\log(1+x)|<\frac{-\log(1-a)}{a}\cdot|x|\] _and_ \[|x|<\frac{a}{1-e^{-a}}\cdot|e^{x}-1|.\] **Lemma 2.7**.: ([11], Lemma 7) _If \(m\geq 1\), \(S\geq(4m^{2})^{m}\) and \(\frac{x}{(\log x)^{m}}<S\), then \(x<2^{m}S(\log S)^{m}\)._ ## 3 Proof of Theorem 1.1 Since \(P_{1}^{(k)}=1=N_{0}=N_{1}=N_{2}\), \(P_{2}^{(k)}=2=N_{3}\) holds for \(k\geq 2\) and \(P_{4}^{(k)}=13=N_{8}\) holds for \(k\geq 3\). Therefore, we may assume that \(n\geq 5\). For \(5\leq n\leq k+1\), we have that \(P_{n}^{(k)}=F_{2n-1}\) where \(F_{n}\) is the \(n\)th Fibonacci number. So the equation (1.2) becomes \[F_{2n-1}=N_{m},\] which has no solution for \(n\geq 5\) and \(m\geq 0\). Therefore it has no solution in the range \(5\leq n\leq k+1\). From now, we assume that \(n\geq k+2\) and \(k\geq 2\). ### An initial relation between \(n\) and \(m\) Combining the inequalities (2.7) and (2.12) together with equation (1.2), we have \[\gamma^{n-2}\leq\alpha^{m-1}\quad\text{and}\quad\alpha^{m-2}\leq\gamma^{n-1}.\] Then, we deduce that \[(n-2)\left(\frac{\log\gamma}{\log\alpha}\right)+1\leq m\leq(n-1)\left(\frac{ \log\gamma}{\log\alpha}\right)+2.\] Using the fact \(\phi^{2}(1-\phi^{-k})<\gamma(k)<\phi^{2}\) for all \(k\geq 2\), it follows that \[1.25n-1.5<m<2.52n-0.52. \tag{3.14}\] ### Upper bounds for \(n\) and \(m\) in terms of \(k\) By using (1.2), (2.6) and (2.11), we obtain \[g_{k}(\gamma)\gamma^{n}+e_{k}(n)=c_{\alpha}\alpha^{m+2}+c_{\beta}\beta^{m+2}+ c_{\delta}\delta^{m+2}.\] Taking absolute values on both sides of the above equality, it yields \[\left|g_{k}(\gamma)\gamma^{n}-c_{\alpha}\alpha^{m+2}\right|<\frac{1}{2}+|c_{ \beta}\beta^{m+2}+c_{\delta}\delta^{m+2}|<1. \tag{3.15}\] Dividing both sides of the above inequality by \(c_{\alpha}\alpha^{m+2}\), we get \[\left|\left(c_{\alpha}^{-1}g_{k}(\gamma)\right)\gamma^{n}\alpha^{-(m+2)}-1 \right|<\frac{2.4}{\alpha^{m}}. \tag{3.16}\] Let \[\Lambda_{1}:=\left(c_{\alpha}^{-1}g_{k}(\gamma)\right)\gamma^{n}\alpha^{-(m+2) }-1. \tag{3.17}\] From (3.16), we have \[|\Lambda_{1}|<2.4\cdot\alpha^{-m}. \tag{3.18}\] Suppose that \(\Lambda_{1}=0\), then we get \[g_{k}(\gamma)=c_{\alpha}\alpha^{m+2}\gamma^{-n},\] which implies that \(g_{k}(\gamma)\) is an algebraic integer, which is a contradiction. Hence \(\Lambda_{1}\neq 0\). Therefore, we apply Theorem 2.4 to get a lower bound for \(\Lambda_{1}\) given by (3.17) with the parameters: \[\eta_{1}:=c_{\alpha}^{-1}g_{k}(\gamma),\quad\eta_{2}:=\gamma,\quad\eta_{3}:=\alpha,\] and \[a_{1}:=1,\quad a_{2}:=n,\quad a_{3}:=-(m+2).\] Note that the algebraic numbers \(\eta_{1},\eta_{2},\eta_{3}\) belongs to the field \(\mathbb{L}:=\mathbb{Q}(\gamma,\alpha)\), so we can assume \(d_{\mathbb{L}}=[\mathbb{L}:\mathbb{Q}]\leq 3k\). Since \(h(\eta_{2})=(\log\gamma)/k<2\log\phi/k\) and \(h(\eta_{3})=(\log\alpha)/3\), it follows that \[\max\{3kh(\eta_{2}),|\log\eta_{2}|,0.16\}=6\log\phi:=B_{2}\] and \[\max\{3kh(\eta_{3}),|\log\eta_{3}|,0.16\}=k\log\alpha:=B_{3}.\] Since \(h(c_{\alpha})=\frac{\log 31}{3}\). Therefore, by the estimate (2.8) and the properties of logarithmic height, it follows that for all \(k\geq 2\) \[h(\eta_{1}) \leq h(c_{\alpha})+h(g_{k}(\gamma))\] \[<\frac{\log 31}{3}+4k\log\phi+k\log(k+1)\] \[<5.2k\log k.\] Thus, we obtain \[\max\{3kh(\eta_{1}),|\log\eta_{1}|,0.16\}=15.6k^{2}\log k:=B_{1}.\] In addition, by (3.14) we can take \(D:=2.52n+2\). Then by Theorem 2.4, we have \[\log|\Lambda_{1}|>-1.432\times 10^{11}(3k)^{2}(1+\log 3k)(1+\log(2.52n+2)(15.6k^ {2}\log k)(6\log\phi)(k\log\alpha). 
\tag{3.19}\] From the comparison of lower bound (3.19) and upper bound (3.18) of \(|\Lambda_{1}|\) gives us \[m\log\alpha-\log 2.4<2.22\times 10^{13}k^{5}\log k(1+\log 3k)(1+\log(2.52n+2)).\] Using the facts \(1+\log 3k<4.1\log k\) for all \(k\geq 2\) and \(1+\log(2.52n+2))<2.6\log n\) for all \(n\geq 4\), we conclude that \[m<6.23\times 10^{14}k^{5}\log^{2}k\log n.\] Using (3.14), the last inequality becomes \[\frac{n}{\log n}<5\times 10^{14}k^{5}\log^{2}k. \tag{3.20}\] Thus, putting \(S:=5\times 10^{14}k^{5}\log^{2}k\) in (3.20) and using Lemma 2.7 together with \(33.84+5\log k+2\log\log k<52.8\log k\) for all \(k\geq 2\), gives \[n <2\left(5\times 10^{14}k^{5}\log^{2}k\right)\log\left(5\times 10^{14 }k^{5}\log^{2}k\right)\] \[<(1\times 10^{15}k^{5}\log^{2}k)(33.84+5\log k+2\log\log k)\] \[<5.28\times 10^{16}k^{5}\log^{3}k.\] The result established in this subsection is summarized in the following lemma. **Lemma 3.1**.: _If \((n,k,m)\) is an integer solution of (1.2) with \(k\geq 2\) and \(n\geq k+2\), then the inequalities_ \[0.39m<n<5.28\times 10^{16}k^{5}\log^{3}k \tag{3.21}\] _hold._ ### The case of small \(k\) In this subsection, we treat the cases when \(k\in[2,360]\). Here for each value of \(k\), Lemma 3.1 provides an absolute upper bound on \(n\) which is very large and will be reduced by Lemma 2.5. In order to apply Lemma 2.5, let \[\Gamma_{1}:=n\log\gamma-(m+2)\log\alpha+\log\left(c_{\alpha}^{-1}g_{k}(\gamma )\right). \tag{3.22}\] Then \(e^{\Gamma_{1}}-1=\Lambda_{1}\), where \(\Lambda_{1}\) is defined by (3.17). Therefore, (3.18) implies that \[|e^{\Gamma_{1}}-1|<\frac{2.4}{\alpha^{m}}<0.77 \tag{3.23}\] for \(m\geq 3\). Choosing \(a:=0.77\), we obtain the inequality \[|\Gamma_{1}|=|\log(\Lambda_{1}+1)|<\frac{-\log(1-0.77)}{0.77}\cdot\frac{2.4}{ \alpha^{m}}<\frac{4.59}{\alpha^{m}}\] by Lemma 2.6. Thus, it follows that \[0<\left|n\log\gamma-(m+2)\log\alpha+\log\left(c_{\alpha}^{-1}g_{k}(\gamma) \right)\right|<\frac{4.59}{\alpha^{m}}.\] Dividing this inequality by \(\log\alpha\), we get \[\left|n\left(\frac{\log\gamma}{\log\alpha}\right)-m+\left(\frac{\log\left(c_ {\alpha}^{-1}g_{k}(\gamma)\right)}{\log\alpha}-2\right)\right|<12.1\cdot \alpha^{-m}. \tag{3.24}\] With \[\widehat{\tau}=\widehat{\tau}(k):=\frac{\log\gamma}{\log\alpha},\quad\widehat {\mu}=\widehat{\mu}(k):=\frac{\log\left(c_{\alpha}^{-1}g_{k}(\gamma)\right)} {\log\alpha}-2,\quad A:=12.1\quad\text{and}\quad C:=\alpha,\] equation (3.24) becomes \[0<|n\widehat{\tau}-m+\widehat{\mu}|<A\cdot C^{-m}. \tag{3.25}\] Clearly \(\widehat{\tau}\) is an irrational number. We take \(M_{k}:=\lfloor 5.28\times 10^{16}k^{5}\log^{3}k\rfloor\) which is an upper bound on \(n\). Then by Lemma 2.5 for each \(k\in[2,360]\), we have that \[m<\frac{\log(Aq/\epsilon)}{\log C},\] where \(q=q(k)>6M_{k}\) is a denominator of a convergent of the continued fraction of \(\widehat{\tau}\) with \(\epsilon=\epsilon(k):=\|\widehat{\mu}q\|-M_{k}\|\widehat{\tau}q\|>0\). A computer search with _Mathematica_ found that for \(k\in[2,360]\), the maximum value of \(\log(Aq/\epsilon)/\log C\) is \(\leq 329\). Therefore \(m\leq 329\) and \(n\leq 265\), since \(n<(m+1.5)/1.25\). Finally, a brute force search with _Mathematica_ to compare \(P_{n}^{(k)}\) and \(N_{m}\) in the range \[2\leq k\leq 360,\quad 4\leq n\leq 265,\quad\text{and}\quad 3\leq m\leq 329\] with \(m<n/0.39\) provides the only solution \(P_{6}^{(4)}=N_{13}\) for the equation (1.2). This concludes the analysis of the case \(k\in[2,360]\). 
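The reduction just performed, and the one used for large \(k\) below, follow the same computational recipe: compute the continued-fraction convergents \(p/q\) of \(\widehat{\tau}\) to high precision, locate a denominator \(q>6M\) with \(\epsilon=\|\widehat{\mu}q\|-M\|\widehat{\tau}q\|>0\), and read off the bound \(\log(Aq/\epsilon)/\log C\) of Lemma 2.5. Our computations were done in _Mathematica_; the following is only a rough Python/mpmath sketch of the same procedure (function names, the working precision and the number of convergents are ours).

```python
from mpmath import mp, floor, log, nint

mp.dps = 250   # the working precision must comfortably exceed the size of M

def convergents(x, n_terms=400):
    """Continued-fraction convergents p/q of a high-precision real x."""
    p_prev, q_prev, p, q = 1, 0, int(floor(x)), 1
    frac = x - floor(x)
    yield p, q
    for _ in range(n_terms):
        if frac == 0:
            return
        x = 1 / frac
        a = int(floor(x))
        frac = x - a
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield p, q

def reduce_bound(tau_hat, mu_hat, M, A, C):
    """Bound log(A*q/eps)/log(C) of Lemma 2.5, for the first suitable convergent."""
    dist = lambda z: abs(z - nint(z))          # distance to the nearest integer
    for _, q in convergents(tau_hat):
        if q > 6 * M:
            eps = dist(mu_hat * q) - M * dist(tau_hat * q)
            if eps > 0:
                return log(A * q / eps) / log(C)
    return None                                # no suitable convergent found
```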
### The case of large \(k\) We now suppose that \(k>360\) and note that for such \(k\) we have \[0.39m<n<5.28\times 10^{16}k^{5}\log^{3}k<\phi^{k/2}.\] Here, it follows from (1.2), (2.9) and (3.15) that \[\left|\frac{\phi^{2n}}{\phi+2}-c_{\alpha}\alpha^{m+2}\right|<\left|g_{k}(\gamma )\gamma^{n}-c_{\alpha}\alpha^{m+2}\right|+\frac{\phi^{2n}|\xi|}{\phi+2}<1+ \frac{4\phi^{2n}}{(\phi+2)\phi^{k/2}}.\] Dividing both sides of the above inequality by \(\frac{\phi^{2n}}{\phi+2}\) and using the fact \(1/\phi^{2n}<1/\phi^{k/2}\) for all \(n\geq k+2\) yield \[\left|(c_{\alpha}(\phi+2))\,\phi^{-2n}\alpha^{m+2}-1\right|<\frac{(\phi+2)}{ \phi^{2n}}+\frac{4}{\phi^{k/2}}<\frac{7.62}{\phi^{k/2}}. \tag{3.26}\] In order to use Theorem 2.4, we take \[(\eta_{1},a_{1}):=(c_{\alpha}(\phi+2),1),\qquad(\eta_{2},a_{2}):=(\phi,-2n), \quad\text{ and }\quad(\eta_{3},a_{3}):=(\alpha,m+2).\] The number field containing \(\eta_{1},\eta_{2},\eta_{3}\) is \(\mathbb{L}:=\mathbb{Q}(\phi,\alpha)\), which has degree \(d_{\mathbb{L}}=[\mathbb{L}:\mathbb{Q}]=6\). Here \[\Lambda_{2}:=(c_{\alpha}(\phi+2))\,\phi^{-2n}\alpha^{m+2}-1, \tag{3.27}\] is nonzero. In contrast to this, assume that \(\Lambda_{2}=0\), then we would get \(\phi^{2n}/\phi+2=c_{\alpha}\alpha^{m+2}\). Using the \(\mathbb{Q}\)-automorphism \((\alpha,\beta)\) of the Galois extension \(\mathbb{Q}(\phi,\alpha,\beta)\) over \(\mathbb{Q}\) we get that \(12<\phi^{2n}/\phi+2<|c_{\beta}||\beta|^{m+2}<1\), which is impossible. Hence \(\Lambda_{2}\neq 0\). Moreover, since \[h(\eta_{2})=\frac{\log\phi}{2},\qquad h(\eta_{3})=\frac{\log\alpha}{3}\] and \[h(\eta_{1})\leq h(c_{\alpha})+h(\phi)+2\log 2<2.8,\] it follows that \(B_{1}:=16.8,B_{2}:=1.45\) and \(B_{3}:=0.77\). Also since \(m<2.52n\), we can take \(D:=2.52n+2\). Thus, taking into account inequality (3.26) and applying Theorem 2.4, we obtain \[\frac{k}{2}\log\phi-\log 7.62<2.7\times 10^{14}\times(1+\log(2.52n+2)).\] This implies that \[k<1.58\times 10^{15}\log n,\] where \(1+\log(2.52n+2)<1.4\log n\) for \(n\geq k+2>362\). By using Lemma 3.1 and the fact \(38.51+5\log k+3\log(\log k)<12.5\log k\) for \(k>360\), we get \[k <1.58\times 10^{15}\log(5.28\times 10^{16}k^{5}\log^{3}k)\] \[<1.58\times 10^{15}(38.51+5\log k+3\log(\log k))\] \[<2\times 10^{16}\log k.\] Solving the above inequality gives \[k<1.51\times 10^{18}.\] Substituting this bound of \(k\) into (3.21), we get \(n<3.1\times 10^{112}\), which implies that \(m<7.95\times 10^{112}\). Now, let \[\Gamma_{2}:=(m+2)\log\alpha-2n\log\phi+\log\left(c_{\alpha}(\phi+2)\right), \tag{3.28}\] and \(\Lambda_{2}:=e^{\Gamma_{2}}-1\). Then \[|e^{\Gamma_{2}}-1|<\frac{7.62}{\phi^{k/2}}<0.1, \tag{3.29}\] since \(k\geq 360\). Choosing \(a:=0.1\), we obtain the inequality \[|\Gamma_{2}|=|\log(\Lambda_{2}+1)|<\frac{-\log(1-0.1)}{0.1}\cdot\frac{7.62}{ \phi^{k/2}}<\frac{8.1}{\phi^{k/2}}\] by Lemma 2.6. Thus, it follows that \[0<|(m+2)\log\alpha-2n\log\phi+\log\left(c_{\alpha}(\phi+2)\right)|<\frac{8.1} {\phi^{k/2}}.\] Dividing the above inequality by \(\log\phi\), we get \[\left|m\frac{\log\alpha}{\log\phi}-2n+\frac{\log\left(\alpha^{2}c_{\alpha}( \phi+2)\right)}{\log\phi}\right|<\frac{16.84}{\phi^{k/2}}. 
\tag{3.30}\] To apply Lemma 2.5, we put \[\widehat{\tau}:=\frac{\log\alpha}{\log\phi},\quad\widehat{\mu}:=\frac{\log \left(\alpha^{2}c_{\alpha}(\phi+2)\right)}{\log\phi},\quad A:=16.84\quad\text{ and}\quad C:=\phi.\] If we take \(M:=7.95\times 10^{112}\), which is an upper bound on \(m\), we found that \(q_{236}\), the denominator of the \(236th\) convergent of \(\widehat{\tau}\) exceeds \(6M\). Furthermore, a quick computation with _Mathematica_ gives us the value \[\frac{\log(Aq_{236}/\epsilon)}{\log C}\] is less than \(556\). So, if the inequality (3.30) has a solution, then \[\frac{k}{2}<\frac{\log(Aq_{236}/\epsilon)}{\log C}<556,\] which implies that \(k\leq 1112\). With the above upper bound for \(k\) and by Lemma 3.1, we have \[n<3.1\times 10^{34}\quad\text{and}\quad m<7.96\times 10^{34}.\] We apply again Lemma 2.5 to (3.30) with \(M:=7.96\times 10^{34}\) and \(q=q_{76}\). As a result, we get \(k\leq 368\). Hence, we deduce \[n<7.35\times 10^{31}\quad\text{and}\quad m<1.89\times 10^{32}.\] In the third application with \(M:=1.89\times 10^{32}\), we get that \(q=q_{72}\) satisfies the conditions of Lemma 2.5 and \(k<342\), which contradicts our assumption that \(k>360\). This completes the proof.
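As an elementary numerical sanity check of Theorem 1.1 (ours, independent of the argument above), the listed coincidences can be reproduced by generating both sequences directly:

```python
def k_pell(n, k):
    """n-th k-generalized Pell number: k-1 leading zeros, P_1 = 1, then recurrence (1.1)."""
    seq = [0] * (k - 1) + [1]
    while len(seq) < n + k - 1:
        seq.append(2 * seq[-1] + sum(seq[-k:-1]))
    return seq[n + k - 2]

def narayana(m):
    """m-th Narayana's cows number: N_0 = N_1 = N_2 = 1, N_{m+3} = N_{m+2} + N_m."""
    seq = [1, 1, 1]
    while len(seq) <= m:
        seq.append(seq[-1] + seq[-3])
    return seq[m]

# The solutions listed in Theorem 1.1.
assert k_pell(1, 2) == narayana(0) == narayana(1) == narayana(2) == 1
assert k_pell(2, 2) == narayana(3) == 2
assert k_pell(4, 3) == narayana(8) == 13
assert k_pell(6, 4) == narayana(13) == 88
```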
2310.00607
Understanding Robust Overfitting from the Feature Generalization Perspective
Adversarial training (AT) constructs robust neural networks by incorporating adversarial perturbations into natural data. However, it is plagued by the issue of robust overfitting (RO), which severely damages the model's robustness. In this paper, we investigate RO from a novel feature generalization perspective. Specifically, we design factor ablation experiments to assess the respective impacts of natural data and adversarial perturbations on RO, identifying that the inducing factor of RO stems from natural data. Given that the only difference between adversarial and natural training lies in the inclusion of adversarial perturbations, we further hypothesize that adversarial perturbations degrade the generalization of features in natural data and verify this hypothesis through extensive experiments. Based on these findings, we provide a holistic view of RO from the feature generalization perspective and explain various empirical behaviors associated with RO. To examine our feature generalization perspective, we devise two representative methods, attack strength and data augmentation, to prevent the feature generalization degradation during AT. Extensive experiments conducted on benchmark datasets demonstrate that the proposed methods can effectively mitigate RO and enhance adversarial robustness.
Chaojian Yu, Xiaolong Shi, Jun Yu, Bo Han, Tongliang Liu
2023-10-01T07:57:03Z
http://arxiv.org/abs/2310.00607v2
# On the Onset of Robust Overfitting in ###### Abstract Adversarial Training (AT) is a widely-used algorithm for building robust neural networks, but it suffers from the issue of robust overfitting, the fundamental mechanism of which remains unclear. In this work, we consider normal data and adversarial perturbation as separate factors, and identify that the underlying causes of robust overfitting stem from the normal data through factor ablation in AT. Furthermore, we explain the onset of robust overfitting as a result of the model learning features that lack robust generalization, which we refer to as non-effective features. Specifically, we provide a detailed analysis of the generation of non-effective features and how they lead to robust overfitting. Additionally, we explain various empirical behaviors observed in robust overfitting and revisit different techniques to mitigate robust overfitting from the perspective of non-effective features, providing a comprehensive understanding of the robust overfitting phenomenon. This understanding inspires us to propose two measures, attack strength and data augmentation, to hinder the learning of non-effective features by the neural network, thereby alleviating robust overfitting. Extensive experiments conducted on benchmark datasets demonstrate the effectiveness of the proposed methods in mitigating robust overfitting and enhancing adversarial robustness. ## 1 Introduction Adversarial Training (AT) (Madry et al., 2018) has emerged as a reliable method for improving a model's robustness against adversarial attacks (Szegedy et al., 2014; Goodfellow et al., 2015). It involves training networks using adversarial data generated on-the-fly and has been proven to be one of the most effective empirical defenses (Athalye et al., 2018). AT has shown success in building robust neural networks when applied to the MNIST dataset. However, achieving the same goal on more complex datasets like CIFAR10 has proven to be challenging (Madry et al., 2018). Apart from the limited capacity of current neural networks (Nakkiran, 2019), there is also a perplexing phenomenon known as robust overfitting (Rice et al., 2020) that significantly hampers this process. Specifically, when robust overfitting occurs during AT, the model's robust accuracy on test data continues to decline with further training. This phenomenon has been observed across different datasets, network architectures, and AT variants (Rice et al., 2020). Recently, various technologies have been proposed to empirically alleviate robust overfitting (Carmon et al., 2019; Chen et al., 2020; Dong et al., 2022; Wu et al., 2020; Yu et al., 2022). For instance, Wu et al. (2020) proposed the double-perturbation mechanism, which adversarially perturbs both inputs and weights to achieve a smoother weight-loss landscape. Yu et al. (2022) introduced the Minimum Loss Constrained Adversarial Training (MLCAT) prototype to prevent the model from fitting the small-loss adversarial data. Both methods can alleviate robust overfitting while enhancing adversarial robustness. However, the essential issue, the fundamental mechanism behind robust overfitting, remains unresolved and is of critical importance. In this paper, we investigate the fundamental mechanism of robust overfitting. Firstly, we show that the inducing factors of robust overfitting stem from normal data. 
Specifically, we treat normal data and adversarial perturbations as separate factors, and devise factor ablation adversarial training to assess their respective impacts on robust overfitting. We observe that simultaneously ablating adversarial perturbations and normal data in adversarial training can greatly mitigate the robust overfitting, whereas adversarial training that only ablates adversarial perturbations still exhibits a severe degree of robust overfitting. Given that these experiments strictly adhere to the principle of controlling variables, with the sole difference being the presence of normal data in the training set, we can infer that the underlying causes of robust overfitting stem from normal data. Normal data can be regarded as a composition of features. To gain more insights into the mechanism of robust overfitting, we provide a general explanation for the onset of robust overfitting in adversarial training from the perspective of feature generalization. During the adversarial training process, adversarial perturbations are generated on-the-fly and adaptively changed based on the model's learning state. However, due to the difference in the distribution of normal data between training and test sets, even for models with completely identical parameters, their learning states on training and test data are different. Therefore, the robust features learned from training data do not necessarily guarantee robust generalization. In other words, the features that the model considers as robust in the training data may not be robust features in the test data, and we refer to them as non-effective features. Furthermore, as adversarial training advances, the gap in the model's learning state between the training and test sets progressively expands, resulting in the proliferation of non-effective features. When the model's optimization is dominated by these non-effective features, it leads to the phenomenon of robust overfitting. Correspondingly, we provide a comprehensive explanation for various empirical behaviors associated with robust overfitting and revisit different existing techniques for mitigating robust overfitting based on our analysis. In order to support our analysis, we also devise two representative measures, namely attack strength and data augmentation, to regulate the model's learning of non-effective features. Specifically, _i)_ eliminating non-effective features through adversarial perturbations; _ii)_ aligning the model's learning state on the training set with that on the test set through data augmentation techniques. Both measures provide a flexible way to adjust the model's learning of non-effective features, and we observe a clear correlation between the extent of robust overfitting and the model's learning of non-effective features: the fewer non-effective features the model learns, the less pronounced the degree of robust overfitting. These observations align well with our analysis. Additionally, extensive experiments conducted in a wide range of settings also validate the effectiveness of the proposed measures in mitigating robust overfitting and improving adversarial robustness. To sum up, our contributions are as follows: * We conducted a series of rigorously factor ablation experiments following the principles of the controlled variable method, inferring that the factors inducing robust overfitting originate from normal data. 
* We explain the onset of robust overfitting as a result of learning non-effective features, and provide a comprehensive understanding of the robust overfitting phenomenon. * Based on our understanding, we devise two representative measures to impede the model's learning of non-effective features. Extensive experiments demonstrate that the proposed methods can mitigate robust overfitting and consistently enhance the adversarial robustness of baseline methods by a noticeable margin. ## 2 Related Work In this section, we briefly review related literature from two perspectives: adversarial training and robust overfitting. ### Adversarial Training Let \(f_{\theta}\), \(\mathcal{X}\) and \(\ell\) represent the neural network \(f\) with model parameter \(\theta\), the input space, and the loss function, respectively. Given a \(C\)-class dataset \(\mathcal{S}=\{(x_{i},y_{i})\}_{i=1}^{n}\), where \(x_{i}\in\mathcal{X}\) and \(y_{i}\in\mathcal{Y}=\{0,1,\ldots,C-1\}\) denotes its corresponding label, the objective function of _standard training_ is \[\min_{\theta}\frac{1}{n}\sum_{i=1}^{n}\ell(f_{\theta}(x_{i}),y_{i}), \tag{1}\] where the neural network \(f_{\theta}\) learns features in \(x_{i}\) that are correlated with associated labels \(y_{i}\) in order to minimize the empirical risk of misclassifying normal inputs. However, empirical evidence (Szegedy et al., 2014; Tsipras et al., 2018; Ilyas et al., 2019) suggests that networks trained under this regime tend to fit fragile, non-robust features that are incomprehensible to humans. To address this issue, adversarial training introduces adversarial perturbations to each data point by transforming \(\mathcal{S}=\{(x_{i},y_{i})\}_{i=1}^{n}\) into \(\mathcal{S}^{\prime}=\{(x^{\prime}_{i}=x_{i}+\delta_{i},y_{i})\}_{i=1}^{n}\). The adversarial perturbations \(\{\delta_{i}\}_{i=1}^{n}\) are constrained by a pre-specified budget, _i.e._\(\{\delta\in\Delta:||\delta||_{p}\leq\epsilon\}\), where \(p\) can be \(1,2,\infty\), etc. Therefore, the objective function for _adversarial training_(Madry et al., 2018) is \[\min_{\theta}\frac{1}{n}\sum_{i=1}^{n}\max_{\delta_{i}\in\Delta}\ell(f_{ \theta}(x_{i}+\delta_{i}),y_{i}), \tag{2}\] where the inner maximization process generates adversarial perturbations on-the-fly to counteract the highly predictive yet non-robust features in normal data. Subsequently, the outer minimization process optimizes the neural network using the generated adversarial data, allowing the model to fit intricate robust features. This iterative procedure aims to achieve an adversarially robust classifier. The most commonly employed approach for generating adversarial perturbations in AT is Projected Gradient Descent (PGD) (Madry et al., 2018), which applies adversarial attack to normal data \(x_{i}\) over multiple steps \(k\) with a step size of \(\alpha\): \[\delta^{k}=\Pi_{\Delta}(\alpha\cdot\mathrm{sign}\nabla_{x}\ell(f_{\theta}(x+ \delta^{k-1}),y)+\delta^{k-1}),k\in\mathbb{N}, \tag{3}\] where \(\delta^{k}\) represents the adversarial perturbation at step \(k\), and \(\Pi_{\Delta}\) denotes the projection operator. Besides the standard AT, there exist several other common variants of adversarial training methods (Kannan et al., 2018; Zhang et al., 2019; Wang et al., 2019). 
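Before turning to these variants, it is convenient to make the inner maximization of (2) concrete: the PGD update (3) amounts to a short loop of signed-gradient steps on the perturbation followed by projection onto the budget. The following PyTorch sketch is only illustrative (it assumes an \(\ell_{\infty}\) budget and inputs scaled to \([0,1]\); the function name and default values are ours, not taken from the papers cited above).

```python
import torch

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10, loss_fn=None):
    """Minimal L_inf PGD sketch: signed-gradient steps of size alpha,
    projected back into the eps-ball around the normal data x."""
    loss_fn = loss_fn or torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()                 # ascent step on the loss
            delta.clamp_(-eps, eps)                      # projection onto the eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)     # keep inputs in the valid range
    return (x + delta).detach()
```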
One typical example is TRADES (Zhang et al., 2019), which proposes a regularized surrogate loss that balances natural accuracy and adversarial robustness: \[\min_{\theta}\sum_{i}\big{\{}\mathrm{CE}(f_{\theta}(x_{i}),y_{i})+\beta\cdot \max\mathrm{KL}(f_{\theta}(x_{i})||f_{\theta}(x^{\prime}_{i}))\big{\}}, \tag{4}\] where CE is the cross-entropy loss that encourages the network to maximize natural accuracy, KL is the Kullback-Leibler divergence that encourages improvement of robust accuracy, and the hyperparameter \(\beta\) is employed to regulates the tradeoff between natural accuracy and adversarial robustness. ### Robust Overfitting Robust overfitting was initially observed in standard AT (Madry et al., 2018). Later, Rice et al. (2020) conducted a comprehensive study and discovered that conventional remedies used for overfitting in deep learning are of little help in combating robust overfitting in AT. This finding prompted further research efforts aimed at mitigating robust overfitting. Schmidt et al. (2018) attributed robust overfitting to sample complexity theory and suggested that more training data are required for adversarial robust generalization, which is supported by empirical results in derivative works (Carmon et al., 2019; Alayrac et al., 2019; Zhai et al., 2019). Recent works also proposed various strategies to mitigate robust overfitting without relying on additional training data, such as sample reweighting (Wang et al., 2019; Zhang et al., 2020; Liu et al., 2021), label smoothing (Izmailov et al., 2018), stochastic weight averaging (Chen et al., 2020), temporal ensembling (Dong et al., 2022), knowledge distillation (Chen et al., 2020), weight regularization (Wu et al., 2020; Yu et al., 2022; Yu et al., 2022; 3), and data augmentation (Tack et al., 2022; Li & Spratling, 2023). While these techniques can assist in mitigating robust overfitting, the fundamental mechanism behind robust overfitting remains unclear. This uncertainty has somewhat constrained the widespread applicability of current techniques. For instance, it has been noted that more training data does not necessarily alleviate robust overfitting and can even harm robust generalization (Chen et al., 2020; Min et al., 2021). Additionally, it was shown that sample reweighting techniques with completely opposing objectives can both effectively alleviate robust overfitting (Zhang et al., 2020; Yu et al., 2022), and data augmentation technique was found to be inadequate in combating robust overfitting in prior attempts (Goyal et al., 2020; Rebuffi et al., 2021). These contradictions are common in the adversarial training community and further emphasize the importance of understanding the mechanism of robust overfitting. In this work, we investigate the onset of robust overfitting and explore its underlying mechanism. ## 3 The Onset of Robust Overfitting in Adversarial Training In this section, we commence with factor ablation experiments to identify the underlying causes of robust overfitting (Section 3.1). Subsequently, we offer an intuitive explanation for the onset of robust overfitting through the lens of feature generalization. To this end, we explain various empirical behaviors associated with robust overfitting and revisit existing techniques for mitigating robust overfitting based on our analysis (Section 3.2). Finally, we develop two representative measures to support our analysis (Section 3.3). ### Factor Ablation Adversarial Training Inspired by the data ablation experiments in Yu et al. 
(2022), which revealed that small-loss adversarial data leads to robust overfitting by removing adversarial data during training, we propose factor ablation adversarial training to gain deeper insights into robust overfitting. We follow the same rule as the data ablation experiments, using a fixed loss threshold to differentiate between large-loss and small-loss adversarial data. For instance, in the CIFAR10 dataset, data with an adversarial loss of less than 1.5 are regarded as small-loss adversarial data. Unlike the data ablation experiments, where adversarial data is treated as a unified entity, we treated normal data and adversarial perturbations within small-loss adversarial data as separate factors and conducted more detailed factor ablation experiments to identify the inducing factor of robust overfitting. Specifically, we trained a PreAct ResNet-18 model on CIFAR-10 using standard AT under the \(\ell_{\infty}\) threat model and removed specified factors before robust overfitting occurred (i.e., at the 100th epoch), including: _i)_**baseline**, which is a baseline group without removing any factors; _ii)_**data & perturbation**, which removes both the normal data and adversarial perturbations from small-loss adversarial data; and _iii)_**perturbation**, which only removes the adversarial perturbations from small-loss adversarial data. It's important to note that the experimental groups mentioned above were entirely identical before the occurrence of robust overfitting. This ensures that these experiments adhered to a rigorous controlled variable principle, with the only difference between the various experimental groups being the specific factors removed from the training data at the 100th epoch. The experimental results of factor ablation adversarial training are summarized in Figure 1(a). We observe that the **data & perturbation** group exhibits a significant relief in robust overfitting, while both the **baseline** and **perturbation** groups still experience severe robust overfitting. Since the only difference between the **data & perturbation** group and the **perturbation** group is the presence of normal data in the training set, we can clearly infer that normal data is the inducing factor of robust overfitting. Similar effects were also observed across different datasets, network architectures, and adversarial training variants (see Appendix A), indicating that this is a general finding in adversarial training. ### The Onset of Robust Overfitting In this part, we delve into the analysis of how normal data contributes to robust overfitting from the perspective of feature generalization. Building on this understanding, we further explain various Figure 1: The test robustness of (a) different experimental groups of factor ablation adversarial training, (b) OROAT\({}_{\mathrm{AS}}\) with varying attack strengths, and (c) OROAT\({}_{\mathrm{DA}}\) with different proportions of small-loss adversarial data. empirical behaviors associated with robust overfitting and revisit existing techniques for mitigating robust overfitting. #### 3.2.1 Understanding Robust Overfitting from the Feature Generalization Perspective From the experimental results in Section 3.1, we can know that the factors inducing robust overfitting stem from normal data. Normal data can be viewed as a composition of features. According to Ilyas et al. (2019), these features can be categorized into robust features and non-robust features. 
Specifically, given a model and a specified attack budget, if the correlation between a feature and its corresponding label consistently holds under the specified attack budget, then this feature is considered robust; otherwise, it is classified as non-robust. Based on these definitions, it is evident that the boundary between robust and non-robust features is not static, but dynamically adjusts as the model's learning state evolves on this data. Moreover, due to the presence of distributional differences between the training and test datasets, the model's learning state on them also varies. Consequently, there may be some features that lack robust generalization. In other words, certain features may be considered robust for the model on the training set, but on the test set, they are categorized as non-robust. We refer to these features as non-effective features. Next, we proceed to further analyze how these non-effective features lead to robust overfitting. During AT, the model consistently learns features that it considers to be robust from the training dataset. In the initial stages of adversarial training, due to the similarity in the model's learning states between the training and test datasets, the boundary between the robust and non-robust features doesn't significantly differ between the training and test sets. Therefore, the optimization trend of the model is primarily driven by the effective robust features. As the training process advances, the model's learning state on the training data continues to strengthen. For example, the loss on the training data consistently decreases, or the quantity of small-loss adversarial data in the training set steadily increases. However, the improvement in the model's learning state on the test dataset is relatively limited, far from matching the model's learning state on the training dataset. This leads to a widening gap in the model's learning states between the training and test datasets. As a result, the boundary between the robust and non-robust features becomes progressively more distinct between the training and test sets. This facilitates the generation of non-effective features, causing the model to learn an increasing number of them. Once the model's optimization trend is dominated by these non-effective features, the model's adversarial robustness on the test dataset will continue to decline. This, in turn, gives rise to the phenomenon of robust overfitting. **Empirical behaviors of robust overfitting.** We notice that robust overfitting exhibits some empirical behaviors in adversarial training: 1) Removing small-loss adversarial data can prevent robust overfitting. 2) As the adversarial perturbation budget increases, the degree of robust overfitting initially rises and then decreases. Our analysis naturally explains these phenomena: 1) adversarial data with small loss indicates that the model's learning state on these data is excellent, maintaining a substantial gap compared to the learning state on the test set. This gap in learning states promotes the generation of non-effective features on these data. Therefore, removing small-loss adversarial data from the training set can prevent the model from learning an excessive number of non-effective features, effectively mitigating robust overfitting. 
2) As the perturbation budget increases from 0, in accordance with the definitions of robust and non-robust features, the range of non-robust features in the data gradually expands, resulting in a higher likelihood of non-effective features emerging during training. This explains why natural training does not exhibit robust overfitting, and as the adversarial perturbation budget increases, the degree of robust overfitting also rises. However, with a further increase in the perturbation budget, the degree of robust overfitting decreases. This is because the model's learning state on the training set is limited under a large perturbation budget, which narrows the gap in learning states between the training and test sets. This reduction in the gap in learning states alleviates the generation of non-effective features in the training data, leading to a decrease in the non-effective features learned by the model. Consequently, the degree of robust overfitting gradually decreases. #### 3.2.2 Revisiting Existing Techniques for Mitigating Robust Overfitting **Sample reweighting.** Sample reweighting is a common method in adversarial training used to mitigate robust overfitting. It assigns weighted values to each adversarial data point, distinguishing the importance of different training data. We have noticed that the current literature employs the sample reweighting technique in various ways. For instance, Zhang et al. (2020) utilized sample reweighting to weaken the model's learning on small-loss adversarial data, whereas Yu et al. (2022) employed it to strengthen the model's learning on small-loss adversarial data. These two approaches utilize sample reweighting with completely opposing objectives, yet both effectively alleviate robust overfitting. Our analysis can explain why both methods are effective in mitigating robust overfitting: the sample reweighting technique in Zhang et al. (2020) reduces the importance of small-loss adversarial data, which is equivalent to diminishing the role of non-effective features learned on these data in model optimization, thus effectively mitigating robust overfitting. The sample reweighting technique in Yu et al. (2022) increases the adversarial loss of small-loss adversarial data, essentially narrowing the gap in the model's learning state between the training and test sets. This prevents the generation of non-effective features and thereby effectively alleviates robust overfitting. In summary, one approach reduces the importance of non-effective features in model optimization, while the other prevents the generation of non-effective features. Although the objectives of these two methods are completely opposite, both lead to a reduction in the model's learning of non-effective features, and thus effectively alleviate robust overfitting. **Additional training data.** Utilizing additional training data is a typical method to mitigate robust overfitting. For instance, Carmon et al. (2019); Alayrac et al. (2019); Zhai et al. (2019) introduce more training data through semi-supervised learning in adversarial training to avoid robust overfitting and improve adversarial robustness. However, it remains unclear how much extra training data is needed to prevent robust overfitting (Gowal et al., 2020), and sometimes, additional training data may not necessarily help alleviate robust overfitting (Chen et al., 2020; Min et al., 2021). 
Our analysis offers intuitive explanations for these issues: as mentioned in Section 3.2.1, robust overfitting arises from the model learning non-effective features, and the necessary condition for the generation of non-effective features is a significant learning state gap between the training and test datasets. On one hand, the technique of additional training data can directly adjust the distribution of training data in adversarial training, making it an effective method to avoid robust overfitting. On the other hand, if the added training data fails to narrow the learning state gap between the training and test datasets, or if it does not suppress the role of non-effective features in model optimization, then these additional training data will be ineffective against robust overfitting. In summary, for the additional training data technique, the quantity of extra training data is not the crucial factor. Instead, it depends on whether these additional training data can narrow the gap in the model's learning state between the training and test datasets, or if these added training features can overwhelm the role of non-effective features in model optimization. **Data augmentation.** Data augmentation techniques involve applying random transformations to training data during the training process. This is a prevalent method for increasing the quantity and diversity of training data, and it has been empirically shown to reduce overfitting in standard training. However, previous attempts (Gowal et al., 2020; Rebuffi et al., 2021) have indicated that data augmentation doesn't provide much help for robust overfitting in adversarial training. Later, some evidence suggested that data augmentation can be effective when combined with regularization (Tack et al., 2022) or when used alone (Li and Spratling, 2023). Similarly, data augmentation also allows for direct adjustments to the normal data within the training dataset. Thus, it can serve as an effective approach to address robust overfitting. Specifically, if data augmentation weakens the model's learning state on the training data, it can effectively alleviate robust overfitting. However, data augmentation techniques generally involve random image transformations and may not always achieve the desired effect. In particular, in the next subsection, we utilize data augmentation techniques to design a method for achieving the alignment of the model's learning state between the training and test sets, and demonstrate that simple data augmentation methods with a targeted transformation objective can be significantly helpful in alleviating robust overfitting and enhancing adversarial robustness. ### The Proposed Methods As mentioned in Section 3.2, we propose to understand the Onset of Robust Overfitting in Adversarial Training (OROAT) as a result of learning non-effective features, which are derived from the model's learning state gap between the training and test sets. In this part, we introduce two approaches to support our analysis: _attack strength_, which belongs to the feature-elimination approach, and _data augmentation_, which belongs to the state-alignment approach. These two methods are representative and, more importantly, orthogonal in regulating the learning of non-effective features during training, thereby fully validating our analysis of OROAT. **OROAT through attack strength.** The feature-elimination approach adjusts the model's learning of non-effective features by eliminating the relevant features from the training dataset.
Adversarial training (Goodfellow et al., 2015; Madry et al., 2018) is the most primitive method in this direction. It employs adversarial perturbations to eliminate the fragile, non-robust features in the training dataset. To achieve varying degrees of non-effective feature elimination, we employed different levels of attack strength to generate adversarial perturbations. Specifically, we trained PreAct ResNet-18 on CIFAR10 under the \(\ell_{\infty}\) threat model and used various perturbation budgets \(\epsilon\) on small-loss adversarial data, ranging from \(0/255\) to \(24/255\). In each setting, we evaluated the robust accuracy on CIFAR10 test data that were attacked with the standard perturbation budget of \(\epsilon=8/255\). This approach utilizes the attack strength strategy to eliminate varying levels of non-effective features during training, which we refer to as OROAT\({}_{\mathrm{AS}}\). The results of OROAT\({}_{\mathrm{AS}}\) with different attack strengths are summarized in Figure 1(b). We observe a clear correlation between the applied attack strength and the extent of robust overfitting. Specifically, the more non-effective features are eliminated, the milder the degree of robust overfitting. When the perturbation budget is \(0/255\), robust overfitting is most pronounced. However, when the perturbation budget exceeds a certain threshold, such as \(16/255\), the model exhibits almost no robust overfitting. It is worth noting that similar patterns are also observed across different datasets, network architectures, and adversarial training variants (as shown in Appendix B). These experimental results clearly demonstrate that it is these non-effective features that lead to robust overfitting. **OROAT through data augmentation.** On the other hand, the state-alignment approach aims to prevent the generation of non-effective features by aligning the model's learning states between the training and test datasets. The double-perturbation mechanism (Wu et al., 2020) or minimum loss constraint (Yu et al., 2022b) are the most prominent methods in this direction. They employ weight perturbations to weaken the model's learning state on the training set. During AT, the model's learning state on the training set typically surpasses that on the test set. To quantitatively assess the model's learning state on the training dataset, we estimate it roughly by examining the proportion of small-loss adversarial data. For instance, a higher proportion of small-loss adversarial data suggests that the model has a more proficient learning state on the dataset. In order to conduct AT with varying proportions of small-loss adversarial data, we utilize data augmentation techniques to adjust the proportion of small-loss adversarial data in each minibatch. Specifically, at the beginning of each iteration, we check whether the proportion of small-loss adversarial data meets the specified threshold. If this proportion is below the specified threshold, we apply data augmentation to these small-loss examples within the minibatch until the desired proportion is reached. The data augmentation method we employ is AugMix (Hendrycks et al., 2020), and we refer to this adversarial training framework, empowered by the data augmentation technique, as OROAT\({}_{\mathrm{DA}}\). The results of OROAT\({}_{\mathrm{DA}}\) with different proportions of small-loss adversarial data are summarized in Figure 1(c). We observe a clear correlation between the proportion of small-loss data and the extent of robust overfitting. 
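For concreteness, the attack-strength variant described above can be sketched in a few lines of PyTorch-style code; the helper `pgd_attack`, the loss threshold of 1.5, and the specific budgets below are illustrative assumptions rather than the exact released implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha=2/255, steps=10):
    """Standard ell_inf PGD used to craft adversarial examples (illustrative helper)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

def oroat_as_step(model, optimizer, x, y, eps=8/255, eps_large=16/255, loss_threshold=1.5):
    """One attack-strength training step in the spirit of OROAT_AS: adversarial examples
    whose loss is small are re-attacked with a larger budget before the parameter update."""
    x_adv = pgd_attack(model, x, y, eps)
    with torch.no_grad():
        per_example_loss = F.cross_entropy(model(x_adv), y, reduction="none")
    small = per_example_loss < loss_threshold
    if small.any():
        # stronger attack only on the small-loss subset of the minibatch
        x_adv[small] = pgd_attack(model, x[small], y[small], eps_large)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```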
As the model's learning state weakens on the training dataset, the degree of robust overfitting becomes increasingly mild. Furthermore, these effects are consistent across different datasets, network architectures, and adversarial training variants (as shown in Appendix C). These empirical results provide strong evidence for our analysis of OROAT, indicating that the gap in the model's learning state promotes the generation of non-effective features. ## 4 Experiment In this section, we evaluate the effectiveness of the proposed OROAT\({}_{\mathrm{AS}}\) and OROAT\({}_{\mathrm{DA}}\). Section 4.1 shows that both OROAT\({}_{\mathrm{AS}}\) and OROAT\({}_{\mathrm{DA}}\) consistently improve the adversarial robustness over the baselines. In Section 4.2, we conduct an ablation analysis and discuss the proposed method. **Setup.** We conduct extensive experiments across different benchmark datasets (CIFAR10 and CIFAR100 (Krizhevsky et al., 2009)), network architectures (PreAct ResNet-18 (He et al., 2016) and Wide ResNet-34-10 (Zagoruyko and Komodakis, 2016)), and adversarial training variants (AT (Madry et al., 2018) and TRADES (Zhang et al., 2019)). In addition to adversarial training variants, we also include two typical baseline methods for mitigating robust overfitting: AWP (Wu et al., 2020) and MLCAT (Yu et al., 2022). For training, we followed the same optimization parameters as in Rice et al. (2020) for a fair comparison. In terms of evaluation, we utilized PGD-20 (Madry et al., 2018) and AutoAttack (AA) (Croce and Hein, 2020) as adversarial attack methods. The detailed descriptions of the experimental setup are in the Appendix D. ### Robustness Evaluation The evaluation results of OROAT\({}_{\mathrm{AS}}\) and OROAT\({}_{\mathrm{DA}}\) are summarized in Table 1. Here, "Best" refers to the highest achieved robustness during training, "Last" refers to the robustness of the checkpoint at the last epoch, and "Natural" denotes the accuracy on normal data. Note that we use the prefix "ORO" to denote the corresponding baselines that have integrated our proposed method. For example, if the attack strength strategy is applied to TRADES, we represent it as OROTRADES\({}_{\mathrm{AS}}\). We can observe that the proposed approaches boost the adversarial robustness over standard AT by a noticeable margin, demonstrating the effectiveness of OROAT\({}_{\mathrm{AS}}\) and OROAT\({}_{\mathrm{DA}}\). Furthermore, this performance improvement is consistent across different datasets, network architectures, and adversarial training variants, indicating that our proposed methods can reliably enhance adversarial robustness. It is worth noting that AWP and MLCAT baselines have already effectively mitigated robust overfitting. Nevertheless, our approaches still contribute to a complementary performance improvement. The superior robustness of the proposed approaches on these baselines further demonstrates the importance of understanding the underlying mechanisms behind robust overfitting. 
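For reference, the PGD-20 robust accuracies reported in Table 1 correspond to an evaluation loop of roughly the following form; this is a self-contained sketch assuming the standard budget \(\epsilon=8/255\) with step size \(2/255\), while the AutoAttack numbers are obtained with its own library and are not reproduced here.

```python
import torch
import torch.nn.functional as F

def pgd_robust_accuracy(model, loader, device, eps=8/255, alpha=2/255, steps=20):
    """Robust accuracy under an ell_inf PGD attack (illustrative evaluation sketch)."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # random start inside the eps-ball, then iterative sign-gradient steps
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.size(0)
    return correct / total
```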
\begin{table} \begin{tabular}{l l l c c c c c c} \hline \hline \multirow{2}{*}{Network} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{Best} & \multicolumn{3}{c}{Last} \\ \cline{3-8} & & & Natural & PGD-20 & AA & Natural & PGD-20 & AA \\ \hline \multirow{8}{*}{PreAct ResNet-18} & \multirow{4}{*}{CIFAR10} & AT & 82.31 & 52.28 & 48.09 & 84.11 & 44.46 & 42.01 \\ & & OROAT\({}_{\mathrm{DA}}\) & **82.58** & 53.95 & 48.48 & **85.45** & 49.69 & 44.44 \\ & & OROAT\({}_{\mathrm{AS}}\) & 77.68 & **56.37** & **49.37** & 78.04 & **51.96** & **45.97** \\ \cline{2-8} & \multirow{4}{*}{CIFAR100} & AT & 55.14 & 28.93 & 24.53 & 55.83 & 20.87 & 18.92 \\ & & OROAT\({}_{\mathrm{DA}}\) & **55.79** & 29.40 & 24.80 & **57.92** & 25.51 & 21.59 \\ & & OROAT\({}_{\mathrm{AS}}\) & 51.02 & **30.25** & **25.63** & 51.06 & **26.19** & **22.67** \\ \cline{2-8} & \multirow{4}{*}{CIFAR10} & TRADES & 81.50 & 52.92 & 48.90 & 82.27 & 49.95 & 46.92 \\ & & OROTRADES\({}_{\mathrm{DA}}\) & **82.89** & 53.14 & 49.12 & **83.28** & **52.13** & 48.41 \\ & & OROTRADES\({}_{\mathrm{AS}}\) & 80.92 & **53.49** & **49.88** & 80.97 & 52.04 & **48.91** \\ \cline{2-8} & \multirow{4}{*}{CIFAR10} & AWP & 81.01 & 55.36 & 50.12 & 81.61 & 55.05 & 49.85 \\ & & OROAWP\({}_{\mathrm{DA}}\) & **81.12** & 55.89 & 50.49 & **81.63** & 55.32 & 50.19 \\ & & OROAWP\({}_{\mathrm{AS}}\) & 78.68 & **56.52** & **50.75** & 79.06 & **55.70** & **50.59** \\ \cline{2-8} & \multirow{4}{*}{CIFAR10} & MLCAT & 81.70 & 58.33 & 50.54 & 82.26 & 58.25 & 50.46 \\ & & OROAT\({}_{\mathrm{DA}}\) & **82.06** & 58.76 & 50.61 & **82.50** & 58.57 & 50.52 \\ & & OROMLCAT\({}_{\mathrm{AS}}\) & 77.12 & **59.01** & **50.83** & 78.79 & **58.79** & **50.68** \\ \hline \multirow{8}{*}{Wide ResNet-34-10} & \multirow{4}{*}{CIFAR10} & AT & **85.49** & 55.40 & 52.31 & **86.50** & 47.14 & 45.74 \\ & & OROAT\({}_{\mathrm{AS}}\) & 82.64 & **59.07** & **53.04** & 82.71 & **49.68** & **46.59** \\ \cline{2-8} & \multirow{4}{*}{CIFAR100} & AT & 60.90 & 31.35 & 27.42 & 59.07 & 26.03 & 24.39 \\ & & OROAT\({}_{\mathrm{AS}}\) & 56.55 & **33.04** & **28.58** & 52.75 & **27.23** & **24.58** \\ \cline{2-8} & \multirow{4}{*}{CIFAR10} & TRADES & **84.78** & 56.25 & 53.12 & **84.70** & 48.49 & 46.69 \\ & & OROTRADES\({}_{\mathrm{AS}}\) & 83.36 & **57.07** & **53.79** & 84.64 & **49.48** & **47.25** \\ \cline{2-8} & \multirow{4}{*}{CIFAR10} & AWP & **85.30** & 58.35 & 53.07 & **85.39** & 57.16 & 52.49 \\ & & OROAWP\({}_{\mathrm{AS}}\) & 84.47 & **59.67** & **54.35** & 84.87 & **57.66** & **52.93** \\ \cline{2-8} & \multirow{4}{*}{CIFAR10} & MLCAT & **86.72** & 62.63 & 54.73 & **87.32** & 61.91 & 54.61 \\ \cline{2-8} & & OROMLCAT\({}_{\mathrm{AS}}\) & 85.41 & **63.60** & **55.25** & 84.74 & **62.47** & **54.88** \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation of adversarial robustness for OROAT\({}_{\mathrm{AS}}\) and OROAT\({}_{\mathrm{DA}}\). The results were calculated as the average of three random trials. We omit the standard deviations as they are small (Natural\(<0.6\%\), PGD-20\(<0.3\%\) and AA\(<0.2\%\)). ### Analysis and Discussion **Ablation analysis.** To analyze the role of the introduced attack strength component and data augmentation component in mitigating robust overfitting and enhancing adversarial robustness, we conducted an ablation study with standard AT using PreAct ResNet-18 on the CIFAR10 dataset.
Specifically, we varied the perturbation budget in the attack strength component from 0/255 to 24/255 and adjusted the threshold for small-loss adversarial data in the data augmentation component from 1.0 to 0.2. The results are summarized in Table 2. For robust overfitting, it was observed that as these components became more aggressive, such as when the attack strength component removes more non-effective features or when the data augmentation component further weakens the model's learning state on the training dataset, the degree of robust overfitting becomes milder. These results strongly support our analysis regarding the onset of robust overfitting. On the other hand, regarding the model's adversarial robustness, we observed a trend of initially increasing and then decreasing. The observed trend can be attributed to the effects of these introduced components. While effective in suppressing robust overfitting, these components also have a detrimental effect on the model's adversarial robustness. For example, the attack strength component eliminates some effective robust features, and the data augmentation component degrades the model's defense against strong attacks. In the early stages, the advantage of these components in suppressing robust overfitting is predominant, leading to an overall improvement in the model's robustness. However, as these components become more aggressive, their disadvantages eventually outweigh the benefits of suppressing robust overfitting, resulting in a decrease in the model's robustness. **Discussion.** The proposed approach introduces additional components into the adversarial training framework, resulting in increased computational complexity. For the attack strength component, its computational cost depends on the perturbation budget; the larger the budget, the more additional attack iterations are required. Regarding the data augmentation component, the computational cost of this component is primarily influenced by the threshold set for small-loss adversarial data. When the threshold is low, a significant computational cost is needed to meet algorithm objectives due to the stochastic nature of data augmentation methods. Due to its high computational cost, we restricted experiments involving the data augmentation component to the low-capacity PreAct ResNet-18, as shown in Table 1. While we acknowledge that the proposed methods may not represent the optimal algorithm for addressing robust overfitting, especially in consideration of computational complexity and adversarial robustness, we want to emphasize that their design was intended to support our analysis of the onset of robust overfitting. Furthermore, their experimental results strongly validate our analysis and have demonstrated substantial improvements in robustness across a wide range of baselines. We hope that our understanding of the underlying mechanisms of robust overfitting will inspire future research to explore more efficient methods for handling robust overfitting. ## 5 Conclusion In this work, we develop factor ablation adversarial training and identify that the contributing factors to robust overfitting originate from normal data. Furthermore, we explain the onset of robust overfitting as a result of learning non-effective features and provide a comprehensive analysis of this phenomenon. To support our analysis, we design two orthogonal approaches: attack strength derived from feature elimination and data augmentation derived from state alignment. 
Extensive experiments validate our analysis and demonstrate the effectiveness of the proposed approaches in alleviating robust overfitting and enhancing adversarial robustness across different adversarial training methods, network architectures, and benchmark datasets. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Budget/Rate} & \multicolumn{3}{c}{Best} & \multicolumn{3}{c}{Last} \\ \cline{3-8} & & Natural & PGD-20 & AA & Natural & PGD-20 & AA \\ \hline \multirow{8}{*}{OROAT\({}_{\mathrm{AS}}\)} & 0/255 & 84.57\(\pm\)0.23 & 50.75\(\pm\)0.10 & 45.17\(\pm\)0.06 & 86.71\(\pm\)0.38 & 41.30\(\pm\)0.14 & 36.61\(\pm\)0.13 \\ & 4/255 & 83.94\(\pm\)0.40 & 51.09\(\pm\)0.23 & 46.26\(\pm\)0.03 & 85.68\(\pm\)0.21 & 41.38\(\pm\)0.26 & 39.11\(\pm\)0.12 \\ & 8/255 & 81.92\(\pm\)0.46 & 51.96\(\pm\)0.14 & 47.42\(\pm\)0.12 & 83.87\(\pm\)0.36 & 43.56\(\pm\)0.06 & 41.42\(\pm\)0.02 \\ & 12/255 & 89.04\(\pm\)0.57 & 54.80\(\pm\)0.08 & 48.59\(\pm\)0.08 & 80.85\(\pm\)0.24 & 49.99\(\pm\)0.19 & 45.36\(\pm\)0.16 \\ & 16/255 & 77.48\(\pm\)0.36 & 56.35\(\pm\)0.20 & 49.11\(\pm\)0.14 & 76.84\(\pm\)0.28 & 53.20\(\pm\)0.16 & 46.24\(\pm\)0.06 \\ & 20/255 & 75.07\(\pm\)0.49 & 55.90\(\pm\)0.15 & 48.19\(\pm\)0.08 & 73.97\(\pm\)0.31 & 53.24\(\pm\)0.20 & 45.46\(\pm\)0.08 \\ & 24/255 & 74.24\(\pm\)0.21 & 54.71\(\pm\)0.06 & 46.86\(\pm\)0.05 & 72.71\(\pm\)0.50 & 52.54\(\pm\)0.10 & 44.63\(\pm\)0.03 \\ \hline \multirow{8}{*}{OROAT\({}_{\mathrm{DA}}\)} & 1.0 & 82.00\(\pm\)0.30 & 52.17\(\pm\)0.17 & 47.77\(\pm\)0.08 & 84.37\(\pm\)0.36 & 43.96\(\pm\)0.25 & 41.61\(\pm\)0.19 \\ & 0.8 & 82.42\(\pm\)0.38 & 52.26\(\pm\)0.24 & 48.08\(\pm\)0.15 & 84.61\(\pm\)0.47 & 47.76\(\pm\)0.12 & 43.78\(\pm\)0.02 \\ \cline{1-1} & 0.6 & 82.58\(\pm\)0.42 & 53.59\(\pm\)0.08 & 48.48\(\pm\)0.08 & 85.45\(\pm\)0.52 & 49.69\(\pm\)0.25 & 44.44\(\pm\)0.15 \\ \cline{1-1} & 0.4 & 83.06\(\pm\)0.20 & 55.46\(\pm\)0.15 & 46.98\(\pm\)0.06 & 84.89\(\pm\)0.22 & 51.03\(\pm\)0.08 & 41.18\(\pm\)0.02 \\ \cline{1-1} & 0.2 & 83.08\(\pm\)0.35 & 55.99\(\pm\)0.22 & 45.18\(\pm\)0.16 & 84.83\(\pm\)0.40 & 52.53\(\pm\)0.17 & 43.29\(\pm\)0.08 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study of OROAT\({}_{\mathrm{AS}}\) and OROAT\({}_{\mathrm{DA}}\) methods. The results were calculated as the average of three random trials.
2308.15726
AGS: An Dataset and Taxonomy for Domestic Scene Sound Event Recognition
Environmental sound scene and sound event recognition is important for the recognition of suspicious events in indoor and outdoor environments (such as nurseries, smart homes, nursing homes, etc.) and is a fundamental task involved in many audio surveillance applications. In particular, there is no public, common dataset for sound event recognition in indoor environmental sound scenes. Therefore, this paper proposes a dataset (called AGS) for home environment sounds. This dataset considers various types of overlapping audio in the scene as well as background noise. Moreover, based on the proposed dataset, this paper compares and analyzes advanced methods for sound event recognition, illustrates the reliability of the proposed dataset, and studies the challenges raised by the new dataset. Our proposed AGS and the source code of the corresponding baselines are available at https://github.com/taolunzu11/AGS .
Nan Che, Chenrui Liu, Fei Yu
2023-08-30T03:03:47Z
http://arxiv.org/abs/2308.15726v1
# AGS: An Dataset and Taxonomy for Domestic Scene Sound Event Recognition ###### Abstract Environmental sound scene and sound event recognition is important for the recognition of suspicious events in indoor and outdoor environments (such as nurseries, smart homes, nursing homes, etc.) and is a fundamental task involved in many audio surveillance applications. In particular, there is no public common data set for the research field of sound event recognition for the data set of the indoor environmental sound scene. Therefore, this paper proposes a data set (called as **AGS**) for the home environment sound. This data set considers various types of overlapping audio in the scene, background noise. Moreover, based on the proposed data set, this paper compares and analyzes the advanced methods for sound event recognition, and then illustrates the reliability of the data set proposed in this paper, and studies the challenges raised by the new data set. Our proposed AGS and the source code of the corresponding baselines at [https://github.com/taolunzu11/AGS](https://github.com/taolunzu11/AGS). Keywords:Sound Dataset Sound Event Recognition Environmental Sound Scene ## 1 Introduction There is no existing sound dataset dedicated to domestic activities, although the sounds produced by domestic activities often have rich semantic features. Scene graph proposed by [1] focuses on the relationship between the entities to present semantics, it is naturally suitable for the expression of natural language [2]. The detailed semantics described in a scene graph can be a fundamental support to the tasks of activity recognition [3], image captioning [4], and so on. From then on, a large amount of research has paid attention to scene graph, and several visual scene graph datasets [1][5] are released at the same time. Lots of scene graph generation (SGG) methods [6][7] are studied in depth to improve the semantic expression quality of scene graph. The data sets and research methods mentioned above are all based on the single modality of vision, acoustic modality which can be used to describe real-world scenarios in another aspect is usually ignored in the field of scene graph. The transmission of sound is naturally immune to line of sight which limits vision propagation, thus the sound scene can effectively fill the perception gap of the single visual scene, additionally, the characterization of sound in duration and intensity can provide a more in-depth description of activities which is hard in visual scenes. Finally, multi-modal recognition mixed with vision and sound may improve the performance of scene graph generation in the future. Sound can also carry a lot of information worth extracting, several sound datasets are introduced to train classification models. Current sound data sets can be roughly divided into two categories: speech [8][9] and nonspeech [10][11], almost all sound datasets concentrate on sound classification, but cannot provide the relationship between entities that emit sounds. Therefore, we publish the Action Genome Sound Dataset (AGS) which extracts the sounds made by people and objects from the videos in the Action Genome dataset [12]. Similar to the original visual labels in Action Genome dataset, we manually label the attention relationship, spatial relationship, and contacting relationship between entities in the sound aspect. 
Our AGS dataset contains 3986 sound records clipped from 972 video files and can be classified into a total of 65 categories, the number of relationship records reaches 4,260, additionally, the AGS dataset is mainly committed to domestic sounds to fill in the gap of current sound datasets. Then we conduct audio-only baseline experiments using six existing classification models such as MobileNetV2 [13], DaiNet [14], PANNs (Wavegram-Logmel-CNN) [15], Wavegram-Logmel-CNN-attention (our expanded method base on PANNs), AST [16] and LSTM-based methods3, finally the accuracy and MAP of these models are presented. Footnote 3: [https://www.kaggle.com/code/kvpratama/audio-classification-with-lstm-and-torchaudio/notebook](https://www.kaggle.com/code/kvpratama/audio-classification-with-lstm-and-torchaudio/notebook) In this paper, our main contributions are as the following: * We extract sound records along the relationship dimensions of attention, space, and contacting, in addition to entities, classes, and clarity which are common labels in current sound datasets; * We contribute a publicly available sound scene dataset that contains semantics by relationship description which is incremental work of AG dataset; * We use early machine learning classifiers of audio to establish baseline audio classification performance on our dataset; * The source code for baseline testing of AGS dataset are released in github. ## 2 Related Work Audio (Sound) scene analysis (ASA) emphasizes the perception ability of human beings to perceive the environment and understand the environment through hearing, and ASA can provide technical and theoretical support for sound scene understanding and event recognition for scene monitoring. However, the current public and available data sets for sound scene understanding and sound event recognition cannot meet the research needs of sound scene understanding and sound event recognition. Compared with the related research on our work, it mainly includes **sound scene dataset** and **sound event recognition**. ### Sound Scene Dataset Most of the current public datasets mainly cover the following real-life scenes: transportation (cars, buses and trains), public spaces (grocery stores, restaurants, offices, streets and parks), and leisure spaces (beaches, basketball games and fields). Wherein the first sound (audio) scene dataset (Audioset) [17] mainly concluded the two millions clips by the manual annotation, which includes the main ontologies: human sounds, sounds of things (vehicle, engine, bell, alarm etc.), animal sounds, natural sound (wind,water, fire etc.), music. Audioset data does not distinguish between outdoor and indoor sound scenes. VGGSound [18] is a large-scale audiovisual dataset with low label noise collected from videos "in the wild", which is consisted of more than 200k videos for 300 audio classes. Both VGGSound and Audioset are based on YouTube videos. FSD50K [19] fills the gap of AudioSet: AudioSet is not an open dataset because its official version contains precomputed audio features, which contained over 51 k audio clips labeled using 200 classes. Different from the above YouTube data that does not distinguish between indoor and outdoor scenes, Mivia [20] for indoor scenes is synthetic, including 6K clips for 3 classes; DESED [21]for outdoor scenes is Freesound Dataset (FSD), including 12k clips for 10 classes; UrbanSound8k [22] for Audio Scenes is FSD, contains 8.7k for 10 classes. 
### Sound Event Recognition The tasks of environmental audio scene recognition (EASR) and sound event recognition (SER) in uncontrolled environments are part of the computational auditory scene analysis (CASA) research field [23]. Sound event recognition (Sound event recognition) is to identify what and when is happening in an audio signal [24]. Most of the existing Sound event recognition methods are based on the above-mentioned public datasets, and none of the sound scenes they deal with can be directly applied to specific behavior monitoring. In the early days of Sound event recognition development, the existing research was mainly based on Mel-frequency cepstral coefficients (MFCCs) and machine learning models (SVM, HMM, GMM) [25, 26, 27]. Due to the wide application of deep learning in artificial intelligence, the SER approaches based on deep learning have gradually become the mainstream methods in ASA. Convolutional (CNN) models [28] have been widely used in AEDs. VGGish [28] is the first work based on CNN, which is also based on Audioset. The current representative methods mainly include MobileNetV2 [13], DaiNet [14], PANNs [15], VATT [29] and AST [16] etc. The core idea of MobileNetV2 is to take a low-dimensional compressed representation as input, first expand it to high-dimensional and use lightweight depthwise convolution for filtering. The features are subsequently projected back to a low-dimensional representation with linear convolution. DaiNet is different from previous studies that use log mel spectrogram as input, but directly uses time series waveform as input. PANNs is a Wavegram feature learnt from waveform, and a Wavegram-Logmel-CNN that achieves state-of-the-art performance in AudioSet tagging. In addition, transformer-based SER approaches (such as VATT and AST) [16, 29] are mainly to use self-supervised learning to design the loss function and through the spectrogram features to integrate the vision transformer [30]. ## 3 Domestic Scene Sound Taxonomy ### The Specificity of AGS Record Shown as Fig. 1, a series of pictures are captured from a video named OBXRP.mp4 in AG data, the red parts are the original record of the AG dataset, and the blue ones are the newly added entities, relationships, and triplets from the perspective of sound which are recorded in AGS dataset. Specifically, entities in the video are labelled as person, bag, groceries, and stove in AG original data. Based on the combination of sound and video, we add new entities including bottle, pot, and counter to extend our AGS dataset. The triplets marked in red are the relationships in AG provided by the video description, while the blue triplets are the relationships annotated in AGS according to the sounds between entities. In summary, the AGS data set is an enhancement of the AG data in the sound relationship. The data application of AGS has more advantages in inference tasks, reasoning tasks, etc., because sound records Figure 1: An example of the typical advantage that auditory information has over visual information in environmental monitoring. can provide unique capabilities in the temporal dependence of events and the intensity of actions. ### Recording Process and Overall Scale of AGS The overall process of data labelling is shown in Fig. 2. Firstly, we select a video file from the AG dataset to prepare for audio processing. 
In the stage of audio noise reduction, we check whether the audio needs noise reduction; if necessary, noise reduction is performed to filter the original background noise in the audio. In a comprehensive analysis of video and audio, we judge whether the clarity of this audio meets the requirements and whether the included sounds can be matched to sound categories and relationships; if the audio is judged to be of no value, it is discarded. In the stage of extracting audio clips, we extract several audio clips of no more than 4 seconds from the original audio that can reflect the sound categories and the relationships between entities. In the stage of annotation, we label the discovered entities and relationships. Finally, the records produced above are incrementally stored in the AGS dataset. After a substantial amount of annotation work, there are currently \(3,986\) entries spanning \(65\) categories in our dataset. These \(3,986\) entries are clipped from \(972\) videos. Among them, \(1,375\) entries clipped from \(682\) video files are clear, while \(2,610\) entries from \(482\) video files are unclear. We annotated \(2,884\) entries for human-object relationships and \(1,376\) entries for object-object relationships, with a total of \(27\) relationship categories. \begin{table} \begin{tabular}{c|l||c} \hline \hline Types & Description & Samples \\ \hline Clear Dataset & The data is clear enough, with almost no noise, to be able to clearly hear the category corresponding to the audio clip. & 1375 \\ \hline Hybrid Dataset & The data is slightly noisy, but it identifies the category of audio clips. & 3986 \\ \hline \hline \end{tabular} \end{table} Table 1: Types of AGS Datasets and Description Figure 2: The overall process of creating AGS. The process starts with the AG audio and AG Ontology. As shown in Table 2, our dataset is annotated with 65 categories, while the AG dataset is annotated with 36 categories. It can be seen that our annotated categories cover most of the categories in the AG dataset. Among the 36 categories in the AG dataset, doorway, sandwich, and shelf do not appear in our sound dataset. However, we have set sandwich as a sub-label of food. In addition to these annotations, we have added 32 categories not present in the AG dataset. Our dataset is more detailed in its annotations, allowing richer information to be extracted from it.
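Before turning to the category statistics in Tables 2 and 3, the information attached to each entry can be summarized as a small record; the following Python sketch uses illustrative field names only and is not the released annotation format.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative schema only: field names are assumptions, not the released AGS file layout.
@dataclass
class AGSEntry:
    source_video: str                      # e.g. "OBXRP.mp4" from the AG dataset
    clip_start: float                      # clip boundaries in seconds (clips last at most 4 s)
    clip_end: float
    sound_classes: List[str]               # e.g. ["person", "pot"]
    relations: List[Tuple[str, str, str]]  # (subject, relation, object) triplets
    is_clear: bool                         # True for the clear subset, False for noisy clips

entry = AGSEntry(
    source_video="OBXRP.mp4",
    clip_start=12.0,
    clip_end=15.5,
    sound_classes=["person", "pot"],
    relations=[("person", "contacting", "pot")],
    is_clear=True,
)
```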
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline \hline classes & AG & AGS & classes & AG & AGS & classes & AG & AGS \\ \hline curtain & ✓ & phone\_camera & ✓ & ✓ & sofa\_couch & ✓ & ✓ & book & ✓ & ✓ \\ closestool & ✓ & paper\_notebook & ✓ & ✓ & shoe & ✓ & ✓ & blanket & ✓ & ✓ \\ stars & ✓ & mirror & ✓ & ✓ & refrigerator & ✓ & ✓ & bed & ✓ & ✓ \\ socket & ✓ & medicine & ✓ & ✓ & pillow & ✓ & bag & ✓ & ✓ \\ handset & ✓ & light & ✓ & ✓ & picture & ✓ & person & ✓ & ✓ \\ zipper & ✓ & laptop & ✓ & ✓ & groceries & ✓ & chopping board & ✓ \\ spatula & ✓ & food & ✓ & ✓ & laundry\_detergent & ✓ & knife & ✓ \\ pot & ✓ & floor & ✓ & & bowl & ✓ & lipstick & ✓ \\ dryer & ✓ & doorknob & ✓ & ✓ & washing machine & ✓ & sticky note & ✓ \\ stove & ✓ & door & ✓ & ✓ & clip & ✓ & cap & ✓ \\ pen & ✓ & dish & ✓ & ✓ & pot cover & ✓ & sprinking can & ✓ \\ water & ✓ & cup\_glass\_bottle & ✓ & ✓ & trash can & & those & ✓ \\ window & ✓ & ✓ & clothes & ✓ & ✓ & bell & ✓ & circuit changer & ✓ \\ vacuum & ✓ & ✓ & closet cabinet & ✓ & ✓ & umbrella & ✓ & ball & ✓ \\ towel & ✓ & ✓ & c\(\ddot{\text{H}}\)air & ✓ & ✓ & lighter & ✓ & toys & ✓ \\ television & ✓ & ✓ & broom & ✓ & ✓ & tape & ✓ & sandwich & ✓ \\ table & ✓ & ✓ & box & ✓ & ✓ & doorway & ✓ & shelf & ✓ \\ \hline \hline \end{tabular} \end{table} Table 2: AG Classes VS AGS Classes \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline No. & Classes & \# Samples & No. & Classes & \# Samples \\ \hline 1 & bag & 70 & 16 & laptop & 19 \\ 2 & bed & 12 & 17 & light & 41 \\ 3 & blanket & 6 & 18 & medicine & 23 \\ 4 & book & 37 & 19 & paper\_notebook & 27 \\ 5 & box & 37 & 20 & person & 375 \\ 6 & broom & 28 & 21 & phone\_camera & 7 \\ 7 & chair & 34 & 22 & picture & 3 \\ 8 & closet\_cabinet & 61 & 23 & window & 10 \\ 9 & cup\_glass\_bottle & 100 & 24 & pillow & 9 \\ 10 & clothes & 25 & 25 & refrigerator & 33 \\ 11 & dish & 37 & 26 & shoe & 50 \\ 12 & door & 48 & 27 & sofa\_couch & 6 \\ 13 & doorknob & 70 & 28 & television & 91 \\ 14 & floor & 15 & 29 & towel & 5 \\ 15 & food & 25 & 30 & vacuum & 71 \\ \hline \hline \end{tabular} \end{table} Table 3: Classes of Clear Datasets and Description As shown in Table 3, we present the sound categories and the number of samples corresponding to a particular category in the clear dataset as example. ## 4 Sound Event Recognition ### Related Definition and Problem Descriptions According to the reference [31], the formal definition of the sound event recognition (SER) is as follows: **Definition 1**.: **(Sound Event Recognition, SER):** Given a set of sound classes \(C\), the subsets \(Y=\,_{c=C}\,Y_{c}\) of ground truth labels for each sound class \(c\in C\), wherein \(Y_{c}=\{y_{i}=(t_{s,i},\,t_{e,i},\,c_{i}):c_{i}=c\}\), \(y_{i}=(t_{s,i},\,t_{e,i},\,c_{i})\) represents each ground truth label \(y_{i}\)'s class \(c_{i}\), the start time \(t_{s,i}\) and the end time \(t_{e,i}\), and given the subsets \(X^{*}=\,_{c=C}\,X_{c}^{*}\) of the detections for each class \(c\in C\), wherein \(X_{c}^{*}=\{x_{i}=(t_{s,i},\,t_{e,i},\,c_{i}):c_{i}=c\}\), \(x_{i}=(t_{s,i},\,t_{e,i},\,c_{i})\), and where the starred notation \(()^{*}\) indicates dependency on operating point parameters \(T_{c}\). Then, the goal of the sound event recognition (SER) is to measure the performance of a system which output \(X^{*}\) under the given information \(Y\). In the SER task, it is mainly necessary to recognize the label of the sound and the start and end time of the sound event. 
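To make the objects in Definition 1 concrete, each ground-truth label or detection can be stored as an (onset, offset, class) tuple; the sketch below uses one simple overlap-based matching rule, which is an illustrative choice and not the clip-level Acc/MAP protocol used in the experiments of Section 5.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SoundEvent:
    onset: float    # t_s, in seconds
    offset: float   # t_e, in seconds
    label: str      # sound class c

def matches(det: SoundEvent, ref: SoundEvent, min_overlap: float = 0.5) -> bool:
    """Same class and sufficient temporal overlap relative to the reference event."""
    if det.label != ref.label:
        return False
    overlap = min(det.offset, ref.offset) - max(det.onset, ref.onset)
    return overlap >= min_overlap * (ref.offset - ref.onset)

def event_recall(detections: List[SoundEvent], references: List[SoundEvent]) -> float:
    """Fraction of reference events matched by at least one detection."""
    if not references:
        return 1.0
    hit = sum(any(matches(d, r) for d in detections) for r in references)
    return hit / len(references)
```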
It can be seen that the accuracy of sound label recognition (also known as audio pattern recognition) plays an important role in the performance of SER. On the premise of accurate sound label recognition, the recognition accuracy of sound events (including the start and end time of each event) can be further guaranteed; in this paper, we represent sound events as a scene graph, in which nodes represent the sound-generating subjects and objects, and edges represent the predicate relationships between them. ### SER-aware Baselines and Discussion **SER-aware Baselines:** In order to further illustrate the contribution of the home environment sound data released in this paper to the sound pattern recognition task, we conduct an experimental comparison of representative, state-of-the-art SER methods. Specifically, these approaches include MobileNetV2 [13], DaiNet [14], PANNs [15] and AST [16]. **Example:** In this section, we choose the sound event annotations with high sound clarity in the AGS data as the experimental data. We call such data the **dataset of clear classes**, and these classes are shown in Table 3. In this part, we mainly use 1,406 clear sound clips for experiments, which are extracted from 700 videos. In indoor sound environment recognition, sound events are inherently sparse. Therefore, we selected the top 6 categories from the 1,406 clips for experiments, a total of 791 clips, which were extracted from 439 videos. **Discussion:** We mainly use a commonly used evaluation metric in related research: Accuracy (Acc). The detailed experimental results are shown in Table 4. In the process of running the example, the main hyperparameters are set as follows: the value range of the learning rate is {1e\({}^{-3}\), 1e\({}^{-4}\), 1e\({}^{-5}\)}; the ratio of the training set to the test set is {5 : 5, 6 : 4, 7 : 3, 8 : 2}; and the number of iterations is set to \(100{,}000\). From the experimental results in Table 4, it can be seen that the four methods achieve good results on the AGS dataset. In particular, for the PANNs method, with the learning rate set to lr = 1e\({}^{-4}\) and a training/test split of 7:3, the best experimental result is 0.92. For the DaiNet method, with the learning rate set to lr = 1e\({}^{-4}\) and a training/test split of 7:3, the best experimental result is 0.81. For MobileNetV2, with the learning rate set to lr = 1e\({}^{-3}\) and a training/test split of 7:3, the best experimental result is 0.924. For AST, with the learning rate set to lr = 1e\({}^{-4}\) and a training/test split of 8:2, the best experimental result is 0.968. ## 5 Experiment ### Dataset Description In the experimental part of this article, about \(4,000\) audio clips are used, which are extracted from more than 960 videos. In this paper, the 20 categories with the largest numbers of samples are selected from these roughly 4,000 clips for the experiments, giving a total of more than 3,300 audio clips extracted from more than 930 videos.
The data set \begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline Train:Test & lr = 1e\({}^{-3}\) & lr = 1e\({}^{-4}\) & lr = 1e\({}^{-5}\) & Train:Test & lr = 1e\({}^{-3}\) & lr = 1e\({}^{-4}\) & lr = 1e\({}^{-5}\) \\ \hline \multicolumn{6}{c|}{**PANNs**} & \multicolumn{5}{c}{**DaiNet**} \\ \hline 5:5 & 0.784 & 0.855\({}^{*}\) & 0.822 & 5:5 & 0.693 & 0.754\({}^{*}\) & 0.645 \\ 6:4 & 0.826 & 0.8237\({}^{*}\) & 0.829 & 6:4 & 0.722\({}^{*}\) & 0.709 & 0.636 \\ 7:3 & 0.869 & 0.92\({}^{*}\) & 0.899 & 7:3 & 0.772 & 0.810\({}^{*}\) & 0.781 \\ 8:2 & 0.867 & 0.892\({}^{*}\) & 0.867 & 8:2 & 0.722\({}^{*}\) & 0.722\({}^{*}\) & 0.646 \\ \hline \multicolumn{6}{c|}{**Mobile NetV2**} & \multicolumn{5}{c}{**AST**} \\ \hline 5:5 & 0.835\({}^{*}\) & 0.792 & 0.728 & 5:5 & 0.906 & 0.928\({}^{*}\) & 0.924 \\ 6:4 & 0.794\({}^{*}\) & 0.750 & 0.652 & 6:4 & 0.889 & 0.918\({}^{*}\) & 0.905 \\ 7:3 & 0.924\({}^{*}\) & 0.861 & 0.840 & 7:3 & 0.928 & 0.945\({}^{*}\) & 0.945\({}^{*}\) \\ 8:2 & 0.842\({}^{*}\) & 0.791 & 0.696 & 8:2 & 0.937 & 0.968\({}^{*}\) & 0.918 \\ \hline \hline \end{tabular} (\({}^{*}\) represents the best experimental results for each method under the same ratio of the training set and test set.) \end{table} Table 4: The results of Acc using different methods marked in this paper and the source code of the corresponding baseline method has been made public. The specific link is as follows: [https://github.com/taolunzu11/AGS](https://github.com/taolunzu11/AGS). ### _Experiment Setting_ In our experiments, the main hyperparameters are set as follows: the value range of the learning rate is:{1e\({}^{-3}\), 1e\({}^{-4}\), 1e\({}^{-5}\)}; the ratio of the training set to the test set is: {5 : \(5,6\) : \(4,7\) : \(3,8\) : \(2\)}, and the iteration number is set to: {10, 000, 15, 000, 20, 000}. The computing environment is that Linux server with two NVIDIA Geforce RTX 3090, 125GB memory, and our experimental coding language is python. In addition, we mainly conduct an experimental comparison analysis on the representative and current advanced algorithms in the current SER methods. Specifically, these approaches mainly include: MobileNetV2 [13], DaiNet [14], PANNs (Wavegram-Logmel-CNN) [15], Wavegram-Logmel-CNN-attention (our expanded method base on PANNs), AST [16] and LSTM-based methods5. Footnote 5: [https://www.kaggle.com/code/kvpratama/audio-classification-with-lstm-and-torchaudio/notebook](https://www.kaggle.com/code/kvpratama/audio-classification-with-lstm-and-torchaudio/notebook) The brief description of these compared baseline methods follows: * **MobileNetV2 [13]**: MobileNetV2 is to take a low-dimensional compressed representation as input, first expand it to high-dimensional and use lightweight depthwise convolution for filtering. 
The features are subsequently projected back to a low-dimensional representation with linear convolution; * **DaiNet [14]**: Different from previous studies that use the log mel spectrogram as input, DaiNet directly uses the time-series waveform as input; * **AST [16]**: AST mainly utilizes self-supervised learning to design the loss function and integrates the vision transformer through spectrogram features; * **Wavegram-Logmel-CNN [15]**: Wavegram-Logmel-CNN uses a Wavegram feature learnt from the waveform, combined with the log mel spectrogram, as input; * **Wavegram-Logmel-CNN-attention**: In our work, we utilize an attention mechanism for the fusion of the waveform and log mel spectrogram features in Wavegram-Logmel-CNN; * **LSTM-based methods**: In this work, we use the mel spectrogram and Mel-frequency cepstral coefficients to extract audio features as input to a two-layer LSTM. ### _Evaluation Metrics_ In order to further verify that the AGS dataset proposed in this paper provides effective data support for the SER task and fills the shortage of the existing Audioset data, we compare and analyze the performance of current advanced SER methods on AGS through the evaluation metrics (MAP and Acc) recognized in related research. * AP (Average Precision) [32]: The shape of the precision-recall curve at 11-point interpolation is summarized by averaging the maximum precision values at a set of equally spaced recall levels [0, 0.1, 0.2,..., 1]: \[\text{AP}_{11}=\frac{1}{11}\sum_{r\in\{0,0.1,\ldots,1\}}p_{\text{interp}}(r),\qquad p_{\text{interp}}(r)=\max_{\tilde{r}\geq r}p(\tilde{r}),\] where \(p(\tilde{r})\) denotes the precision measured at recall level \(\tilde{r}\). * MAP (Mean Average Precision): the mean of the per-class AP values over all sound classes. ### Experiment Results and Analysis In this subsection, we compare 6 baseline methods. For the fairness of the experiment, all comparison methods are run in the same experimental environment, the corresponding hyperparameters are set to give the best experimental results, and we ensure that both the training and test results have converged. From the experimental results in Fig. 3 and Fig. 4, it can be seen that the four baselines PANNs, PANNs-attention, DaiNet and MobileNetV2 perform well on the AGS dataset.
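For reference, the 11-point interpolated AP defined above, and the MAP obtained by averaging it over classes, can be computed roughly as follows; this is a self-contained numpy sketch in which all function and variable names are illustrative.

```python
import numpy as np

def average_precision_11pt(scores, labels):
    """11-point interpolated AP for one class: scores are predicted confidences,
    labels are 1 if the clip belongs to this class and 0 otherwise."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)          # sort clips by decreasing confidence
    labels = labels[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    recall = tp / max(labels.sum(), 1)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):  # recall levels 0, 0.1, ..., 1
        mask = recall >= r
        ap += precision[mask].max() if mask.any() else 0.0
    return ap / 11.0

def mean_average_precision(score_matrix, label_matrix):
    """MAP: mean of per-class AP values (classes as columns of the matrices)."""
    aps = [average_precision_11pt(score_matrix[:, c], label_matrix[:, c])
           for c in range(score_matrix.shape[1])]
    return float(np.mean(aps))
```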
Since the training and testing process of these four methods are obtained through multiple iterations, however, both LSTM and AST are trained and tested through epoch, so here we mainly compare and analyze the experimental results of these four methods. The specific analysis is as follows: **For the Accuracy (Acc) :** * For the PANNs method, under the premise that the learning rate is set to lr = 1e\({}^{-4}\) and the ratio of the training set to the test set is 8:2, the best experimental result (Acc) is 0.322 under the 10,000 iterations; * For the PANNs-attention method, under the premise that the learning rate is set to lr = 1e\({}^{-4}\) and the ratio of the training set to the test set is 8:2, the best experimental result (Acc) is 0.296 under the 10,000 iterations; * For the DaiNet method, under the premise that the learning rate is set to lr = 1e\({}^{-4}\) and the ratio of the training set to the test set is 8:2, the best experimental result (Acc) is 0.248 under the 15,000 iterations; * For the MobileNetV2, under the premise that the learning rate is set to lr = 1e\({}^{-3}\) and the ratio of the training set to the test set is 8:2, the best experimental result (Acc) is 0.307 under the 10,000 iterations. **For the Mean Average Precision (MAP):** Figure 3: Comparison of Sound Event Recognition for AGS using Acc * For PANNs, under the premise that the learning rate is set to lr = 1e\({}^{-4}\) and the ratio of the training set to the test set is 8:2, the best experimental result (MAP) is 0.473 under the 10,000 iterations; * For PANNs-attention, under the premise that the learning rate is set to lr = 1e\({}^{-4}\) and the ratio of the training set to the test set is 8:2, the best experimental result (MAP) is 0.395 under the 10,000 iterations; * For DaiNet, under the premise that the learning rate is set to lr = 1e\({}^{-4}\) and the ratio of the training set to the test set is 8:2, the best experimental result (MAP) is 0.35 under the 20,000 iterations; * For the MobileNetV2, under the premise that the learning rate is set to lr = 1e\({}^{-3}\) and the ratio of the training set to the test set is 8:2, the best experimental result (MAP) is 0.453 under the 10,000 iterations. In order to compare and analyze all comparison methods on AGS, based on the best experimental settings (hyperparameter settings, experimental environment, loss function settings and optimization choices), the specific experimental results can be obtained in detail through Table 5 and Table 6. Figure 4: Comparison of Sound Event Recognition for AGS using MAP \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Method & lr & Train:Test & Acc & Iteration/Epoch & Comparison \\ \hline MobileNetV2 & 1e\({}^{-3}\) & 8:2 & 0.453 & 10,000 & \(\circ\) 3 \\ DaiNet & 1e\({}^{-4}\) & 8:2 & 0.35 & 20,000 & \(\circ\) 5 \\ Wavegram-Logmel-CNN & 1e\({}^{-4}\) & 8:2 & 0.473 & 10,000 & \(\circ\) 2 \\ Wavegram-Logmel-CNN-attention & 1e\({}^{-4}\) & 8:2 & 0.395 & 10,000 & \(\circ\) 4 \\ LSTM-based Method & 1e\({}^{-3}\) & 5:5 & 0.289 & 6 (epoch) & \(\circ\) 6 \\ AST & 1e\({}^{-4}\) & 7:3 & 0.498 & 100 (epoch) & \(\circ\) 1 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of Six Baselines for AGS using Acc It can be seen from Table 5 that the evaluation indicators Acc of the six methods are all above 30% except for the LSTM-based method. According to the experimental results of SER on Audioset in related research [11] (The Acc result of the better method is 30% to 35%). 
Similar to Table 5, except for the relatively low MAP of the LSTM-based method, the MAP values of the other five methods are all above 30%. It can be shown that our proposed AGS provides reliable data support for the theoretical research of SER in the recognition of environmental sound events. In addition, from the practical results, the performance order (descending order) of these methods is: AST > Wavegram-Logmel-CNN > MobileNetV2 > Wavegram-Logmel-CNN-attention > DaiNet > LSTM-based Method. As shown in Table 6, the performance order (descending order) of these methods maintains the performance order in Table 5. ## 6 Conclusion and Future Work In this paper, we propose a novel domestic scene sound dataset (AGS) for sound event recognition, which fills in the shortcomings of visual scene monitoring in the time dependence of events in environmental monitoring and the monitoring of the intensity and magnitude of the action behavior of the subject and object. Our work addresses the problem of incomplete data disclosure in Audioset due to policy permissions, and provides reliable data support for spatio-temporal-constrained activity monitoring scenarios. Meanwhile, we also compare state-of-the-art deep learning models to establish baseline SER performance on AGS. The detailed and sufficient experimental results can be shown that our proposed AGS provides reliable data support for the theoretical research of SER in the recognition of environmental sound events. For future research work, we will continue to increase the amount of AGS data, and will consider exploring the generation of sound scene graphs based on AGS, and how to generate images or videos through environmental scene audio (focused on non-speech). \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Method & Ir & Train:Test & MAP & Iteration/Epoch & Comparison \\ \hline MobileNetV2 & 1e\({}^{-3}\) & 8:2 & 0.307 & 10,000 & \(\circ\)3 **–** \\ DaiNet & 1e\({}^{-4}\) & 8:2 & 0.248 & 15,000 & \(\circ\)5 **–** \\ Wavegram-Logmel-CNN & 1e\({}^{-4}\) & 8:2 & 0.322 & 10,000 & \(\circ\)2 **–** \\ Wavegram-Logmel-CNN-attention & 1e\({}^{-4}\) & 8:2 & 0.296 & 10,000 & \(\circ\)4 **–** \\ LSTM-based Method & 1e\({}^{-4}\) & 5:5 & 0.121 & 6 (epoch) & \(\circ\)6 **–** \\ AST & 1e\({}^{-4}\) & 7:3 & 0.348 & 100 (epoch) & \(\circ\)1 **–** \\ \hline \hline \end{tabular} (— indicates that there is no change in ranking compared to the experimental results in Table 5.) \end{table} Table 6: Comparison of Six Baselines for AGS using MAP
2303.17243
Shapley Chains: Extending Shapley Values to Classifier Chains
In spite of increased attention on explainable machine learning models, explaining multi-output predictions has not yet been extensively addressed. Methods that use Shapley values to attribute feature contributions to the decision making are one of the most popular approaches to explain local individual and global predictions. By considering each output separately in multi-output tasks, these methods fail to provide complete feature explanations. We propose Shapley Chains to overcome this issue by including label interdependencies in the explanation design process. Shapley Chains assign Shapley values as feature importance scores in multi-output classification using classifier chains, by separating the direct and indirect influence of these feature scores. Compared to existing methods, this approach allows to attribute a more complete feature contribution to the predictions of multi-output classification tasks. We provide a mechanism to distribute the hidden contributions of the outputs with respect to a given chaining order of these outputs. Moreover, we show how our approach can reveal indirect feature contributions missed by existing approaches. Shapley Chains help to emphasize the real learning factors in multi-output applications and allows a better understanding of the flow of information through output interdependencies in synthetic and real-world datasets.
Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Jesse Read
2023-03-30T09:19:24Z
http://arxiv.org/abs/2303.17243v1
# Shapley Chains: Extending Shapley Values to Classifier Chains ###### Abstract In spite of increased attention on explainable machine learning models, explaining multi-output predictions has not yet been extensively addressed. Methods that use Shapley values to attribute feature contributions to the decision making are one of the most popular approaches to explain local individual and global predictions. By considering each output separately in multi-output tasks, these methods fail to provide complete feature explanations. We propose Shapley Chains to overcome this issue by including label interdependencies in the explanation design process. Shapley Chains assign Shapley values as feature importance scores in multi-output classification using classifier chains, by separating the direct and indirect influence of these feature scores. Compared to existing methods, this approach allows to attribute a more complete feature contribution to the predictions of multi-output classification tasks. We provide a mechanism to distribute the hidden contributions of the outputs with respect to a given chaining order of these outputs. Moreover, we show how our approach can reveal indirect feature contributions missed by existing approaches. Shapley Chains help to emphasize the real learning factors in multi-output applications and allows a better understanding of the flow of information through output interdependencies in synthetic and real-world datasets. Keywords:Machine Learning Explainability Classifier Chains Multi-Output Classification Shapley Values. ## 1 Introduction A multi-output model predicts several outputs from one input. This is an important learning problem for decision-making involving multiple factors and complex criteria in the real-world scenarios, such as in healthcare, the prediction of multiple diseases for individual patients. Classifier chains [8] is one such approach for multi-output classification, taking output dependencies into account by connecting individual base classifiers, one for each output. The order of output nodes and the choice of the base classifiers are two parameters yielding different predictions thus different explanations for the given classifier chain. To address the lack of transparency in existing machine learning models, solutions such as SHAP [5], LIME [9], DEEPLIFT [11] and Integrated Gradients [12] have been proposed. Using Shapley values [10] is one approach to attribute feature importance in machine learning. The framework SHAP [5] provides Shapely values used to explain model predictions, by computing feature marginal contributions to all subsets of features. This theoretically well founded approach provides instance-level explanations and a global interpretation of model predictions by combining these local (instance-level) explanations. However, these methods are not suitable for multi-output configurations, especially when these outputs are interdependent. In addition, the SHAP framework provides separate feature importance scores only for independent multi-output classifiers. By assuming the independence of outputs, one ignores the indirect connections between features and outputs, which leads to assigning incomplete feature contributions, thus an inaccurate explanation of the predictions. Fig. 
1 is a graphical representation of a classifier chain: patients with two conditions, obesity (\(Y_{\mathsf{OB}}\)) and psoriasis (\(Y_{\mathsf{PSO}}\)), given four features: genetic components (\(X_{\mathsf{GC}}\)), environmental factors (\(X_{\mathsf{EF}}\)), physical activity (\(X_{\mathsf{PA}}\)) and eating habits (\(X_{\mathsf{EH}}\)). From a clinical point of view, all factors \(X\) are associated with both conditions \(Y\), obesity and psoriasis. However, since obesity is a strong feature for predicting psoriasis [4] (indeed, a motivating factor for using such a model is that predictive accuracy can be improved by incorporating outputs as features), it may mask the effects of other features. Namely, \(X_{\mathsf{PA}}\) and \(X_{\mathsf{EH}}\) will be found by methods such as SHAP applied to each output separately to have zero contribution towards predicting \(Y_{\mathsf{PSO}}\), and one might interpret that psoriasis is mainly affected by factors which cannot be modified by the patient (environment and genetics). The _indirect_ effects (physical activity and eating habits) will not be detected or explained. We propose Shapley Chains to address this limitation of incomplete attribution of feature importance in multi-output classification tasks by taking into account the relationships between outputs and distributing their importance among the features with respect to a given order of these outputs. Calculating the Shapley values of outputs helps to better understand the importance of the chaining that connects these outputs and to visualize the impact of this relationship on the prediction of subsequent outputs in the chain. For these subsequent outputs, the computation of the Shapley values of the associated outputs shows the indirect influence of some features through the chain, which is generally not intuitive and is missed by existing work. Our method will successfully explain these _indirect_ effects. By attributing importance to the features \(X_{\mathsf{PA}}\) and \(X_{\mathsf{EH}}\), Shapley Chains will help doctors to emphasize the importance of eating healthily and practicing physical activity in order to prevent and better cure psoriasis, instead of blaming only genetics and exterior environmental factors. This paper addresses the problem of attributing feature contributions in multi-output classification tasks with classifier chains when outputs are interdependent. Our contributions in this paper can be summarized as follows:

* We propose Shapley Chains, a novel post-hoc model-agnostic explainability method designed for multi-output classification tasks using classifier chains.
* Shapley Chains attribute feature importance to all features that directly or indirectly contribute to the prediction of a given output, by tracking all the related outputs in the given chain order.
* Compared to existing methods, we show a more complete distribution of feature importance scores in multi-output synthetic and real-world datasets.

We devote Section 2 to background and related work. In Section 3, we detail our proposed method, Shapley Chains. Finally, in Section 4, we run experiments on synthetic and real-world datasets. The results of our method compared to SHAP values applied to independent classifiers are then discussed.

## 2 Background and Related Work

In this section we review multi-output classification, output dependencies, classifier chains and Shapley values to serve as a background for the rest of this paper. The notation we use is summarized in the next table.
\begin{table} \begin{tabular}{|l|l|} \hline **Notation** & **Meaning** \\ \hline \(\mathbf{x}\) & a given instance vector \\ \(\mathbf{y}\) & a given output vector \\ \(x_{i}\) & the \(i^{th}\) feature of instance \(\mathbf{x}\) \\ \(y_{j}\) & the \(j^{th}\) output \\ \(X\) & the feature space of \(x_{i}\) \\ \(Y\) & the output space of \(y_{j}\) \\ \(n\) & the number of features for each instance \(\mathbf{x}\) \\ \(m\) & the number of outputs \\ \hline \end{tabular} \end{table} Table 1: Notation Figure 1: An example of a multi-output task: predicting \(Y\)-outputs from \(X\)-features. A classifier chain uses the first output \(Y_{\mathsf{OB}}\) as an additional feature to predict the second output \(Y_{\mathsf{PSO}}\). ### Multi-output classification and output dependencies A multi-output classifier \(\mathsf{H}\) is a mapping function that for a given instance \(\mathbf{x}\)=\(\{x_{1},x_{2},...,x_{n}\}\), such that \(\mathbf{x}\in X\), it learns a vector of base classifiers \(\mathsf{H}(\mathbf{x})=h_{1}(\mathbf{x}),h_{2}(\mathbf{x}),...,h_{m}(\mathbf{x})\) and returns a vector of predicted values \(\mathbf{y}=\{y_{1},y_{2},...,y_{m}\}\), with \(y_{j}\in\{0,1\}\) and \(\mathbf{y}\in Y\). In real-world applications, outputs can be dependent or independent. Designing classifiers that incorporate these output dependencies makes it possible to better represent the relationships in the data (between outputs, therefore between features and outputs). There are two types of output dependencies wrt subsequent outputs; namely marginal independencies, \(P(\mathbf{y})=\prod_{j=1}^{m}P(y_{j})\), and conditional output dependencies: \[P(\mathbf{y}|\mathbf{x})=\prod_{j=1}^{m}P(y_{j}|X,y_{1},...,y_{j-1}) \tag{1}\] In this article, we focus on output conditional dependencies. The nature of the relationship between features and outputs and between outputs is not restricted to causality. Therefore, no prior knowledge of the causal graph is necessary. This specific subject is partially covered in Shapley Flow [13], which is designed for single-output tasks. ### Classifier chains A classifier chain is one multi-output method that learns \(m\) classifiers (one classifier for each output, also referred as base classifier). All the classifiers are linked in a chain. The chaining method passes output information between classifiers, allowing this method to take into account output dependencies [7] when learning a given output in the chaining. This method is exactly an expression of Eq. 1, if expressed according to the chain rule of probability (i.e., Fig. 2 as a probabilistic graphical model representation). That is one reason why conditional dependencies are interesting in this context. However, a classifier chain is not faithful to a 'proper' inference procedure, and rather takes a greedy approach to inference, plugging in predictions as observations; and proceeds much as a forward pass across a neural network. This creates some ambiguity between how much effect is gained from probabilistic dependence (as a probabilistic graphical model would) and feature effect (as one encounters via the latent layers of deep learning). Although discussion has been ongoing e.g., [8, 7], there is not yet a consistent understanding in practice of what role a prediction plays as a feature to another label. By propagating output contributions among the features, Shapley Chains help to clarify these prediction roles, and confirm which outputs are interdependent using the Shapley value described in the next section. 
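As a concrete, hedged illustration of this chaining mechanism (a minimal Python sketch, not the exact implementation used later in our experiments), a classifier chain can be written so that classifier \(j\) is trained on the original features augmented with the previous outputs, and inference proceeds greedily by plugging in predictions:

```python
import numpy as np
from sklearn.base import clone

def fit_chain(base_estimator, X, Y):
    """Train one classifier per output; classifier j sees the original
    features plus the true labels of outputs 1..j-1, as in Eq. 1."""
    chain, X_aug = [], np.asarray(X, dtype=float)
    for j in range(Y.shape[1]):
        clf = clone(base_estimator).fit(X_aug, Y[:, j])
        chain.append(clf)
        X_aug = np.column_stack([X_aug, Y[:, j]])
    return chain

def predict_chain(chain, X):
    """Greedy forward pass: each prediction is appended as an extra feature
    for the next classifier (predictions are plugged in as observations)."""
    X_aug, preds = np.asarray(X, dtype=float), []
    for clf in chain:
        y_hat = clf.predict(X_aug)
        preds.append(y_hat)
        X_aug = np.column_stack([X_aug, y_hat])
    return np.column_stack(preds)
```

Scikit-learn's `ClassifierChain` provides the same behavior out of the box; the explicit version above is only meant to make the greedy inference step visible.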
### Shapley values

The Shapley value expresses the contribution of feature \(x_{i}\) to predict output \(y_{j}\) as a weighted sum:

\[\phi_{y_{j}}(x_{i})\ =\ \sum_{S\subseteq X\setminus\{i\}}\frac{|S|!\ (|X|-|S|-1)!}{|X|!}\ [f_{x}(S\cup\{i\})-f_{x}(S)] \tag{2}\]

where \(S\subseteq X\), and \(f_{x}\) is the value function that defines each feature's contribution to each subset \(S\). It computes each feature's average added value over each combination of features when making a prediction for instance \(\mathbf{x}\). Additivity is one axiom of a fair attribution mechanism that is satisfied by the Shapley value. It finds a good interpretation in multi-output classification. Consider two prediction tasks (\(X\), \(f\)), (\(X\), \(g\)) composed of the same set of features. We create a coalition prediction task \((X,f+g)\) by adding the two previous prediction tasks in the following way: \((f+g)(S)=f(S)+g(S)\) for all \(S\subseteq X\). The additivity axiom states that the allocation of the prediction \((X,f+g)\) will be equal to the sum of the allocations of the two original prediction tasks. One should note that this definition assumes that the two prediction tasks are completely independent, meaning that feature contributions to one prediction have no effect on the second one, which is not always the case because in real-world applications tasks are more often interdependent. One approach we propose is to use classifier chains, because they permit representing these relationships by introducing different chaining orders of the outputs. The overall feature Shapley values for a classifier chain can be calculated by marginalizing over all possible output chain structures. \(\forall c\in\mathsf{C}\), the Shapley value of \(x_{i}\) in Eq. 2 can be written as follows:

\[\phi_{y_{j}}(x_{i})=\frac{1}{|\mathsf{C}|}\sum_{c\in\mathsf{C}}\phi_{y_{j}^{c}}(x_{i}) \tag{3}\]

with \(\phi_{y_{j}^{c}}(x_{i})\) being the contribution of feature \(x_{i}\) to the prediction of \(y_{j}\) with respect to the given chaining order \(c\). For simplicity, we use \(\phi_{y_{j}}\) to refer to \(\phi_{y_{j}^{c}}\) in the rest of this paper. We report feature contributions for each chain structure independently to show the impact of different chaining orders and the marginalization over these orders in Section 4.1.

Figure 2: One example of a classifier chain structure

### Related work

The explainability of machine learning has been an active research topic in recent years. Several contributions have been made to explain single-output models and predictions. Inspecting feature importance scores of existing models is an intuitive approach that has been used in many studies. These feature importance scores are either derived directly from feature weights, in a linear regression for instance, or learned from feature permutations based on the decrease in model performance. Other more complex methods like LIME [9] learn a surrogate model locally (around a given instance) in order to explain the predictions of the initial model with simple and interpretable models like decision trees. On the other hand, DeepLift [11], Integrated Gradients [12] and LRP [6] are neural-network-specific methods proposed to explain deep neural networks. The SHAP framework is one popular method attributing Shapley values as feature contributions. It provides a wide range of model-specific and model-agnostic explainers.
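To make the weighting in Eq. 2 concrete, the brute-force sketch below enumerates all feature subsets for a user-supplied value function (here a hypothetical additive one); SHAP itself estimates this quantity with far more efficient approximations.

```python
from itertools import combinations
from math import factorial

def exact_shapley(value_fn, features):
    """Exact Shapley values of Eq. 2 for a set function value_fn(frozenset) -> float.
    The cost is exponential in len(features), so this is only usable for small sets."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value_fn(S | {i}) - value_fn(S))
        phi[i] = total
    return phi

# Hypothetical additive value function: each present feature adds its own weight,
# so the Shapley value of every feature should recover exactly that weight.
weights = {"x1": 2.0, "x2": -1.0, "x3": 0.5}
print(exact_shapley(lambda S: sum(weights[f] for f in S), list(weights)))
```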
Researchers have also proposed other Shapley-value-inspired methods incorporating feature interactions in the explanation process. For example, asymmetric Shapley values [3] incorporate causal knowledge into model explanations. This method attributes importance scores to features that do not directly participate in the prediction process (confounders), but fails to capture all direct feature contributions. On the other hand, on-manifold Shapley values [2] focus on better representing the out-of-coalition feature values but provide a misleading interpretation of feature contributions. Wang et al. [13] have proposed Shapley Flow, providing both direct and indirect feature contributions when a causal graph is provided. Restricting feature interactions to causality and assuming that the causal graph is provided and accurate are two downsides of this method. These methods significantly contributed to advancing the explainability of machine learning models, but none of them have tackled multi-output problems, more specifically when outputs are interdependent. Shapley Chains address this limitation.

## 3 Proposed Method: Shapley Chains

In this section, we introduce our approach to compute direct and indirect feature Shapley values for a classifier chain model. Note that our proposed method is model-agnostic, meaning that our computations do not depend directly on the chosen base learner used by the classifier chain. We want to compute feature contributions to the prediction of each output \(y_{j}\in Y\) for each instance \(\mathbf{x}\). For example, Fig. 3 shows the direct and indirect contributions of \(x_{i}\) to predict output \(y_{4}\) in the chain of Fig. 2. In the next two sections, we detail the computation of the Shapley value of each feature to predict each output. We refer to these Shapley values as direct and indirect feature contributions.

#### Direct contributions

The direct contributions are computed for features and outputs as in Eq. 2. Consider again the example of patients with the two conditions: psoriasis and obesity. For both \(Y_{\mathsf{OB}}\) and \(Y_{\mathsf{PSO}}\), we use the SHAP framework in order to compute the Shapley value of each feature: \(X_{\mathsf{GC}}\), \(X_{\mathsf{EF}}\), \(X_{\mathsf{PA}}\) and \(X_{\mathsf{EH}}\). This will attribute non-zero Shapley values to \(X_{\mathsf{GC}}\) and \(X_{\mathsf{EF}}\) to predict \(Y_{\mathsf{OB}}\) and \(Y_{\mathsf{PSO}}\) separately. On the other hand, \(X_{\mathsf{EF}}\) and \(X_{\mathsf{PA}}\) will have non-zero Shapley values to predict \(Y_{\mathsf{OB}}\) and zero values for the prediction of \(Y_{\mathsf{PSO}}\). The classifier chain method will add \(Y_{\mathsf{OB}}\) to the feature set to predict \(Y_{\mathsf{PSO}}\). By running the SHAP framework on this new set, \(Y_{\mathsf{OB}}\) will have a non-zero Shapley value because \(Y_{\mathsf{PSO}}\) depends on it. This Shapley value will then be attributed to the features that are correlated with \(Y_{\mathsf{OB}}\). The attribution mechanism of direct feature (and output) contributions can be generalized to the classifier \(\mathsf{H}\) with \(m\) base classifiers as shown in Algorithm 1. For the first output \(y_{1}\), we calculate the Shapley value of each feature according to Eq. 2, as done in the SHAP framework. This marginal value, over all possible subsets with which the feature can be associated, is the feature's contribution to predicting the first output \(y_{1}\).
For the second output \(y_{2}\), we append the predictions \(y_{1}\) made by the first classifier \(h_{1}\) to the feature set, and we train a second classifier \(h_{2}\) to learn the second output \(y_{2}\). We again use the SHAP framework to assign Shapley values to the features and the first output \(y_{1}\). Here, the feature set includes the first prediction. We perform the same steps for each remaining output. At each step, we calculate the Shapley values for the features and the previously predicted outputs that are linked via the chaining to the current output. At the final step, the feature set will contain \(n\) features and \(m\) outputs: \(X=\{x_{1},x_{2},...,x_{n},y_{1},y_{2},...,y_{m}\}\).

Figure 3: Representation of direct and indirect contributions for a dataset with 4 outputs (\(y_{1}\), \(y_{2}\), \(y_{3}\) and \(y_{4}\)). For example, the 4th output \(y_{4}\) has 7 indirect Shapley values (7 paths ending with a square leaf) and one direct Shapley value (one path ending with a circle leaf).

#### Indirect contributions

The indirect contribution \(\Phi_{indirect}y_{j}(x_{i})\) of \(x_{i}\) to predict \(y_{j}\) is the weighted sum of the direct contributions of all \(y_{k}\in Y\) that are chained to \(y_{j}\). \(\Phi_{indirect}y_{j}(x_{i})\) is computed according to Eq. 4:

\[\Phi_{indirect}y_{j}(x_{i})=\sum_{k=1}^{j-1}\Phi y_{j}(y_{k})\cdot Z_{k}(x_{i}) \tag{4}\]

where \(j>1\) and the function \(Z_{k}(x_{i})\) computes the weight vector for all paths from output \(y_{k}\) down to \(x_{i}\). For \(k>1\), with \(Z_{1}(x_{i})=W(y_{1},x_{i})\), \(Z_{k}(x_{i})\) is recursively computed as follows:

\[Z_{k}(x_{i})=\sum_{l=1}^{k-1}W(y_{k},y_{k-l})\cdot Z_{k-l}(x_{i})+W(y_{k},x_{i}) \tag{5}\]

where \(W(y_{k},y_{k-l})\) is the weight of \(y_{k-l}\) to predict the next output \(y_{k}\) (the direct contribution of \(y_{k-l}\) to predict \(y_{k}\)), and \(W(y_{k},x_{i})\) is the weight of \(x_{i}\) to predict \(y_{k}\) (the direct contribution of \(x_{i}\) to predict \(y_{k}\)). The weights \(W(y_{k},y_{k-l})\) and \(W(y_{k},x_{i})\) are calculated according to:

\[W(y_{k},.)=\frac{|\Phi y_{k}(.)|}{\left(\sum_{q=1}^{n}|\Phi y_{k}(x_{q})|+\sum_{p<k}|\Phi y_{k}(y_{p})|\right)} \tag{6}\]

where \(\Phi y_{k}(x_{q})\) is the direct contribution, as in Eq. 2, of each feature \(x_{q}\) to predict \(y_{k}\). Here \(p<k\) means that output \(y_{p}\) precedes output \(y_{k}\) in the chain, forming a directed acyclic graph as illustrated in Fig. 2. For instance, in order to have a complete and fair distribution of feature importance for the prediction of \(Y_{\mathsf{PSO}}\), we compute the indirect Shapley values of the features \(X_{\mathsf{PA}}\) and \(X_{\mathsf{EH}}\). We do so by distributing the direct Shapley value of \(Y_{\mathsf{OB}}\) computed previously among the four features. By the distribution operation, we mean the multiplication of the direct Shapley value of each feature by the direct Shapley value of \(Y_{\mathsf{OB}}\), divided by the sum of the Shapley values of all features used to predict the same output (here \(Y_{\mathsf{OB}}\)). We generalize this mechanism of calculating indirect Shapley values to the chain structure in Fig. 3 in Algorithm 2. The first output \(y_{1}\) always has zero indirect Shapley values because there is no output that precedes it in the chaining. Thus, for the rest of this section, we compute feature indirect contributions for \(y_{j}\in\{y_{2},y_{3},...,y_{m}\}\).
For each output \(y_{j}\), there exists one direct path to the features, and thus one direct feature contribution, and \(2^{j}-1\) indirect paths for each feature.

```
1: procedure inContribution(\(X,Y,\Phi\))  \(\triangleright\) inputs, outputs, Shapley values of features and outputs
2:   \(i=j=0\)
3:   while \(j<len(Y)\) do
4:     while \(i<len(X)\) do
5:       compute \(W(y_{k},y_{k-l})\) and \(W(y_{k},x_{i})\) in Eq. 6
6:       compute \(Z_{k}(x_{i})\) in Eq. 5
7:   return \(\Phi_{indirect}y_{j}(x_{i})\) in Eq. 4  \(\triangleright\) returning indirect feature contributions
```

**Algorithm 2** Computing feature indirect contributions

One should notice that, for simplicity of understanding, we take the absolute value in Eq. 6. Thus, all the contributions will be positive. These absolute values can be replaced by the raw Shapley values in order to keep the positive or negative sign of feature contributions. Keeping the sign helps to understand whether the feature penalizes or favors the prediction.

## 4 Experiments

In order to assess the feature importance attributed by our proposed framework1 when explaining contributions to the prediction of multiple outputs with a classifier chain, we run experiments on both synthetic and real-world datasets: an _xor_ dataset that we describe next, and the Adult Income dataset from the UCI repository [1]. Here, we rely on human explanation to validate our results.

Footnote 1: [https://github.com/cwayad/shapleychains](https://github.com/cwayad/shapleychains)

### Synthetic data

To demonstrate our work, we first run experiments on a multi-output synthetic dataset containing two features (\(x_{1}\) and \(x_{2}\)) and three outputs (\(and\), \(or\) and \(xor\)) corresponding to the logical operations of the same names performed on \(x_{1}\) and \(x_{2}\). We split this dataset into 80% for training and 20% for testing our classifier. Next, we construct a classifier chain with the chaining order illustrated in Fig. 4 (a minimal reproduction of this setup is sketched below). We use logistic regression as the base learner. Our method is model-agnostic, meaning that it can be applied to a classifier chain with any other base learner. The use of logistic regression as the base learner to predict \(xor\) is justified by the accuracy that this model achieves compared to other classifiers like decision trees. The classifier chain is trained on the train set using \(x_{1}\) and \(x_{2}\) to predict \(and\) and \(or\) separately. Then, we append these two predicted outputs to the feature set in order to predict \(xor\). Here, the order in which we predict \(and\) and \(or\) does not change our method's behavior.

Figure 4: The classifier chain structure for \(xor\) data. \(X\) is the set of features \(x_{1}\) and \(x_{2}\). \(and\), \(or\) and \(xor\) are the outputs for which we want to compute direct and indirect Shapley values.

Figure 5: A comparison of SHAP applied on independent classifiers and Shapley Chains. From left to right: (\(a\)) and (\(b\)) normalized direct and indirect feature contributions made by Shapley Chains to predict \(and\), \(or\) and \(xor\) for chain orders [\(and\), \(or\), \(xor\)] and [\(or\), \(and\), \(xor\)]; (\(*\)) SHAP assigns contributions to \(x_{1}\) and \(x_{2}\) only to predict the \(and\) and \(or\) outputs and completely misses their contributions to predict \(xor\). Absent colors refer to null Shapley values.
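For concreteness, the synthetic setup just described can be reproduced in a few lines; the sketch below uses scikit-learn's `ClassifierChain` as a stand-in for the implementation in our repository, and the sample size is only illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 2))                 # features x1, x2
Y = np.column_stack([X[:, 0] & X[:, 1],                # and
                     X[:, 0] | X[:, 1],                # or
                     X[:, 0] ^ X[:, 1]])               # xor
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Chain order [and, or, xor]: the xor classifier sees x1, x2 plus the
# (predicted) and/or labels, which makes xor linearly separable for the
# logistic-regression base learner.
chain = ClassifierChain(LogisticRegression(), order=[0, 1, 2], random_state=0)
chain.fit(X_tr, Y_tr)
print("per-output test accuracy:", (chain.predict(X_te) == Y_te).mean(axis=0))
```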
To explain the influence of \(x_{1}\) and \(x_{2}\) on the prediction of \(xor\), we compared the application of the framework SHAP on each classifier independently and Shapley Chains on the trained classifier chain. We report our analysis on the test data. The results of the comparison shown in Fig. 5 indicate that the output chaining propagates the contributions of \(x_{1}\) and \(x_{2}\) to predict \(xor\) via \(and\) and \(or\). Specifically, Fig. 5\((a)\) and Fig. 5\((b)\) illustrate that our method detects the indirect contributions of \(x_{1}\) and \(x_{2}\) (indirect_xor) to predict \(xor\) thanks to the chaining of \(and\) and \(or\) to \(xor\) implemented with the classifier chain model, which tracks down all feature contributions through the chaining of outputs. Furthermore, Fig. 5\((a)\) and Fig. 5\((b)\) confirm that predicting \(or\) before \(and\) or vice versa does not affect the feature contributions attribution, which confirms the chain structure for this data. On the other hand, these contributions of \(x_{1}\) and \(x_{2}\) are completely neglected by the SHAP framework on independent classifiers (Fig. 5\((*)\)). #### 4.2.2 Impact of the chaining order on the classifier chain explainability In order to measure the impact of the chaining order on the explainability of our classifier chain model with Shapley Chains, we performed analysis on the \(3\,!=6\) possible output chaining orders in the synthetic dataset (scenarios (a) and (b) in Fig. 5 and scenarios (c), (d), (e) and (f) in Fig. 6). Figure 6: Possible output chaining orders for \(xor\) data. Normalized total feature contributions (direct and indirect Shapley values) for \(c\), \(d\), \(e\) and \(f\). The information known to the classifier chain when training each output changes depending on the order of these outputs. For instance, in scenarios \(a\) and \(b\) (Fig. 5), we first learn the two outputs \(and\) and \(or\) using \(x_{1}\) and \(x_{2}\) features. \(xor\) is then predicted using \(and\) and \(or\). Here, in both scenarios, both features \(x_{1}\) and \(x_{2}\) contribute indirectly (through \(and\) and \(or\)) to predict \(xor\). Meanwhile in the scenario \(c\) (or \(d\)), the model relies on \(and\)(or \(or\)), \(x_{1}\) and \(x_{2}\) to predict \(xor\). We observe that \(x_{1}\) and \(x_{2}\) have direct and indirect contributions, meaning that the classifier chain relies partially on these two features to predict \(xor\) (direct contributions of \(x_{1}\) and \(x_{2}\)), and on \(and\) (indirect contributions of \(x_{1}\) and \(x_{2}\) via \(and\)). The last two scenarios \(e\) and \(f\) show no contribution of \(x_{1}\) and \(x_{2}\) to predict \(xor\), which is explained by the fact that using only these two features, the model can not predict \(xor\) without having the information about the dependencies of \(xor\) to \(and\) and \(or\). These results show that the chain order of \(and\), \(or\) and \(xor\) outputs has an important role in the explainability of the classifier chain, because feeding different inputs to the classifier chain yields different predictions, thus different Shapley values are attributed to the features. \(x_{1}\) and \(x_{2}\) importance scores can either be derived from a direct inference of \(xor\) output only if there is additional information on output dependencies (for example \(and\) is linked to \(xor\)) or by extracting it from the chain that links \(and\) and \(or\) to \(xor\). 
In the absence of all output dependencies of \(and\) or \(or\) to \(xor\), the model completely ignores the importance of features \(x_{1}\) and \(x_{2}\) in the prediction of \(xor\).

### Explaining Adult Income with Shapley Chains

We run Shapley Chains on the UCI Adult Income dataset. This dataset contains over 32500 instances with 15 features. We first discretize the \(workclass\), \(marital\)\(status\) and \(relationship\) characteristics. We remove \(race\), \(education\) and \(native\)\(country\) and normalize the dataset with the min/max normalizer. Next, we split it into two subsets, using 80% for training and the remaining 20% for testing. We evaluated the Hamming loss of a classifier chain with different base learners and we kept the best base classifier, logistic regression in this case. In order to explain feature contributions to the predictions of the three outputs \(sex\), \(occupation\) and \(income\), we compared the results of Shapley Chains against classic Shapley values applied on separate logistic regression classifiers for different chain orders. Fig. 7 shows a graphical representation of normalized and stacked feature contributions when applying Shapley Chains on our dataset (Fig. 7.(a)), and stacked feature contributions from independent logistic regression classifiers (Fig. 7.(b)). In both cases, the magnitude of the feature contributions is greater in Shapley Chains compared to independent Shapley values, which confirms our initial hypothesis that some contributions are missed by the SHAP framework and can be detected when output dependencies are taken into account. For example, the number of hours worked in a week (\(hours.per.week\)) has a more important indirect contribution than direct contribution to predicting an individual's \(occupation\). This is explained by the fact that \(sex\) is related to \(occupation\), and this relationship is propagated to the features by Shapley Chains. \(relationship\) is another example of Shapley Chains detecting indirect feature contributions to predict \(occupation\). Furthermore, feature rankings are different in Shapley Chains. For example, the ranking of \(capital.gain\) comes in the fourth position (before \(workclass\)) using SHAP applied to independent classifiers. In our method, this feature's ranking is always less important (according to different chaining orders) than \(workclass\) to predict \(sex\), \(occupation\) and \(income\), which makes more sense to us. We also tested the impact of different chain orders of these three outputs on the feature importance attribution. Fig. 8 illustrates three different chaining orders. Each different order allows each classifier to use different prior knowledge to learn these outputs. For example, in Fig. 8(b), we first predict \(income\) and \(sex\) and we use this information to predict \(occupation\). Intuitively, \(occupation\) is correlated with an individual's \(sex\) and \(income\). The classifier chain uses this information, provided to the third classifier, to predict \(occupation\). Here, Shapley Chains attribute more importance to the factors that predict both \(income\) and \(sex\) when predicting \(occupation\). Shapley Chains generally preserve the order of feature importance scores across all the chaining orders, but the magnitude of each feature's importance differs from one chain to another. This is due to the prior knowledge that is fed into the classifier when learning each output.
In addition, these feature importance scores are always larger in Shapley Chains compared to the Shapley values of independent classifiers, for all chain orders.

Figure 7: (a) Direct and indirect Shapley values on Adult Income data: we normalize and stack each feature's direct and indirect contributions to each output. \(sex\) has only direct contributions because it is the first output we predict in this chain order. (b) Stacked Shapley values of independent classifiers on Adult Income data.

## 5 Conclusions and Perspectives

In this paper, we presented Shapley Chains, a novel method for calculating feature importance scores based on Shapley values for multi-output classification with a classifier chain. We defined direct and indirect contributions and demonstrated on synthetic and real-world data how the attribution of indirect feature contributions to the prediction is more complete with Shapley Chains. Our method helps practitioners to better understand the hidden influence of the features on the outputs by detecting indirect feature contributions hidden in output dependencies. Although the rankings of feature importance are not always different from independent feature importance scores, the magnitude of these scores is consistently larger with Shapley Chains, which matters in applications that are sensitive to the magnitude of these importance scores rather than their rankings. By extending the Shapley value to feature importance attribution for classifier chains, we make use of the output interdependencies that are captured by classifier chains in order to represent the real learning factors of a multi-output classification task. To extend this work, Shapley Chains could be evaluated on multi-output regression tasks. Exploring the type of relationship between the outputs, and studying whether Shapley Chains preserve all these relationships when attributing feature contributions, is another open question of our work.

Figure 8: Stacked direct and indirect feature effects for 3 different chain structures over Adult Income data.
2310.10414
Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising because it can allow histopathological analysis in the absence of an underlying invasive biopsy procedure. Here, we tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture. To our knowledge, this is the first multimodal translation of the brain MRI to histological volumetric representation of the same sample. The technique was assessed by training paired image translation models taking sets of images from MRI scans and microscopy. The use of cGAN for this purpose is challenging because microscopy images are large in size and typically have low sample availability. The current work demonstrates that the framework reliably synthesizes histology images from MRI scans of corpus callosum, emphasizing the network's ability to train on high resolution histologies paired with relatively lower-resolution MRI scans. With the ultimate goal of avoiding biopsies, the proposed tool can be used for educational purposes.
Monika Pytlarz, Adrian Onicas, Alessandro Crimi
2023-10-16T13:58:53Z
http://arxiv.org/abs/2310.10414v1
# Style Transfer Between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in Small Sample Size Settings

###### Abstract

Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising because it can allow histopathological analysis in the absence of an underlying invasive biopsy procedure. Here, we tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture. To our knowledge, this is the first multimodal translation of the brain MRI to histological volumetric representation of the same sample. The technique was assessed by training paired image translation models taking sets of images from MRI scans and microscopy. The use of cGAN for this purpose is challenging because microscopy images are large in size and typically have low sample availability. The current work demonstrates that the framework reliably synthesizes histology images from MRI scans of corpus callosum, emphasizing the network's ability to train on high resolution histologies paired with relatively lower-resolution MRI scans. With the ultimate goal of avoiding biopsies, the proposed tool can be used for educational purposes.

Monika Pytlarz, Adrian Onicas, Alessandro Crimi

Sano - Centre for Computational Personalised Medicine

Keywords: Multimodal image translation, Generative Adversarial Networks, Brain histology, MRI

## 1 Introduction

The human brain is a complex system. To inspect its multi-scale organization we require several technologies, some of which are very invasive. While an anatomical representation of the brain can be easily acquired safely and non-invasively with MRI, further characterization needs histological procedures and microscopy. A biopsy is an invasive procedure to perform; therefore, generating synthetic histology images complementary to MRI would be beneficial. On the other hand, comparing MRIs to the matching histology slices compensates for inevitably occurring distortions in the tissue during blocking and sectioning [1]. Histology offers great contrast at the microscopic scale due to the usage of dedicated stains that target distinct microanatomical or cytoarchitectural traits. Because of this extremely high resolution at distinct levels of magnification, a single .SVS file with one slice of a histology sample occupies almost 4 GB of data storage and requires dedicated software for display. Considerable contrast and resolution differences between MRI and histology, often coupled with potential inhomogeneous staining and sectioning artifacts, make the alignment of these two modalities a demanding inter-modality registration problem [1]. Even though there is a growing body of literature on the topic of combining histology with other modalities, producing brain datasets with registered modalities is very laborious, so the amount of data available is still not optimal for deep learning implementations. The motivation for choosing a GAN as the core of the framework is that GANs have produced outstanding results in image generation, image editing, and representation learning [2]. The concept of adversarial loss, which forces generated images to be indistinguishable from real photos, is critical to GANs' success. This loss is especially potent for image generation tasks, as this is precisely the goal that much of computer graphics seeks to optimize.
GANs' ability to learn image style is favorable in medical imaging: generating synthetic images has been examined for medical image registration [3], artifact correction and image quality enhancement [4], or translating MRI to computed tomography (CT) for multimodal settings [5]. The goal of the project was to apply a generative adversarial network (GAN) able to produce synthetic histology images from MRIs in small sample size settings and to learn from images of large high resolution microscopy digital slides in the Aperio SVS format. To our knowledge, despite the recent efforts to generate different modalities with GANs in MRI [6] or histology [7], this is the first multimodal translation of a brain MRI stack of slices to a histological volumetric representation of the same sample, which is challenging due to the completely different nature of the two modalities. We are not claiming that, thanks to our approach, MRI data can completely replace histological data in clinical settings. Nevertheless, this work contributes to saving time and avoiding invasive histologies in some cases.

## 2 Materials and Methods

### Dataset

We use a previously acquired multimodal dataset for which ethical approval was obtained by the authors [8]. More specifically, in this dataset MRI of post-mortem tissue has been performed and integrated with microscopy. Post-mortem scans provided whole human brain coverage with voxel sizes of 100-500 \(\upmu\)m. For this paper, the MRIs and glial fibrillary acidic protein (GFAP) maps from the 'Digital Anatomist' have been used. More specifically, the first \(b_{0}\) volume of the diffusion MRI (400 \(\upmu\)m isotropic at 7T), alongside coregistered histology (0.25 \(\upmu\)m in-plane), was used for three human corpus callosum specimens. More information on the data acquisition is available in [9]. The choice of using the \(b_{0}\) volume was dictated by the fact that no traditional T1 or T2 sequence was available; it can be hypothesized that, had such modalities been available, the results would be superior. We utilized 3 samples of GFAPs and 3 samples of MRI scans of the corpus callosum. The dataset is small and contains 5 training examples of whole slice view paired images and 3 examples of paired images for testing. Additionally, large differences in contrast, resolution, and type of details occur between MRI and histology, making this a non-trivial case of style transfer between images.

### Data preprocessing

Due to the listed challenges, before the data is fed into the model, it has to be preprocessed (Fig. 1). Preprocessing includes downsampling of the .SVS files, registering moving MRI slices to fixed histology slices, and tiling the histology into smaller patches. We considered 2 cases: i) downsampling whole images and performing the generation of the entire image; ii) creating patches of smaller size and generating the resulting images also in patches. Codes of the pipeline are provided: [https://github.com/octpsmon/Style-transfer-MRI-Histology-via-cGAN](https://github.com/octpsmon/Style-transfer-MRI-Histology-via-cGAN).

**SVS downsampling:** To facilitate further processing of the microscopy images and reduce memory usage while preserving the resolution, slices have been downsampled 15 times using the QuPath software for digital pathology image analysis [10].
**Image registration:** To align MRI and histology slices to a common coordinate system, intensity-based affine image registration was implemented using built-in MATLAB functions from the 'Register Multimodal MRI Images' toolbox. Sagittal slices of volumetric MRI brain representations exported from the ITK-SNAP [11] medical image viewer were loaded as moving images and GFAPs as fixed images. Downsampled GFAPs were of size around 6000x10000 pixels and MRIs 98x128. The mean Dice score evaluating the obtained alignment between fixed and moving images is 0.93.

**Splitting into patches:** Whole slide images are too large to fit on a GPU at once; instead, they are usually divided into smaller patches for training the deep learning model. Images on the order of 10000x10000 pixels have been cut into 1024x1024 and 256x256 non-overlapping tiles using the ImageJ plugin Slided [12]. Then the data was fed into the model within 3 experiments: 1) images as downsampled and scaled whole slice views of 4096x4096 (5 paired images for training, 3 for testing); 2) slices as groups of 1024x1024 patches (385 for training, 248 for testing); 3) slices as groups of 256x256 patches (5722 for training, 3819 for testing).

### Network architecture

A previously defined cGAN architecture has been adapted to translate MRIs to GFAPs [13]. Image-to-image translation aims at learning the mapping between an input image and an output image using a set of registered pairs of images. Given the nature of our dataset, the cGAN model was trained on paired data (each histology section paired and registered with MRI slices). The model has two architectures, one for the generator and the other for the discriminator, specifically an encoder-decoder U-net and a patchGAN. The discriminator's patchGAN architecture includes a number of transpose convolutional blocks. It examines an NxN section of an image to determine whether it is real or fake [13]. Conditional GANs learn the mapping from an observed image x and a random noise vector z to y, G: \(x,z\to y\). G, the generator, is trained to produce outputs that cannot be distinguished from real images by the discriminator. A discriminator, D, is adversarially trained to do as well as possible at spotting the generator's "fakes". Load sizes chosen for the 3 training experiments were: 4096, 1024, and 256. The objective adversarial loss can be expressed as:

\[\begin{split}&\mathcal{L}_{condGAN}(G,D_{y},X,Y)=\\ & E_{x,y}[\log D(x,y)]+E_{x,y}[\log(1-D_{y}(G(x)))],\end{split} \tag{1}\]

with \(D_{y}\) the corresponding discriminator of \(G\). Then, jointly using the L1 distance and considering the random noise vector \(z\), \(\mathcal{L}_{L1}(G)=E_{x,y,z}\left[\left\|y-G(x,z)\right\|_{1}\right]\), leads to the GAN objective:

\[\mathcal{G}^{*}=\operatorname*{arg\,min}_{G}\operatorname*{arg\,max}_{D}\mathcal{L}_{\text{condGAN}}(G,D_{y},X,Y)+\lambda\,\mathcal{L}_{L1}(G). \tag{2}\]

This was implemented as a cGAN in Python, based on PyTorch and inspired by the work of Isola et al. [13].

### Hyperparameter tuning

Lucic and colleagues [14] observed that GAN training is incredibly sensitive to hyperparameter settings. Therefore, several hyperparameters have been tested using a randomized search approach to select the ones which give the best results when training on the whole slide view, based on the calculated loss function scores as endpoints.

Figure 1: Image preprocessing pipeline. Whole slide histology images were downsampled, registered with MRIs, and tiled.
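As an illustration of the conditional objective in Eqs. (1)-(2) that is tuned here, a minimal PyTorch-style sketch with placeholder generator and discriminator modules could look as follows; it mirrors the 'vanilla' cross-entropy variant and is not the exact code of the pix2pix framework used in this work.

```python
import torch
import torch.nn.functional as F

def cgan_losses(G, D, mri, histo, lambda_l1=100.0):
    """One evaluation of a pix2pix-style objective: G maps the MRI slice x to a
    synthetic histology G(x); D scores channel-wise concatenated (x, y) pairs.
    Training alternates optimizer steps on d_loss and g_loss."""
    fake = G(mri)

    # Discriminator: real pairs (x, y) -> 1, generated pairs (x, G(x)) -> 0.
    d_real = D(torch.cat([mri, histo], dim=1))
    d_fake = D(torch.cat([mri, fake.detach()], dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

    # Generator: fool the discriminator and stay close to the target in L1.
    d_on_fake = D(torch.cat([mri, fake], dim=1))
    g_loss = (F.binary_cross_entropy_with_logits(d_on_fake, torch.ones_like(d_on_fake))
              + lambda_l1 * F.l1_loss(fake, histo))
    return d_loss, g_loss
```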
The applied L1 loss function measures the mean absolute difference between the generated image and the target image, helping to enforce pixel-level similarity between them. Default settings are: a U-net with 256 blocks as the network architecture, the cross-entropy loss function, a learning rate of 2x10-4, a beta1 momentum term of the Adam optimizer of 0.5, 100 epochs, and a weight for the L1 loss 'lambda_L1' of 100. The following parameters were varied within given options: the type of GAN objective (cross-entropy - 'vanilla', least squares - 'lsgan', Wasserstein distance - 'wgangp'), the number of epochs with the initial learning rate, the learning rate (2x10-4, 2x10-5), beta1 - the momentum term of the Adam optimizer (0.4, 0.5, 0.8), and lambda_L1 (10, 50, 100).

The cross-entropy GAN loss is a binary classification loss [15] used to train the discriminator in a GAN. The difference between the true label and the predicted label of a classification model is measured. In the context of a GAN, the true label is 1 for real data and 0 for generated data, and the predicted label is the output of the discriminator. The binary cross-entropy loss is calculated for each data point (real or generated) separately and then averaged over the entire batch of data.

The least squares loss function has been adopted from the Least Squares Generative Adversarial Networks (LSGANs) [16], where the least squares loss function replaces the binary cross-entropy. In LSGANs, the generator is trained to minimize a least squares loss function, which measures the difference between the discriminator's output on the generated data and a continuous target value. The LSGAN loss function is designed to overcome some of the instability issues associated with traditional GANs, such as mode collapse and slow convergence.

The Wasserstein distance [17], also known as the Earth Mover's Distance, measures the minimum energy cost of transforming one distribution into another. Wasserstein GANs, which use the Wasserstein distance as a loss function, have been shown to be effective in generating high-quality images that are similar to the real images in terms of their distribution. The Wasserstein Gradient Penalty Loss [18], or WGAN-GP loss, is a loss used for generative adversarial networks that augments the Wasserstein loss with a gradient norm penalty for random samples to achieve Lipschitz continuity. A Lipschitz continuous function is a mathematical concept that describes a function whose rate of change is bounded. Encouraging the discriminator in a GAN to have Lipschitz continuous gradients is beneficial because it helps to stabilize the training process and prevent mode collapse.

### Evaluation

Apart from the L1 loss function, generated images have been evaluated using two additional metrics: the Frechet Inception Distance (FID) [19] and the Learned Perceptual Image Patch Similarity (LPIPS) [20]. They provide a more comprehensive assessment of the quality of the generated images since they measure perceptual similarity rather than the statistical similarity between generated and real images. The FID score [19] assesses the quality of generated images by comparing the similarity of two datasets. It computes the Frechet inception distance using the Inception network to measure the distance between the generated image distribution and the real image distribution. The FID metric calculates the maximum entropy distribution for a given mean and covariance [21].
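As a practical note, FID is typically obtained from off-the-shelf implementations; a minimal sketch, assuming the torchmetrics package and toy uint8 image batches (not necessarily the tooling used in this work), is:

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# Toy stand-ins for real and generated histology patches: uint8 RGB tensors
# of shape (N, 3, H, W); the batch sizes here are only illustrative.
real = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())
```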
LPIPS [20] measures the distance between the feature representations of real and generated images. Using a pre-trained network, it extracts image features and calculates the Euclidean distance between the feature vectors. The authors of LPIPS argue that perceptual similarity is not a distinct function of its own, but rather the result of visual representations tuned to predict important world structure. Representations that perform well in semantic prediction tasks are also those in which Euclidean distance predicts perceptual similarity judgments very well.

Figure 2: Qualitative evaluation. Sample images from the 3 experiments, 2 for each of them; one example is included in each row. The top left and right rows are for the patch experiments, and the bottom row is for the full slice experiment.

## 3 Results

The model was trained on a supercomputer providing over 3.5 PFlops for the CPU partition and over 500 TFlops for the GPU partition. Training on whole slices took approximately 3h, on 1024x1024 patches 4h, and on 256x256 patches 8h. Table 1 presents the calculated FID and LPIPS scores comparing generated histologies to real (ground truth) histologies, and also generated histologies to real MRIs. Sample results for qualitative assessment are provided in Figure 2. The two best models selected from hyperparameter tuning use the following combinations of parameters: 1) U-net 256 blocks network, least squares GAN objective, 100 epochs, beta1 of 0.4, learning rate of 2x10-4, lambda_L1 of 10; 2) U-net 256 blocks network, least squares GAN objective, 100 epochs, beta1 of 0.8, learning rate of 2x10-4, lambda_L1 of 50.

## 4 Discussion

Calibrating the training with random search was beneficial. The best model obtained uses the least squares loss function, which corresponds with evidence that approaches adopted from LSGAN produce high-quality results in various image generation tasks [22, 23, 24]. The FID scores obtained by models trained on patches are better than the ones for the whole slice view. However, during inference in the particular setting that the project was implemented for - which is generating histology directly from the MRI - the best quantitative result was reached for the whole slice view (in relation to the default and tuned model). Jointly, the LPIPS scores show that the optimized model trained on the whole slice view works best in translating MRI to histology on unseen data. The LPIPS metric better reflects the positive impact of the hyperparameter optimization on model performance. In contrast to metrics that directly compare images pixel by pixel, the perceptual FID and LPIPS metrics, trained on deep features, tend to mimic human perception of similarity in images; still, when evaluating generative adversarial networks, a qualitative assessment of the results is needed. Sample output images show that when histology is produced from the whole slice view, the borders of the tissue are preserved and the structures are distinguishable. The features of the MRIs are translated almost as well from the 1024x1024 patches. Additionally, the texture of the synthetic histology is more detailed. The model trained on the smallest patches produces the lowest quality images among the 3 experiments. This size of tiles seems to be too small to provide enough context - especially from the MRI image, which intrinsically has a significantly lower resolution. Among the stack of the 256x256 tiles, the generated outputs are very light, poorly detailed, and rarely reflect even the sharp lines visible in the MRI.
The smallest crops used to train the network lack relevant information, and thus the training may fail: both the generator and the discriminator require information to process and may encounter issues if that information is not available. Even if the training is successful, when stitching all the different crops of a very high resolution image, the stylistic contribution of each small translated image can be insufficient for the entire high resolution image. The generator has no knowledge of the context of the entire high definition (HD) image and is only exposed to the lower resolution 256x256 crops. Giving the generator some encoded context about the entire image can certainly broaden the technique's range of applications, offering complex context-aware HD image translations. Since the generated 1024x1024 patches are satisfactory, and this size preserves the relevant elements of the medical image, it is possible that joining the tiles back together with respect to the initially encoded coordinates will already be a sufficient solution. Future work includes the comparison of GANs with diffusion models [25], and the estimation of tractography from both types of data using structural tensors [26].

## 5 Conclusion

We proposed a deep learning-based approach to synthesize a histology image directly from a brain MRI. The demonstrated method incorporates a cGAN framework. We showed that the model is capable of reliably learning the style from one modality and translating it to another, even when they are as different as histology and MRI. Preliminary results were promising, showing the network's ability to train on high resolution histologies paired with a relatively low-resolution MRI modality. It is probably too early for the method to be included in a clinical workflow that avoids histology, as this will require further improvements. Nevertheless, with the currently accomplished scores, the method can be reliably used for educational purposes, saving time for pathology laboratories.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Comparison** & \multicolumn{2}{c}{**Patches**} & \multicolumn{3}{c}{**Whole slice view (4096x4096)**} \\ \cline{2-6} & 256x256 & 1024x1024 & Default model & Best model & 2nd best model \\ \multicolumn{6}{c}{_FID score – train set_} \\ \hline Generated histology & **138.990** & **154.650** & 511.020 & 414.554 & 406.001 \\ vs real histology & & & & & \\ \hline Generated histology & **449.312** & **347.470** & 616.730 & 571.848 & 607.187 \\ vs real MRI & & & & & \\ \hline \multicolumn{6}{c}{_FID score – test set_} \\ \hline Generated histology & **163.710** & **235.750** & 630.390 & 511.664 & 549.092 \\ vs real MRI & & & & & \\ \hline \multicolumn{6}{c}{_LPIPS score – train set_} \\ \hline Generated histology & 0.468 & **0.369** & 0.546 & **0.416** & 0.435 \\ vs real MRI & & & & & \\ \hline Generated histology & 0.977 & 1.081 & **0.533** & 0.698 & **0.619** \\ vs real MRI & & & & & \\ \hline \multicolumn{6}{c}{_LPIPS score – test set_} \\ \hline Generated histology & **0.341** & 0.624 & 0.589 & **0.485** & 0.818 \\ vs real MRI & & & & & \\ \hline \multicolumn{6}{c}{_LPIPS score – train set_} \\ \hline Generated histology & 0.856 & 1.280 & **0.811** & 0.832 & **0.500** \\ \hline \hline \end{tabular} \end{table} Table 1: Mean FID and LPIPS scores – comparison of generated to real histologies (ground truth) and to real MRIs. Hyperparameters were tuned on the whole slice view. Bold indicates the 2 lowest FID and LPIPS values of the training and test sets for each comparison.
## 6 Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 857533 and from the International Research Agendas Programme of the Foundation for Polish Science No MAB PLUS/2019/13.
2302.07379
Charged lepton-flavor violating processes and suppression of nonunitary mixing effects in low-scale seesaw models
We examine the parameter space region of the inverse seesaw model that is consistent with neutrino oscillation data. We focus on the correlation between the current limits from the search for the $\mu\to e\gamma$ lepton flavor violating decay and the non-standard effects associated with the presence of new heavy neutrino states. Unlike what one would expect from an inverse seesaw model, we present a structure for the neutrino mass matrices in which the rates of charged lepton flavor-violating processes are negligible. Additionally, we provide a model based on symmetries for such a scenario.
J. C. Garnica, G. Hernández-Tomé, E. Peinado
2023-02-14T22:38:44Z
http://arxiv.org/abs/2302.07379v2
# cLFV processes and suppression of non-unitary mixing effects in low scale seesaw models. ###### Abstract We examine the parameter space region of the inverse seesaw model that is consistent with neutrino oscillation data. We focus on the correlation between the current limits from the search of the \(\mu\to e\gamma\) lepton flavour violating decay and the non-standard effects associated with the presence of new heavy neutrino states. Unlike what we would expect from an inverse seesaw model, we have found a parametrization for the mass matrices in which the rates of charged lepton flavour-violating processes are negligible. Additionally, we provide a model where the inverse seesaw is obtained naturally, and the mass matrices get this structure with negligible violation of the lepton flavour. ## 1 Introduction The seesaw mechanism offers an attractive scenario to explain the tiny Majorana neutrino masses. The suppression of the light masses is due to the tree-level exchange of very heavy fields, such as right-handed singlet fermions (type-I seesaw) [1, 2, 3, 4], scalar triplet (type-II seesaw) [5, 6, 7, 8, 9], or fermions triplet (type-III seesaw) [10]. Nevertheless, a direct experimental test of these high-scale scenarios might be impossible due to the decoupling of the new heavy particles. Alternatively, _low-scale seesaw models_, more specifically, the inverse [11, 12] and linear models [13] are well-motivated variants that open the possibility for a richer phenomenology at a new physics scale accessible to current experiments, such as the existence of new heavy neutrino states with masses at the TeV scale, as well as the presence of charged lepton flavour violating (cLFV) or lepton number violation (LNV) processes at sizeable levels [14, 15, 16, 17, 18, 19, 20, 21]. In this work, we analyzed the parameter space region of the so-called inverse seesaw (ISS) model consistent with the current data in the neutrino sector. We based our analysis on two possible parametrizations. In the first one, which we will call _Model A_, we focus on the correlation between non-unitary effects associated with the presence of new heavy neutrino states and the limits from the search for cLFV processes. The second case, _Model B_, presents a novel parameterization by assuming diagonal structures for the Dirac and heavy mass matrices, while all the structure comes from the lowest scale mass matrix. We have found that this latter case is very special since only the SM neutrinos contribute to cLFV processes. Moreover, the contribution from the heavy fermions vanishes exactly at the leading order, which goes against the typical assumption in a low-energy seesaw model. We verified our findings by using two methods, the perturbative block mass matrix diagonalization (BMDM) method presented in [22, 23] and our complete numerical diagonalization routine implemented in _Wolfram Mathematica_. The structure of this manuscript is as follows: Section 2 is devoted to introducing the general form of the mass matrix defining the seesaw models, the basic aspects of the BMDM, and presenting the \(\eta\) matrix that quantifies the non-unitary effects. After this, we diagonalised the neutrino mass matrix of the ISS model as a particular case of the general seesaw structure. Section 3 presents the analytical expressions, and the current experimental status of the cLFV decays \(\ell\rightarrow\ell^{\prime}\gamma\) (\(\ell=\mu\left(\tau\right)\), \(\ell^{\prime}=e\left(e,\mu\right)\)). 
In section 4, we present the numerical analysis associated with the phenomenology of _Models A_ and \(B\). We finish with a summary and conclusions in Sec. 5. ## 2 Seesaw models From a theoretical perspective, the Majorana nature of neutrinos is motivated by the scale suppression in the dimension 5 Weinberg operator [24], whose UV completion may rise from the seesaw models. In a seesaw model, besides the three active left-handed neutrinos \(\nu_{Li}\) (\(i=1,2,3\)) of the SM, the neutrino sector is extended by a number \(n\) of new singlets fields \(N_{Rk}\) (\(k=1,2,...,n\)) that allow both Dirac and Majorana mass terms: \[-{\cal L}_{\rm Mass}=\overline{\hat{\nu}_{Li}}\,({\cal M}_{D})_{ij}\,\hat{N}_{R_{ j}}+\frac{1}{2}\,\overline{\hat{\nu}_{Li}}\,({\cal M}_{L})_{ij}\,\hat{\nu}_{L_{j}}^{c} +\frac{1}{2}\,\overline{\hat{N}_{Ri}^{c}}\,({\cal M}_{R})_{ij}\,\hat{N}_{R_{j} }+{\rm h.c.}\,, \tag{2.1}\] where \(\psi^{c}\equiv C\overline{\psi}^{T}\) is the charge conjugated field and \(C\) is the charge conjugation matrix. Moreover, if the scalar sector consists only of the SM Higgs, then \({\cal M}_{L}\!=\!0\). Notice that in such a case, the total number of neutrino states is given by \(n^{\prime}\!\equiv\!(3+n)\). Eq. (2.1) can be written in a compact form, in the basis \(\hat{\chi}_{L}\!\equiv\!(\hat{\nu}_{L1},\hat{\nu}_{L2},\hat{\nu}_{L3},\hat{N}_ {R1}^{c},\ldots,\hat{N}_{Rn}^{c})\), as follows \[-{\cal L}_{\rm Mass}=\frac{1}{2}\;\overline{\hat{\chi}_{L}}\;{\cal M}\;\hat{ \chi}_{L}^{c}+{\rm h.c.}\,,\quad\mbox{where}\quad{\cal M}_{n^{\prime}\times n ^{\prime}}=\begin{pmatrix}0_{3\times 3}&{\cal M}_{D_{3\times n}}\\ {\cal M}_{D_{n\times 3}}^{T}&{\cal M}_{R_{n\times n}}\end{pmatrix}, \tag{2.2}\] where the hat in Eq. (2.2) stands for the fields in the flavour basis, i.e. \(\hat{\chi}_{Li}={\cal U}_{ij}^{\nu^{*}}\chi_{Lj}\), and \(\chi_{Lj}\) are the physical neutrino states *. The above structure defines the Type-I seesaw models, where the dimension of the sub-block matrices \({\cal M}_{D}\) and \({\cal M}_{R}\) is denoted by the subindices +, in such a way that the complete neutrino matrix \({\cal M}\) has dimensions \(n^{\prime}\times n^{\prime}\). Footnote *: Note also that in order to define all the mass states positive the matrix \({\cal U}^{\nu}\) can be multiplied by a diagonal matrix \(\sqrt{\lambda}\) of complex phases, this is equivalent to redefining the fields by \(\chi_{i}\to\chi_{Li}+\lambda_{i}\chi_{Li}^{c}\), where \(\lambda_{i}=\pm 1\) is the CP parity of the field \(\chi_{i}\). Footnote †: We use this notation in all the text when we consider necessary to clarify the dimensions of the matrices. ### Block matrix diagonalization method (BMDM) In the type-I seesaw, the heavy right-handed neutrino is integrated out. In this case, we are in the limit \(|{\cal M}_{D}|\,\ll\,|{\cal M}_{R}|\). It is possible to block-diagonalize the neutrino mass matrix up to terms of the order \({\cal M}_{R}^{-1}{\cal M}_{D}\) by a unitary matrix \(\,{\cal U}^{\nu}\) that connects weak and physical states as follows [22, 23] \[({\cal U}^{\nu})^{T}{\cal M}\;{\cal U}^{\nu}={\cal M}^{\rm diag},\quad\mbox{ where}\quad{\cal M}_{n^{\prime}\times n^{\prime}}^{\rm diag}=\begin{pmatrix}m_{\nu_{3 \times 3}}^{\rm diag}&0_{3\times n}\\ 0_{n\times 3}&M_{N_{n\times n}}^{\rm diag}\end{pmatrix}. 
\tag{2.3}\] In the above expression, \(m_{\nu_{3\times 3}}^{\rm diag}\equiv{\rm diag}(m_{1},m_{2},m_{3})\) is a sub-block diagonal matrix associated with the three light active states, while \(M_{N_{n\times n}}^{\rm diag}\equiv{\rm diag}(m_{N_{1}},m_{N_{2}},...m_{N_{n}})\) is a sub-block diagonal matrix defining the masses of \(n\) heavy states. The matrix \({\cal U}^{\nu}\) at leading order is approximated as [22, 23] \[{\cal U}^{\nu}_{n^{\prime}\times n^{\prime}}=U^{\nu}_{n^{\prime}\times n^{ \prime}}\cdot V_{n^{\prime}\times n^{\prime}}, \tag{2.4}\] with \[V_{n^{\prime}\times n^{\prime}}= \begin{pmatrix}V_{1_{3\times 3}}&0\\ 0&V_{2_{n\times n}}\end{pmatrix}, \tag{2.5}\] and \[U_{n^{\prime}\times n^{\prime}}^{\nu}= \begin{pmatrix}\mathbb{I}_{3\times 3}-\frac{1}{2}(\mathcal{M}_{D}^{ *}(\mathcal{M}_{R}^{*})^{-1}\mathcal{M}_{R}^{-1}\mathcal{M}_{D}^{T})_{3\times 3 }&(\mathcal{M}_{D}^{*}(\mathcal{M}_{R}^{*})^{-1})_{3\times n}\\ -(\mathcal{M}_{R}^{-1}\mathcal{M}_{D}^{T})_{n\times 3}&\mathbb{I}_{n\times n}- \frac{1}{2}(\mathcal{M}_{R}^{-1}\mathcal{M}_{D}^{T}\mathcal{M}_{D}^{*}( \mathcal{M}_{R}^{*})^{-1})_{n\times n}\end{pmatrix}, \tag{2.6}\] where \(\mathbb{I}\) denotes the identity matrix, while the matrices \(V_{1_{3\times 3}}\) and \(V_{2_{n\times n}}\) are unitary matrices connecting the flavour and physical states \[m_{\nu_{3\times 3}}^{\text{diag}}=(V_{1}^{T}m_{\nu}V_{1})_{3\times 3}, \qquad M_{N_{n\times n}}^{\text{diag}}=(V_{2}^{T}M_{N}V_{2})_{n\times n}. \tag{2.7}\] The matrices \(m_{\nu_{3\times 3}}\) and \(M_{N_{n\times n}}\) are given by \[m_{\nu_{3\times 3}}=-(\mathcal{M}_{D}\mathcal{M}_{R}^{-1}\mathcal{M}_{D}^ {T})_{3\times 3},\qquad M_{N_{n\times n}}=\mathcal{M}_{R_{n\times n}}. \tag{2.8}\] ### Non-unitarity effects The leptonic charged current characterizing a model with three generations of left-handed lepton doublets and \(n\) right-handed neutrino singlets can be written as follows [6, 14, 21] \[\mathcal{L}_{W^{\mp}}=-\frac{g}{\sqrt{2}}W_{\mu}^{-}\sum_{i=1}^{3}\sum_{j=1} ^{n}B_{ij}\,\overline{\ell_{i}}\,\gamma^{\mu}\,P_{L}\chi_{j}+\text{h.c.}, \tag{2.9}\] where \[B_{ij}=\sum_{k=1}^{3}\mathcal{U}_{ki}^{\ell*}\,\mathcal{U}_{kj}^{\nu}\,, \tag{2.10}\] defines the mixing in the leptonic sector, and it has a rectangular form with dimensions \(3\times n^{\prime}\) (\(n^{\prime}=3+n\) the number of total neutrino states). In eq. (2.10), \(\mathcal{U}_{3\times 3}^{\ell}\) is the matrix that diagonalizes the charged lepton mass matrix. It turns out helpful to rewrite the \(B_{ij}\) as follows \[B_{3\times n^{\prime}}\equiv(B_{L_{3\times 3}},\,B_{H_{3\times n}}), \tag{2.11}\] where \(B_{L_{3\times 3}}\) and \(B_{H_{3\times n}}\) are two sub-block matrices describing separately the flavour mixing between light and heavy lepton states, respectively. Therefore, working in the diagonal charged lepton mass basis (\(\mathcal{U}^{\ell}_{ik}=\delta_{ik}\)), we have that \[B_{L_{3\times 3}} =\left(\mathbb{I}_{3\times 3}-\frac{1}{2}(\mathcal{M}^{*}_{D}( \mathcal{M}^{*}_{R})^{-1}\mathcal{M}^{-1}_{R}\mathcal{M}^{T}_{D})_{3\times 3} \right)\cdot V_{1_{3\times 3}}, \tag{2.12}\] \[B_{H_{3\times n}} =\left(\mathcal{M}^{*}_{D}(\mathcal{M}^{*}_{R})^{-1}\right)_{3 \times n}\cdot V_{2_{n\times n}}, \tag{2.13}\] with \(V_{1}\) identified with the neutrino mixing matrix \(U_{PMNS}\). Furthermore, as with any general matrix, we can write it as the product of a Hermitian matrix and a unitary matrix. 
\(B_{L_{3\times 3}}\) can be rewritten in the following manner \[B_{L_{3\times 3}}=(\mathbb{I}_{3\times 3}-\eta_{3\times 3})\cdot V_{1_{3\times 3}}. \tag{2.14}\] In this way, when comparing eqs. (2.12) and (2.14), it is clear that the matrix that quantifies the deviation from unitarity of the light neutrino mixing matrix is \[\eta_{3\times 3}=\frac{1}{2}\left(\mathcal{M}^{*}_{D}(\mathcal{M}^{*}_{R})^{-1} \mathcal{M}^{-1}_{R}\mathcal{M}^{T}_{D}\right)_{3\times 3}. \tag{2.15}\] ### Inverse seesaw model (\(N_{r}=3,s=3\) case) A well-motivated variant of the usual (_high-scale_) type-I seesaw is the so-called inverse seesaw (ISS) model [11]. In this case, the smallness of the LNV parameter, \(\mu\), explains the lightness of neutrinos. This extra suppression of the light neutrino masses allows for heavy neutrino states with masses accessible at current collider energies. The ISS model requires extending the neutrino sector with right-handed singlet neutrinos \(N_{iR}\) and left-handed singlets \(S_{j}\). Here, we consider the case with three \(N_{R}\) and three \(S_{j}\) singlets++. The full neutrino mass matrix is [16] Footnote ‡: Reference [20] studied a minimal scenario with only two \(N_{iR}\) and two \(S_{j}\) neutrino states. \[\mathcal{M}^{\text{ISS}}_{9\times 9}=\left(\begin{array}{ccc}0_{3\times 3}&M_{D_ {3\times 3}}&0_{3\times 3}\\ (M^{T}_{D})_{3\times 3}&0_{3\times 3}&M_{3\times 3}\\ 0_{3\times 3}&(M^{T})_{3\times 3}&\mu_{3\times 3}\end{array}\right), \tag{2.16}\] with the hierarchy \(|\mu|\ll|M_{D}|\ll|M|\). We can generalize Eqs. (2.8), assuming that \(M\) is invertible and making the following identification \[\mathcal{M}^{\text{ISS}}_{D_{3\times 6}}=(M_{D_{3\times 3}},0_{3\times 3}),\quad \mathcal{M}^{\text{ISS}}_{R_{6\times 6}}=\begin{pmatrix}0_{3\times 3}&M_{3 \times 3}\\ M^{T}_{3\times 3}&\mu_{3\times 3}\end{pmatrix}, \tag{2.17}\] where the inverse matrix of \(\mathcal{M}_{R_{6\times 6}}^{\text{ISS}}\) is given by \[(\mathcal{M}_{R}^{\text{ISS}})_{6\times 6}^{-1}=\begin{pmatrix}-((M^{T})^{-1}\mu M ^{-1})_{3\times 3}&(M^{T})_{3\times 3}^{-1}\\ M_{3\times 3}^{-1}&0_{3\times 3}\end{pmatrix}. \tag{2.18}\] Using the BMDM for the inverse seesaw model, we have that \[m_{\nu_{3\times 3}}^{\text{ISS}}=(M_{D}(M^{T})^{-1}\mu M^{-1}M_{D}^{T})_{3 \times 3},\quad\text{and}\quad M_{N_{6\times 6}}^{\text{ISS}}=\mathcal{M}_{R_{6 \times 6}}^{\text{ISS}}. \tag{2.19}\] Furthermore, in the limit \(\mu\to 0\), the weak charged lepton is given by \[B_{3\times 9}^{\text{ISS}}=(B_{L_{3\times 3}}^{\text{ISS}},\,B_{H_{3\times 6}}^{ \text{ISS}}), \tag{2.20}\] with \[B_{L_{3\times 3}}^{\text{ISS}}=\left(\mathbb{I}_{3\times 3}-\eta_{3\times 3}^{ \text{ISS}}\right)\cdot V_{1_{3\times 3}},\qquad B_{H_{3\times 6}}^{\text{ISS}}= \left(0_{3\times 3},\,(M_{D}^{*}(M^{*T})^{-1})_{3\times 3}\right)\cdot V_{2_{6 \times 6}}. \tag{2.21}\] Whereas the \(\eta_{3\times 3}^{\text{ISS}}\) matrix is given by \[\eta_{3\times 3}^{\text{ISS}}=\frac{1}{2}(M_{D}^{*}(M^{T*})^{-1}M^{-1}M_{D}^ {T})_{3\times 3}. \tag{2.22}\] The two cases we will discuss in section 4 share the assumption that \(M\) is diagonal. In such a case, the matrix \(V_{2}\) in Eq. (2.21), required to determine the heavy physical states and their mixings, can be approximated by \[V_{2_{6\times 6}}=\frac{1}{\sqrt{2}}\begin{pmatrix}-\mathbb{I}_{3\times 3}& \mathbb{I}_{3\times 3}\\ \mathbb{I}_{3\times 3}&\mathbb{I}_{3\times 3}\end{pmatrix}\begin{pmatrix}i \cdot\mathbb{I}_{3\times 3}&0_{3\times 3}\\ 0_{3\times 3}&\mathbb{I}_{3\times 3}\end{pmatrix}. 
\tag{2.23}\] The \(i\) factor in the last matrix ensures that all masses are positive. ## 3 Charged Lepton flavor violation processes (cLFV) ### \(\ell\to\ell^{{}^{\prime}}\gamma\) decays The branching ratio formula of the cLFV processes \(\ell\to\ell^{{}^{\prime}}\gamma\), with \(\ell=\mu\left(\tau\right)\), \(\ell^{\prime}=e\left(e,\mu\right)\) neglecting the mass of the lighter-charged lepton, is given by [21] \[\text{BR}(\ell\to\ell^{{}^{\prime}}\gamma) =\frac{\alpha}{\Gamma_{\ell}}m_{\ell}^{3}\,|F_{M}^{\gamma}(0)|^{2}, \tag{3.1}\] \[F_{M}^{\gamma}(0) =\frac{\alpha_{W}}{16\pi}\frac{m_{\ell}}{M_{W}^{2}}\sum_{i}B_{ \ell i}^{*}\,B_{\ell^{{}^{\prime}}i}\,f_{M}^{\gamma}(x_{i}),\] \[f_{M}^{\gamma}=\frac{3x^{3}\log x}{2(x-1)^{4}}-\frac{2x^{3}+5x^{2}-x}{4(x-1)^{3}}+ \frac{5}{6}, \tag{3.2}\] where \(\alpha=e^{2}/4\pi\) is the fine structure constant, \(\alpha_{W}\equiv\alpha/s_{W}^{2}\), \(x_{i}\equiv m_{\chi_{i}}^{2}/M_{W}^{2}\) and \(m_{\chi_{i}}\) denotes the mass of all the physical neutrino states. We present these transitions' current and future limits in Table 1. ## 4 Numerical Analysis Let us now discuss the phenomenology of two different parametrizations of the ISS model that we call _scenarios A_ and \(B\). In the former, the matrices \(\mu\) and \(M\) in Eq. (2.19) are diagonals. Therefore all the structure comes from the Dirac mass matrix \(M_{D}\). On the other hand, in the _scenario B_, we consider that the matrices \(M_{D}\) and \(M\) are diagonals, and all the structure comes from the matrix \(\mu\). We give an ultraviolet completion for this model in Appendix A. _Scenario A_ The Casas-Ibarra parameterization [30] helps to write the Yukawa couplings in terms of the neutrino mass matrix and the other mass matrices in the model as follows [15, 16] \[M_{D_{3\times 3}}=\left(V_{1}^{*}\sqrt{m_{\nu}^{\rm diag}}R^{T}\left(\sqrt{ \mu}\right)^{-1}M^{T}\right)_{3\times 3}, \tag{4.1}\] with \(R\) a real \(3\times 3\) orthogonal matrix described by three arbitrary rotation angles \((\theta,\phi,\psi)\). Moreover, we work on the basis where \(M_{3\times 3}\) and \(\mu_{3\times 3}\) are real diagonal matrices \[M_{3\times 3} ={\rm diag}(M_{11},M_{22},M_{33})=v_{M}\cdot{\rm diag}(1+\epsilon _{M_{11}},1+\epsilon_{M_{22}},1+\epsilon_{M_{33}}) \tag{4.2}\] \[\mu_{3\times 3} ={\rm diag}(\mu_{11},\mu_{22},\mu_{33})=v_{\mu}\cdot{\rm diag}(1+ \epsilon_{\mu_{11}},1+\epsilon_{\mu_{22}},1+\epsilon_{\mu_{33}}). \tag{4.3}\] \begin{table} \begin{tabular}{l c c} \hline Process & Present Limit & Future Sensitivity \\ \hline \hline \(\mu\to e\gamma\) & \(4.2\times 10^{-13}\)[25] & \(6\times 10^{-14}\)[26] \\ \hline \(\tau\to e\gamma\) & \(3.3\times 10^{-8}\)[27] & \(3\times 10^{-9}\)[28] \\ \hline \(\tau\rightarrow\mu\gamma\) & \(4.4\times 10^{-8}\)[27] & \(10^{-9}\)[28] \\ \hline \hline \end{tabular} \end{table} Table 1: Present limits and future sensitivities for \(\ell\rightarrow\ell^{\prime}\gamma\) decays. In our analysis, we used sixteen parameters: three mixing angles \(\theta_{12},\,\theta_{23},\,\theta_{13}\) and one \(CP\)-violating phase \(\delta_{CP}\), three light neutrino masses in \(m_{\nu}^{\rm diag}={\rm diag}(m_{\nu_{1}},\,m_{\nu_{2}},\,m_{\nu_{3}})\), three rotation angles in the \(R\) matrix, three parameters for the \(\mu\) matrix, and three parameters for the diagonal \(M\) matrix. 
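As an illustration of the Casas-Ibarra construction of Eq. (4.1), the following NumPy sketch builds a Dirac mass matrix from toy inputs. The numerical values and the simplified rotation-matrix stand-in for the PMNS matrix are placeholders, not the scan ranges used in the analysis.

```python
# Casas-Ibarra parametrization of M_D, cf. Eq. (4.1), with toy (illustrative) inputs.
import numpy as np

def rotation(i, j, angle, n=3):
    """Elementary real rotation in the (i, j) plane."""
    R = np.eye(n)
    c, s = np.cos(angle), np.sin(angle)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = s, -s
    return R

def casas_ibarra_MD(V1, m_light, R, mu, M):
    """M_D = V1^* sqrt(m_nu^diag) R^T (sqrt(mu))^{-1} M^T."""
    sqrt_m = np.diag(np.sqrt(m_light))
    inv_sqrt_mu = np.linalg.inv(np.sqrt(mu))
    return V1.conj() @ sqrt_m @ R.T @ inv_sqrt_mu @ M.T

# Toy inputs (placeholders): masses and mass matrices in GeV, angles in radians.
V1 = rotation(0, 1, 0.59) @ rotation(1, 2, 0.79)        # crude stand-in for U_PMNS
m_light = np.array([1.0e-12, 8.6e-12, 50.0e-12])         # ~0.001, 0.0086, 0.05 eV
R = rotation(0, 1, 0.3) @ rotation(0, 2, 1.1) @ rotation(1, 2, 2.0)
mu = np.diag([1.0e-7, 1.2e-7, 0.9e-7])                   # ~100 eV entries
M  = np.diag([1.0e3, 1.1e3, 0.9e3])                      # ~1 TeV entries

MD = casas_ibarra_MD(V1, m_light, R, mu, M)
print(MD)   # entries should stay below ~175 GeV for perturbativity
```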
We have performed a random scan setting the scale \(v_{M}=1\) (10) TeV, and varying \(v_{\mu}\) in the range \([1,1000]\) eV, while we choose the rest of the free parameters as follows: * The three light neutrino masses, the three mixing angles, and the Dirac CP-violating phase associated with the active neutrino sector are considered in the range allowed by the current neutrino oscillation data [29] (see Table 2). * The angles \(\theta\), \(\phi\), \(\psi\) in the matrix \(R\) vary in the range \([0,2\pi]\). * The parameters \(\epsilon_{M_{ii}}\) (\(i=1,2,3\)) of the matrix \(M\) vary in the range \([-0.5,0.5]\). * The parameters \(\epsilon_{\mu_{ii}}\) (\(i=1,2,3\)) of the matrix \(\mu\) vary in the range \([-0.5,0.5]\). \begin{table} \begin{tabular}{c|c|c} Parameter & Normal ordering (\(3\sigma\) range) & Inverted ordering (\(3\sigma\) range) \\ \hline \hline \(\sin^{2}\theta_{12}\) & 0.271 - 0.369 & 0.271 - 0.369 \\ \hline \(\sin^{2}\theta_{23}\) & 0.434 - 0.610 & 0.433 - 0.608 \\ \hline \(\sin^{2}\theta_{13}\) & 0.02000 - 0.02405 & 0.02018 - 0.02424 \\ \hline \(\delta_{CP}/^{\circ}\) & 128 - 359 & 200 - 353 \\ \hline \(\frac{\Delta m_{21}^{2}}{10^{-5}{\rm eV}^{2}}\) & 6.94 - 8.14 & 6.94 - 8.14 \\ \hline \(\frac{|\Delta m_{31}^{2}|}{10^{-3}{\rm eV}^{2}}\) & 2.47 - 2.63 & 2.37 - 2.53 \\ \hline \hline \end{tabular} \end{table} Table 2: Neutrino mixing parameters used in our analysis [29]. We consider the value \(m_{\nu}=0.12/3\,(0.15/3)\) eV as a benchmark for the lightest neutrino mass, taking into account the cosmological limit for the total neutrino mass in the normal (inverted) ordering also reported in [29]. At this point, it is worth mentioning that we have done a cross-check of our results using both the BMDM described in section 2 and our complete numerical routine implemented in Wolfram Mathematica [31]. Given a numerical matrix of our scan in Eq. (2.16), we diagonalize it by demanding a high machine precision in extracting its eigenvectors. Then, the matrix \(B\) defining the charged lepton current is obtained directly from Eq. (2.9), with the sub-block matrices \(B_{L_{3\times 3}}\) and \(B_{H_{3\times 6}}\) formed by the first three columns and by the fourth to ninth columns of \(B\), respectively. Furthermore, the hermitian \(\eta^{\rm ISS}_{3\times 3}\) matrix is obtained directly from the relation \[B_{L}\cdot B_{L}^{\dagger}=\mathbb{I}-2\eta^{\rm ISS}+\mathcal{O}\left((\eta^{\rm ISS})^{2}\right). \tag{4.4}\] We have considered only points satisfying \(M_{D_{ij}}<175\) GeV to respect a perturbative limit. After the diagonalization of each numerical matrix in Eq. (2.16) of our scan, we observe in the left plot of Fig. 1 that there are points that easily exceed the current limit set by the MEG collaboration [25] for the branching ratio of the \(\mu\to e\gamma\) decay¶. Specifically, setting the scale \(v_{M}\approx 1\) (10) TeV, represented by the blue (purple) points, the scale \(v_{\mu}\) must satisfy \(v_{\mu}\gtrsim 50\) (100) eV to be compatible with both the current limits from \(\mu\to e\gamma\) and the data from neutrino oscillations. It is worth noticing that the future sensitivity expected from MEG II will be able to test the points between the solid and dashed black lines in Fig. 1. Footnote ¶: In these plots, we have found an excellent agreement between the BMDM and our exact numerical method, that is, the points of both methods almost overlap. Therefore, we show only the points obtained with our numerical routine.
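The cross-check just described can be illustrated with a short NumPy sketch (the routine used in the paper is implemented in Wolfram Mathematica; the version below is an independent illustration with toy input matrices): the full \(9\times 9\) matrix of Eq. (2.16) is diagonalized exactly and compared with the leading-order BMDM light-mass matrix of Eq. (2.19), and the \(\eta\) matrix of Eq. (2.22) is evaluated as well.

```python
# Exact diagonalization of the ISS mass matrix vs the leading-order BMDM formulas.
import numpy as np

MD = np.array([[ 40.0,   5.0,   1.0],
               [  8.0,  80.0,  12.0],
               [  2.0,  15.0, 120.0]])             # toy Dirac block (GeV)
M  = np.diag([1.0e3, 1.1e3, 0.9e3])                # heavy, lepton-number-conserving block (GeV)
mu = 1.0e-7 * np.array([[1.0, 0.2, 0.1],
                        [0.2, 0.8, 0.3],
                        [0.1, 0.3, 1.2]])          # small LNV block, symmetric (GeV)

# Full 9x9 mass matrix of Eq. (2.16); real and symmetric for these toy inputs.
Z = np.zeros((3, 3))
M_ISS = np.block([[Z,    MD,  Z ],
                  [MD.T, Z,   M ],
                  [Z,    M.T, mu]])

masses = np.sort(np.abs(np.linalg.eigvalsh(M_ISS)))     # physical masses = |eigenvalues|

# Leading-order BMDM light-mass matrix, Eq. (2.19), and eta matrix, Eq. (2.22).
Minv = np.linalg.inv(M)
m_light = MD @ Minv.T @ mu @ Minv @ MD.T
eta = 0.5 * MD @ Minv.T @ Minv @ MD.T

print("3 lightest masses, exact diagonalization:", masses[:3])
print("3 lightest masses, BMDM (Eq. 2.19)      :", np.sort(np.abs(np.linalg.eigvalsh(m_light))))
print("eta_12 (Eq. 2.22):", eta[0, 1])
# The two sets of light masses agree up to O((M_D/M)^2) corrections.
```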
Additionally, in the right plot of Figure 1, we show the effects of the \(CP\) violating phase \(\delta_{CP}\) on the estimation of the \(\mu\to e\gamma\) branching ratio. Something interesting to stress here is that more points tend to have a lower decay rate when \(CP\) is conserved than when the \(CP\) violation is maximal. Regarding the correlation between the non-unitary effects and the limits from the search of the \(\ell\to\ell^{\prime}\gamma\) decays, in Fig. 2, we show a plot for the branching ratio of the \(\mu\to e\gamma\) and \(\tau\to\ell^{\prime}\gamma\) (\(\ell^{\prime}=e,\mu\)) channels as a function of the absolute value of the elements of the \(\eta^{\rm ISS}_{3\times 3}\) matrix. We can see, as expected, that there is a stronger correlation between \(\mu\to e\gamma\) and \(\eta_{12}\) than with the other elements of the matrix \(\eta\). Similarly, with \(\tau\to\mu\gamma\) ( \(\tau\to e\gamma\)) and \(\eta_{23}\) (\(\eta_{13}\)). In fact, according to the current limits taken in our scan and setting the scale \(v_{M}\approx 1\) TeV, we have that the magnitude of non-unitary effects must be \(|\eta_{12}|\lesssim 10^{-5}\), \(|\eta_{13}|\lesssim 10^{-4}\), and \(|\eta_{23}|\lesssim 10^{-4}\) to respect the most restrictive limit coming from the \(\mu\to e\gamma\) channel. _Scenario B_ Here we consider another case where the Dirac and heavy neutrino mass matrices are diagonal. This is where \(M_{3\times 3}\) and \(M_{D_{3\times 3}}\) in Eq. (2.16) are real diagonal matrices \[M_{3\times 3}={\rm diag}(M_{11},M_{22},M_{33})=v_{M}\cdot{\rm diag}(1+\epsilon_ {M_{11}},1+\epsilon_{M_{22}},1+\epsilon_{M_{33}}), \tag{4.5}\] Figure 1: Branching ratio for the \(\mu\to e\gamma\) decay in the inverse seesaw model (_scenario A_). We scan the parameters associated with the neutrino oscillation data assuming the normal hierarchy values shown in Table 2. Then, we scan the other free parameters as explained in the main text. The blue (purple) points represent the results setting the scale \(v_{M}=1\) (10) TeV. The horizontal black solid (dashed) line represents the current limit on BR(\(\mu\to e\gamma\)) \(<4.2\times 10^{-13}\)[25] (future expected sensitivity BR(\(\mu\to e\gamma\)) \(<6\times 10^{-14}\)[26]), while the vertical dashed lines in the right plot represent the current limits on \(\delta_{CP}\) reported in [29]. \[M_{D_{3\times 3}}={\rm diag}(M_{D_{11}},M_{D_{22}},M_{D_{33}})=\frac{v_{SM}}{ \sqrt{2}}\cdot{\rm diag}(Y_{11},Y_{22},Y_{33}), \tag{4.6}\] Figure 2: Branching ratios for the \(\mu\to e\gamma\), \(\tau\to\mu\gamma\), and \(\tau\to e\gamma\) decays in the ISS model (_Model A_) as a function of the absolute value of the elements of the \(\eta^{\rm ISS}\). We performed a scan assuming a normal hierarchy for the neutrino oscillation data while we scanned the rest of the free parameters, as explained in the main text. Very similar plots are derived for the inverted hierarchy. Similar to Fig. 1, the blue (purple) points stand for the results setting the scale \(v_{M}=1\) (10) TeV. The horizontal black solid (dashed) lines represent the current limits (future sensitivities) on BR(\(\ell\to\ell^{\prime}\gamma\)) decays reported in Table 1. Figure 3: Branching ratio for the \(\mu\to e\gamma\) decay in the inverse seesaw model (_scenario B_) as a function of the scale \(v_{M}\). We explained the details of our scan in the main text. 
The black line represents the estimation at leading order in the BMDM, while the orange (grey) points correspond to our complete numerical result assuming the condition \(\mu_{ij}<1\) (10) MeV. In Eq. (4.6), \(v_{SM}=246.22\) GeV is the vacuum expectation value of the Higgs field. The \(\mu\) matrix is written in terms of \(M\), \(M_{D}\) and \(m_{\nu}\) as follows \[\mu=M^{T}M_{D}^{-1}m_{\nu}(M_{D}^{-1})^{T}M. \tag{4.7}\] In this way, once we give a mass matrix \(m_{\nu}\) compatible with the light neutrino masses and mixings, namely \[m_{\nu}=U_{PMNS}^{*}{\rm diag}(m_{1},m_{2},m_{3})U_{PMNS}^{\dagger}, \tag{4.8}\] we obtain a valid parameter space for the \(\mu\) matrix. In Appendix A, we give a possible realization of this parametrization based on an Abelian flavour symmetry. We have also performed a numerical scan for _scenario B_, considering elements of the matrix \(\mu_{ij}<1\,(10)\) MeV, grey (orange) points in Figure 3, in order to satisfy the condition \(|\mu|\ll|M_{D}|\). Here, we let the scale \(v_{M}\) vary from 1 to 100 TeV while: * The parameters associated with the neutrino oscillation data, as well as the \(\epsilon_{M_{ii}}\) (\(i=1,2,3\)) of the matrix \(M\), vary as before. * The Yukawa entries \(Y_{ii}\) (\(i=1,2,3\)) in the matrix \(M_{D}\) run in the range [-0.5, 0.5]. Figure 3 shows that the results of the exact numerical estimation for the rate of the \(\mu\to e\gamma\) decay can be some orders of magnitude higher for some points in our scan compared with the simple estimation given by the BMDM. In any case, this plot corroborates the fact that, in this parametrization, the contributions of the heavy neutrinos to the branching ratio of the \(\mu\to e\gamma\) decay remain very far from the current and future experimental searches. ## 5 Conclusions Neutrino oscillations are one of the first pieces of evidence of new physics beyond the original formulation of the Standard Model. Consequently, this evidence raises questions regarding neutrino masses and their Dirac/Majorana nature. There are various massive neutrino models proposed in the literature. Among these, the inverse seesaw model is currently one of the most popular. The idea behind these scenarios is that the physics responsible for neutrino masses could lie at the TeV scale. Such a scenario leads to a testable phenomenology at current or future colliders, for instance, through the search for cLFV processes. Our study explores two different parametrizations of the ISS model that can accommodate the current neutrino oscillation data but with two entirely different phenomenologies due to the non-unitarity of the light neutrino mixing matrix. In the first case, cLFV processes take place at sizable levels. Indeed, to be consistent with the limits coming from the current most restrictive \(\mu\to e\gamma\) channel, we have found that if the scale of the new heavy states is around 1 TeV, the magnitude of the non-unitary effects must be \(|\eta_{12}|\lesssim 10^{-5}\). In the second case, we found that the contributions of the new heavy states to cLFV processes are negligible as a consequence of the approximate diagonal structure of the matrix that characterizes the non-unitary effects.
## Acknowledgements This work is supported by the Mexican grants CONACYT CB-2017-2018/A1-S-13051 and DGAPA-PAPIIT IN107621 and IN110622; We would like to thank Carlos Bunge for helpful discussions about numerical matrix diagonalization methods. GHT would like to thank PROGRAMA DE BECAS POSDOCTORALES DGAPA-UNAM. EP is grateful to funding from 'Catedras Marcos Moshinsky' (Fundacion Marcos Moshinsky). A model for the inverse seesaw based on a gauged \(U(1)_{B-L}\times Z_{5}\) symmetry A flavour symmetry can lead to a diagonal Yukawa Lagrangian by appropriately choosing the field representations. For example, in the case of three flavours, the smallest symmetry is \(Z_{3}\), using three different charges, one for each flavour. For the \(Z_{3}\) case, it is sufficient to include a flavon field that transforms non-trivially under the symmetry to generate all entries in the matrix. Therefore, it is possible to fit all light neutrino masses and mixings. Let us now consider a generic \(Z_{N}\) and fix \(\hat{N}\) according to the case of interest. Using a gauged \(U(1)_{B-L}\) symmetry, we will use three RH neutrinos \(\hat{N}_{R}\) with charge \(-1\) and three extra sterile fermions \(\hat{S}\) with charge \(0\), so that they will not contribute to the anomalies. Regarding \(Z_{N}\), the fermion fields \(\hat{L}_{i}\), \(\hat{N}_{i}\) and \(\hat{S}_{i}\), will transform as \(\omega^{i}\), with \(i=1,2,3\). To spontaneously break the \(U(1)_{B-L}\) and \(Z_{N}\) symmetries, we need to include two sets of scalar fields \(\phi\) and \(\xi^{**}\). We want to reproduce neutrino masses and mixings but also want some correlations between the observables. For this reason, we will use \(Z_{5}\) as flavour symmetry. We show the corresponding \(U(1)_{B-L}\times Z_{N}\) charges for the fields in Table 3. In this way, the full Yukawa Lagrangian is \[\mathcal{L}_{Yuk}=y_{i}^{(\ell)}\,\overline{\hat{L}_{i}}\,H\,\hat{\ell}_{Ri}+ y_{i}^{(\nu)}\,\overline{\hat{L}_{i}}\,\tilde{H}\,\hat{N}_{Ri}+Y_{i}^{(N)}\, \overline{\hat{S}_{i}}\,\hat{N}_{Ri}\,\phi+\frac{1}{2}\lambda_{ij}\,\overline {\hat{S}_{i}}\,\hat{S}_{j}^{\,c}\,\xi+\frac{1}{2}\mu_{i,j}\overline{\hat{S}_{i }}\,\hat{S}_{j}^{\,c}+\text{h.c.}\] where \[H=\begin{pmatrix}H^{+}\\ H^{0}\end{pmatrix},\quad L_{i}=\begin{pmatrix}\nu_{Li}\\ \ell_{i}\end{pmatrix},\] and \(\tilde{H}=i\sigma_{2}H^{*}\). Here we present a model where the light neutrino phenomenology is compatible with the current experimental data. The mass matrix takes the form of one of the two-zero textures, that is, the \(A_{1}\) in the nomenclature of [32] in which the elements \(m_{1,1}\) and \(m_{1,2}\) vanish. We obtain the most economical model compatible with this phenomenology by using the symmetry \(U(1)_{B-L}\times Z_{5}\) with the scalar field \(\phi\) transforming as \(\omega\) under \(Z_{5}\). In this way, the \(\mu\) matrix is given by \[\mu=\begin{pmatrix}0&0&\lambda_{1}\langle\xi\rangle\\ 0&\lambda_{2}\langle\xi\rangle&\mu_{1}\\ \lambda_{1}\langle\xi\rangle&\mu_{1}&\lambda_{3}\langle\xi^{*}\rangle\end{pmatrix}.\] (A.1) This structure in the \(\mu\) matrix will lead us to a light neutrino mass matrix compatible with the current experiments on neutrino oscillations and predicts a negligible neutrinoless double beta decay. In this way, we obtained an example of many models that one can construct with \(M_{D}\) and \(M\) diagonal. Notice that in this model, the \(U(1)_{B-L}\) can be local, implying the presence of a new \(Z^{\prime}\) gauge boson. 
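A quick numerical illustration of the statement above: with \(M_{D}\) and \(M\) diagonal, the zeros of the \(\mu\) texture of Eq. (A.1) are inherited by the light neutrino mass matrix of Eq. (2.19), giving the two-zero texture \(A_{1}\) with vanishing (1,1) and (1,2) entries. The nonzero entries in the sketch below are arbitrary toy numbers.

```python
# Check that the texture zeros of mu propagate to m_nu when M_D and M are diagonal.
import numpy as np

MD = np.diag([10.0, 40.0, 90.0])          # GeV, diagonal by the Z_5 charge assignment
M  = np.diag([1.0e3, 1.1e3, 0.9e3])       # GeV
mu = np.array([[0.0, 0.0, 2.0],
               [0.0, 3.0, 1.5],
               [2.0, 1.5, 4.0]]) * 1e-7   # GeV, texture of Eq. (A.1), toy values

Minv = np.linalg.inv(M)
m_nu = MD @ Minv.T @ mu @ Minv @ MD.T     # Eq. (2.19)
print(np.round(m_nu / np.abs(m_nu).max(), 3))   # the (1,1) and (1,2) entries stay zero
```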
\begin{table} \begin{tabular}{c|c|c}  & \(U(1)_{\text{B-L}}\) & \(Z_{5}\) \\ \hline \hline \(\hat{L}_{i}=\begin{pmatrix}\hat{\nu}_{Li}\\ \hat{\ell}_{Li}\end{pmatrix}\) & \(-1\) & \(\omega^{i}\) \\ \hline \(\hat{N}_{Ri}\) & \(-1\) & \(\omega^{i}\) \\ \hline \(\hat{S}_{i}\) & \(0\) & \(\omega^{i}\) \\ \hline \(\phi\) & \(-1\) & \(1\) \\ \hline \(\xi\) & \(0\) & \(\omega\) \\ \hline \hline \end{tabular} \end{table} Table 3: \(U(1)_{\text{B-L}}\times Z_{5}\) charges for the ISS fields and auxiliary \(\phi\), \(\xi\) fields.
2303.08363
Prolonged hysteresis in the Kuramoto model with inertia and higher-order interactions
The inclusion of inertia in the Kuramoto model has been long reported to change the nature of phase transition, providing a fertile ground to model the dynamical behaviors of interacting units. More recently, higher-order interactions have been realized as essential for the functioning of real-world complex systems ranging from the brain to disease spreading. Yet, analytical insights to decipher the role of inertia with higher-order interactions remain challenging. Here, we study the Kuramoto model with inertia on simplicial complexes, merging two research domains. We develop an analytical framework in a mean-field setting using self-consistent equations to describe the steady-state behavior, which reveals a prolonged hysteresis in the synchronization profile. Inertia and triadic interaction strength exhibit isolated influence on system dynamics by predominantly governing, respectively, the forward and backward transition points. This work sets a paradigm to deepen our understanding of real-world complex systems such as power grids modeled as the Kuramoto model with inertia.
Narayan G. Sabhahit, Akanksha S. Khurd, Sarika Jalan
2023-03-15T04:48:03Z
http://arxiv.org/abs/2303.08363v3
# Prolonged hysteresis in Kuramoto oscillators with inertia having higher-order interactions ###### Abstract The inclusion of inertia in the Kuramoto model has been long reported to change the nature of phase transition, providing a fertile ground to model the dynamical behaviors of interacting units. More recently, higher-order interactions have been realized as essential for the functioning of real-world complex systems ranging from brain to disease spreading. Yet, analytical insights to decipher the role of inertia with higher-order interactions remain challenging. To that end, this Letter studies Kuramoto oscillators with inertia on simplicial complexes, merging two research domains. We develop an analytical framework in a mean-field setting using self-consistent equations to describe the steady-state behavior, which reveals a prolonged hysteresis in the synchronization profile. Inertia and triadic interaction strength exhibit isolated influence on system dynamics by predominantly governing, respectively, the forward and backward critical points. This work sets a paradigm to deepen our understanding of real-world complex systems such as power grids modeled as the coupled Kuramoto model with inertia. The emergence of collective behavior in complex real-world systems has been a long-standing research interest [1]. It was initially in the landmark paper [2] that Kuramoto modeled the phenomenon of synchronization using a system of network-coupled oscillators in an analytically tractable setting, illustrating that the system underwent a second-order phase transition from incoherent to a coherent state. Since then, numerous works on various extensions of the Kuramoto Model have been done, revealing several phenomena [3; 4; 5; 6; 7]. Of particular interest to us is the Kuramoto Model with inertia (also known as the second-order Kuramoto model). Inspired by the modeling of synchronized flashing in _Peteropix malacce_ by _Ermentrout_[8], a second-order extension of the Kuramoto model was first proposed by _Tanaka et al._[9; 10]. They showed that the system underwent a first-order phase transition upon the introduction of inertia rather than the smooth second-order phase transition observed in the Kuramoto model. They put forth a self-consistent method akin to the one proposed by Kuramoto to study the steady-state behavior of the coupled oscillator system. Since then, the second-order Kuramoto model has been extensively explored in diluted networks [11] and various real-world complex systems like Josephson junctions [12] and power grids [13; 14; 15; 16; 17]. In [18], _Filatrella et al._ explained how the second-order Kuramoto model originates in power grids by simply accounting for power conservation at each node of the grid, rendering it more than just a mathematical convenience. However, all these results were obtained by focusing on the interactions to be purely dyadic in nature. Recent research highlights that such a reductionistic view might not reveal the complete picture of the underlying mechanism of exotic phenomena observed in some real-world complex systems where the interactions between agents are inherently higher-order in nature [19; 20; 21]. We focus on the results presented by _Skardal & Arenas_ in [22] where higher-order interactions were incorporated into the Kuramoto model, inducing abrupt (de)synchronization transition. 
It was naturally inquisitive to observe that adding inertia to the Kuramoto model or incorporating higher-order interactions independently led to a first-order phase transition in the system. Unsurprisingly we were interested in understanding how the interplay of inertia and higher-order interactions manifests itself in the system and affect the synchronization profile, which to the best of our knowledge, has not been explored before. To that end, in this Letter, we unify these two disparate fields by providing a generalized analytical framework motivated by [9] to study the steady-state behavior of coupled oscillator systems with inertia interacting via higher-order interactions. We study the inertial effects in the model proposed by [22] for globally coupled networks considering the simultaneous presence of dyadic and triadic interactions. Phases of \(N\)-coupled oscillators, each with mass \(m\), evolve based on the following coupled nonlinear equations. \[\begin{split} m\ddot{\theta_{i}}&=-\dot{\theta_{i }}+\omega_{i}+\frac{K_{1}}{N}\sum_{j=1}^{N}\sin(\theta_{j}-\theta_{i})\\ &+\frac{K_{2}}{N^{2}}\sum_{j=1}^{N}\sum_{k=1}^{N}\sin(2\theta_{j }-\theta_{k}-\theta_{i})\end{split} \tag{1}\] In Eq. 1, \(\theta_{i}\) and \(\dot{\theta}_{i}\) refer to the instantaneous phase and angular velocity of the \(i^{th}\) oscillator, respectively. \(\omega_{i}\) is the intrinsic frequency of the \(i^{th}\) oscillator derived from a unimodal symmetric probability distribution \(g(\omega)\) with mean \(\Omega\). The coupling constants \(K_{1}\geq 0\) and \(K_{2}\) are the dyadic and triadic coupling strengths, respectively. We decouple the differential equations in Eq. 1 and write them in terms of mean-field quantities by introducing the following general order parameter for \(p\in\{1,2\}\). \[r_{p}e^{i\psi_{p}}=\frac{1}{N}\sum_{j=1}^{N}e^{ip\theta_{j}} \tag{2}\] From the above definition, \(r_{1}\) measures the global phase coherence and can be interpreted as the centroid of phases of oscillators on a unit circle in the complex plane, and \(\psi_{1}\) measures the average phase of the oscillators. \(r_{2}\), referred to as the Daido order parameter [23] captures cluster synchronization. As we are interested in the steady state behavior of the system, we omit the time dependence in the definition of the general order parameter. In the incoherent state, the phases of the oscillators are scattered uniformly on the unit circle and hence \(r_{1}\approx r_{2}\approx 0\). Meanwhile, in the coherent state, a single group of oscillators is formed locked to the mean phase \(\psi_{1}\) rotating uniformly at angular velocity \(\Omega\), hence \(r_{1}\approx r_{2}\approx 1\). Using Eq. 2, Eq. 1 can be written in terms of mean-field quantities as, \[\begin{split} m\ddot{\theta}_{i}=-\dot{\theta}_{i}+\omega_{i}+K_ {1}r_{1}\sin(\psi_{1}-\theta_{i})\\ +K_{2}r_{1}r_{2}\sin(\psi_{2}-\psi_{1}-\theta_{i})\end{split} \tag{3}\] Because of the rotational symmetry in the model, the mean of the \(g(\omega)\) distribution can be set to zero by moving into the rotating frame at the frequency \(\Omega\). This can be facilitated by making the transformation \(\theta_{i}\rightarrow\theta_{i}+\Omega t\) in Eq. 1. Once in the rotating frame, by choosing appropriate initial conditions, \(\psi_{1}\) and \(\psi_{2}\) can be set to zero without loss of generality. Eq. 3, now takes the following form, \[m\ddot{\theta}_{i}=-\dot{\theta}_{i}+\omega_{i}-q\sin(\theta_{i}) \tag{4}\] Where, for the ease of notation, \(q=r_{1}(K_{1}+K_{2}r_{2})\). 
Note that for a fixed \(K_{2}\), Eq. 4 has three variables \(K_{1}\), \(r_{1}\) and \(r_{2}\). Hence, to chalk out the steady state behavior of Eq. 4, we develop a system of self-consistent equations and seek the values of \((K_{1},r_{1},r_{2})\) which simultaneously satisfy them. We start by taking the thermodynamic limit (\(N\rightarrow\infty\)); the coupled oscillator system in the steady state is then described by a probability density \(\rho(\theta,\omega)\) where for a given intrinsic frequency \(\omega\), \(\rho(\theta,\omega)d\theta\) represents the fraction of oscillators with their phase between \(\theta\) and \(\theta+d\theta\). The general order parameter in Eq 2 takes the following form in the continuum limit, \[r_{p}e^{i\psi_{p}}=\int_{-\pi}^{\pi}\int_{-\infty}^{\infty}e^{ip\theta}\rho( \theta,\omega)d\omega d\theta \tag{5}\] In the steady state, the oscillator population splits up into two groups depending on their intrinsic frequency. One group of oscillators is locked to the mean phase; meanwhile, the other oscillators drift over the locked oscillators. Hence the overall phase coherence (\(r_{p}\)) can be split into contributions from the locked (\(r_{p}^{l}\)) and drifting (\(r_{p}^{d}\)) oscillators, i.e, \(r_{p}=r_{p}^{l}+r_{p}^{d}\). Before calculating \(r_{p}^{l}\) and \(r_{p}^{d}\), we point out that systems whose motion is governed by Eq. 4 are known to depict hysteresis and have been well studied in [9; 10; 24]. For the sake of completeness, we briefly summarise the reason for the hysteretic behavior here. Dropping the subscript \(i\) and by introducing a new timescale \(\tau=\sqrt{\frac{q}{\tau}}t\), Eq. 4 is transformed to a second order differential equation with just two parameters as \(\ddot{\theta}=-\alpha\dot{\theta}+\beta-\sin(\theta)\), where \(\alpha=\frac{1}{\sqrt{q\tau}}\) is the damping term and \(\beta=\frac{\omega}{q}\). This equation has two fixed points, a saddle, and a sink for \(\beta<1\), obtained by setting \(\dot{\theta}=0\) and \(\ddot{\theta}=0\). The sink is a stable fixed point if \(\alpha\) is large enough or if \(\beta\) is close to one; otherwise, it is a stable spiral. At \(\beta=1\), the system undergoes a saddle-node bifurcation annihilating the two fixed point solutions and admitting a unique stable limit cycle solution for all \(\beta>1\)[25]. However, it so happens that as we decrease the value of \(\beta\) to be less than one, the limit cycle persists for some small values of \(\alpha\). Hence, bistability exists in the system, where a stable limit cycle and a sink coexist. A further decrease in \(\beta\) will result in the Figure 1: (Color online) Prolonged Hysteresis. a) Schematic depiction of emerging collective behavior in the Kuramoto Model (KM). (\(a^{\prime}\)), (\(c^{\prime}\)), and (\(d^{\prime}\)) plot the usual behavior of KM [2] in the sole impression of higher-order [22] or inertia [9], whereas (\(b^{\prime}\)) illustrates a simultaneous forward and backward shift in the transition point upon introduction of \(m\) and \(K_{2}\) in KM (Eq. 1), revealing a prolonged hysteresis. The green arrow indicates the direction of the shift in the transition point. b) \(r_{1}\) versus \(K_{1}\) plot for \(K_{2}=1\) and \(m=1\) (blue-circles) and \(K_{2}=7\) and \(m=3\) (red-squares). Filled circles and squares represent the simulation results for the forward, and hollow circles and squares represent the backward processes. The dashed and continuous curves represent the forward and backward analytical predictions, respectively. 
disintegration of the limit cycle via a homoclinic bifurcation. Fig. 2a displays these three dynamical regimes in the \(\alpha-\beta\) parameter space. For small values of the damping term \(\alpha\), ensured by keeping finite inertia, the homoclinic bifurcation curve is seen to be approximated by a straight line Fig. 2a. Upon implementing Melnikov's method, [24; 26] the equation of the straight line comes out to be \(\beta=\frac{4}{\pi}\alpha\). In conclusion, we see the presence of three different dynamical regimes namely a fixed point (\(\beta<\frac{4}{\pi}\alpha\)), bi-stable region (\(\frac{4}{\pi}\alpha<\beta<1\)), and a limit-cycle (\(\beta>1\)) [24]. The bi-stable region turns out to be responsible for hysteresis in systems governed by equations like Eq. 4. Hence, following [9], instead of studying the system in its full generality, we break down the self-consistency analysis for our model into forward (\(f\)) and backward (\(b\)) processes. In the forward process, we start from a small \(K_{1}\) value, and therefore the system is in an incoherent state (\(r_{1}\approx 0\)). This leads to high \(\alpha\) and \(\beta\) values, indicating that the oscillators are in the limit cycle regime. As we adiabatically increase \(K_{1}\), the oscillators stay in the basin of attraction of the stable limit-cycle even after crossing \(\beta=1(\omega=q)\) and fall into the locked cluster only after \(\beta=\frac{4}{\pi}\ \alpha(\omega=\frac{4}{\pi}\sqrt{\frac{q}{m}})\), below which the limit cycle vanishes. For the backward process, we start from a high \(K_{1}\) value and hence the oscillators exist in the fixed-point state, i.e., the oscillators are locked in a cluster (\(0<<r_{1}<1\)). As we adiabatically decrease \(K_{1}\) from this state, the oscillators remain in the basin of attraction of the sink until \(\beta=1\), when the fixed points vanish via a saddle node bifurcation. Thus, in the backward process, oscillators having \(|\omega|\leq q=\omega_{b}\) contribute to the locked oscillators, while in the forward process, only those with \(|\omega|\leq\frac{4}{\pi}\sqrt{\frac{q}{m}}=\omega_{f}\) are in a locked state and all the oscillators with \(\omega>\omega_{f,b}\) drift around the locked cluster. We point out that \(K_{2}\) is concealed in \(q\) and hence directly affects the fraction of oscillators that are in a locked or drifting state. The contribution of the locked oscillator(\(r_{p}^{l}\)) to overall coherence for the forward/backward process can now be calculated as \(r_{p}^{l}=\int_{-\omega_{f,b}}^{\omega_{f,b}}e^{ip\sin^{-1}(\frac{\omega}{q})} g(\omega)d\omega\). The imaginary part of \(r_{p}^{l}\) goes to zero as \(g(-\omega)=g(\omega)\). Hence taking only the real part and noting that \(\theta_{f,b}=\sin^{-1}(\omega_{f,b}/q)\), we arrive at the expression for \(r_{p}^{l}\) as follows, \[r_{p}^{l}=q\int_{-\theta_{f,b}}^{\theta_{f,b}}\cos(\theta)\cos(p\theta)g(q\sin (\theta))d\theta \tag{6}\] The contribution to overall coherence from the drifting oscillators can be accounted for by calculating \(r_{p}^{d}=\int_{|\omega|>\omega_{f,b}}\int_{-\pi}^{\pi}e^{ip\theta}\rho_{d}( \theta,\omega)g(\omega)d\omega d\theta\) where \(\rho_{d}(\theta,\omega)\) is the density of drifting oscillator which satisfies \(\rho_{d}(\theta,\omega)\propto 1/\dot{\theta}\)[9]. 
The normalization condition for \(\rho_{d}(\theta,\omega)\) gives, \(\int_{-\pi}^{\pi}\rho_{d}(\theta,\omega)d\theta=\int_{0}^{T}\rho_{d}(\theta, \omega)\dot{\theta}dt=1\) (for a given \(\omega\)), where \(T\) is the time period of the whirling limit cycle solution. Hence we end up with the relation \(\rho_{d}(\theta,\omega)=\frac{1}{\theta T}\), which when plugged into the form of \(r_{p}^{d}\) gives us, \[r_{p}^{d}=\int_{|\omega|>\omega_{f,b}}\left[\frac{1}{T}\int_{0}^{T}e^{ip\theta }dt\right]g(\omega)d\omega \tag{7}\] To calculate \(r_{p}^{d}\), we first need to obtain an approximate analytic expression for the whirling limit cycle solution. We follow the method specified in [27] of writing \(\dot{\theta}\) as a Fourier series in \(\theta\) by only considering the first harmonics (\(\dot{\theta}=A_{0}+A_{1}\cos(\theta)+B_{1}\sin(\theta)\)). On substituting this in \(\ddot{\theta}=-\alpha\dot{\theta}+\beta-\sin(\theta)\), we find the expression of the coefficients in terms of \(\alpha(=\frac{1}{\sqrt{qm}})\) and \(\beta(=\frac{\omega}{q})\) such that the first harmonic vanishes. This gives us \(\dot{\theta}=\frac{\beta}{\alpha}+\frac{\alpha^{2}}{\alpha^{4}+\beta^{2}} \left[\frac{\beta}{\alpha}\cos(\theta)-\alpha\sin(\theta)\right]\) and upon integrating \(\dot{\theta}\) with time, and choosing the constant of integration such that \(\theta(0)=0\), we end up with \(\theta=\frac{\beta t}{\alpha}+\frac{\alpha^{2}}{\alpha^{4}+\beta^{2}}\left[ \frac{\alpha^{2}}{\beta}(\cos(\frac{\beta t}{\alpha})-1)+\sin(\frac{\beta t}{ \alpha})\right]\)[27]. Notice that as \(\theta(t,-\omega)=-\theta(t,\omega)\) and \(g(-\omega)=g(\omega)\), the imaginary part in Eq. 7 goes to zero. Thus, \[r_{p}^{d}=\int_{|\omega|>\omega_{f,b}}\left\langle\cos(p\theta)\right\rangle g (\omega)d\omega \tag{8}\] The expression for \(\left\langle\cos(p\theta)\right\rangle\) (for \(p\in\{1,2\}\)) can now be readily calculated as \(\left\langle\cos(p\theta)\right\rangle=\frac{1}{T}\int_{0}^{T}\cos(p\theta)dt= \int_{0}^{2\pi}\frac{\cos(p\theta)}{\dot{\theta}}d\theta\bigg{/}\int_{0}^{2 \pi}\frac{1}{\dot{\theta}}d\theta\) to obtain \(\left\langle\cos(\theta)\right\rangle=\frac{\beta}{\alpha}\left[\sqrt{\frac{ \beta^{2}}{\alpha^{2}}-\frac{\alpha^{2}}{\beta^{2}+\alpha^{4}}}-\frac{\beta}{ \alpha}\right]\) and \(\left\langle\cos(2\theta)\right\rangle=\left[\frac{\beta^{2}-\alpha^{4}}{ \beta^{2}+\alpha^{2}}\right]\times\left[\frac{2\beta(\beta^{2}+\alpha^{4})}{ \alpha^{3}}\left(\frac{\beta}{\alpha}-\sqrt{\frac{\beta^{2}}{\alpha^{2}}- \frac{\alpha^{2}}{\beta^{2}+\alpha^{4}}}\right)-1\right]\). We are now finally ready to write down the set of self-consistent equations that lets us describe the steady state of the coupled oscillator system governed by Eq. 1. For the remainder of the work, we consider the intrinsic frequency to be derived from Lorentz distribution, \(g(\omega)=\frac{1}{\pi}\frac{1}{1+\omega^{2}}\) with mean zero. Noting that the integrands in Eqs. 6 and 8 for \(p\in\{1,2\}\) are even functions, we arrive at, \[\begin{split} r_{p}=2q\int_{0}^{\theta_{f,b}}&\cos( \theta)\cos(p\theta)g(q\sin(\theta))d\theta\\ &+2\int_{\omega_{f,b}}^{\infty}\left\langle\cos(p\theta)\right\rangle g (\omega)d\omega\end{split} \tag{9}\] These two equations together describe the steady-state behavior of the system. To find the nontrivial branch (for both forward and backward processes), we numerically solve the above set of self-consistent equations. Fig. 
1a provides a schematic representation of the synchronization profiles of our result in comparison to previously explored models [2; 9; 22]. Fig. 1b presents analytical and simulation results for the \(r_{1}\) vs. \(K_{1}\) curves for \((m,K_{2})=(1,1)\) and \((m,K_{2})=(3,7)\). As for the simulation protocol, we simulate Eq. 3 on a network of \(N=10^{4}\) nodes by splitting it into a pair of first-order differential equations and integrating them using the Runge Kutta 4 algorithm (time-step 0.1). For a chosen value of \(m\) and \(K_{2}\), we start with random initial conditions for \(\theta(\in[0,2\pi))\) and \(\hat{\theta}(\in[-1,1])\) and \(K_{1}=0\). We adiabatically increase \(K_{1}\) in steps of \(\Delta K_{1}\) (\(=0.1\), unless specified otherwise) till \(K_{1}=12\) is reached (forward), followed by an adiabatic decrease till \(K_{1}=0\) (backward). By adiabatic increase/decrease, we imply that for every \(K_{1}\) except the first (\(K_{1}=0\)), the initial conditions are taken as the final state obtained for the previous \(K_{1}\) value. At all coupling strengths \(K_{1}\), the order parameter values are calculated after discarding transients by averaging over the steady state. Fig. 1b displays a good agreement between the simulation and analytical results. For the forward process, as \(K_{1}\) is increased from zero, the system undergoes a first-order phase transition from incoherent to coherent state at a finite critical coupling value(\(K_{1}^{f}\)). However, for the backward process, the system undergoes abrupt desynchronization at a value(\(K_{1}^{b}\)), which is less than \(K_{1}^{f}\). Hence hysteresis is observed where the system stays in two different states depending on the initial condition. The derived self-consistency equations can also be used with other extended-tailed distributions like the Gaussian distribution. Also, a better fit in Fig. 1b between analytical and numerical values for the backward process in the strongly synchronized regime can be obtained by increasing the maximum value of \(K_{1}\) in the simulation protocol. We point out that when \(m\) and \(K_{2}\) values are both increased, \(K_{1}^{f}\) shifts to the right while \(K_{1}^{b}\) shifts to the left, revealing a prolonged hysteresis region as seen in Fig. 1b. A natural question would then be to address the dependency of the forward and backward transition points on \(m\) and \(K_{2}\). To analytically obtain the expression for the forward transition point(\(K_{1}^{f}\)), we evaluate Eq. 9 in the limit \(r_{1}\to 0^{+}\) (\(q\to 0^{+}\)). As we take this limit, we see that \(\beta/\alpha(=\omega\sqrt{\frac{m}{q}})\) tends to very high value as compared to \(\alpha^{2}/(\beta^{2}+\alpha^{4})(=\frac{qm}{1+\omega^{2}m^{2}})\). This allows us to perform a Taylor series expansion of \(\langle\cos(\theta)\rangle\) for \(\epsilon=\alpha^{2}/(\beta^{2}+\alpha^{4})<<1\) which gives, \(\langle\cos(\theta)\rangle=\frac{-\alpha^{2}}{2(\beta^{2}+\alpha^{4})}+ \mathcal{O}(\epsilon^{4})\approx\frac{-mq}{2(1+m^{2}\omega^{2})}\). However, in the limit \(r_{1}\to 0^{+}\), \(r_{2}\to 0^{+}\) and the parameter \(\alpha\rightarrow\infty\) implying that the limit of the integrals for the forward and backward processes become the same as there exists no bistability region in the parameter space. Taking \(\theta_{f,b}=\frac{\pi}{2}\), dividing both sides of Eq. 9 by \(q\), and evaluating the limit (at which the two equations in Eq. 
9 decouple) we have, \(\frac{1}{K_{1}^{f}}=\frac{\pi}{2}g(0)-m\int_{0}^{\infty}\frac{1}{1+m^{2} \omega^{2}}g(\omega)d\omega\). After evaluating the integral and rearranging the terms, we end up with \(K_{1}^{f}=2(m+1)\). We see that the forward transition point is independent of \(K_{2}\) and purely depends on \(m\) and hence, matches the previously derived value of the forward transition point in [7; 29]. Fig. 2b illustrates the effect of varying \(K_{2}(0.0,3.0,5.0)\) for the case of fixed \(m(=1)\). As expected, the forward critical coupling (\(K_{1}^{f}\)) remains the same for all three cases validating our analytical result. At this \(K_{1}^{f}(=4)\), the magnitude of the first-order jump for fixed \(m\) is seen to increase with the value of \(K_{2}\). In Fig. 2c, we study the effect of varying mass \((0.0,1.0,3.0)\) for the case of fixed \(K_{2}(=2.0)\). As inertia increases, \(K_{1}^{f}\) shifts to higher values as predicted analytically. However, we note that the analytically calculated values of \(K_{1}^{f}\) do not match exactly with numerical simulations owing to the finite size effects. A detailed study on the same has been done in [28]. A fairly good analytical approximation for \(K_{1}^{b}\), as also pointed out in [28], would be to obtain the minimum value of \(K_{1}\) along the non-trivial branch of the backward self-consistent curve. The simulation results in Fig. 1b and Fig. 2b and 2c are seen to back up this observation for our model. However, obtaining a clean analytical expression for the same by calculating \(\frac{dK_{1}}{dr_{1}}=0\) is not possible because of the complexity of the integrand of the drift oscillator contribution in \(r_{2}\). Figure 2: (Color online) a) \(\alpha(=1/\sqrt{qm})\)-\(\beta(=\omega/q)\) parameter space. Different dynamical regimes present in the \(\dot{\theta}\) vs. \(\theta\) phase space of \(\ddot{\theta}=-\alpha\dot{\theta}+\beta-\sin(\theta)\). b) Synchronization profile \(r_{1}\) versus \(K_{1}\) for \(m=1\) and different values of \(K_{2}=0\) (orange-squares), 3 (blue-triangles), and 5 (red-circles). c) Synchronization profile for a fixed value of \(K_{2}=2\) and different values of \(m=3\) (orange-squares), 1 (blue-triangles), and 0 (red-circles). In both b) and c), the hollow and filled symbols indicate the simulation results for the forward and backward cases, respectively. The dashed and continuous curves represent the analytically calculated values for the forward and backward processes, respectively. d) Backward transition points. The dashed curve represents the analytical predictions of \(K_{1}^{b}\) for \(m=0\) and different \(K_{2}\). The scatter plots are the \(K_{1}^{b}\) vs. \(K_{2}\) for different \(m=0\), 1, 5, 10 obtained via numerical simulation. Inset: The dashed curve is the analytical prediction for \(m=0\) till \(K_{2}=30\). The solid line is the linear fit for the dashed curve (from \(K_{2}=5\) to 30) having slope \(-0.65\) and intercept \(5.17\). Alternatively, we resort to simulation results to decipher the dependency of \(K_{1}^{b}\) on \(m\) and \(K_{2}\). From Fig. 2b, it can be seen that for the backward process, the coherent branch persists till increasingly smaller values of \(K_{1}\) with an increase in the \(K_{2}\) value, after which the system undergoes an abrupt transition to asynchrony. Hence it is clear that an increase in \(K_{2}\) leads to a decrease in \(K_{1}^{b}\). To study the effect of mass on \(K_{1}^{b}\), we fix \(K_{2}\) and vary \(m\) as in Fig. 2c.
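For reference, the adiabatic forward-backward protocol described earlier (Eq. 3 split into first-order equations and integrated with a fourth-order Runge-Kutta scheme) can be sketched as follows. The system size, step in \(K_{1}\), and averaging windows below are reduced for illustration; the simulations reported here use \(N=10^{4}\), a time step of 0.1, and \(\Delta K_{1}=0.1\).

```python
# Adiabatic forward/backward sweep of K1 for the mean-field model of Eq. (3).
import numpy as np

rng = np.random.default_rng(1)
N, m, K2, dt = 1000, 1.0, 1.0, 0.1
omega = rng.standard_cauchy(N)                      # Lorentzian frequencies, centered at zero

def derivs(theta, v, K1):
    z1 = np.mean(np.exp(1j * theta)); z2 = np.mean(np.exp(2j * theta))
    r1, p1, r2, p2 = abs(z1), np.angle(z1), abs(z2), np.angle(z2)
    acc = (-v + omega + K1 * r1 * np.sin(p1 - theta)
           + K2 * r1 * r2 * np.sin(p2 - p1 - theta)) / m
    return v, acc

def rk4_step(theta, v, K1):
    k1t, k1v = derivs(theta, v, K1)
    k2t, k2v = derivs(theta + 0.5 * dt * k1t, v + 0.5 * dt * k1v, K1)
    k3t, k3v = derivs(theta + 0.5 * dt * k2t, v + 0.5 * dt * k2v, K1)
    k4t, k4v = derivs(theta + dt * k3t, v + dt * k3v, K1)
    return (theta + dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

def sweep(K1_values, theta, v, transient=1500, average=500):
    r1_branch = []
    for K1 in K1_values:
        for _ in range(transient):                  # discard transients
            theta, v = rk4_step(theta, v, K1)
        acc = 0.0
        for _ in range(average):                    # time-average the order parameter
            theta, v = rk4_step(theta, v, K1)
            acc += abs(np.mean(np.exp(1j * theta)))
        r1_branch.append(acc / average)
    return np.array(r1_branch), theta, v

theta = rng.uniform(0.0, 2.0 * np.pi, N)
v = rng.uniform(-1.0, 1.0, N)
K1_up = np.arange(0.0, 12.0 + 1e-9, 1.0)            # coarser steps than the paper's 0.1
r1_fwd, theta, v = sweep(K1_up, theta, v)           # forward branch
r1_bwd, _, _ = sweep(K1_up[::-1], theta, v)         # backward branch (state carried over)
```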
It is observed that the backward branches for different \(m\) values merge in the highly synchronized regime for fixed \(K_{2}\) (=2) and get separated in the weakly synchronized regime. The influence of \(m\) on the shape of the curve in the weakly synchronized regime indicates that \(K_{1}^{b}\) may depend on \(m\). It was shown in [28] that for the pure dyadic case (\(K_{2}=0\)), \(K_{1}^{b}\) decreases with an increase in \(m\), until it reaches a plateau for high \(m\) values. In Fig. 2d we address how this changes with the introduction of finite \(K_{2}\). The \(K_{1}^{b}\) obtained via simulation (performed for \(N=10^{3}\) nodes) for values of \(K_{2}\) ranging from 0 to 10 and different values of \(m\) (0, 1, 5, 10) are plotted. We see that for small values of \(K_{2}\) and finite inertia, an increase in the value of \(m\) leads to a decrease in \(K_{1}^{b}\). However, we point out that for higher values of \(K_{2}\), the effect of \(m\) on \(K_{1}^{b}\) becomes less pronounced, and desynchronization happens at the same value irrespective of mass. Following this observation, an analytical prediction of \(K_{1}^{b}\) becomes possible by considering the \(m=0\) case. We derive self-consistent equations for this case [30] and obtain the \(K_{1}^{b}\) values corresponding to a particular \(K_{2}\) by finding the minimum value of \(K_{1}\) in the self-consistency curve. These analytically calculated \(K_{1}^{b}\) values for the \(m=0\) case are represented by the dashed line in Fig. 2d. The detailed derivations are given in [30]. It can be clearly observed that for higher values of \(K_{2}\), the analytical predictions of \(K_{1}^{b}\) match closely with the ones obtained via simulation for different masses. In the inset of Fig. 2d we plot the analytical predictions for \(K_{1}^{b}\) for \(m=0\) till \(K_{2}=30\). For high values of \(K_{2}\) (\(>5\)), the curve is seen to be very well approximated by a straight line. We obtained a slope of \(-0.65\) and an intercept of \(5.17\) after performing a linear fit to the predicted \(K_{1}^{b}\) curve for high values of \(K_{2}\) (5-30). In conclusion, for \(K_{2}>5\), irrespective of the mass of the oscillators, the backward desynchronization point can be fairly well estimated by \(K_{1}^{b}\approx-0.65K_{2}+5.17\). In this Letter, we have put forward a generalized analytical framework to study the steady-state behavior of coupled oscillator systems with inertia interacting via higher-order interactions. The analytical predictions, which are backed up by numerical simulation, show a prolonged hysteretic first-order phase transition to a (de)synchronized state. We show that the forward transition point increases linearly with \(m\) and is independent of \(K_{2}\). Meanwhile, the backward transition point is seen to decrease linearly with \(K_{2}\) for high \(K_{2}\) values. We have presented the results for triadic interactions; however, it is easy to extend our analysis to other powers of higher-order interactions, as long as the sinusoidal coupling function contains only the \(\theta_{i}\) term. As an example, the detailed analysis involving quartic interactions is presented in [30]. Further note that developing the self-consistent method for other choices of higher-order coupling functions, such as \(\sin(\theta_{j}+\theta_{k}-2\theta_{i})\) [31; 32], along with pairwise coupling proves to be complicated because of the existence of higher-order harmonics in the mean-field equation. 
However, we have analyzed a model using the self-consistency method for the pure triadic interaction case for such a coupling function in [30]. An immediate future direction of our work would be to extend our analysis to diluted simplicial complexes, which can provide fundamental insights into the dynamics of various real-world complex systems such as power grids. ###### Acknowledgements. SJ gratefully acknowledges SERB Power grant SPF/2021/000136. The work is supported by the computational facility received from the Department of Science and Technology (DST), Government of India under the FIST scheme (Grant No. SR/FST/PSI-225/2016). SJ is thankful to Mehrnaz Anvari and Baruch Barzel for their comments on the manuscript.
2309.12360
Efficient Social Choice via NLP and Sampling
Attention-Aware Social Choice tackles the fundamental conflict faced by some agent communities between their desire to include all members in the decision making processes and the limited time and attention that are at the disposal of the community members. Here, we investigate a combination of two techniques for attention-aware social choice, namely Natural Language Processing (NLP) and Sampling. Essentially, we propose a system in which each governance proposal to change the status quo is first sent to a trained NLP model that estimates the probability that the proposal would pass if all community members directly vote on it; then, based on such an estimation, a population sample of a certain size is being selected and the proposal is decided upon by taking the sample majority. We develop several concrete algorithms following the scheme described above and evaluate them using various data, including such from several Decentralized Autonomous Organizations (DAOs).
Lior Ashkenazy, Nimrod Talmon
2023-09-04T13:30:31Z
http://arxiv.org/abs/2309.12360v1
# Efficient Social Choice via NLP and Sampling ###### Abstract Attention-Aware Social Choice tackles the fundamental conflict faced by some agent communities between their desire to include all members in the decision making processes and the limited time and attention that are at the disposal of the community members. Here, we investigate a combination of two techniques for attention-aware social choice, namely Natural Language Processing (NLP) and Sampling. Essentially, we propose a system in which each governance proposal to change the status quo is first sent to a trained NLP model that estimates the probability that the proposal would pass if all community members directly vote on it; then, based on such an estimation, a population sample of a certain size is being selected and the proposal is decided upon by taking the sample majority. We develop several concrete algorithms following the scheme described above and evaluate them using various data, including such from several Decentralized Autonomous Organizations (DAOs). ## 1 Introduction We consider the problem of Attention-Aware Social Choice, in which the desire of a digital community to govern itself through the participation of its community members conflicts with the limited resources of time and attention that are at the disposal of the community members. Indeed, direct democracy has a scalability limitation as community members cannot be subjected to a huge amount of decisions, which would reduce the quality of the individual decision making and thus of the collective decisions that are being made. Thus, there is a need for a decentralized governance system that is efficient, reliable, and resilient, at scale [11]. Below we list several natural approaches for attention-aware social choice. **Sampling-based solutions**: These are based on randomly choosing subsets of the community as ad-hoc committees. E.g., a simple sampling-based solution operating on a community of \(n\) agents proceeds by assigning a number of agents that are chosen uniformly at random for each proposal to change the status quo, allows these agents to vote on whether to accept or reject the proposal, and decides using majority vote. 
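For concreteness, the simple sampling-based baseline described above amounts to the following minimal sketch (the function name and the 0/1 vote encoding are our own illustrative choices, not the paper's code):

```python
import numpy as np

def sample_majority(votes, k, rng=np.random.default_rng()):
    """Decide a proposal by polling k community members chosen uniformly at random.
    `votes` is assumed to be a 0/1 array holding each member's position on the proposal;
    a tie within the sample is (arbitrarily) resolved in favour of accepting."""
    sample = rng.choice(np.asarray(votes), size=k, replace=False)
    return sample.mean() >= 0.5  # True = accept, False = reject
```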
algorithms for computing election winners when votes arrive in a stream. Dey et al. [9] consider similar algorithms but for multiwinner voting rules that aim at proportional representation. Our use of sampling is as one module in our overall architecture that aims at solving attention-aware social choice. ### Natural Language Processing Natural Language Processing (NLP) - a sub-field of artificial intelligence and linguistics - deals with problems that relate to the processing, manipulation, and understanding of natural language to allow for computer reasoning on text [12]. Deep learning models are very popular in NLP, although they require massive amounts of labeled data; indeed, assembling this sort of big data set is one of the main challenges of NLP [2, 7]. Some of the main functions of NLP algorithms are: text classification - involving categorization of text by tagging; text extraction - involving text summarization; machine translation; and more. NLP employs both _syntactic analysis_ - based on the arrangement of words in a sentence; and _semantic analysis_ - based on analyzing the use of and the meaning behind words. **Words as Features**: In NLP, it is likely that some words in a large text corpus are very prevalent (e.g. "the", "a", and "is" in English), thus conveying very little information about the actual content of a document. By feeding the count data directly to a classifier, those very frequent terms may "obscure" the frequencies of rarer, yet more interesting terms. 
To re-weight the count features into values that are suitable for usage by a classifier, it is very common to use the TF-IDF transform [18], a numerical statistic that reflects how important a word is to a document in the collection or corpus [20]. The TF-IDF value grows proportionally to how often a word appears in a document (the raw frequency of a term in the document), but is offset by the frequency of the word in the corpus, which helps in reducing the effect of some words being more common than others. **Words as Vector Spaces**: A more in-depth approach to NLP involves capturing the meaning of words in vector spaces, along with modeling their compositionality, hierarchy, and recursion. Given a vocabulary of words \(W\), a classical NLP approach is to define a \(|W|\)-dimensional vector with all entries set to 0 except one entry that identifies a word \(w_{t}\) in the fixed vocabulary \(W\); this is called _one-hot encoding_ [15]. The disadvantage of such a representation is that it does not consider the semantic features of words, and it is rather voluminous and redundant as each word is represented by a vector [14]. A related approach is the _bag of words_ model. Here, the entire text is represented by a vector of size \(|W|\), where each component represents the number of times that each word occurs in the text. Note that this model also does not take word semantics into account [14]. A more involved solution consists of the encoding of words into dense vectors that capture the syntactic and semantic properties of words, allowing related words to be close in a corresponding metric space. Such a representation, in a space whose dimension is low compared to the size \(|W|\) of the vocabulary, is called a _word embedding_ [15, 1]. **Language Models**: More advanced text models use _transformers_ to represent text [16]: these are neural networks that allow the representation of the token (word) to be influenced by the context [5]. In particular, _Bidirectional Encoder Representations from Transformers_ (BERT) is a transformer-based ML technique for NLP [14]. It provides powerful solutions for contextualized word representations and can create a context-sensitive embedding for each word in a sentence. Contextual word embedding models such as BERT have dramatically improved performance for many NLP tasks [19]. ## 3 The Overall Architecture We discuss our overall architecture, which consists of an estimation module, a sampling module, and a decision module. A graphical representation of our solution architecture is given in Figure 1.

Figure 1: The Overall Architecture: when a proposal to change the status quo arrives, it goes to the trained ML model that outputs its estimation on the probability that the proposal would have passed if voted on by the community at large. Based on this prediction, a predefined function selects a fraction of the community to then actively vote on the proposal. The fate of the proposal is decided based on the majority of the sampled votes.

**The prediction module**: The prediction module gets as input a proposal and outputs an estimation regarding whether the proposal would pass if all community members directly vote on it. This estimated probability is used as the input for the sampling module. In our realizations of the architecture, the prediction module is implemented as an ML model that is pre-trained on a corpus of textual proposals that are labeled by whether the proposal was accepted or rejected. 
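To make the data flow between the three modules concrete, here is a hedged end-to-end sketch. The TF-IDF plus logistic-regression estimator stands in for the SimpleNLP realization discussed in the next section; the function names, the 0/1 vote encoding, and the `sample_fraction_of` callback (any mapping from estimated probability to sample fraction, such as the triangle functions of Section 5) are our own assumptions, not code from the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_estimation_module(train_texts, train_labels):
    """SimpleNLP-style estimation module: TF-IDF features fed into logistic regression.
    Labels are assumed to be 0 (rejected) / 1 (accepted)."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train_texts, train_labels)
    return model

def decide(proposal_text, model, sample_fraction_of, community_votes,
           rng=np.random.default_rng()):
    # 1) Estimation module: probability the proposal would pass under a full vote.
    p_pass = model.predict_proba([proposal_text])[0, 1]   # column 1 = class "accepted"
    # 2) Sampling module: map the estimate to a fraction of the community to poll.
    frac = sample_fraction_of(p_pass)
    votes = np.asarray(community_votes)                   # 0/1 positions of all members
    k = int(round(frac * len(votes)))
    if k == 0:
        # Near-consensus proposals use no community attention at all.
        return p_pass >= 0.5
    sample = rng.choice(votes, size=k, replace=False)
    # 3) Decision module: majority of the sampled votes.
    return sample.mean() >= 0.5
```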
**The sampling module**: The sampling module gets the estimated probability from the estimation module as input and, based on it, selects a subset of the voters to directly vote on that proposal. The essence and purpose of the sampling module is to reduce the number of community members that actively need to vote on a given proposal, thereby limiting the required community attention. In our realizations of the general architecture, we use different functions that take as input the probability that the estimation module outputs, and return a fraction of the population that shall be sampled; then, such a population fraction is sampled uniformly at random. Intuitively, the larger the sample size, the more accurate and higher-quality the decision will be. However, the larger the sample size, the higher the community attention used. As our goal is to reduce the attention and effort needed for the decision making process, we explicitly consider the trade-off between the usage of community attention (i.e., the average sample size) and the quality of the decision-making process. **The decision module**: The decision module collects the votes of the vote sample that is selected by the sampling module and decides on the fate of the proposal. In our realizations we decide according to the vote majority of the sample. ## 4 The Estimation Module Next we discuss our specific realizations of the estimation module. Generally speaking, we use ML models that we train on a corpus of labeled proposals, based on NLP techniques. First, given the raw textual data, we have chosen two approaches, one simple approach that is based on TF-IDF, and another that is based on BERT. ### SimpleNLP: An Initial NLP Algorithm As a starting point, we implemented a simple ML model that converts raw data into a matrix of TF-IDF features, indicating how important a word is to a corpus, and incorporates them into a classification algorithm. Moreover, in order to improve those models, we performed a tuning process for some of the hyper-parameters of each model using grid search. The following three classic algorithms have been chosen for the classification task: **Random Forest Classifier**: This classifier consists of a combination of tree classifiers where each classifier is generated using a random vector sampled independently from the input vector, and each tree casts a unit vote for the most popular class to classify an input vector [17]. The tuning was made on the hyper-parameters: _n estimators_, which is the number of trees in the forest; and _max depth_, i.e., the maximum depth of the tree. **Multinomial Naive Bayes Classifier**: This classifier belongs to a family of simple probability analyzers based on the Bayes theorem and is based on the assumption that the value of a specific feature is independent of the values of the other features. There are several notable advantages to this classifier - efficiency in time, memory, and CPU usage, as well as higher quality for small training sets [21]. Technically, this is a generative model: it assumes that each document (i.e., proposal) is generated by selecting some class for it and then generating each word of that document independently according to a class-specific distribution [22]. The tuning of this classifier was made on the hyper-parameters: _alpha_, which is the additive smoothing parameter; and _fit prior_, which decides whether to learn prior probabilities or not. **Logistic Regression**: This model is a generalized linear model. 
Its technique allows for the examination of the impact of numerical factors on binary responses. The logistic function, also known as the sigmoid function, is used to calculate the logistic model in which the output is in the range between 0 and 1. Due to its capability to understand vector variables and evaluate the coefficients or weights of each input variable, this model can be used in ML applications [21]. Its tuning was made on the hyper-parameters: \(C\), which is the inverse of regularization strength (smaller values specify stronger regularization); and _class weight_, which means the weights associated with classes. ### AdvancedNLP: An Improved NLP Algorithm For improving SimpleNLP, we wanted to represent the data (in particular, the words) in a better way; specifically, in a way that takes into account both the meaning of the words as well as their context. This was done using BERT, which provides strong solutions for representing contextualized words. Next we explain the steps we worked on when implementing the BERT model. First, we processed our text by removing special characters and entity mentions; then we used the BERT tokenizer, which prepares the inputs for the model and as part of that has a particular way of handling words that are not in the vocabulary. In addition, we added special tokens to the start and to the end of each sentence, padded and truncated all sentences to a single constant length, and explicitly specified which tokens are padding tokens, using the "attention mask".3 We created an iterator for our dataset that helps save memory during training and, by doing so, boosts the training speed; then, we produced the BERT classifier and created an _Adaptive Moment Estimation_ (ADAM) optimizer, an efficient optimization algorithm based on gradient descent, with batch size 32, learning rate \(5e^{-5}\), and 2 epochs. ## 5 The Sampling Module The goal of the sampling module is to maximize the quality of the decision made on each proposal while minimizing the amount of needed community attention. In our design of the sampling module we aim at explicitly considering this trade-off between the attention used from the community and the quality of the decision making. Intuitively, the more in consensus a proposal is - according to the estimation module - the smaller the sample size can be; while, correspondingly, the less in consensus a proposal is - according to the estimation module - the larger the sample size can be: e.g., if the estimation module estimates the success probability of a proposal at almost 100%, then it is unlikely - given that the estimation module is realized reasonably well - that even a small sample size will be incorrect on the proposal; while if the estimation module estimates the success probability of a proposal to be, say, 51%, then even a high-quality realized estimation module may be wrong, and, crucially, a small vote sample may also be wrong.

Figure 2: A triangle function as a sampling module: the \(x\)-axis represents the estimation of the probability of accepting a proposal if a regular vote were to be held (obtained from the estimation module), while the \(y\)-axis represents the sample size to be taken from the community for the proposal. For the function shown here, if a proposal receives an estimated acceptance probability that is less than \(a\) or greater than \(b\), then the proposal is automatically rejected or accepted (respectively), and, in particular, no attention is needed from the community (as the value of the function at these regions is 0); intuitively, this is so as, in such cases, the proposal is estimated to be almost in consensus. Between those points (i.e., for values between \(a\) and \(b\)), the less estimated agreement there is on a proposal the larger the sample size selected to decide on it; hence the triangular shape.
Corresponding to the intuition explained above, we consider a family of functions - _triangle functions_ - as shown in Figure 2, that take as input a probability and output a sample size (as a fraction). To completely realize the sampling module it is not enough to select the family of such triangle functions, but a specific triangle function has to be selected (consider Figure 2). According to the above intuition, as a larger sample size leads to better decision quality - but also uses more of the community attention - it would be unfair to compare different triangle functions that use different amounts of community attention, as the one that allows for comparatively larger sample sizes would outperform the other, less attention-demanding one. Thus, to allow for a fair comparison of functions it is first necessary to divide the family of triangular functions into subfamilies, one subfamily for each expected use of community attention. In our experiments (to be described in detail in Section 6), we have several values of required average community attention, specifically: 0.1, 0.2, 0.3, 0.4, 0.5; each such value in effect defines a subfamily of triangle functions that, given specific data - and hence, given specific output values from the estimation module - use this value-worth of average community attention. Equivalently, to determine the relevant functions for a specific, given required average community attention (i.e., average fractional sample size), i.e., specific values of the points \(a\), \(b\), \(c\), and \(d\), we proceed as follows. First, for simplicity, we refer only to the case where \(a=0\) and \(b=1\), i.e., there is no probability value below which a proposal is automatically rejected or above which a proposal is automatically accepted (equivalently, with a sample fraction of 0). This means that a specific triangle function can be represented by a tuple of \((c,d)\). Second, for a given average community attention value \(q\) from the ones mentioned above, we repeat the following process 100 times, to get 100 different triangle functions whose average community attention equals \(q\): we sample a value for \(c\) uniformly at random between 0 and 1 (i.e., \(c\sim U(0,1)\)); then, we do a binary search to find a value for \(0\leq d\leq 1\) such that the average community attention of the triangle function represented by \((c,d)\) corresponds to \(q\), with respect to the distribution of the outputs of the estimation module (for specific test data, as described in Section 6). As a result, we have 100 triangle functions whose expected average community attention equals \(q\), for each \(q\in\{0.1,0.2,0.3,0.4,0.5\}\). 
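The triangle functions and the binary-search calibration just described can be sketched as follows. We assume here that \((c,d)\) are the coordinates of the triangle's apex (peak location and peak height), which is consistent with the monotone binary search over \(d\) described above, although the exact parameterization is not spelled out in the text.

```python
import numpy as np

def triangle(p, c, d, a=0.0, b=1.0):
    """Sample fraction for estimated acceptance probability p.
    Assumption: the triangle has base vertices (a, 0), (b, 0) and apex (c, d)."""
    p = np.asarray(p, dtype=float)
    out = np.zeros_like(p)
    left = (p >= a) & (p <= c)
    right = (p > c) & (p <= b)
    if c > a:
        out[left] = d * (p[left] - a) / (c - a)
    if b > c:
        out[right] = d * (b - p[right]) / (b - c)
    return out

def calibrate_d(c, estimated_probs, q, tol=1e-4):
    """Binary search for the apex height d such that the average sample fraction over
    the estimated probabilities equals the attention budget q.
    (If q is unreachable even at d = 1, the search simply saturates near 1.)"""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if triangle(estimated_probs, c, mid).mean() < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: 100 calibrated functions for one attention budget, as in the text.
rng = np.random.default_rng(0)
probs = rng.uniform(0.0, 1.0, 150)  # stand-in for the estimation-module outputs on test data
functions = [(c, calibrate_d(c, probs, q=0.3)) for c in rng.uniform(0.0, 1.0, 100)]
```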
## 6 Experimental Design We describe the datasets used for the computer-based simulations as well as the specific computations and evaluation metrics that were used. ### Datasets In order to examine our architecture, we used real data collected both from Kaggle, a popular machine learning site, and Snapshot, a popular DAO voting platform. **Kaggle data**: Kaggle is a platform that hosts data science competitions for business problems, recruitment, and academic research purposes [4]. As part of that, Kaggle finds and publishes datasets. Kaggle was used to find data that could be treated as proposals and corresponding decisions. Our goal was to find a dataset that contains text-type data that we could use to run the NLP methods on. Moreover, we searched for data that has binary tags. The data we have chosen is the _Research Articles_ dataset.4 It consists of an abstract and a title for a set of research articles and the purpose is to use it to predict the topic of each article. The research articles are sourced from the following 6 topics: Computer Science, Physics, Mathematics, Statistics, Quantitative Biology and Quantitative Finance. Footnote 4: [https://www.kaggle.com/datasets/vetrirah/janatahack-independence-day-2020-ml-hackathon](https://www.kaggle.com/datasets/vetrirah/janatahack-independence-day-2020-ml-hackathon) In order to use the above dataset, we had to choose a topic that would serve as a label; furthermore, ideally, that topic should be relatively balanced. We chose the topic "Computer Science" to represent our label: the label is 1 if the article is associated with this topic and 0 otherwise. There are \(20,972\) samples in the dataset, but due to computing power constraints, our dataset contained only 1000 samples. The training set contained 85% of the dataset (i.e. 850 samples) while the test set comprised 15% (i.e. 150 samples). Almost 41% of the training set and 46% of the test set have the tag 1; i.e., the data is relatively balanced. To use the data for our needs, we consider 1 as an expected approval of a proposal and 0 as its rejection. **Snapshot data**: Snapshot is a popular tool used by certain Decentralized Autonomous Organizations (DAOs) that allows communities to vote using tokens. The platform allows for multiple voting systems - Single choice, Approval voting, Quadratic voting, and more. Since the platform is an open-source project, it provides access to information about participating digital communities, including published proposals and votes. To create a relatively large data set, we combined data from four organizations: _Balancer_ - an automated portfolio manager and trading platform; _YAM Finance_ - engaged in DeFi projects; _Aavegotchi_ - a DeFi-enabled crypto collectibles game; _Aave_ - a decentralized finance protocol. Filtering has been done so that only binary proposals, with the option of accepting or rejecting them, were considered. There are 499 samples (after the filtering process) in the dataset. The training set contains 85% of the dataset (i.e., 424 samples) while the test set comprises 15% (i.e., 75 samples). Almost 86% of the training set and 83% of the test set have the tag 1; i.e., the data is relatively unbalanced. (We will address this issue later by selecting certain metrics for testing the algorithms.) ### Evaluation Metrics We describe the evaluation metrics used for the estimation module, for the sampling module, and for the overall realizations of the architecture. 
**The Estimation Module**: In the estimation module, each classifier produced a set of probabilistic estimations for whether each of the proposals would have been accepted. For evaluating and comparing the different algorithmic realizations of the estimation module, we have chosen the following metrics: * **Accuracy**: This metric is a classic metric that is equivalent to the proportion of the number of samples that were predicted correctly to the total number of input samples [13]. This metric provides an overoptimistic estimate of the classifier's ability on the majority class and is therefore suitable only for balanced data [6]. * **The F1 score**: This metric consists of the _precision_, which is the proportion of the true positive samples out of the overall predicted positive observations; and _recall_, which is the proportion of the true positive samples out of all actual positive observations. The F1 score is especially useful for imbalanced data [13], and is formally defined as follows: F1-score \(=\frac{2\cdot(Recall\cdot Precision)}{Recall+Precision}\). To evaluate our models we used the accuracy for the Kaggle dataset - which is balanced - and the F1-score for the Snapshot dataset - which is imbalanced. **The Functions Quality**: As described earlier, we evaluated the sampling module for five subfamilies of triangle functions, with a size of 100 each, where each subfamily corresponds to one of the five values of the average community attention. To determine which function is best for each subfamily, we define the quality of a solution, as follows. Consider \(n\) proposals and a list of estimated probabilities (obtained from the estimation module), \(z_{j}\), \(j\in[n]\). Consider a triangle function \(f\) whose outputs on those \(z_{j}\) values are \(t_{j}\), \(j\in[n]\) (so \(t_{j}=f(z_{j})\), \(j\in[n]\)). Recall that the decisions \(\widehat{y}_{j}\), \(j\in[n]\) will eventually be made by majority vote of a randomly selected sample of (fractional) size \(t_{j}\), for proposal \(j\). Consider the correct decisions \(y_{j}\), \(j\in[n]\), which correspond to a full vote on proposal \(j\). Then, the quality of a solution (represented by \(\widehat{y}_{j}\), \(j\in[n]\)) is defined to be the fraction of computed decisions that are consistent with the correct decisions; formally: \[I_{i}=\begin{cases}1,&\text{if }\widehat{y}_{i}=y_{i}\\ 0,&\text{otherwise}\end{cases}\qquad quality=\frac{\sum_{i=1}^{n}I_{i}}{n}\] Due to the random selection of votes (based on the sample size), it is important to evaluate the average quality. Functions with the highest value are the best for each subfamily. ## 7 Results We describe and discuss our results for the estimation module, for the sampling module, and for the architecture as a whole. ### The Estimation Module ML algorithms, both classical ones and a language model, were used in the first stage of the architecture. The way to measure the performance of these algorithms was chosen based on the characteristics of the data set. On the Kaggle data set the algorithms were examined according to the accuracy index. Among all the models tested, the language model (BERT) has the highest index value, as shown in Figure 3, but not by a large margin over the Logistic Regression model, which leads all the other classical models. The results are consistent with the fact that a language model encompasses both words and context, so it is more developed and may lead to better predictions. However, on the Snapshot data set the algorithms were examined according to the F1 score index. 
It is evident in Figure 4 that there are no huge differences in the quality of predictions between the three classical algorithms. Random Forest and Multinomial NB received identical index values while the index value of the Logistic Regression was slightly better. In this study as well, BERT has received the highest index value. However, BERT's results are not very high. As mentioned earlier, the data source is a platform for community members to formulate their own proposals, so the data set may contain low-quality text which can reduce the metric value.

Figure 3: Performance for the Kaggle Data Set according to the Accuracy metric: the \(x\)-axis represents the four models we have used - the three on the left are classic classifiers and the fourth is a language model. In the \(y\)-axis, the accuracy index values for each model are presented.

### The Sampling Module In the Sampling Module stage, the functions were selected based on the probabilities from the previous phase (ML module). Using five fixed attention sizes (0.1, 0.2, 0.3, 0.4, 0.5), we selected the functions that were most suitable under the conditions we assumed. Since the functions are selected by the probabilities from the ML module stage, there are such functions for each classifier. Figure 5 illustrates an example of these functions. It seems from Figure 6, which represents the (c, d) values of the best functions, that for both data sets, in most cases, the higher the average sample size, the higher the d value. The c value, on the other hand, does not show any trend. ### Quality Of The Overall Architecture For each classifier from the ML module phase and for each fixed average attention size, the appropriate function for the sampling module phase was found. This function was chosen because it gave the highest quality among the functions tested. Based on Figure 7, we can see that the quality ratio, that is, the proportion of decisions that were consistent with the original decisions out of the total decisions, is relatively high. Moreover, when examining quality against the fixed average attention size, there is an upward trend.

Figure 4: Models Performance for the Snapshot Data Set According to the F1 Score Metric: the x-axis represents the four models we used - the three on the left are classic classifiers and the fourth is a language model. The y-axis represents the F1 score index values for each model.

## 8 Conclusions, Limitations, and Outlook Attention-aware social choice refers to tackling the conflict rooted in the desire of a community to include its participants in the decision making processes despite their limited time and attention. Our work examines a possible solution that eventually can be used by digital sovereign communities that face this problem; it proceeds by a combination of two techniques: Natural Language Processing (NLP) and sampling. The system initially includes a trained NLP model, to which the governance proposals to change the status quo are sent, that predicts the probability that the proposal would pass if a regular vote were held. Based on these probabilities, the sampling module determines the number of participants that each proposal will receive, whose votes will be tallied and a majority decision will be made based on these votes. We have shown the feasibility of our architecture by implementing several realizations of the proposed architecture and evaluating them using computer-based simulations on real-world data. We do have some limitations, though. 
Next, we discuss some conclusions from our work (Section 8.1); some of its limitations (Section 8.2); and, finally, some avenues for future research (Section 8.3). ### Conclusions We discuss some conclusions stemming from our work: * Given the relatively similar quality of using the simple NLP model and the more sophisticated one, it seems that, in some sense, the more sophisticated one is an unnecessary complication for some settings.

Figure 5: Example of the chosen triangle function for BERT with fixed average attention size of 0.4: the x-axis represents the estimation of the probability of proposal acceptance and the y-axis is the attention size needed for the vote. Here is an example of the BERT triangle function for a fixed average attention size of 0.4 - the blue triangle is for the Kaggle data set and the red triangle is for the Snapshot data set.

### Limitations We discuss some of the limitations of our work: * Our method may suffer from bias. In particular, as we use learning algorithms that learn from past decisions towards the prediction of future decisions, we may use a learning algorithm that is skewed and biased towards the errors of the past. One possible remedy to this problem may be societal: Whenever there is a very controversial decision, the community may inform the algorithm to down-bias its learning. A complete solution along these lines may naturally be more involved, though. * A possible - albeit only partial - solution may be to use the prediction algorithm only as a suggesting tool and let a different institution (e.g., a pre-selected agent committee) make the sampling decisions. ### Outlook Below we discuss some avenues for future research.

Figure 6: The functions selected for each criterion value and for each algorithm: the left column in the table represents the criterion value - average sample size. For each algorithm the values (c, d) of the selected function are represented. The blue values refer to the Kaggle data set and the red to the Snapshot data set.
Relatedly, a dynamic generalization of the architecture may use a self-improving realization, e.g., by using reinforcement learning to tune the realizations of the different modules as more proposals arrive. (A more advanced possibility is to pick, once in a while, artificially large sample vote sizes to re-train the NLP models.)
* **Incentivizing participation**: If and when a tool based on the architecture outlined here is to be employed in the real world, certain issues can arise. First, here we implicitly assumed that the community is very "disciplined", in the sense that when the sampling module declares a required sample size, there are indeed sample-size-many voters who show up to vote. Practically, some economic incentivization may be needed.
2304.05838
DartsReNet: Exploring new RNN cells in ReNet architectures
We present new Recurrent Neural Network (RNN) cells for image classification using a Neural Architecture Search (NAS) approach called DARTS. We are interested in the ReNet architecture, which is an RNN-based approach presented as an alternative for convolutional and pooling steps. ReNet can be defined using any standard RNN cells, such as LSTM and GRU. One limitation is that standard RNN cells were designed for one-dimensional sequential data and not for two dimensions, as is the case for image classification. We overcome this limitation by using DARTS to find new cell designs. We compare our results with ReNet that uses GRU and LSTM cells. The cells we found outperform the standard RNN cells on CIFAR-10 and SVHN. The improvements on SVHN indicate generalizability, as we derived the RNN cell designs from CIFAR-10 without performing a new cell search for SVHN.
Brian Moser, Federico Raue, Jörn Hees, Andreas Dengel
2023-04-11T09:42:10Z
http://arxiv.org/abs/2304.05838v1
# DartsReNet: Exploring new RNN cells in ReNet architectures

###### Abstract

We present new Recurrent Neural Network (RNN) cells for image classification using a Neural Architecture Search (NAS) approach called Darts. We are interested in the ReNet architecture, which is an RNN-based approach presented as an alternative for convolutional and pooling steps. ReNet can be defined using any standard RNN cells, such as LSTM and GRU. One limitation is that standard RNN cells were designed for one-dimensional sequential data and not for two dimensions, as is the case for image classification. We overcome this limitation by using Darts to find new cell designs. We compare our results with ReNet that uses GRU and LSTM cells. The cells we found outperform the standard RNN cells on CIFAR-10 and SVHN. The improvements on SVHN indicate generalizability, as we derived the RNN cell designs from CIFAR-10 without performing a new cell search for SVHN.1

Footnote 1: The source code of our approach and experiments is available at [https://github.com/Brian-Moser/DartsReNet](https://github.com/Brian-Moser/DartsReNet).

Keywords: DARTS, NAS, ReNet, RNN, CV

## 1 Introduction

Convolutional Neural Networks (CNNs) achieved state-of-the-art results on image classification, e.g., GoogLeNet, VGG, and ResNet [15, 8, 17]. The current trend of finding new architectures with better performance relies mainly on convolutional operations. Nevertheless, an alternative way is to use Recurrent Neural Networks (RNNs) as a core element or in combination with CNNs. They have shown promising results for Computer Vision (CV) tasks, e.g., ReNet [18], MD-LSTM [7], PyraMiD-LSTM [16] or Optical Character Recognition related approaches [2]. RNN approaches have the advantage of capturing the global context. Nonetheless, RNNs are slower than CNNs because of their sequential nature, which is less parallelizable. In general, finding suitable Neural Network architectures is an expensive human effort of trial and error until reaching a desirable performance (e.g., the number of layers or the size of each layer). Nowadays, a new field has emerged called Neural Architecture Search (NAS) [20], which tries to automate the manual process of designing such architectures. Recent works of manually designed state-of-the-art architectures showed that repetitions of fixed structures are favorable, e.g., Residual Blocks in ResNet [8] or Inception-Modules in GoogLeNet [17]. These structures are smaller-sized graphs and can be stacked to form a larger-scale network architecture. Across the NAS literature, they are called cells, and a cell-based approach tries to find them. For this work, we have a particular interest in DARTS, which is a cell-based NAS approach [12]. Moreover, DARTS is a differentiable architecture search, so it uses gradient descent to find such cells. Although NAS approaches mainly produce new CNN-based architectures for CV tasks, so far there is only work on finding RNN cells for one-dimensional sequential data, such as text prediction or speech recognition. We also observe that alternative RNN-based approaches for CV tasks mostly consider standard RNN cell designs like LSTM or GRU [4, 9]. Despite their effectiveness, it is unclear if they are optimal for an image-based domain since the original motivation of LSTM and GRU addresses one-dimensional sequences. In this work, we are interested in modifying the RNN cell design (which is the core element) in the ReNet model using DARTS.
We further evaluated two more variants for the search space: One is _Sigmoid_ Weighting on the input sequence to weight the importance of a time step, and the other is Directional Weight Sharing, which uses the same weight matrix for both directions in a bidirectional RNN. The idea behind _Sigmoid_ Weighting is to explore a new type of hard attention, specifically in the case of ReNet, and the idea behind Directional Weight Sharing is to save parameters. The resulting RNN cells are different from standard RNN cells, especially deeper and more sequential. We are able to outperform ReNet with standard RNN cells on the data sets CIFAR-10 and SVHN [11, 13], as detailed in Section 4. In summary, our contributions are the following:

* We evaluated the DARTS approach in the ReNet model with new conditions not considered in the original paper. Thus, it is not searching RNN cells for sequential data like text, but finding an RNN cell for the visual domain.
* We compared the novel cells with GRU and LSTM in the ReNet architecture on two small-scale data sets. The new cells reached better results on CIFAR-10 and SVHN. We want to point out that we did the cell search on the CIFAR-10 data set but evaluated on both the CIFAR-10 and SVHN data sets.
* We added two more alternatives to the search space: _Sigmoid_ Weighting and Directional Weight Sharing. Directional Weight Sharing achieved better results with fewer parameters on CIFAR-10, whereas the best performance for SVHN was achieved without a variant.

## 2 Background

This work relies on two crucial building blocks. The first is ReNet, which processes images or feature maps as a sequence. Hence, RNN cells can be applied to images or any feature maps. The second is DARTS. It finds cells, which are smaller-sized structures that are usable in a Neural Network, using gradient descent. Compared to other NAS approaches (e.g., evolutionary methods or reinforcement learning), DARTS is fast to compute [14, 1, 20]. We combine both approaches to explore which RNN cells are suitable for image classification tasks.

### ReNet

The ReNet layer is an RNN-based alternative to convolution and pooling transformations [18]. The basic idea is to divide a given image into flattened patches and process the patches as a sequence with bidirectional RNNs, see Fig. 1. Let \(I\in\mathbb{R}^{w\times h\times c}\) be an input, where \(w\), \(h\), and \(c\) are the width, height and number of channels, respectively. A ReNet layer with a window size of \((w_{p},h_{p})\) creates a set of non-overlapping and flattened patches of \(I\). Let \(\mathbf{p}_{i,j}\) be the \((i,j)\)-th patch of \(I\). Let \(F\) and \(B\) denote the two directions of a bidirectional RNN, forward and backward. A bidirectional \(RNN_{H}=\{RNN_{H}^{F},RNN_{H}^{B}\}\) reads the sequence of flattened patches in the horizontal direction of the image. The hidden states are initialized to the zero vector \(\mathbf{0}\). It creates a feature map \[\mathbf{H}=\begin{bmatrix}\mathbf{h}^{F}\\ \mathbf{h}^{B}\end{bmatrix} \tag{1}\] with the elements \[\mathbf{h}_{i,j}^{F} =RNN_{H}^{F}\left(\mathbf{p}_{i,j},\mathbf{h}_{i,j-1}^{F}\right) \text{ and }\] \[\mathbf{h}_{i,j}^{B} =RNN_{H}^{B}\left(\mathbf{p}_{i,j},\mathbf{h}_{i,j+1}^{B}\right) \tag{2}\] with \(i\in\{0,...,h\}\) and \(j\in\{0,...,w\}\). Afterward, a second bidirectional \(RNN_{V}=\{RNN_{V}^{F},RNN_{V}^{B}\}\) processes the feature map \(\mathbf{H}\) in a similar manner but in the vertical direction. The feature map \(\mathbf{V}\) is the overall output of the ReNet layer.
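The following is a minimal PyTorch sketch of such a ReNet layer, using GRUs for the two bidirectional sweeps. The class name and the use of `nn.GRU` are illustrative choices and are not taken from the authors' released code; any RNN cell (including the DARTS-derived cells discussed below) could be substituted.

```python
import torch
import torch.nn as nn

class ReNetLayer(nn.Module):
    """Minimal ReNet layer: a horizontal then a vertical bidirectional sweep over patches."""
    def __init__(self, in_channels, hidden_size, patch_size=2):
        super().__init__()
        self.patch_size = patch_size
        in_dim = in_channels * patch_size * patch_size
        # nn.GRU with bidirectional=True plays the role of {RNN^F, RNN^B}.
        self.rnn_h = nn.GRU(in_dim, hidden_size, bidirectional=True, batch_first=True)
        self.rnn_v = nn.GRU(2 * hidden_size, hidden_size, bidirectional=True, batch_first=True)

    def forward(self, x):                                   # x: (B, C, H, W)
        B, C, H, W = x.shape
        p = self.patch_size
        hp, wp = H // p, W // p
        # Split the image into non-overlapping, flattened p x p patches.
        patches = x.unfold(2, p, p).unfold(3, p, p)         # (B, C, hp, wp, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, hp, wp, C * p * p)
        # Horizontal sweep: each row of patches is one sequence (Eq. 2).
        rows = patches.reshape(B * hp, wp, -1)
        h_feat, _ = self.rnn_h(rows)                        # (B*hp, wp, 2*hidden)
        h_feat = h_feat.reshape(B, hp, wp, -1)
        # Vertical sweep over the resulting feature map H (Eq. 1).
        cols = h_feat.permute(0, 2, 1, 3).reshape(B * wp, hp, -1)
        v_feat, _ = self.rnn_v(cols)                        # (B*wp, hp, 2*hidden)
        return v_feat.reshape(B, wp, hp, -1).permute(0, 3, 2, 1)   # (B, 2*hidden, hp, wp)
```

Stacking a few of these layers after some convolutional layers, followed by a fully connected classifier, gives a network of the kind used later in Section 3.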
Note that ReNet is agnostic to the RNN cell definition (LSTM, GRU or any new development). Hence, it is suitable to use RNN cells derived by DARTS, explained in the next section.

Figure 1: Example of a ReNet layer. Initially, an image is split into several patches (i.e., 2x2x3). Then, two bidirectional RNNs sweep over the image in the horizontal direction (1) and then in the vertical direction (2).

### Darts

DARTS is a cell-based NAS approach [12]. It finds a cell design that can be used as a component of a network architecture [14]. Like Residual Blocks in ResNet [8], a cell is a small graph representing a structure that is usable within a Neural Network. DARTS derives the cell with gradient descent in the Architecture Search Space. We will explain two essential components of DARTS for developing a new RNN cell for ReNet: _cell definition_ for RNNs and _cell search_ based on the cell definition. _Cell definition_ describes the sequence of operations within the RNN cell and the learnable elements (i.e., activation functions and connections to other internal components). _Cell search_ performs gradient descent to explore the Architecture Search Space based on the _cell definition_.

#### 2.2.1 Cell Definition

The following definitions follow the standard DARTS approach from the original paper for RNN cells. A _cell_ is an acyclic, directed graph \(G=(V,E)\) where each vertex \(i\in V\) has exactly one ingoing edge \((j,i)\in E\) from a predecessor vertex \(j\in V\). The edge is associated with an activation function \(f_{i}\). We denote the input of a vertex \(i\) with \(\widetilde{\mathbf{x}}_{i,t}\). Vertex \(i\) receives the output of its predecessor as part of its input \(\widetilde{\mathbf{x}}_{i,t}\). One vertex in this graph is the input vertex, which receives an input \(\mathbf{x}_{t}\) independent of the _cell_ itself (e.g., a flattened patch of an image). Thus, the input \(\widetilde{\mathbf{x}}_{i,t}\) of \(i\) is defined as \[\widetilde{\mathbf{x}}_{i,t}=\begin{cases}\begin{bmatrix}\mathbf{x}_{t}\\ \mathbf{h}_{t-1}\end{bmatrix}&\text{for }i=0\\ \mathbf{h}_{j,t-1}&\text{for }i>0\end{cases} \tag{3}\] where \(j\) denotes the predecessor vertex of \(i\). The initial hidden states \(\mathbf{h}_{i,0}\) are \(\mathbf{0}\). Let \(\mathcal{O}\) be a set of candidate activation functions, namely _Sigmoid_, _Tanh_, _ReLU_, and _Identity_. These are the common choices of activation functions in the NAS field. Each vertex \(i\) calculates two vectors: an update vector \(\mathbf{c}_{i,t}\) and a candidate values vector \(\widetilde{\mathbf{h}}_{i,t}\): \[\left(\begin{array}{c}\mathbf{c}_{i,t}\\ \widetilde{\mathbf{h}}_{i,t}\end{array}\right)=\left(\begin{array}{c}\sigma\\ f_{i}\end{array}\right)\ \mathbf{W}_{i}^{T}\ \widetilde{\mathbf{x}}_{i,t}, \tag{4}\] with \(\mathbf{W}_{i}\) as a learnable weight matrix and \(f_{i}\in\mathcal{O}\), associated with an edge \(e\in E\). The function and the predecessor vertex are found by DARTS during the _cell_ search (described below in Section: _Cell Search_). An exception is given by the input vertex with \(f_{0}=Tanh\), which is also fixed by DARTS. The output of a vertex \(i\) is \[\mathbf{h}_{i,t}=(1-\mathbf{c}_{i,t})\cdot\mathbf{h}_{i,t-1}+\mathbf{c}_{i,t}\cdot\widetilde{\mathbf{h}}_{i,t} \tag{5}\] The addable value range for \(\mathbf{h}_{i,t}\) is given by \(f_{i}\) since \(\mathbf{c}_{i,t}\in[0,1]\) determines the degree to which the old hidden state is updated. Thus, the value range addable to the hidden state depends on \(\widetilde{\mathbf{h}}_{i,t}\), which itself depends on \(f_{i}\). The addable value range becomes important in the analysis part of this paper.
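As a quick illustration, a single vertex of such a cell can be written as follows. This is only a sketch of Eqs. (4)-(5) with illustrative names (`DartsVertex`, `W`); it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class DartsVertex(nn.Module):
    """One cell vertex: update vector c_{i,t}, candidate h~_{i,t} (Eq. 4), gated update (Eq. 5)."""
    def __init__(self, in_dim, hidden_size, activation):
        super().__init__()
        self.W = nn.Linear(in_dim, 2 * hidden_size, bias=False)  # plays the role of W_i
        self.activation = activation                              # f_i from {sigmoid, tanh, relu, identity}

    def forward(self, x_tilde, h_prev):
        c_pre, h_pre = self.W(x_tilde).chunk(2, dim=-1)
        c = torch.sigmoid(c_pre)                # c_{i,t} in [0, 1]
        h_cand = self.activation(h_pre)         # h~_{i,t}; its range (e.g., [0, 1] for sigmoid)
        return (1 - c) * h_prev + c * h_cand    # bounds what can be added to the hidden state
```

For \(i=0\), `x_tilde` would be the concatenation of the cell input and the previous hidden state, as in Eq. (3).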
The overall output \(\mathbf{h}_{t}\) of the _cell_ is given by \[\mathbf{h}_{t}=\mathbb{E}\left[\mathbf{h}_{i,t}|i\in V\text{ and }i>0\right]. \tag{6}\] In order to apply the _cell_ as a fully working RNN cell, we have to define for each vertex \(i>0\) the activation function \(f_{i}\) and its predecessor vertex \(j\). This is done automatically by DARTS during the _cell_ search described next.

#### 2.2.2 Cell Search

The cell definition so far requires discrete choices of activation functions and predecessor vertices. DARTS initializes the graph (the cell) with all possible edges. Thus, for each vertex \(i>0\) there exists one edge for each possible predecessor and activation function. We modify the cell definition from the previous section by combining the outputs of all possible predecessors and all activation functions as weighted sums. This approach is called _continuous relaxation_ by the authors. As a result, one can use gradient descent to determine the weights and, with that, the most beneficial predecessor and activation function for each vertex. It comes with heavy computational costs because of all the connections, but in comparison to other NAS approaches, it converges reliably faster [19]. The weightings of all possible paths are called architecture parameters, and they are trained with standard optimizers like SGD. The following is the formal description of the idea. Let \(i>0\) be a vertex in the graph. In the following, \[\begin{split}&\varphi_{j,t}\left(g,\mathbf{x}\right)=\mathbf{h}_{j,t}\text{ with }g\in\mathcal{O}\\ &\text{s.t. }f_{j}=g\text{ and }\widetilde{\mathbf{x}}_{j,t}=\mathbf{x}\end{split} \tag{7}\] is defined as the output \(\mathbf{h}_{j,t}\) of vertex \(j\) under the condition that the activation function and input are given by \(g\) and \(\mathbf{x}\), respectively. Instead of a single activation function \(f_{i}\) applied to the hidden state of predecessor \(j\), DARTS considers all activations via \(\tilde{f}^{(j,i)}\). The function \(\tilde{f}^{(j,i)}\) is a \(softmax\)-weighted combination of all possible activation functions instead of a discrete choice. Thus, we can use it to compute gradients over all paths. More formally, \[\tilde{f}^{(j,i)}\left(\mathbf{x}\right)=\sum_{f\in\mathcal{O}}\left[\frac{exp\left(\alpha_{f}^{(j,i)}\right)}{\sum_{g\in\mathcal{O}}exp\left(\alpha_{g}^{(j,i)}\right)}\right]\varphi_{i,t}\left(f,\mathbf{x}\right). \tag{8}\] The variable \(\alpha_{f}^{(j,i)}\) is a learnable parameter that represents the weight of an edge between \(i\) and \(j\) that is associated with the activation function \(f\in\mathcal{O}\). The discrete activation function between \(i\) and any \(j\) after the cell search is given by the most likely activation function \[f^{(j,i)}=\operatorname*{argmax}_{f\in\mathcal{O}}\alpha_{f}^{(j,i)}. \tag{9}\] The unique predecessor vertex for the final architecture is chosen by the highest weight among the most likely activation functions of all preceding vertices. Thus, \[\begin{split}&\widetilde{\mathbf{x}}_{i,t}=\mathbf{h}_{j,t}\text{ and }f_{i}=f^{(j,i)}\\ &\text{s.t. }j=\operatorname*{argmax}_{j<i}\left[\max_{f\in\mathcal{O}}\alpha_{f}^{(j,i)}\right].\end{split} \tag{10}\] As a result, we find a suitable activation function and predecessor for each vertex.
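In code, the relaxation and its discretization might look roughly as follows. This is a simplified sketch: the helper names, the dictionary layout of the architecture parameters, and the use of a single weight matrix per vertex are assumptions made for illustration, not details of the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Candidate activation set O: Sigmoid, Tanh, ReLU, Identity.
OPS = [torch.sigmoid, torch.tanh, torch.relu, lambda x: x]

def vertex_output(W, x_tilde, h_prev, f):
    """Vertex output for a fixed activation f (Eqs. 4-5), i.e. phi in Eq. 7."""
    c_pre, h_pre = W(x_tilde).chunk(2, dim=-1)
    c = torch.sigmoid(c_pre)
    return (1 - c) * h_prev + c * f(h_pre)

def mixed_output(W, x_tilde, h_prev, alpha):
    """Softmax-weighted combination over all candidate activations (Eq. 8)."""
    weights = F.softmax(alpha, dim=0)  # one alpha_f^{(j,i)} per f in O
    return sum(w * vertex_output(W, x_tilde, h_prev, f) for w, f in zip(weights, OPS))

def discretize(alphas):
    """Pick, per vertex i, the activation (Eq. 9) and the predecessor (Eq. 10).
    `alphas[i]` maps each candidate predecessor j to its parameter vector over O."""
    choices = []
    for per_pred in alphas:
        best_j = max(per_pred, key=lambda j: float(per_pred[j].max()))
        best_f = OPS[int(per_pred[best_j].argmax())]
        choices.append((best_j, best_f))
    return choices
```

During the search, the `alpha` vectors are the trainable architecture parameters; after discretization, each vertex is left with a single activation function and predecessor.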
Thus, we have a fully working RNN cell that can be used. The objective for the search is given by the task, in our case minimizing the classification loss.

## 3 Methodology

In this section, we present the methods to find new RNN cells distinguishable from standard RNN formulations like LSTM or GRU. We present the network used to find these cells. Moreover, we explain the variants we explored. Additionally, we describe the training process and how we used the data sets for cell search and cell evaluation.

### Cell Search and Network Architecture

We find new RNN cells using DARTS and ReNet, which translates an image into a sequence so that RNN cells can be applied to images. As mentioned in Section 2.1, an arbitrary RNN cell can be used within the ReNet layer. As a consequence, it is feasible to use DARTS to find new RNN cells specifically for the image-based domain, which has not been considered by the original DARTS paper, where RNN cells were derived for sequential data like text. However, a cell found by DARTS and used by ReNet is only a component of a network for image classification. Since we use a cell-based NAS approach, the common way is to stack ReNet layers with the found RNN cells multiple times. Fig. 2 shows the resulting network. It begins with three convolutional layers, followed by three ReNet layers (the same number of layers as in the original ReNet paper) with the RNN cells of DARTS. In the end, we use a single fully connected layer to map the feature dimension to the number of classes. This architecture is fixed for all experiments. The motivation behind the three convolution layers is that the original ReNet paper uses ConvZCA to whiten the input images [10, 18]. We avoid this by using non-linear transformations of the input, realized by the convolution layers.

### Variants

Besides the "Vanilla ReNet" cell, we also examined two different variants. The first variant is a _Sigmoid Weighting_ of the input. We calculated a learnable weighting factor for each patch in the input sequence and took the _Sigmoid_ of this value to let the network choose how vital the patch is. Thus, each patch is multiplied with a value between zero and one. It is a type of soft attention. The second variant combines the weights of the bidirectional RNNs in a ReNet layer similar to "Directional Weight Sharing" proposed in ContextVP [3]. More formally, \(RNN_{V}^{F}=RNN_{V}^{B}\) and \(RNN_{H}^{F}=RNN_{H}^{B}\) for the same input, which reduces the number of parameters. We want to point out that even though the weights are the same, the direction still has an impact because the hidden states evolve differently during the time steps.

### Training

All our experiments used the two data sets CIFAR-10 and SVHN, which have images of size 32x32x3, and each data set contains ten classes [11, 13]. The data sets were normalized to have zero mean and unit variance. We used horizontal flipping (with a probability of 50%), random cropping of the original image with zero padding (size of four), and Cutout [6]. Cell search and evaluation are two different phases. Consider the sets _Train_, _Validation_ and _Test_ of CIFAR-10. For the cell search, we divided the _Train_ set s.t. \(\textit{Train}=\textit{train}_{cs}\cup\textit{val}_{cs}\), where \(cs\) stands for cell search. We used \(\textit{val}_{cs}\) to determine the end of cell search by early stopping. During the cell search, we used two different Adam optimizers. One optimizer is for the network in Figure 2 (i.e., the convolution and fully connected layers) and for the RNN cell parameters mentioned in the cell definition (i.e., \(\mathbf{W}_{i}\,\forall i\in V\) in Eq. 4). The other optimizer is for the architecture parameters in the cell search step (i.e., \(\alpha_{f}^{(j,i)}\,\forall i,j\in V\)). The optimizing steps happen within the same batch. Thus, each optimizer works equally often.
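A single search step along these lines could look as follows. This is only a sketch; the helper methods `weight_parameters()` and `arch_parameters()` are hypothetical names for the two parameter groups and are not functions from the authors' code.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def search_step(model, images, labels, weight_opt, arch_opt):
    """One cell-search step: both Adam optimizers take one step on the same batch,
    so they work equally often."""
    # 1) Update the convolution/fully connected layers and the cell weights W_i.
    weight_opt.zero_grad()
    criterion(model(images), labels).backward()
    weight_opt.step()

    # 2) Update the architecture parameters alpha_f^{(j,i)}.
    arch_opt.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    arch_opt.step()
    return loss.item()

# Hypothetical optimizer setup:
# weight_opt = torch.optim.Adam(model.weight_parameters())
# arch_opt = torch.optim.Adam(model.arch_parameters())
```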
We repeated this procedure multiple times (ca. 10 per variant) since different weight initializations can lead to different cell designs. After training, we derived the cells as described at the end of Section 2.2. Next, we re-initialized the weights of the network (i.e., \(\mathbf{W}_{i}^{T}\,\forall i\in V\) and the weights of the convolution and fully connected layers) and trained it again on the complete _Train_ set of CIFAR-10. We used the _Validation_ and _Test_ set of CIFAR-10 to determine the end of training and the final result of the found cell (cell evaluation). Likewise, we also evaluated the cell on SVHN. No cell search was applied here -- also, no transfer learning of the weights. We used only the cell designs found during the cell search on CIFAR-10. Therefore, the performance on SVHN does not explicitly benefit from the cell search.

Figure 2: The network architecture used for the experiments. It consists of three convolution layers in the beginning and three ReNet layers with a window size of 2 after that. The last fully connected layer is used to map the features to the number of classes. We want to point out that the ReNet layer can be defined based on the selected RNN cell (i.e., LSTM, GRU or a cell derived by DARTS).

## 4 Results and Analysis

This section presents DartsReNet cells that are beneficial for image classification, along with a comparison to the standard RNN cells GRU and LSTM. As mentioned before, there are two phases: the cell search and the cell evaluation. Thus, this section is divided into two subsections to discuss the results of each phase.

### Cell Search

Here we present the RNN cells found during the cell search on CIFAR-10 using DARTS. We will discuss each variant separately.

#### 4.1.1 Vanilla ReNet

The derived cell is mostly sequential and deep, see Fig. 3.

Figure 3: Derived cell for the Vanilla ReNet. It uses two _Sigmoids_ in the beginning, then alternates between _ReLU_ and _Sigmoid_ until vertex 5. An _Identity_ function follows, and its output is processed through another _Identity_ function and a _ReLU_ in parallel.

The novel cell uses several _Sigmoid_ activation functions, two _Identity_ projections and _ReLU_. Surprisingly, unlike the original LSTM or GRU, it does not have any _Tanh_ activation function (except at the fixed input vertex, set by DARTS), which would zero-center the data. The _Sigmoid_ value can be inhibiting for the mean calculation. Consider Eq. 4: it holds that \(\mathbf{c}_{i,t}\in[0,1]\) and \(\mathbf{h}_{i,0}=\mathbf{0}\,\forall i\in V\). As a consequence of Eq. 5, the addable value range of \(\mathbf{h}_{i,t}\) depends only on the value range of \(\widetilde{\mathbf{h}}_{i,t}\). Hence, if \(\widetilde{\mathbf{h}}_{i,t}\) is computed with \(f_{i}=\sigma\), then \(\mathbf{c}_{i,t}\cdot\widetilde{\mathbf{h}}_{i,t}\in[0,1]\) since \(\widetilde{\mathbf{h}}_{i,t}\in[0,1]\). Therefore, _Sigmoid_ can add values close to zero to the mean calculation for \(\mathbf{h}_{t}\), shrinking the output, see Eq. 6. The cell also has the possibility to add negative values to the mean value because of the _Identity_ functions.
This is similar to the argumentation before: If \(f_{i}=Identity\), then the value range of \(\widetilde{\mathbf{h}}_{i,t}\) is \((-\infty,\infty)\). Thus, negative values can be added to \(\mathbf{h}_{i,t}\), see Eq. 5.

#### 4.1.2 Sigmoid Weighting

The cell design found with _Sigmoid_ Weighting is the most interesting among all three. Fig. 4 shows the found topology. It uses no _Tanh_ at all and surprisingly many _Identity_ functions. This setup adds a lot of linear activations within the vertices for the hidden state calculation. In contrast to the cell design in "Vanilla ReNet", this cell can produce more negative values through the _Identities_, see Eqs. 5 and 6. Also, only a single _ReLU_ activation function is used, right in the beginning. Additionally, this cell design uses many "wide" connections, starting at vertex 4. By "wide" connections we mean that more than one successor vertex uses the vertex's output. Hence, this cell is not as deep as the cell for Vanilla ReNet, supporting the assumption that the depth might be too high.

Figure 4: Derived cell for the ReNet with Sigmoid Weighting. It starts with _ReLU_, _Sigmoid_ and _Identity_. After that, it uses another _Identity_ and processes its output to all successor vertices, which are three _Identity_ and one _Sigmoid_ functions.

#### 4.1.3 Directional Weight Sharing

The ReNet layer with Directional Weight Sharing has the same outcome as the Vanilla ReNet: It does not use _Tanh_, and it is sequential. For both variants, the tendency is to go deeper instead of wider, like for CNNs in the recent architectural developments for image classification [8, 17]. The cell is visualized in Fig. 5. An exciting aspect of this design is the lack of any single _Identity_ function. In conclusion, the cell has no possibility to add negative values to the mean. Because of only two _Sigmoids_, the inhibitor effect is also lower than for the Vanilla ReNet cell.

Figure 5: Derived cell for the ReNet with Directional Weight Sharing. It uses a _ReLU_ function in the beginning, followed by a _Sigmoid_. Then, five _ReLU_ are used. The last vertex uses a _Sigmoid_. They are applied in one sequence without sharing the output to more than one successor vertex.

### Cell Evaluation

The results of the cell evaluation are listed in Table 1 for both data sets. For comparison, we also listed the results of ReNet with the standard RNN cells GRU and LSTM. In contrast to the original ReNet paper, we have used a different setup for the ReNet with the standard RNN cells to make a fair comparison feasible. First of all, the original paper used no convolution layers in the beginning. Besides, the original paper used the whitening technique ConvZCA for CIFAR-10, which has a considerable influence on the performance. Regarding the channel sizes, we used a fixed channel size of 256 for both data sets instead of 320 for CIFAR-10. Also, the original ReNet uses an additional fully connected layer for SVHN. Additionally, a fully connected layer of 4096 dimensions was used instead of 1024, which is the case for our architecture. Regarding the new cells, all variants show better accuracy on all data sets. For CIFAR-10, the variant Directional Weight Sharing dominates. For SVHN, the Vanilla variant has the best accuracy. Because of the depth of the new cells, all variants have a higher parameter size w.r.t. GRU. An exception is Directional Weight Sharing, which uses 0.8M fewer parameters than the baseline with GRU. Since the variants work like regularization to the training, one can see ascending accuracies among the variants for CIFAR-10. The regularization is also present for SVHN, where our Vanilla ReNet has the best performance. The complexity is high enough for Vanilla ReNet, and additional regularization hurts the performance.
## 5 Conclusion and Future Work

We wanted to examine alternative RNN cell designs for image classification other than standard formulations like LSTM or GRU. We achieved this by combining DARTS with ReNet. Moreover, we examined two different variants of RNN cells using this approach. Also, we compared the cells with a ReNet architecture using GRU and LSTM on CIFAR-10 and SVHN. We achieved better results by a large margin, and the variant Directional Weight Sharing achieved this with 0.8M fewer parameters than GRU. Additionally, the RNN cell found on CIFAR-10 was also evaluated on the SVHN data set, where we were able to outperform the standard cells by at least 1%. For the future, we are interested in evaluating our approach on ImageNet [5]. Also, this work only used NAS approaches to derive cells within a fixed network. However, it is possible to use it for the overall network design. Some other variants are also interesting for future investigation, like different orderings within a sequence or dropping out parts of the image as learnable components.

## Acknowledgement

This work was supported by the BMBF project DeFuseNN (Grant 01IW17002) and the NVIDIA AI Lab (NVAIL) program.

\begin{table} \begin{tabular}{l l c c} \hline \hline **Data Set** & **Model** & **Params [M]** & **Accuracy [\%]** \\ \hline CIFAR-10 & Vanilla - DartsReNet & 6.8 & 90.26 \\ & Sigmoid Weighting - DartsReNet & 6.8 & 90.56 \\ & **Directional Weight Sharing - DartsReNet** & **4.0** & **91.00** \\ \cline{2-4} & Baseline ReNet with GRU & 4.8 & 75.30 \\ & Baseline ReNet with LSTM & 6.0 & 74.94 \\ \hline SVHN & **Vanilla - DartsReNet** & **6.8** & **97.43** \\ & Sigmoid Weighting - DartsReNet & 6.8 & 97.04 \\ & Directional Weight Sharing - DartsReNet & 4.0 & 96.91 \\ \cline{2-4} & Baseline ReNet with GRU & 4.8 & 95.16 \\ & Baseline ReNet with LSTM & 6.0 & 94.10 \\ \hline \hline \end{tabular} \end{table} Table 1: Results of the experiments. The cell search was applied to CIFAR-10, and the derived cells were evaluated on CIFAR-10 and SVHN. Shown are the accuracies of ReNet with the derived cell and its variants, and of ReNet with the standard cells GRU and LSTM. Also listed is the parameter size of each model.